Dataset columns: doi (string, length 0–570), pub_date (string, 355 distinct values), sections (list, 1–245 items), abstract (string, length 0–5.25k), title (string, length 0–228), figures (list, 0–130 items), authors (string, length 0–11.9k), references (list, 0–835 items), formulas (list, 0–679 items).
10.18653/v1/2021.acl-demo.39
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b15", "b4", "b1", "b7", "b10", "b12", "b14", "b5" ], "table_ref": [], "text": "Knowledge Base Question Answering (KBQA) is a task to answer natural language questions over KBs. Many existing methods have achieved good performances (Chen et al., 2021;Ye et al., 2021;Gu and Su, 2022) in this task. However, due to the large scale of the KB, the existing annotated training data can only cover a small portion of the information region of the KB. Although existing methods try to extend the QA model to a larger perceptual range as much as possible, the longtail information in the KG, which is difficult for existing methods to generalize the semantics of the questions, is still beyond the ability of existing methods. Long-tail information necessitates that the KBQA methods have stronger generalization abilities, which can be viewed as a low-resource environment for few-shot learning.\nRecently, large language models (LLMs) have shown strong comprehension and reasoning ability in many tasks via few-shot in-context learning (ICL) (Brown et al., 2020;Lewkowycz et al., 2022;Ma et al., 2023). We look forward to its effectiveness on KBQA.\nTherefore, how to apply LLMs to KBQA tasks is a topic worth discussing. (Omar et al., 2023) and (Tan et al., 2023) evaluate ChatGPT as a KBQA system by providing questions directly to LLMs. One challenge is that LLM outputs generative long text which is different from the gold answers of KBQA, consisting of phrases corresponding to entities in KB. Due to this difference, it is necessary to manually evaluate or adopt a strategy of matching answer phrases from the generative long text, which is costly or cumbersome to evaluate. Additionally, LLM fails to answer factual questions sometimes, which is called Hallucination (Ji et al., 2023). How to use the knowledge in KB to alleviate hallucination of LLM is another challenge.\nTo address the challenges above, we propose McL-KBQA, a framework to solve KBQA problems in the form of making choices. Based on an existing rank-based KBQA method, we transform the problem into a multiple-choice question format, and then construct prompts for LLM to make choices via ICL, as shown in Figure 1. The upper left part of the figure shows a rank-based KBQA method. Using a ranker to score logical forms from a pool of candidate logical forms which are enumerated by parsing questions and searching over KB, it selects the logical form with the highest score and fetches the answer. Based on scores from the ranker, we select a small set of candidate logical forms, fetch corresponding answers and convert the original question into a multiple-choice question form. Next, we randomly sample several examples and construct the ICL prompt input. After LLM inference, we match the option letter at the end of the generated text, using the logical form of this option letter as the result. However for complex questions with constraints, the performance of ICL still needs to be improved. Therefore, we add question explanations with chain-of-thought (CoT) to ICL prompt, identifying constraint information in questions. we expect LLM can choose the correct answer with the aid of constraint information obtained by CoT.\nThrough analyzing the results of preliminary experiments on 200 questions, we find that the overall performance of LLM is inferior to rank-based methods. However, there is still a considerable proportion of questions with higher accuracy than the rank-based methods. 
If the results of LLM can be effectively combined with existing KBQA methods, the performance will improve predictably. To complement the advantages of rank-based methods and LLM results, we adopt a result fusion strategy: evaluating the degree of certainty for the ranker in candidate ranking and using the LLM result as a substitute for questions with low certainty.\nIn summary, our contributions include:\n• We propose a KBQA framework with LLM via ICL to answer multiple-choice questions, which are transformed from the original question using an existing rank-based KBQA method. In addition, We use chain-of-thought to get question explanations for ICL examples, which achieve further improvement.\n• We adopt a simple but effective result fusion strategy to complement the advantages of existing method and LLM results.\n• Experiments on WebQSP and GrailQA datasets demonstrate the effectiveness of our framework, especially under the few-shot setting.\n2 Related Work" }, { "figure_ref": [], "heading": "Knowledge base question answering", "publication_ref": [ "b2", "b15", "b4", "b8" ], "table_ref": [], "text": "Most state-of-the-art KBQA methods are based on semantic parsing (Chen et al., 2021;Ye et al., 2021;Gu and Su, 2022). Specifically, they enumerate candidate logical forms based on the entity in the question and then apply a ranker to score every candidate, choosing one logical form with the highest score to find the answer. we refer to them as rankbased methods here. However, sufficient training data is necessary for rank-based methods to achieve competitive performance. (Li et al., 2023) first apply ICL for KBQA task in few-shot settings. It generates logical forms drafts with LLM, and then binds entities and schema items to KB iteratively until an executable one can be found." }, { "figure_ref": [], "heading": "In-Context Learning", "publication_ref": [ "b1", "b10", "b0", "b7", "b13", "b11", "b9", "b6" ], "table_ref": [], "text": "In-context learning (ICL) with LLMs (Brown et al., 2020) is about applying LLM to new tasks without updating the parameters, only providing a few demonstrations of input-output pairs at inference time. It has been found to be competitive in a broad range of tasks including information extraction (Ma et al., 2023), machine translation (Agrawal et al., 2022), numerical reasoning (Lewkowycz et al., 2022) and semantic parsing (Shin and Van Durme, 2022).\nMany studies focused on prompt construction to achieve better performance. (Min et al., 2022) shows the effectiveness of constructing prompts using an input-label pairing format, and (Liu et al., 2022) experiment with the number of examples provided, as well the idea of retrieving relevant examples to a test input to construct the prompt with. (Lampinen et al., 2022) suggests that incorporating explanatory task instructions in context can improve performance." }, { "figure_ref": [], "heading": "KBQA with In-Context Learning", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Preliminaries and Ranker", "publication_ref": [ "b15" ], "table_ref": [], "text": "A knowledge base (KB) consists of a set of entities E and relations R. Knowledge can be represented as a set of fact triples (s, p, o), where s and o are entities from E and p is a relation from R. 
For question q, the goal of KBQA is to return a set of entities E A ⊂ E as the answer to q.\nWe select a wild used rank-based method to provide candidates for LLM, consisting of two steps.\nEnumerate Candidate Given entities detected in the question, we query the knowledge base starting from every entity for paths reachable within two hops. we save the paths into logical forms, which constitute a set of logical form candidates C = {c i } n i=1 .\nRanker Following the setting in Ye et al. (2021), We train a BERT-based ranker to score every logical form candidate. Given the question q and a logical form candidate c i ∈ C, we concatenate q and c i as the input of a BERT-based encoder, taking the output logit as the similarity between them:\ns(q, c i ) = Liner(BERT([q; c i ]))(1)\nwhere BERT denotes [CLS] representation of input; Liner is a linear layer reducing representation to similarity score. We randomly sample negative logical form candidates during training without using the bootstrap strategy. We select the logical form candidate with the highest score as ranker result lf ranker :\nlf ranker = arg max c i ∈C s(q, c i ) (2)" }, { "figure_ref": [], "heading": "Make Choice by LLM via ICL", "publication_ref": [], "table_ref": [], "text": "We reformulate question q into the form of multiple-choice question q choice . Using LLM via in-context learning to solve q choice by selecting one option A opt from given options, we can obtain the answer to q by returning the answer corresponding to A opt ." }, { "figure_ref": [], "heading": "Reduce the number of candidates", "publication_ref": [], "table_ref": [], "text": "Due to the huge number of candidates (up to thousands), it is impossible to use every candidate for building multiple-choice questions q choice . Hence We use Ranker to score every candidate c i ∈ C, select top k logical form candidate for the next step.\nWe mark this smaller candidate set as C k ." }, { "figure_ref": [ "fig_0" ], "heading": "Prompt Construction", "publication_ref": [], "table_ref": [], "text": "Based on question q and candidates C k , we formulate multiple-choice question q choice and construct prompts for LLM. The prompt has three parts: Task Description, In-Context Example, and Incomplete Entry. Figure 2 is an example of prompt.\nTask Description is a short description of the task. We draft a simple version without making too many attempts.\nIn-Context Example consists of a question, options, and answer. We select question q for incontext examples by random sampling from the development set and make sure the gold logic form lf g is in candidates set C k . Based on C k , we build options for q. For lf i ∈ C k , we shift lf i into query and get the corresponding entities set E i by execute the query in KB. We control the size of E i smaller than 5 for cost consideration. E i is consist of entity IDs like \"m.01428y\". To make good use of LLMs," }, { "figure_ref": [], "heading": "Prompt", "publication_ref": [], "table_ref": [], "text": "[ICL]: Here are the answers for the problems in the exam.\n[CoT]: Can you give an explaination to a question in my exam? I need to think step by step like this:\nStep 1. Identify the question and its main focus.\nStep 2. Identify constrain in the question, for example: time, type, …\nStep 3. take constrain into consideration, give the answer." }, { "figure_ref": [], "heading": "Task Description", "publication_ref": [], "table_ref": [], "text": "Question: what guitar did clapton play ?\nOptions: [A] England [B] Piano, Guitar, ... 
[C] Gibson SG, Fender Stratocaster ... [ICL]: The answer is therefore: [C] [CoT]: Explanation: Step1 … , Step2 … ,\nStep3. The answer is therefore: [C]" }, { "figure_ref": [], "heading": "In-Context Example", "publication_ref": [], "table_ref": [], "text": "Question: what does jamaican people speak ?\nOptions: [A] Kingston [B] Leroy Sibbles, ... [C] Jamaican\nCreole English Language, Jamaican English ..." }, { "figure_ref": [], "heading": "[ICL]:", "publication_ref": [], "table_ref": [], "text": "The answer is therefore:\n[CoT]: Explanation: we fetch the surface name set for e ∈ E i as N i like \"Jamaican English\" rather than IDs. We build the option context of lf i by joining n ∈ N i with comma and attaching it to an option letter opt l like \"[A]\"." }, { "figure_ref": [], "heading": "Incomplete Entry", "publication_ref": [], "table_ref": [], "text": "Combining the question q and all options, we get the multiple-choice question q choice . Finally, we use the option letter corresponding to lf g as the answer of q choice ." }, { "figure_ref": [], "heading": "Incomplete Entry", "publication_ref": [], "table_ref": [], "text": "The composition is similar to the in-context example. The questions here are those need to be answered. Options are built out of entity names fetched from KB based on lf i ∈ C k , similar to the in-context example. As the answer part, we leave it blank for LLM to complete, like \"the answer is therefore\"." }, { "figure_ref": [ "fig_0" ], "heading": "Question Explanation with Chain of Thought", "publication_ref": [ "b17" ], "table_ref": [], "text": "Inspired by previous work (Zhang et al., 2022) with question explanation, we propose a new form of prompt: use LLM to generate question explanation with CoT to analyze the question and choose one option letter in the end. As shown in Figure 2 (marked with prefix [CoT]), we adopt a new task description to generate an explanation for the question, following 3 steps as a guide:\nStep 1. identify the main focus of the question; Step 2. identify constrain in question; Step 3. give the selected answer. For the explanation to question in in-context example, we obtain it by LLM via zero-shot CoT, based on the task definition above. As to incomplete entry, it ends like \"explanation for problem\" instead for LLM to complete." }, { "figure_ref": [], "heading": "Inference", "publication_ref": [], "table_ref": [], "text": "We feed the prompt input prompt q to LLM and get the output text T. For the convenience of matching results, we set a stop flag \"]\" where LLM will stop generating further tokens. This makes T ends like \"the answer is therefore [A\". We match an option letter A opt for the multiple-choice question at the end of T. We obtain the logical form corresponding to A opt as the LLM result lf llm .\nT = LLM(prompt q )\n(3)\nlf llm = Match(T )(4)\nFor a small part of questions, LLM fail to give an available A opt , we use lf ranker as a substitute." }, { "figure_ref": [], "heading": "Result Fusion", "publication_ref": [ "b10" ], "table_ref": [], "text": "Inspired by (Ma et al., 2023), we fuse the ranker and LLM results. Given question q and its candidates C, we use ranker to score c i ∈ C as shown in (1) and adopt the maximum score among all candidates as confidence score s(q) of q. We set a threshold λ to s(q). For questions with s(q) lower than λ, we use LLM results, otherwise using Ranker results. 
\ns(q) = max_{c_i ∈ C} s(q, c_i)    (5)\nlf_fuse = lf_llm if s(q) < λ, otherwise lf_ranker    (6)" }, { "figure_ref": [], "heading": "Experiment Settings", "publication_ref": [], "table_ref": [], "text": "Metrics Consistent with previous work, we use F1 as the evaluation metric on WebQSP. On GrailQA, we use the official metrics Exact Match (EM) and F1-score (F1)." }, { "figure_ref": [], "heading": "Few-Shot Setting", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We train Ranker with different percentages of the train set, selecting the checkpoint with the best F1 on the local development set and evaluating on the local test set. The percentage and size of the few-shot train set on the two datasets are shown in Table 1. For few-shot settings on GrailQA, we only report overall F1 & EM.\nMethod We report the results of the following methods.\n• rank: The result of the rank-based method, referred to as rank for short. We use Ranker to score every candidate and select the top-1 candidate to fetch the answer.\n• ICL: Let the LLM answer multiple-choice questions with the base prompt.\n• CoT: Similar to ICL, but uses the prompt in the form of question explanation with CoT.\n• w/ fuse: Result fusion between rank and LLM (ICL, CoT).\nIn the inference step, we leverage ChatGPT (gpt-3.5-turbo-0301) from the OpenAI API as our LLM with the temperature set to 0. In the result fusion step, we set λ based on the proportion of questions involved in the fusion, which is set to 5%, in order to control the scale of result fusion. Specifically, if the overall performance of rank/LLM is better, we use the result of rank for the 95%/5% of questions with higher confidence scores and use the result of the LLM for the remaining questions. For the few-shot settings on the two datasets, we report average results over 5 random seeds." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Main Result", "publication_ref": [], "table_ref": [ "tab_2", "tab_3", "tab_4" ], "text": "Results on WebQSP are shown in Table 2. In the two few-shot settings (5%, 10%), ICL is significantly superior to Ranker. Notably, ICL surpasses Ranker by 9.83 with 5% training data. However, when the training data is more sufficient (30%, 50%, 100%), ICL is inferior to Ranker. Moreover, as the amount of training data increases, the gap between ICL and Ranker becomes larger, increasing from 0.79 (30%) to 4.78 (100%).\nAs for ICL w/ fuse (fusing Ranker and ICL), the result achieves stable improvements over both Ranker and ICL. In the three settings (30%, 50%, 100%) where ICL is inferior to Ranker, ICL w/ fuse surpasses Ranker with an average improvement of 0.80. It is worth noting that we only select 5% of the total questions to use the ICL results in these three groups.\nCoT exceeds ICL in all settings, with an average improvement of 1.21, and the overall trend is similar to that of ICL. It is inferior to rank in settings where the training data is sufficient (50%, 100%). After result fusion (CoT w/ fuse), it can also exceed rank. The advantage of CoT w/ fuse over ICL w/ fuse is more obvious at lower ratios of the training set, and the difference is smaller when the training data is sufficient.\nOverall results on GrailQA are shown in Table 3. Performance is consistent with our analysis on WebQSP. In order to evaluate the generalization ability, we report the results of the three levels on the full train set (100%) of GrailQA in Table 4.
Our method outperforms rank by 1.85 EM/1.94 F1 at the Compositional level and 2.68 EM/3.17 F1 at the Zero-Shot level, respectively. It is also competitive at the I.I.D. level, slightly inferior to rank in EM but superior to it in F1. The above results show that our method has strong generalization ability and remains competitive on I.I.D. questions at the same time." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Comparing Outputs of Ranker and LLMs", "publication_ref": [], "table_ref": [], "text": "We carefully compare the performance of rank and LLM outputs (ICL version) on two settings of the WebQSP dataset (5% and 100%). Figure 3 shows the percentage of four types of output situations on the test set of WebQSP:\n• top left: F1 of the rank and LLM outputs are equal but not 0,\n• top right: rank is better,\n• bottom left: LLM is better,\n• bottom right: their F1 are both 0.\nMost of the time, the LLM outputs are consistent with those of rank. In the few-shot setting, the LLM is better than rank on 19.28% of questions. It is worth noting that, although the F1 of the LLM output is overall inferior to rank in the full setting (100%), it still performs better than rank on 11.59% of questions. This is similar to the rank outputs under few-shot settings, where the outputs of 8.60% of questions are better than those of the LLM. This indicates that combining the outputs of rank and LLM efficiently can improve performance." }, { "figure_ref": [ "fig_1" ], "heading": "Effectiveness of Result Fusion", "publication_ref": [], "table_ref": [ "tab_2", "tab_3", "tab_2", "tab_5" ], "text": "The experiment results in Table 2 and Table 3 demonstrate that there is a stable improvement from result fusion under every setting. We select WebQSP 5%/100% as examples of the few-shot/full settings to analyze the effect of result fusion in more detail. The specific settings are consistent with Table 2. As shown in Table 5, we report the proportion of questions in the fused results that use the higher/lower performing result among rank and LLM. At the 5% setting, the overall F1 of the LLM is higher than that of rank, with 19.28% H questions (\"LLM better\" in the left part of Figure 3). After result fusion, the proportion of H questions increases by 0.61% (19.89% - 19.28%). As for the full setting, result fusion brings an increase of 2.01% (17.69% - 15.68%) in H questions. This indicates that our result fusion strategy can complement the advantages of rank and LLM, selecting the answer with higher performance between the two for more questions, and achieving overall performance improvement."
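To make the confidence-based fusion rule of Eqs. (5)-(6) concrete, the following is a minimal sketch of how the switch between ranker and LLM results could be implemented. The function and argument names (fuse_result, ranker_score, threshold) are illustrative assumptions, not identifiers from the authors' released code.

```python
from typing import Callable, Sequence

def fuse_result(question: str,
                candidates: Sequence[str],
                ranker_score: Callable[[str, str], float],
                lf_llm: str,
                threshold: float) -> str:
    """Pick the LLM's logical form when the ranker is uncertain, else the ranker's top choice."""
    # Eq. (5): the ranker's confidence on this question is its best candidate score.
    scores = [ranker_score(question, c) for c in candidates]
    confidence = max(scores)
    lf_ranker = candidates[scores.index(confidence)]
    # Eq. (6): low confidence -> fall back to the LLM's multiple-choice result.
    return lf_llm if confidence < threshold else lf_ranker
```

In practice the threshold λ would be chosen, as described above, so that only a fixed fraction of questions (e.g. 5%) is routed to the less reliable system.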
}, { "figure_ref": [], "heading": "Does CoT Make a Different?", "publication_ref": [], "table_ref": [ "tab_2", "tab_3", "tab_8" ], "text": "We use two forms of prompts (ICL and CoT) for LLM to do multiple-choice questions. It can be seen in the main result (Table 2 andTable 3) that CoT shows stable improvements compared to ICL.\nTo clarify the strengths and weaknesses of CoT, We conduct a case study on the output of LLM using CoT prompt (Table 6). In example (a), the question explanation generated by CoT can mention \"2013\" as a time constraint to the question. CoT can also identify the answer type to help with selection: \"location\" is the answer type provided by CoT in (b), which can help to exclude option [B]. However, the information provided by CoT may not help selection in some cases. As shown in example (c), CoT provides the answer type \"location\", but each option is related to a location. It may also fail when CoT introduces wrong information such as \"time period\" in example (d) " }, { "figure_ref": [], "heading": "Match Option from LLM Output", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "Given LLM output, we automatically match an option letter A opt from the end of output using a regular expression, and obtain the corresponding logical form as the LLM result based on A opt . We analyze states of our automatic option letter matching results on the test set of WebQSP dataset, as shown in Table 7. For ICL and CoT prompt inputs, we can match valid options (Match) with proportions of 96.36% and 93.52% respectively. This indicates that our matching method can extract valid results of LLM in most cases. For questions that failed to match the options (Fail), we check the Fail cases of ICL and CoT in the full train setting. Every case of ICL (a) regards the correct answer is not being provided, and giving alternative answers sometimes. As for cases of CoT can be divided into four types: type (a) which is introduced above (40.90%); (b) return more than one option letter (16.67%); (c) give option context rather than option letter (16.67%).\nIt is worth noting that there are a few questions that match an option letter not provided in the options list (OOL), with 0.14% in ICL and 0.06% in CoT. Due to the limited number of such cases, we can analyze each of them. With the aid of the question explanation generated by CoT, we believe it is another expression of LLM that all options provided are incorrect." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose McL-KBQA framework for knowledge base question answering. We transform the original question into a multiple-choice question with a rank-based existing KBQA method and use LLM via ICL to answer the question by making a choice. In addition, we use chain-ofthought to get question explanations for example in ICL. Finally, we adopt a simple but effective result fusion strategy to complement the advantages of the rank-based method and LLM results. The experimental results on two datasets, WebQSP and GrailQA, suggest the effectiveness of our framework, especially under the few-shot setting." } ]
Question answering over knowledge bases (KBQA) aims to answer factoid questions with a given knowledge base (KB). Due to the large scale of the KB, annotated data cannot cover all fact schemas in the KB, which poses a challenge to the generalization ability of methods that require a sufficient amount of annotated data. Recently, LLMs have shown strong few-shot performance in many NLP tasks. We expect LLMs to help existing methods improve their generalization ability, especially in low-resource situations. In this paper, we present McL-KBQA, a framework that incorporates the few-shot ability of LLMs into the KBQA method via ICL-based multiple choice, thereby improving the effectiveness of the QA task. Experimental results on two KBQA datasets demonstrate the competitive performance of McL-KBQA, with strong improvements in generalization. We hope to explore a new direction for QA tasks, starting from KBQA in conjunction with LLMs: how to generate answers normatively and correctly with strong generalization.
Make a Choice! Knowledge Base Question Answering with In-Context Learning
[ { "figure_caption": "Figure 2 :2Figure 2: An example of prompt. Consist of three parts: task description, in-context example, and incomplete entry. We have two forms of prompt, ICL and CoT. The different parts between them are marked with two prefixes [ICL] and [CoT].", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Comparison between rank and LLM (ICL) result on WebQSP dataset with the percentage of four output situations. In few-shot setting (5%), LLM mostly improves or keeps the result, occasionally introducing errors.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Examples of CoT Output(a) Question: What team does jeremy lin play for 2013? Options: [A] Houston Rockets [B] Vive Targi Kielce [C] New York Knicks, Houston Rockets [D] 762195", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "3: The answer is therefore [A (b) Question: Where george lopez was born? Options: [A] San Fernando High School [B] 1961-04-23 [C] Mission Hills, Los Angeles [D] Mission Hills Output:", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "3: The answer is therefore [D (c) Question: Where are boeing headquarters? Options: [A] Seattle [B] United States of America [C] Chicago [D] King County", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Question: what does jamaican people speak ?Exist MethodMake Choice by LLMEnumerate Candidatem.01428y, m.04ygk0Jamaican English, ...m.059g4North America(JOIN (R location.country.languages_spoken))......(JOIN (R base.aareas.administrative_area.adjectival_form))(JOIN (R base.locations.countries.continent)) (JOIN (R location.country.official_language))What does jamaican people speak? Options: [A] Jamaican English, ...[B] North AmericaLLM...Number Reduce[C] ... [D] ... Answer: [C]… the answer is [C] InferenceRankerHere are the answers for the problems in the exam.(JOIN (0.12 (JOIN (R base.aareas.administrative_area.adjectival_form)) The answer is therefore ... Choose from the following options: [A]... 0.24 (JOIN (R location.country.official_language)) Problem n. What does jamaican people speak? 0.06 (JOIN (R base.locations.countries.continent)) Problem 1. What currency does brazil use ? Choose from the following options: [A] ... The answer is therefore [B] ... ...(JOIN (R location.country.official _language))", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Percentage and size of few-shot train sets on WebQSP and GrailQA, arranged by size from small to large. The percentage here is an approximate value.", "figure_data": "6)", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "F1 scores on WebQSP Implementation Details We have 4 options for multiple-choice questions based on candidates provided by the rank-based method. 
For prompt construction, We randomly sample 2 exemplary questions from the development sets of WebQSP and GrailQA respectively to build in-context examples.", "figure_data": "5%10% 30% 50% 100%rank48.73 56.09 63.95 67.49 69.61ICL58.56 61.11 63.17 63.85 64.83w/ fuse 58.75 61.38 64.62 68.43 70.42CoT59.78 61.90 64.09 65.53 66.25w/ fuse 59.79 62.08 65.00 68.45 70.32", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Overall EM and F1 on GrailQA (local dev).", "figure_data": "Methods0.5%1%5%10%100%EMF1EMF1EMF1EMF1EMF1rank44.71 49.96 50.60 56.17 60.70 66.26 62.50 67.51 62.92 67.78ICL41.19 47.84 41.51 47.97 42.02 48.87 43.35 49.52 43.14 49.37w/ fuse 49.08 54.76 53.28 59.59 61.62 67.95 63.57 69.18 64.70 69.97CoT44.26 50.71 44.91 51.48 45.12 52.03 46.39 52.48 46.04 52.18w/ fuse 49.20 54.82 53.37 59.62 61.68 68.05 63.69 69.32 64.64 69.93OverallI.I.D.CompositionalZero-ShotEMF1EMF1EMF1EMF1rank62.92 67.78 71.27 74.57 56.27 60.45 62.04 67.85ICL43.14 49.37 48.43 53.38 37.65 42.89 43.11 50.31w/fuse 64.70 69.97 71.02 74.84 57.99 62.29 64.72 71.02CoT46.04 52.18 50.06 55.22 39.63 44.87 46.94 53.88w/fuse 64.64 69.93 70.89 74.86 58.12 62.39 64.61 70.90", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results of three levels on GrailQA (local dev) with the full train set (100%).", "figure_data": "settingsHL5%19.89% 7.99%100% 17.69% 9.58%", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Statistics of result fusion on the test set of WebQSP dataset (ICL w/fuse), reporting proportion(%) of: H: the fused result is the one with higher F1 among rank and LLM. L: use the result with lower F1.", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Case Study: Examples of LLM output with CoT. The best option is marked in bold. CoT is able to provide information such as constraint (a) or answer type (b) to help with selection. However, the provided information can not help selection if all options match it (c). Also, CoT may introduce wrong information sometimes (d).", "figure_data": "", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": ", which leads to an incorrect selection of time-related option[C]. proportion(%) of different states of automatic option letter matching result from LLM outputs on test set of WebQSP, reporting average results of different settings. ICL and CoT: LLM output of two prompt input forms. Match: match a valid option letter, Fail: fail to match an option letter, OOL: match an option letter but not provided in the options list.", "figure_data": "Match Fail OOLICL 96.36 3.50 0.14CoT 93.52 6.43 0.06", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" } ]
Chuanyuan Tan; Yuehe Chen; Wenbiao Shao; Wenliang Chen; Zhefeng Wang; Baoxing Huai; Min Zhang
[ { "authors": "Sweta Agrawal; Chunting Zhou; Mike Lewis; Luke Zettlemoyer; Marjan Ghazvininejad", "journal": "", "ref_id": "b0", "title": "Incontext examples selection for machine translation", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Shuang Chen; Qian Liu; Zhiwei Yu; Chin-Yew Lin; Jian-Guang Lou; Feng Jiang", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "ReTraCk: A flexible and efficient framework for knowledge base question answering", "year": "2021" }, { "authors": "Yu Gu; Sue Kase; Michelle Vanni; Brian Sadler; Percy Liang; Xifeng Yan; Yu Su", "journal": "", "ref_id": "b3", "title": "Beyond iid: three levels of generalization for question answering on knowledge bases", "year": "2021" }, { "authors": "Yu Gu; Yu Su", "journal": "International Committee on Computational Linguistics", "ref_id": "b4", "title": "ArcaneQA: Dynamic program induction and contextualized encoding for knowledge base question answering", "year": "2022" }, { "authors": "Ziwei Ji; Nayeon Lee; Rita Frieske; Tiezheng Yu; Dan Su; Yan Xu; Etsuko Ishii; Ye ; Jin Bang; Andrea Madotto; Pascale Fung", "journal": "ACM Computing Surveys", "ref_id": "b5", "title": "Survey of hallucination in natural language generation", "year": "2023" }, { "authors": "Andrew Lampinen; Ishita Dasgupta; Stephanie Chan; Kory Mathewson; Mh Tessler; Antonia Creswell; James Mcclelland; Jane Wang; Felix Hill", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Can language models learn from explanations in context?", "year": "2022" }, { "authors": "Aitor Lewkowycz; Anders Andreassen; David Dohan; Ethan Dyer; Henryk Michalewski; Vinay Ramasesh; Ambrose Slone; Cem Anil; Imanol Schlag; Theo Gutman-Solo", "journal": "", "ref_id": "b7", "title": "Solving quantitative reasoning problems with language models", "year": "2022" }, { "authors": "Tianle Li; Xueguang Ma; Alex Zhuang; Yu Gu; Yu Su; Wenhu Chen", "journal": "", "ref_id": "b8", "title": "Few-shot in-context learning for knowledge base question answering", "year": "2023" }, { "authors": "Jiachang Liu; Dinghan Shen; Yizhe Zhang; Bill Dolan; Lawrence Carin; Weizhu Chen", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "What makes good in-context examples for GPT-3?", "year": "2022" }, { "authors": "Yubo Ma; Yixin Cao; Yongching Hong; Aixin Sun", "journal": "", "ref_id": "b10", "title": "Large language model is not a good few-shot information extractor, but a good reranker for hard samples!", "year": "2023" }, { "authors": "Sewon Min; Xinxi Lyu; Ari Holtzman; Mikel Artetxe; Mike Lewis; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Rethinking the role of demonstrations: What makes in-context learning work?", "year": "2022" }, { "authors": "Reham Omar; Omij Mangukiya; Panos Kalnis; Essam Mansour", "journal": "", "ref_id": "b12", "title": "Chatgpt versus traditional question answering for knowledge graphs: Current status and future directions towards knowledge graph chatbots", "year": "2023" }, { "authors": "Richard Shin; Benjamin Van Durme", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": 
"Fewshot semantic parsing with language models trained on code", "year": "2022" }, { "authors": "Yiming Tan; Dehai Min; Yu Li; Wenbo Li; Nan Hu; Yongrui Chen; Guilin Qi", "journal": "", "ref_id": "b14", "title": "Evaluation of chatgpt as a question answering system for answering complex questions", "year": "2023" }, { "authors": "Xi Ye; Semih Yavuz; Kazuma Hashimoto; Yingbo Zhou; Caiming Xiong", "journal": "", "ref_id": "b15", "title": "Rng-kbqa: Generation augmented iterative ranking for knowledge base question answering", "year": "2021" }, { "authors": "Wen-Tau Yih; Matthew Richardson; Christopher Meek; Ming-Wei Chang; Jina Suh", "journal": "", "ref_id": "b16", "title": "The value of semantic parse labeling for knowledge base question answering", "year": "2016" }, { "authors": "Zhuosheng Zhang; Aston Zhang; Mu Li; Alex Smola", "journal": "", "ref_id": "b17", "title": "Automatic chain of thought prompting in large language models", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 343.89, 115.58, 181.26, 10.63 ], "formula_id": "formula_0", "formula_text": "s(q, c i ) = Liner(BERT([q; c i ]))(1)" }, { "formula_coordinates": [ 3, 354.66, 242.81, 170.48, 19.05 ], "formula_id": "formula_1", "formula_text": "lf ranker = arg max c i ∈C s(q, c i ) (2)" }, { "formula_coordinates": [ 4, 100.2, 223.06, 151.66, 37.25 ], "formula_id": "formula_2", "formula_text": "Options: [A] England [B] Piano, Guitar, ... [C] Gibson SG, Fender Stratocaster ... [ICL]: The answer is therefore: [C] [CoT]: Explanation: Step1 … , Step2 … ," }, { "formula_coordinates": [ 4, 98.11, 311.64, 149.71, 6.77 ], "formula_id": "formula_3", "formula_text": "Options: [A] Kingston [B] Leroy Sibbles, ... [C] Jamaican" }, { "formula_coordinates": [ 4, 373.64, 376.35, 151.5, 10.77 ], "formula_id": "formula_4", "formula_text": "lf llm = Match(T )(4)" }, { "formula_coordinates": [ 4, 342.24, 572.74, 182.9, 56.76 ], "formula_id": "formula_5", "formula_text": "s(q) = max c i ∈C s(q, c i ) (5) lf f use = lf llm s(q) < λ lf ranker else(" } ]
10.18653/v1/2022.acl-long.579
2023-05-23
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b9", "b7", "b11", "b1", "b4", "b18", "b9", "b9", "b5", "b0", "b6", "b0" ], "table_ref": [], "text": "Open-domain conversation, which refers to conversing without any constraints on topic (e.g., chitchat), has been the subject of active research in recent years (Shuster et al., 2022b;Roller et al., 2020;Thoppilan et al., 2022;Freitas et al., 2020). A good open-domain conversational agent is expected to be engaging, knowledgeable, up-to-date, and personalized by remembering the user. Therefore it is key to seamlessly blend all desirable skills into a conversation system.\nTo address this, previous works utilized separate modules for internet search (Shuster et al., 2022a;Komeili et al., 2022) or memory generation (Zhong et al., 2022;Xu et al., 2022b,a). Recently, there have been studies aimed at unifying various conversation abilities into a single language model based on the modular approach (Shuster et al., 2022a,b). One notable recent work is BLENDERBOT3 (Shuster et al., 2022b), a modular system where a single transformer model is served for all modules. * *Work done during internship at Kakao Brain.\nRecently, there is a trend of providing personalized conversation experiences by memorizing individual user information (Xu et al., 2022b;Lu et al., 2022;Bae et al., 2022;Mazaré et al., 2018). However, as shown in Figure 1, the agent may encounter problems when the conversation lasts long, as information about a person is not static and changes over time. Therefore, managing memory based on the current state is one of key abilities for a good open domain conversational agent (Bae et al., 2022).\nIn this paper, we propose an effortless method to improve BLENDERBOT3 by integrating memory management capability into it. Since no general data exists for this purpose, we formally define a new task for memory management and present an automated method to create memory management datasets. Our method has following advantages:\n• Require little cost for data construction.\n• Do not affect BLENDERBOT3's performance in other tasks.\n• Need no additional costs for the external memory and model parameters, but rather reduces the costs.\nWe leverage publicly available datasets to construct memory management data, which can be easily scaled up and extended to other domains. Additionally, we report performance in other tasks of BLENDERBOT3 and external memory efficiency of our model. Experimental results show that our proposed model BLENDERBOT3-Mˆ3, which is multi-task trained with memory management, outperforms BLENDERBOT3 with a relative 4% performance gain in terms of F1 score. In addition, across all 67 tasks where BLENDERBOT3 is trained, we observe that the average PPL score of BLENDERBOT3-Mˆ3 increases 0.05 from the PPL of BLENDERBOT3, demonstrating the seamless integration of the new Without memory management, an agent cannot handle the situation of changing user's information. Also, the size of the memory is monotonically increased during the conversation. Right: With memory management, the agent can replaces the out-dated information with the new one. In addition, the size of the memory also tend to be suppressed. memory management task. 
Through these explorations, we demonstrate that the overall conversation performance is effortlessly, yet successfully improved by incorporating the memory management capability into the conversational agent.\n2 Related Work" }, { "figure_ref": [], "heading": "Unified Conversation Systems", "publication_ref": [ "b12", "b2", "b9" ], "table_ref": [], "text": "Recently, there have been studies for unifying different conversation skills, such as being engaging, factual, empathetic, civil and knowledgeable, into a single language model based on modular approach (Smith et al., 2020;Shuster et al., 2022a,b;Ung et al., 2021;Kim et al., 2022). Therefore, it is crucial to incorporate all desirable skills into a conversation system seamlessly. BLENDER-BOT3 (Shuster et al., 2022b) shows that equipping a single transformer model with all skills through multi-task training can be a promising direction." }, { "figure_ref": [], "heading": "Personalized Conversation Systems with Memory", "publication_ref": [ "b17", "b9", "b5", "b0", "b9" ], "table_ref": [], "text": "Providing personalized conversation experiences to users has been improved by memorizing their information. Conversational agents either keep the user's profile (Zhang et al., 2018) or extracted user information from conversation history (Xu et al., 2022b;Lu et al., 2022;Bae et al., 2022) to generate personalized response. Especially, Xu et al. (2022b) extract and store user information dynamically during conversation, allowing the agents to remember the user in long-term conversations." }, { "figure_ref": [], "heading": "Memory in Long-term Conversation Systems", "publication_ref": [ "b9", "b15", "b0" ], "table_ref": [], "text": "Xu et al. ( 2021) and Xu et al. (2022b) have tackled long-term conversation problem and released MSC and DuLeMon datasets, respectively. In MSC (Xu et al., 2021) dataset, sessions are annotated with summaries of previous sessions that may be useful in current conversations. It is intended to refer to the previous conversation with memory in longterm. However, MSC does not aim to reflect the dynamic feature of personal information. Work of Bae et al. (2022) represents the first attempt to address this problem, presenting the memory management task. However, since their approach classifies relationship of sentence pairs to compute memory management operation, the inference time increases as the length of memory increases." }, { "figure_ref": [], "heading": "Approach", "publication_ref": [], "table_ref": [], "text": "We improve BLENDERBOT3 by equipping it with memory management capability, which requires data of memory management for multi-task training along with other tasks. Since no general data exists for this purpose, we define a new task for memory management and present an effortless way to create the memory management dataset, which can be easily scaled up. In this paper, we apply this method to open-domain conversations, but it is generally extendable to memory management in other tasks." }, { "figure_ref": [], "heading": "Memory Management Task Definition", "publication_ref": [], "table_ref": [], "text": "During conversation, a conversational agent maintains natural language memory sentences M t = {m 1 , m 2 , • • • , m n } which consists of user information abstracted from the previous utterances. At time step t, we are given the memory M t and user information p t generated from utterance u t , each of which could be either user's utterance or bot's utterance. 
At the end of each turn, we aim to predict memory management operation op. We define the management operation set as O = {APPEND, PASS, REPLACE m i } where m i is an entry of the memory M t and define this task as Model(M t , p t ) → op where op ∈ O. Consequently, the model is required to determine whether to add p t , not add p t , or replace m i with p t by considering all memory M t and p t holistically." }, { "figure_ref": [], "heading": "Memory Management Data Curation", "publication_ref": [ "b13" ], "table_ref": [], "text": "For memory management training, we need ⟨M t , p t , op⟩ triples. As there is no existing dataset providing the triple, we construct it in an automated way with existing datasets. We reinterpret publicly available DNLI (Welleck et al., 2018) dataset, that is designed for detecting textual entailment in dialogs, for memory management operations. Given a DNLI triple ⟨s 1 , s 2 , relationship⟩, we utilize s1 as a memory sentence to be part of all memory M t and s 2 as p t , newly generated information that is to be added or not. Then, we reinterpret the relationship labels as memory operations as below:\n• Positive means s 1 and s 2 share relevant information. However, the relationship between s 1 and s 2 can be either s 1 entails s 2 , s 2 entails s 1 , or almost identical, depending on the amount of information. We classify DNLI data with positive labels into the three categories above and label them as PASS, REPLACE s 1 , and APPEND operations, respectively.\n• Negative means s 1 and s 2 are contradictory. We make up REPLACE s 1 operations with those data.\n• Neutral means s 1 and s 2 are not related. We construct data with APPEND operation using neutral data.\nThen, random memory sentences are collected to construct the memory M ′ of various lengths. We append s 1 to M ′ to comprise the entire memory M and finally obtain ⟨M, s2, op⟩ triples where M = M ′ ∪ {s1}." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we describe the experimental setups in detail and show results of our model BLENDERBOT3-Mˆ3, which is multi-task trained with memory management data." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "We leverage self-reproduced BLENDERBOT3 based on huggingface library as the LM backbone. The proposed model BLENDERBOT3-Mˆ3 is multi-task trained with constructed memory management data along with data for other conversation skills, starting from the r2c2 checkpoint. We use the Adam optimizer (Kingma and Ba, 2014) with a cosine learning rate 5e-5 and batch size 64. For comparison purposes, we additionally fine-tuned BLENDERBOT3 on the memory management dataset. Experiments are performed with BLENDERBOT3 3B, BLENDERBOT3 3B + MM fine-tuning, and BLENDERBOT3-Mˆ3 3B." }, { "figure_ref": [], "heading": "Training Dataset", "publication_ref": [], "table_ref": [], "text": "In addition to the existing 67 training datasets of 11 particular tasks to train BLENDERBOT3, Memory Management (MM) dataset is built following the creation method introduced in Section 3.2. The created MM dataset consists of 90,000 examples, and its operation labels (PASS, APPEND, REPLACE m i ) are equally distributed." 
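To make the curation procedure in Section 3.2 concrete, below is a minimal sketch of how a DNLI triple could be converted into a ⟨memory, new information, operation⟩ training example. The helper names and the positive_subtype argument (which of the three positive cases a pair falls into) are illustrative assumptions; this is not the authors' released preprocessing code.

```python
import random
from typing import Optional

def label_to_operation(relationship: str, positive_subtype: Optional[str] = None) -> str:
    """Map a DNLI relationship label to a memory-management operation."""
    if relationship == "negative":            # contradictory -> overwrite the old memory entry
        return "REPLACE s1"
    if relationship == "neutral":             # unrelated -> store the new information
        return "APPEND"
    # positive: depends on which sentence carries more information
    if positive_subtype == "s1_entails_s2":   # s2 adds nothing new -> do not store it
        return "PASS"
    if positive_subtype == "s2_entails_s1":   # s2 is more informative -> replace s1
        return "REPLACE s1"
    return "APPEND"                           # almost identical pair

def build_example(s1: str, s2: str, relationship: str,
                  memory_pool: list, max_extra: int = 4,
                  positive_subtype: Optional[str] = None) -> dict:
    """Pad the memory with random sentences so that memories of various lengths occur."""
    extra = random.sample(memory_pool, k=random.randint(0, max_extra))
    memory = extra + [s1]
    return {"memory": memory,
            "new_info": s2,
            "operation": label_to_operation(relationship, positive_subtype)}
```

A balanced training set can then be obtained by sampling triples until the three operation labels are equally represented, as in the 90,000-example MM dataset described above.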
}, { "figure_ref": [], "heading": "Evaluation Dataset", "publication_ref": [], "table_ref": [], "text": "The baseline model and proposed models are evaluated on Multi Session Chat (MSC) in an end-to-end manner for measuring overall conversation ability in long-term conversation, and on all 67 datasets corresponding to the training datasets for measuring performance of each task (module)." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "End-to-End Evaluation The comparison of overall conversation ability on MSC dataset which has long dialogues is shown in Table 1. Models are evaluated in an end-to-end manner. As shown in Table 1, throughout all sessions, we observe an overall increase in performance of BLENDERBOT3-Mˆ3 compared to BLENDER-BOT3, indicating a successful integration of the memory management capability followed by an improvement in overall conversation capability. Specifically, BLENDERBOT3-Mˆ3 outperforms BLENDERBOT3 with a relative 4% performance gain in terms of F1 score (average result from all sessions). It demonstrates that keeping memory up-to-date through memory management improves general conversation performance in long-term conversations, which further can be a cornerstone of lifelong conversations. Additionally, the numbers of entries in memory per every 100 turns is reported in Table 1, showing that memory management can effectively reduce external memory usage.\nEvaluation in other tasks One may consider that in exchange for the memory management ability, other existing abilities might be compromised. To deal with this concern, we directly measure the task performance of each module, which can be also inferred from Table 1 Perplexity of each module is reported in Table 2. Across all tasks, we observe that the average PPL score of BLENDERBOT3-Mˆ3 increases 0.05 from the PPL of BLENDERBOT3.\nThe above explorations show that the overall conversational performance has been improved by successfully incorporating the new memory management capability into conversational agents." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose an effortless way to improve open-domain conversation systems by integrating memory management into them. It is effortless in that we fully leverage existing data to construct memory management data in an auto-mated way which can be easily scaled up. Our proposed method does not affect BB3's performance in other tasks, and does not require additional costs for the external memory and model parameters, but rather reduces the costs. We show that in end-to-end conversation evaluation, our proposed model BLENDERBOT3-Mˆ3, which is multi-task trained with memory management, outperforms BLENDERBOT3 with a relative 4% performance gain in terms of F1 score. To deal with lifelong conversations where conversation histories and memories are accumulated endlessly, keeping memory up-to-date compactly via memory management can be a promising direction.\nLimitations and Future Work While constructing memory management dataset, we comprise memories with randomly selected user information sentences. This may cause inconsistent memories and different distributions from those encountered in the actual conversation flows. Therefore, a careful design of the memory could be a potential avenue for further improving model performance.\nEven with management, memory will constantly increase. 
Accumulated large memory is not suitable for the input of LM and occupies storage. However, our experiments only assume long conversations where the length of maximum memory is predetermined and fixed." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "Since the purpose of the conversation system is to interact with human, it is important to build reliable and controllable system. Also, as the proposed system stores the information of the user, protecting privacy is also important. Lastly, we will release the dataset and code for research purpose only to prevent from unintended usage of our product." } ]
Open-domain conversation systems integrate multiple conversation skills into a single system through a modular approach. One limitation of such systems, however, is the absence of a management capability for external memory. In this paper, we propose a simple method to improve BLENDERBOT3 by integrating memory management ability into it. Since no training data exists for this purpose, we propose an automated dataset creation method for memory management. Our method 1) requires little cost for data construction, 2) does not affect performance in other tasks, and 3) reduces external memory usage. We show that our proposed model BLENDERBOT3-Mˆ3, which is multi-task trained with memory management, outperforms BLENDERBOT3 with a relative 4% performance gain in terms of F1 score.
Effortless Integration of Memory Management into Open-Domain Conversation Systems
[ { "figure_caption": "Figure 1 :1Figure 1: Illustrative examples of open-domain conversation with an agent with/without memory management. Left:Without memory management, an agent cannot handle the situation of changing user's information. Also, the size of the memory is monotonically increased during the conversation. Right: With memory management, the agent can replaces the out-dated information with the new one. In addition, the size of the memory also tend to be suppressed.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Modular performance (PPL) for other conversation capabilities are shown. There is no significant differences between BLENDERBOT3 and BLENDERBOT3-Mˆ3.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Eunbi Choi; Kyoung-Woon On; Gunsoo Han; Sungwoong Kim; Daniel Wontae; Daejin Jo; Seung Eun Rho; Taehwan Kwon; Minjoon Seo
[ { "authors": "Sanghwan Bae; Donghyun Kwak; Soyoung Kang; Min Young Lee; Sungdong Kim; Yuin Jeong; Hyeri Kim; Sang-Woo Lee; Woo Chul Park; Nako Sung", "journal": "", "ref_id": "b0", "title": "Keep me updated! memory management in long-term conversations", "year": "2022" }, { "authors": "Minh-Thang Daniel De Freitas; David R Luong; Jamie So; Noah Hall; Romal Fiedel; Zi Thoppilan; Apoorv Yang; Gaurav Kulshreshtha; Yifeng Nemade; Lu; V Quoc; Le", "journal": "", "ref_id": "b1", "title": "Towards a human-like opendomain chatbot", "year": "2020" }, { "authors": "Hyunwoo Kim; Youngjae Yu; Liwei Jiang; Ximing Lu; Daniel Khashabi; Gunhee Kim; Yejin Choi; Maarten Sap", "journal": "", "ref_id": "b2", "title": "Prosocialdialog: A prosocial backbone for conversational agents", "year": "2022" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b3", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Mojtaba Komeili; Kurt Shuster; Jason Weston", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Internet-augmented dialogue generation", "year": "2022" }, { "authors": "Hongyuan Lu; Wai Leung William; Hong Lam; Helen Cheng; Meng", "journal": "", "ref_id": "b5", "title": "Partner personas generation for dialogue response generation", "year": "2022" }, { "authors": "Pierre-Emmanuel Mazaré; Samuel Humeau; Martin Raison; Antoine Bordes", "journal": "", "ref_id": "b6", "title": "Training millions of personalized dialogue agents", "year": "2018" }, { "authors": "Stephen Roller; Emily Dinan; Naman Goyal; Da Ju; Mary Williamson; Yinhan Liu; Jing Xu; Myle Ott; Kurt Shuster; Eric Michael Smith; Y.-Lan Boureau; Jason Weston", "journal": "", "ref_id": "b7", "title": "Recipes for building an open-domain chatbot", "year": "2020" }, { "authors": "Kurt Shuster; Mojtaba Komeili; Leonard Adolphs; Stephen Roller; Arthur Szlam; Jason Weston", "journal": "", "ref_id": "b8", "title": "Language models that seek for knowledge: Modular search & generation for dialogue and prompt completion", "year": "2022" }, { "authors": "Kurt Shuster; Jing Xu; Mojtaba Komeili; Da Ju; Eric Michael Smith; Stephen Roller; Megan Ung; Moya Chen; Kushal Arora; Joshua Lane", "journal": "", "ref_id": "b9", "title": "Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage", "year": "2022" }, { "authors": "Eric Michael; Smith ; Mary Williamson; Kurt Shuster; Jason Weston; Y-Lan Boureau", "journal": "", "ref_id": "b10", "title": "Can you put it all together: Evaluating conversational agents' ability to blend skills", "year": "2020" }, { "authors": "Romal Thoppilan; Daniel De Freitas; Jamie Hall; Noam M Shazeer; Apoorv Kulshreshtha; Heng-Tze; Alicia Cheng; Taylor Jin; Leslie Bos; Yu Baker; Yaguang Du; Hongrae Li; Huaixiu Lee; Amin Zheng; Marcelo Ghafouri; Yanping Menegali; Maxim Huang; Dmitry Krikun; James Lepikhin; Dehao Qin; Yuanzhong Chen; Zhifeng Xu; Adam Chen; Maarten Roberts; Yanqi Bosma; Chung-Ching Zhou; I A Chang; Willard James Krivokon; Marc Rusch; Kathleen S Pickett; Meredith Ringel Meier-Hellstern; Tulsee Morris; Renelito Delos Doshi; Toju Santos; Johnny Hartz Duke; Ben Søraker; Vinodkumar Zevenbergen; Mark Prabhakaran; Ben Díaz; Kristen Hutchinson; Alejandra Olson; Erin Molina; Josh Hoffman-John; Lora Lee; Ravindran Aroyo; Alena Rajakumar; Matthew Butryna; V O Lamm; Joseph Kuzmina; Aaron Fenton; Rachel Cohen; Ray Bernstein; Blaise Kurzweil; Claire Aguera-Arcas; Marian Cui; Ed Croak; Chi Huai Hsin; Quoc Le", "journal": "", 
"ref_id": "b11", "title": "Lamda: Language models for dialog applications", "year": "2022" }, { "authors": "Megan Ung; Jing Xu; Y-Lan Boureau", "journal": "", "ref_id": "b12", "title": "Saferdialogues: Taking feedback gracefully after conversational safety failures", "year": "2021" }, { "authors": "Sean Welleck; Jason Weston; Arthur D Szlam; Kyunghyun Cho", "journal": "", "ref_id": "b13", "title": "Dialogue natural language inference", "year": "2018" }, { "authors": "Jing Xu; Arthur Szlam; Jason Weston; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Beyond goldfish memory: Long-term open-domain conversation", "year": "2022" }, { "authors": "Jing Xu; Arthur D Szlam; Jason Weston", "journal": "", "ref_id": "b15", "title": "Beyond goldfish memory: Long-term open-domain conversation", "year": "2021" }, { "authors": "Xinchao Xu; Zhibin Gou; Wenquan Wu; Zheng-Yu Niu; Hua Wu; Haifeng Wang; Shihang Wang", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Long time no see! open-domain conversation with long-term persona memory", "year": "2022" }, { "authors": "Saizheng Zhang; Emily Dinan; Jack Urbanek; Arthur D Szlam; Douwe Kiela; Jason Weston", "journal": "", "ref_id": "b17", "title": "Personalizing dialogue agents: I have a dog, do you have pets too", "year": "2018" }, { "authors": "Hanxun Zhong; Zhicheng Dou; Yutao Zhu; Hongjin Qian; Ji-Rong Wen", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Less is more: Learning to refine dialogue history for personalized dialogue generation", "year": "2022" } ]
[]
10.18653/v1/2022.naacl-main.223
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b37", "b39", "b32", "b49", "b7" ], "table_ref": [], "text": "Part-of-Speech (POS) tagging is a process of assigning the most probable grammatical category * Equal contribution.\n(or tag) to each word (or token) in a given sentence of a particular natural language. POS tagging is one of the fundamental steps for many natural language processing (NLP) applications, including machine translation, parsing, text chunking, spell and grammar checking. While great strides have been made for (major) Indo-European languages such as English, French and German, work on the African languages is quite scarce. The vast majority of African languages lack annotated datasets for training and evaluating basic NLP systems.\nThere have been recent works on the development of benchmark datasets for training and evaluating models in African languages for various NLP tasks, including machine translation (NLLB-Team et al., 2022;Adelani et al., 2022a), text-tospeech (Ogayo et al., 2022;Meyer et al., 2022), speech recognition (Ritchie et al., 2022), sentiment analysis (Muhammad et al., 2022(Muhammad et al., , 2023)), news topic classification (Adelani et al., 2023), and named entity recognition (Adelani et al., 2021(Adelani et al., , 2022b)). However, there is no large-scale dataset for POS covering several African languages.\nTo tackle the data bottleneck issue for lowresource languages, recent work applied crosslingual transfer (Artetxe et al., 2020;Pfeiffer et al., 2016), nine African languages2 are represented. Still, only four of the nine languages have training data, i.e. Afrikaans, Coptic, Nigerian-Pidgin, and Wolof. In this work, we create the largest POS dataset for 20 African languages following the UD annotation guidelines." }, { "figure_ref": [], "heading": "Languages and their characteristics", "publication_ref": [ "b9", "b8", "b17" ], "table_ref": [ "tab_0" ], "text": "We focus on 20 Sub-Saharan African languages, spoken in circa 27 countries in the Western, Eastern, Central and Southern regions of Africa. An overview of the focus languages is provided in Table 1. The selected languages represent four language families: Niger-Congo (17), Afro-Asiatic (Hausa), Nilo-Saharan (Luo), and English Creole (Naija). Among the Niger-Congo languages, eight belong to the Bantu languages.\nThe writing system of our focus languages is mostly based on Latin script (sometimes with additional letters and diacritics). Besides Naija, Kiswahili, and Wolof, the remaining languages are all tonal. As far as morphosyntax is concerned, noun classification is a prominent grammatical feature for an important part of our focus languages. 12 of the languages actively make use of between 6-20 noun classes. This includes all Bantu languages, Ghomálá', Mossi, Akan and Wolof (Nurse and Philippson, 2006;Payne et al., 2017;Bodomo and Marfo, 2002;Babou and Loporcaro, 2016). Noun classes can play a central role in POS annotation. For instance, in isiXhosa, adding the class prefix can change the grammatical category of the word (Delman, 2016). All languages use the SVO word order, while Bambara additionally uses the SOV word order. Appendix A provides the details about the language characteristics." 
}, { "figure_ref": [], "heading": "Data and Annotation for MasakhaPOS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Data collection", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Table 1 provides the data source used for POS annotation -collected from online newspapers. The choice of the news domain is threefold. First, it is the second most available resource after the religious domain for most African languages. Second, it covers a diverse range of topics. Third, the news domain is one of the dominant domains in the UD. We collected monolingual news corpus with an open license for about eight African languages, mostly from local newspapers. For the remaining 12 languages, we make use of MAFAND-MT (Adelani et al., 2022a) translation corpus that is based on the news domain. While there are a few issues with translation corpus such as translationese effect, we did not observe serious issues in annotation. The only issue we experienced was a few misspellings of words, which led to annotators labeling a few words with the \"X\" tag. However, as a post-processing step, we corrected the misspellings and assigned the correct POS tags." }, { "figure_ref": [], "heading": "POS Annotation Methodology", "publication_ref": [ "b45" ], "table_ref": [], "text": "For the POS annotation task, we collected 1,500 sentences per language. As manual POS annotation is very tedious, we agreed to manually annotate 100 sentences per language in the first instance. This data is then used as training data for automatic POS tagging (i.e., fine-tuning RemBERT (Chung et al., 2021) PLM) of the remaining unannotated sentences. Annotators proceeded to fix the mistakes of the predictions (i.e. 1,400 sentences). This drastically reduced the manual annotation efforts since a few tags are predicted with almost 100% accuracy like punctuation marks, numbers and symbols. Proper nouns were also predicted with high accuracy due to the casing feature.\nTo support work on manual corrections of annotations, most of the languages used the IO Annotator 3 tool, a collaborative annotation platform for text and images. The tool provides support for multi-user annotations simultaneously on datasets. For each language, we hired three native speakers with linguistics backgrounds to perform POS an-3 https://ioannotator.com/ notation. 4 To ensure high-quality annotation, we recruited a language coordinator to supervise annotation in each language. In addition, we provided online support (documentation and video tutorials) to train annotators on POS annotation. We made use of the Universal POS tagset (Petrov et al., 2012), which contains 17 tags. 5 To avoid the use of spurious tags, for each word to be annotated, annotators have to choose one of the possible tags made available on the IO Annotator tool through a dropdown menu. For each language, annotation was done independently by each annotator. At the end of annotation, language coordinators worked with their team to resolve disagreements using IOAnnotator or Google Spreadsheet. We refer to our newly annotated POS dataset as MasakhaPOS." }, { "figure_ref": [], "heading": "Quality Control", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Computation of automatic inter-agreement metrics scores like Fleiss Kappa was a bit challenging due to tokenization issues, e.g. many compound family names are split. Instead, we adopted the tokenization defined by annotators since they are annotating all words in the sentence. 
Due to several annotation challenges as described in section 5, seven language teams (Ghomálá', Fon, Igbo, Chichewa chiShona, Kiswahili, and Wolof) decided to engage annotators on online calls (or in person discussions) to agree on the correct annotation for each word in the sentence. The other language teams allowed their annotators to work individually, and only discuss sentences on which they did not agree. Seven of the 13 languages achieved a sentence-level annotation agreement of over 75%. Two more languages (Luganda and isiZulu) have sentence-level agreement scores of between 64.0% to 67.0%. The remaining four languages (Ewe, Luo, Mossi, and Setswana) only agreed on less than 50% of the annotated sentences. This confirms the difficulty of the annotation task for many language teams. Despite this challenge, we ensured that all teams resolved all disagreements to produce high-quality POS corpus. Appendix B provides details of the number of agreed annotation by each language team.\nAfter quality control, we divided the annotated sentences into training, development and test splits consisting of 50%, 10%, 40% of the data respectively. We chose a larger test set proportion that is similar to the size of test sets in the UD, usually larger than 500 sentences. Table 1 provides the details of the data split. We split very long sentences into two to fit the maximum sequence length of 200 for PLM fine-tuning. We further performed manual checks to correct sentences split at arbitrary parts." }, { "figure_ref": [], "heading": "Annotation challenges", "publication_ref": [], "table_ref": [], "text": "When annotating our focus languages, we faced two main challenges: tokenization and POS ambiguities." }, { "figure_ref": [], "heading": "Tokenization and word segmentation", "publication_ref": [ "b35" ], "table_ref": [], "text": "In UD, the basic annotation units are syntactic words (rather than phonological or orthographical words) (Nivre et al., 2016). Accordingly, clitics need to be split off and contraction must be undone where necessary. Applying the UD annotation scheme to our focus languages was not straightforward due to the nature of those languages, especially with respect to the notion of word, the use of clitics and multiword units." }, { "figure_ref": [], "heading": "Definition of word", "publication_ref": [ "b12", "b10" ], "table_ref": [], "text": "For many of our focus languages (e.g. Chichewa, Luo, chiShona, Wolof and isiXhosa), it was difficult to establish a dividing line between a word and a phrase. For instance, the chiShona word ndakazomuona translates into English as a whole sentence ('I eventually saw him'). This word consists of several morphemes that convey distinct morphosyntactic information (Chabata, 2000): Nda-(subject concord), -ka-(aspect), -zo-(auxiliary), -mu-(object concord), -ona-(verb stem). This illustrates pronoun incorporation (Bresnan and Mchombo, 1987), i.e. subject and/or object pronouns appear as bits of morphology on a verb or other head, functioning as agreement markers. Naturally, one may want to split this word into several tokens reflecting the different grammatical functions. For UD, however, morphological features such as agreement are encoded as properties of words and there is no attempt at segmenting words into morphemes, implying that items like ndakazomuona should be treated as a single unit." 
}, { "figure_ref": [], "heading": "Clitics", "publication_ref": [], "table_ref": [], "text": "In languages like Hausa, Igbo, IsiZulu, Kinyarwanda, Wolof and Yorùbá, we observed an extensive use of cliticization. Function words such as prepositions, conjunctions, auxiliaries and determiners can attach to other function or content words. For example, the Igbo contracted form yana consists of a pronoun (PRON) ya and a coordinating conjunction (CCONJ) na. Following UD, we segmented such contracted forms, as they correspond to multiple (syntactic) words. However, there were many cases of fusion where a word has morphemes that are not necessarily easily segmentable. For instance, the chiShona word vave translates into English as 'who (PRON) are (AUX) now (ADV)'. Here, the morpheme -ve, which functions both as auxiliary and adverb, cannot be further segmented, even though it corresponds to multiple syntactic words. Ultimately, we treated the word vave as a unit, which received the AUX POS tag.\nIn addition, there were word contractions with phonological changes, posing serious challenges, as proper segmentation may require to recover the underlying form first. For instance, the Wolof contracted form \"cib\" (Dione, 2019) consists of the preposition ci 'in' and the indefinite article ab 'a'. However, as a result of phonological change, the initial vowel of the article is deleted. Accordingly, to properly segment the contracted form, it won't be sufficient to just extract the preposition ci because the remaining form b will not have meaning. Also, some word contractions are ambiguous. For instance, in Wolof, a form like geek can be split into gi 'the' and ak where ak can function as a conjunction 'and' or as a preposition 'with'." }, { "figure_ref": [], "heading": "One unit or multitoken words?", "publication_ref": [ "b29" ], "table_ref": [], "text": "Unlike the issue just described in 5.1.2, it was sometimes necessary to go in the other direction, and combine several orthographic tokens into a single syntactic word. Examples of such multitoken words are found e.g. in Setswana (Malema et al., 2017). For instance, in the relative structure ngwana yo o ratang (the child who likes ...), the relative marker yo o is a multitoken word that matches the noun class (class 1) of the relativized noun ngwana ('child'), which is subject of the verb ratang ('to like'). In UD, multitoken words are allowed for a restricted class of phenomena, such as numerical expressions like 20 000 and abbreviations (e. g.). We advocate that this restricted class be expanded to phenomena like Setswana relative markers." }, { "figure_ref": [], "heading": "POS ambiguities", "publication_ref": [], "table_ref": [], "text": "There were cases where a word form lies on the boundary between two (or more) POS categories." }, { "figure_ref": [], "heading": "Verb or conjunction?", "publication_ref": [ "b27", "b42", "b22", "b27" ], "table_ref": [], "text": "In quite a few of our focus languages (e.g. Yorùbá, Wolof), a form of the verb 'say' is also used as a subordinate conjunction (to mark out clause boundaries) with verbs of speaking. For example, in the Yorùbá sentence Olú gbàgbé pé Bolá tí jàde (lit. 'Olu forgot that Bola has gone') (Lawal, 1991), the item pé seems to behave both like a verb and a subordinate conjunction. On the one hand, because of the presence of another verb gbàgbé 'to forget', the pattern may be analyzed as a serial verb construction (SVC) (Oyelaran, 1982;Güldemann, 2008), i.e. 
a construction that contains sequences of two or more verbs without any syntactic marker of subordination. This would mean that pé is a verb. On the other hand, however, this item shows properties of a complementizer (Lawal, 1991). For instance, pé can occur in sentence initial position, which in Yorùbá is typically occupied by subordinating conjunctions. Also, unlike verbs, pé cannot undergo reduplication for nominalization (an ability that all Yorùbá verbs have). This seems to provide evidence for treating this item as a subordinate conjunction rather than a verb." }, { "figure_ref": [], "heading": "Adjective or Verb?", "publication_ref": [ "b31", "b54", "b31", "b6", "b53" ], "table_ref": [], "text": "In some of our focus languages, the category of adjectives is not entirely distinct morpho-syntactically from verbs. In Wolof and Yorùbá, the notions that would be expressed by adjectives in English are encoded through verbs (McLaughlin, 2004). Igbo (Welmers, 2018) and Éwé (McLaughlin, 2004) have a very limited set of underived adjectives (8 and 5, respectively). For instance, in Wolof, unlike in English, an 'adjective' like gaaw 'be quick' does not need a copula (e.g. 'be' in English) to function as a predicate. Likewise, the Bambara item téli 'quick' as in the sentence Sò ka téli 'The horse is quick' (Aplonova and Tyers, 2017) has adjectival properties, as it is typically used to modify nouns and specify their properties or attributes. It also has verbal properties, as it can be used in the main predicative position functioning as a verb. This is signaled by the presence of the auxiliary ka, which is a special predicative marker ka that typically accompanies qualitative verbs (Vydrin, 2018)." }, { "figure_ref": [], "heading": "Adverbs or particles?", "publication_ref": [], "table_ref": [], "text": "The distinction between adverbs and particles was not always straightforward. For instance, many of our focus languages have ideophones, i.e. words that convey an idea by means of a sound (often reduplicated) that expresses an action, quality, manner, etc. Ideophones may behave like adverbs by modifying verbs for such categories as time, place, direction or manner. However, they can also function as verbal particles. For instance, in Wolof, an ideophone like jërr as in tàng jërr \"very hot\" (tàng means \"to be hot\") is an intensifier that only cooccurs as a particle of that verb. Thus, it would not be motivated to treat it as another POS other than PART. Whether such ideophones are PART or ADV or the like varies depending on the language.\n6 Baseline Experiments" }, { "figure_ref": [], "heading": "Baseline models", "publication_ref": [ "b18", "b14" ], "table_ref": [], "text": "We provide POS tagging baselines using both CRF and multilingual PLMs. For the PLMs, we finetune three massively multilingual PLMs pre-trained on at least 100 languages (mBERT (Devlin et al., 2019), XLM-R (Conneau et al., 2020) CRF is one of the most successful sequence labeling approach prior to PLMs. CRF models the sequence labeling task as an undirected graphical model, using both labelled observations and contextual information as features. 
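As a concrete illustration of this baseline, the following is a minimal sketch of a feature-based CRF POS tagger with sklearn-crfsuite; the exact feature template used in our experiments is listed in the next paragraph, and the feature functions, hyper-parameter values, and toy examples below are illustrative rather than the precise configuration.

```python
# Minimal sketch of a feature-based CRF POS tagger with sklearn-crfsuite.
# Features approximate the template described in the next paragraph (current and
# neighbouring words, lowercase form, affixes, length, simple boolean cues);
# hyper-parameters (c1, c2, iterations) are illustrative values only.
import sklearn_crfsuite

def word_features(sent, i):
    word = sent[i]
    feats = {
        "word": word,
        "word.lower": word.lower(),
        "prefix3": word[:3],
        "suffix3": word[-3:],
        "length": len(word),
        "is_digit": word.isdigit(),
        "is_punct": all(not c.isalnum() for c in word),
        "BOS": i == 0,
        "EOS": i == len(sent) - 1,
    }
    for offset in (-2, -1, 1, 2):          # two previous and two next words
        j = i + offset
        if 0 <= j < len(sent):
            feats[f"word@{offset}"] = sent[j]
    return feats

def sent2features(sent):
    return [word_features(sent, i) for i in range(len(sent))]

# Toy training data: token lists with aligned UPOS tags (tags are illustrative).
train_sents = [["Sò", "ka", "téli"], ["Olú", "gbàgbé", "pé"]]
train_tags = [["NOUN", "AUX", "ADJ"], ["PROPN", "VERB", "SCONJ"]]

X_train = [sent2features(s) for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100, all_possible_transitions=True)
crf.fit(X_train, train_tags)
print(crf.predict(X_train))
```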
We implemented the CRF model using sklearn-crfsuite, using the following features: the word to be tagged, the two previous and the two next words, the word in lowercase, prefixes and suffixes of words, the length of the word, and other boolean features such as whether the word is a digit, a punctuation mark, the beginning of a sentence or the end of a sentence.\nMassively multilingual PLM We fine-tune mBERT, XLM-R (base & large), and RemBERT, which are pre-trained on 100-110 languages but only a few African languages. mBERT, XLM-R, and RemBERT were pre-trained on two (swa & yor), three (hau, swa, & xho), and eight (hau, ibo, nya, sna, swa, xho, yor, & zul) of our focus languages respectively. The three models were all pre-trained using masked language modelling (MLM); mBERT and RemBERT additionally use the next-sentence prediction objective." }, { "figure_ref": [], "heading": "Africa-centric PLMs", "publication_ref": [ "b46", "b55" ], "table_ref": [], "text": "We fine-tune AfriBERTa, AfroLM and AfroXLMR (base & large). The first two PLMs were pre-trained using XLM-R style pre-training; AfroLM additionally makes use of active learning during pre-training to address the data scarcity of many African languages. On the other hand, AfroXLMR was created through language adaptation (Pfeiffer et al., 2020) of XLM-R on 17 African languages plus "eng", "fra", and "ara". AfroLM was pre-trained on all our focus languages, while AfriBERTa and AfroXLMR were pre-trained on 6 (hau, ibo, kin, pcm, swa, & yor) and 10 (hau, ibo, kin, nya, pcm, sna, swa, xho, yor, & zul) respectively. We fine-tune all PLMs using the HuggingFace Transformers library (Wolf et al., 2020).\nFor PLM fine-tuning, we make use of a maximum sequence length of 200, a batch size of 16, gradient accumulation of 2, a learning rate of 5e-5, and 50 epochs. The experiments were performed using an Nvidia V100 GPU." }, { "figure_ref": [], "heading": "Baseline results", "publication_ref": [], "table_ref": [ "tab_1", "tab_2" ], "text": "Table 2 shows the results of training POS taggers for each focus language using the CRF and the PLMs. Surprisingly, the CRF model gave a very impressive result for all languages, only a few points below the best PLM (-3.7). In general, fine-tuning PLMs gave better results for all languages. The mBERT performance is (+1.3) better in accuracy than CRF. AfroLM and AfriBERTa are only slightly better than mBERT (< 1 point). One of the reasons for AfriBERTa's lower performance is that most of the languages (14 out of 20) are unseen during pre-training. On the other hand, AfroLM was pre-trained on all our focus languages but on a small dataset (0.73GB), which makes it difficult to train a good representation for each of the languages covered during pre-training. Furthermore, XLM-R-base gave slightly better accuracy on average than both AfroLM (+0.6) and AfriBERTa (+0.4) despite seeing fewer African languages. However, the performance of AfroXLMR-base exceeds that of XLM-R-base because it has been further adapted to 17 typologically diverse African languages, and its performance (±0.1) is similar to the larger PLMs, i.e. RemBERT and XLM-R-large.\nImpressive performance was achieved by the large versions of the massively multilingual PLMs, i.e. XLM-R-large and RemBERT, and by AfroXLMR (base & large): better than mBERT (+1.8 to +2.4) and better than CRF (+3.1 to +3.7). The gain of the large PLMs (e.g. AfroXLMR-large) over mBERT is larger for some languages, like bbj (+10.1), mos (+4.7), nya (+3.3), and zul (+3.3).
Overall, AfroXLMR-large achieves the best accuracy on average over all languages (89.4) because it has been pre-trained on more African languages with larger monolingual data, and because of its larger size. Interestingly, 11 out of 20 languages reach an impressive accuracy (> 90%) with the best PLM, which is an indication of consistent and high-quality POS annotation.\nAccuracy by tag distribution Table 3 shows the POS tagging results by tag distribution using our best model, AfroXLMR-large. The tags that are easiest to detect across all languages (with accuracy > 90%) are PUNCT, NUM, PROPN, NOUN, and VERB, while the most difficult are the SYM, INTJ, and X tags. The difficult tags are often infrequent, so they do not affect the overall accuracy much. Surprisingly, a few languages like Yorùbá and Kinyarwanda have very good accuracy on almost all tags except for the infrequent tags in the language.\n7 Cross-lingual Transfer" }, { "figure_ref": [], "heading": "Experimental setup for effective transfer", "publication_ref": [ "b16", "b46", "b5", "b28", "b46", "b23", "b5", "b46", "b47", "b47", "b5" ], "table_ref": [ "tab_1" ], "text": "The effectiveness of zero-shot cross-lingual transfer depends on several factors, including the choice of the best performing PLM, the choice of an effective cross-lingual transfer method, and the choice of the best source language for transfer. Oftentimes, the source language chosen for cross-lingual transfer is English, due to the availability of training data, which may not be ideal for distant languages, especially for POS tagging (de Vries et al., 2022). To further improve performance, parameter-efficient fine-tuning approaches (Pfeiffer et al., 2020;Ansell et al., 2022) can be leveraged with additional monolingual data for both source and target languages. We highlight how we combine these different factors for effective transfer below:\nChoice of source languages Prior work on the choice of source language for POS tagging shows that the most important features are geographical similarity, genetic similarity (or closeness in the language family tree) and word overlap between source and target language (Lin et al., 2019). We choose seven source languages for zero-shot transfer based on the following criteria: (1) availability of POS training data in UD; only three African languages satisfy this criterion (Wolof, Nigerian-Pidgin, and Afrikaans); (2) geographical proximity to African languages, which includes non-indigenous languages that have official status in Africa like English, French, Afrikaans, and Arabic; (3) language family similarity to target languages. The languages chosen are: Afrikaans (afr), Arabic (ara), English (eng), French (fra), Nigerian-Pidgin (pcm), Wolof (wol), and Romanian (ron). While Romanian does not satisfy the last two criteria, it was selected based on the findings of de Vries et al. (2022): Romanian achieves the best transfer performance to the largest number of languages in UD. Appendix C shows the data split for the source languages.\nParameter-efficient cross-lingual transfer The standard way of zero-shot cross-lingual transfer involves fine-tuning a multilingual PLM on the source language labelled data (e.g. on a POS task) and evaluating it on a target language. We refer to it as FT-Eval (or Fine-tune & Evaluate). However, the performance is often poor for languages unseen by the PLM and for distant languages.
One way to address this is to perform language adaptation using a monolingual corpus in the target language before fine-tuning on the downstream task (Pfeiffer et al., 2020), but this setup does not scale to many languages since it requires modifying all the parameters of the PLM and requires large disk space (Alabi et al., 2022). Several parameter-efficient approaches have been proposed, like Adapters (Houlsby et al., 2019) and Lottery-Ticket Sparse Fine-Tuning (LT-SFT) (Ansell et al., 2022); they are also modular and composable, making them ideal for cross-lingual transfer.\nHere, we make use of the MAD-X 2.0 adapter-based approach (Pfeiffer et al., 2020, 2021), an extension of MAD-X where the last adapter layers are dropped, which has been shown to improve performance, and the LT-SFT approach. The setup is as follows: (1) We train language adapters/SFTs using monolingual news corpora of our focus languages. We perform language adaptation on the news corpus to match the POS task domain, similar to (Alabi et al., 2022). We provide details of the monolingual corpora in Appendix E. (2) We train a task adapter/SFT on the source language labelled data using the source language adapter/SFT. (3) We substitute the source language adapter/SFT with the target language adapter/SFT to run prediction on the target language test set, while retaining the task adapter.\nChoice of PLM We make use of AfroXLMR-base as the backbone PLM for all experiments because it gave an impressive performance in Table 2, and because language adapters/SFTs are available for some of the languages from prior work (Pfeiffer et al., 2021;Ansell et al., 2022;Alabi et al., 2022). When a target language adapter/SFT of AfroXLMR-base is absent, an XLM-R-base language adapter/SFT can be used instead, since the two models share the same architecture and number of parameters, as demonstrated in Alabi et al. (2022). We did not find XLM-R-large based adapters and SFTs online (e.g. on https://adapterhub.ml/), and they are time-consuming to train, especially for high-resource languages like English." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [ "b28", "b16" ], "table_ref": [ "tab_4", "tab_4", "tab_4" ], "text": "Parameter-efficient fine-tuning is more effective Figure 1 shows the result of cross-lingual transfer from seven source languages with POS training data in UD, and their average accuracy on the 20 African languages. We report the performance of the standard zero-shot cross-lingual transfer with AfroXLMR-base (i.e. FT-Eval), and of the parameter-efficient fine-tuning approaches, i.e. MAD-X and LT-SFT. Our results show that MAD-X and LT-SFT give significantly better results than FT-Eval; the performance difference is over 10% accuracy on all languages. This shows the effectiveness of parameter-efficient fine-tuning approaches on cross-lingual transfer for low-resource languages, despite only using small monolingual data (433KB -50.2MB, as shown in Appendix E) for training target language adapters and SFTs. Furthermore, we find MAD-X to be slightly better than LT-SFT, especially when ron (+3.5), fra (+3.2), pcm (+2.9), and eng (+2.6) are used as source languages.\nThe best source language In general, we find eng, ron, and wol to be the better source languages for the 20 African languages. For FT-Eval, eng and ron have similar performance.
However, for LT-SFT, wol was slightly better than the other two, probably because we are transfering from an African language that shares the same family or geographical location to the target languages.\nFor MAD-X, eng was surprisingly the best choice.\nMulti-source fine-tuning leads to further gains Table 4 shows that co-training the best three source languages (eng, ron, and wol) leads to improved performance, reaching an impressive accuracy of 68.8% with MAD-X. For the FT-Eval, we performed multi-task training on the combined training set of the three languages. LT-SFT supports multi-source fine-tuning -where a task SFT can be trained on data from several languages jointly. However, MAD-X implementation does not support multi-source fine-tuning. We created our ver- sion of multi-source fine-tuning following these steps: (1) We combine all the training data of the three languages (2) We train a task adapter using the combined data and one of the best source languages' adapter. We experiment using eng, ron, and wol as source language adapter for the combined data. Our experiment shows that eng or wol achieves similar performance when used as language adapter for multi-source fine-tuning. We only added the result using wol as source adapter on Table 4. Appendix Appendix F provides more details on MAD-X multi-source fine-tuning.\nPerformance difference by language family Table 4 shows the transfer result per language for the three best source languages. wol has a better transfer performance to non-Bantu Niger-Congo languages in West Africa than eng and ron, especially for bbj, ewe, fon, ibo, mos, twi, and yor despite having a smaller POS training data (1.2k sentences) compared to ron (8k sentences) and eng (12.5k sentences). Also, wol adapter was trained on a small monolingual corpus (5.2MB). This result aligns with prior studies that choosing a source language from the same family leads to more effective transfer (Lin et al., 2019;de Vries et al., 2022). However, we find MAD-X to be more sensitive to the size of monolingual corpus. We obtained a very terrible transfer accuracy when we only train language adapter for wol on the news domain (2.5MB) i.e MAD-X (N), lower than FT-Eval. By additionally combining the news corpus with Wikipedia corpus (2.7MB) i.e MAD-X (N+W), we were able to obtain an impressive result comparable to LT-SFT. This highlight the importance of using larger monolingual corpus to train source language adapter. wol was not the best source language for Bantu languages probably because of the difference in language characteristics. For example, Bantu languages are very morphologically-rich while non-Bantu Niger-Congo languages (like wol) are not.\nOur further analysis shows that sna was better in transferring to Bantu languages. Appendix G provides result for the other source languages." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we created MasakhaPOS, the largest POS dataset for 20 typologically-diverse African languages. We showed that POS annotation of these languages based on the UD scheme can be quite challenging, especially with regard to word segmentation and POS ambiguities. We provide POS baseline models using CRF and by fine-tuning multilingual PLMs. We analyze cross-lingual transfer on MasakhaPOS dataset in single-source and multi-source settings. 
An important finding that emerged from this study is that choosing the appropriate transfer languages substantially improves POS tagging for unseen languages. The transfer performance is particularly effective when pretraining includes a language that shares typological features with the target languages." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "Some Language families in Africa not covered For example, Khoisan and Austronesian (like Malagasy). We performed extensive analysis and experiments on Niger-Congo languages but we only covered one language each in the Afro-asiatic (Hausa) and Nilo-Saharan (Dholuo) families.\nNews domain Our annotated dataset belong to the news domain, which is a popular domain in UD. However, the POS dataset and models may not generalize to other domains like speech transcript, conversation data etc.\nTransfer results may not generalize to all NLP tasks We have only experimented with POS task, the best transfer language e.g for non-Bantu Niger-Congo languages i.e Wolof, may not be the same for other NLP tasks.\n10 Ethics Statement or Broader Impact\nOur work aims to understand linguistic characteristics of African languages, we do not see any potential harms when using our POS datasets and models to train ML models, the annotated dataset is based on the news domain, and the articles are publicly available, and we believe the dataset and POS annotation is unlikely to cause unintended harm. Also, we do not see any privacy risks in using our dataset and models because it is based on news domain.\nTable 5 provides the details about the language characteristics. bh, ch, dl, dy, dz, gc, gq, gr, gx, hh, hl, kh, kr, lh, mh, ng, ngc, ngh, ngq, ngx, nkq, nkx, nh, nkc, nx, ny, nyh, ph, qh, rh, sh, th, ths, thsh, ts, tsh, ty, tyh, wh, xh, yh " }, { "figure_ref": [], "heading": "B Annotation Agreement", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C UD POS data split", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "Table 7 provides the UD POS corpus found online that we make use for determining the best transfer languages" }, { "figure_ref": [], "heading": "D Hyper-parameters for Experiments", "publication_ref": [ "b55", "b5" ], "table_ref": [], "text": "Hyper-parameters for Baseline Models The PLMs were trained for 20 epochs with a learning rate of 5e-5 using huggingface transformers (Wolf et al., 2020). We make use of a batch size of 16\nHyper-parameters for adapters We train the task adapter using the following hyper-parameters: batch size of 8, 20 epochs, \"pfeiffer\" adapter config, adapter reduction factor of 4 (except for Wolof, where we make use of adapter reduction factor of 1), and learning rate of 5e-5. For the language adapters, we make use of 100 epochs or maximum steps of 100K, minimum number of steps is 30K, batch size of 8, \"pfeiffer+inv\" adapter config, adapter reduction factor of 2, learning rate of 5e-5, and maximum sequence length of 256.\nHyper-parameters for LT-SFT We make use of the default setting used by the Ansell et al. (2022) paper." 
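To make the adapter-based recipe concrete, the following is a minimal sketch of MAD-X-style training and zero-shot substitution with the configurations listed above, assuming the AdapterHub adapter-transformers API and the publicly released Davlan/afro-xlmr-base checkpoint. It is an illustration rather than our exact training code: the adapter paths are placeholders and method names may differ slightly between library versions.

```python
# Sketch of the MAD-X recipe: train a task adapter on the source language,
# then swap language adapters for zero-shot transfer. Assumes the AdapterHub
# "adapter-transformers" fork; adapter paths below are hypothetical placeholders.
from transformers import AutoAdapterModel
from transformers.adapters import AdapterConfig
from transformers.adapters.composition import Stack

model = AutoAdapterModel.from_pretrained("Davlan/afro-xlmr-base")

# (1) Language adapters, trained beforehand with MLM on monolingual news text
#     using the "pfeiffer+inv" config (reduction factor 2), loaded from disk.
model.load_adapter("adapters/eng_news", load_as="eng")   # source language
model.load_adapter("adapters/wol_news", load_as="wol")   # target language

# (2) Task adapter + tagging head for the 17 UPOS tags ("pfeiffer" config,
#     reduction factor 4); only the task adapter is trainable.
model.add_adapter("pos", config=AdapterConfig.load("pfeiffer", reduction_factor=4))
model.add_tagging_head("pos", num_labels=17)
model.train_adapter("pos")
model.set_active_adapters(Stack("eng", "pos"))
# ... fine-tune on the eng UD POS training data (e.g. with AdapterTrainer) ...

# (3) Zero-shot transfer: keep the task adapter, swap in the target language adapter.
model.set_active_adapters(Stack("wol", "pos"))
# ... predict on the wol MasakhaPOS test set ...
```

The LT-SFT variant follows the same three steps conceptually, but composes sparse parameter-difference vectors instead of adapter modules.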
}, { "figure_ref": [], "heading": "E Monolingual data for Adapter/SFTs language adaptation", "publication_ref": [], "table_ref": [], "text": "Table 8 provides the UD POS corpus found online that we make use for determining the best transfer languages F MAD-X multi-source fine-tuning\nFigure 2 provides the result of MAD-X with different source languages, and multi-source finetuning using either eng, ron or wol as language adapter for task adaptation prior to zero-shot transfer. Our result shows that making of wol as lan-" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was carried out with support from Lacuna Fund, an initiative co-founded by The Rockefeller Foundation, Google.org, and Canada's International Development Research Centre. We are grateful to Sascha Heyer, for extending the ioAnnotator tool to meet our requirements for POS annotation. We appreciate the early advice from Graham Neubig, Kim Gerdes, and Sylvain Kahane on this project. David Adelani acknowledges the support of DeepMind Academic Fellowship programme. We appreciate all the POS annotators that contributed to this dataset. Finally, we thank the Masakhane leadership, Melissa Omino, Davor Orlic and Knowledge4All for their administrative šupport throughout the project." }, { "figure_ref": [], "heading": "Language", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Source", "publication_ref": [ "b43", "b40" ], "table_ref": [], "text": "Size (MB)\nBambara (bam) MAFAND-MT (Adelani et al., 2022a) 0.8MB Ghomálá' (bbj) MAFAND-MT (Adelani et al., 2022a) 0.4MB Éwé (ewe) MAFAND-MT (Adelani et al., 2022a) 0.5MB Fon (fon) MAFAND-MT (Adelani et al., 2022a) 1.0MB Hausa (hau)\nVOA (Palen-Michel et al., 2022) 46.1MB Igbo (ibo) BBC Igbo (Ogueji et al., 2021) 16.6MB Kinyarwanda (kin) KINNEWS (Niyongabo et al., 2020) 35.8MB Luganda (lug)\nBukedde (Alabi et al., 2022) 7.9MB Luo (luo)\nRamogi FM news (Adelani et al., 2021) and MAFAND-MT (Adelani et al., 2022a) 1.4MB guage adapters leads to slightly better accuracy (69.1%) over eng (68.7%) and ron (67.8%). But in general, either one can be used, and they all give an impressive performance over LT-SFT, as shown in Table 9." }, { "figure_ref": [], "heading": "G Cross-lingual transfer from all source languages", "publication_ref": [], "table_ref": [], "text": "Table 9 shows the result of cross-lingual transfer from each source language (afr, ara, eng, fra, pcm, ron, and wol) to each of the African languages. We extended the evaluation to include sna (since it was recommended as the best transfer language for a related task -named entity recogni-tion by (Adelani et al., 2022b)) by using the newly created POS corpus. We also tried other Bantu languages like kin and swa, but their performance was worse than sna. Our evaluation shows that sna results in better transfer to Bantu languages because of it's rich morphology. We achieved the best result for all languages using multi-source transfer from (eng, ron, wol, sna) languages. " } ]
In this paper, we present MasakhaPOS, the largest part-of-speech (POS) dataset for 20 typologically diverse African languages. We discuss the challenges in annotating POS for these languages using the UD (universal dependencies) guidelines. We conduct extensive POS baseline experiments using a conditional random field (CRF) and several multilingual pretrained language models. We also apply various cross-lingual transfer models trained with data available in UD. Evaluating on the MasakhaPOS dataset, we show that choosing the best transfer language(s) in both single-source and multi-source setups greatly improves the POS tagging performance of the target languages, in particular when combined with cross-lingual parameter-efficient fine-tuning methods. Crucially, transferring knowledge from a language that matches the language family and morphosyntactic properties of the target seems to be more effective for POS tagging in unseen languages.
MasakhaPOS: Part-of-Speech Tagging for Typologically Diverse African Languages
[ { "figure_caption": ", and Rem-BERT (Chung et al., 2021)), and three Africacentric PLMs like AfriBERTa (Ogueji et al., 2021), AfroXLMR (Alabi et al., 2022), and AfroLM (Dossou et al., 2022) pre-trained on several African languages. The baseline models are:", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Languages and Data Splits for MasakhaPOS Corpus. Language, family (NC: Niger-Congo), number of speakers, news source, and data split in number of sentences.", "figure_data": "AfricanNo. of#Average sentenceLanguageFamilyRegionSpeakers SourceTrain / dev / test Tokens Length (# Tokens)Bambara (bam)NC / MandeWest14M MAFAND-MT (Adelani et al., 2022a) 793/ 158/ 63440,13725.9Ghomálá' (bbj)NC / GrassfieldsCentral1M MAFAND-MT750/ 149/ 59923,11115.4Éwé (ewe)NC / KwaWest7M MAFAND-MT728/ 145/ 58228,15919.4Fon (fon)NC / Volta-NigerWest2M MAFAND-MT798/ 159/ 63749,46030.6Hausa (hau)Afro-Asiatic / Chadic West63M Kano Focus and Freedom Radio753/ 150/ 60141,34627.5Igbo (ibo)NC / Volta-NigerWest27M IgboRadio and Ka O . dI . Taa803/ 160/ 64252,19532.5Kinyarwanda (kin) NC / BantuEast10M IGIHE, Rwanda757/ 151/ 60440,55826.8Luganda (lug)NC / BantuEast7M MAFAND-MT733/ 146/ 58624,65816.8Luo (luo)Nilo-SaharanEast4M MAFAND-MT757/ 151/ 60445,73430.2Mossi (mos)NC / GurWest8M MAFAND-MT757/ 151/ 60433,79122.3Chichewa (nya)NC / BantuSouth-East14M Nation Online Malawi728/ 145/ 58224,16316.6Naija (pcm)English-CreoleWest75M MAFAND-MT752/ 150/ 60038,57025.7chiShona (sna)NC / BantuSouth12M VOA Shona747/ 149/ 59639,78526.7Kiswahili (swa)NC / BantuEast & Central98M VOA Swahili675/ 134/ 53940,78929.5Setswana (tsn)NC / BantuSouth14M MAFAND-MT753/ 150/ 60241,81127.9Akan/Twi (twi)NC / KwaWest9M MAFAND-MT775/ 154/ 61841,20326.2Wolof (wol)NC / SenegambiaWest5M MAFAND-MT770/ 154/ 61644,00228.2isiXhosa (xho)NC / BantuSouth9M Isolezwe Newspaper752/ 150/ 60125,31316.8Yorùbá (yor)NC / Volta-NigerWest42M Voice of Nigeria and Asejere875/ 174/ 69843,60124.4isiZulu (zul)NC / BantuSouth27M Isolezwe Newspaper753/ 150/ 60124,02816.0", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "85.4 88.8 90.2 92.8 78.1 97.3 90.0 88.0 91.1 80.5 90.8 88.1 93.2 82.2 84.9 92.9 88.1 94.2 89.4 88.8 RemBERT (575M) 90.6 82.6 88.9 90.8 93.0 79.3 98.0 90.3 87.5 90.4 82.4 90.9 89.1 93.1 83.6 86.0 92.1 89.3 94.7 90.2 89.1 Accuracy of baseline models on MasakhaPOS dataset . We compare several multilingual PLMs including the ones trained on African languages. 
Average is over 5 runs.", "figure_data": "Modelbambbj ewefon hauibokinlugluo mosnya pcmsna swatsntwiwolxhoyorzul AVGCRF89.1 78.9 88.0 88.1 89.8 75.2 95.3 88.3 84.6 86.0 77.7 85.6 85.9 89.3 81.4 81.5 91.0 81.8 92.0 84.285.7Massively-multilingual PLMsmBERT (172M)89.9 75.2 86.0 87.6 90.7 76.5 96.9 89.6 87.0 86.5 79.9 90.4 87.5 92.0 81.9 83.9 92.5 85.9 93.4 86.8 87.0XLM-R-base (270M)90.1 83.6 88.5 90.1 92.5 77.2 96.7 89.1 87.2 90.7 79.9 90.5 87.9 92.9 81.3 84.1 92.4 87.4 93.7 88.0 88.2XLM-R-large (550M) 90.2 Africa-centric PLMsAfroLM (270M)89.2 77.8 87.5 82.4 92.7 77.8 97.4 90.8 86.8 89.6 81.1 89.5 88.7 92.8 83.8 83.9 92.1 87.5 91.1 88.8 87.6AfriBERTa-large (126M)89.4 79.6 87.4 88.4 93.0 79.3 97.8 89.8 86.5 89.9 79.7 89.8 87.8 93.0 82.5 83.7 91.7 86.1 94.5 86.9 87.8AfroXLMR-base (270M)90.2 83.5 88.5 90.1 93.0 79.1 98.2 90.9 86.9 90.9 82.7 90.8 89.2 92.9 82.7 84.3 92.4 88.5 94.5 89.4 88.9AfroXLMR-large (550M) 90.5 85.3 88.7 90.4 93.0 78.9 98.4 91.6 88.1 91.2 83.2 91.2 89.5 93.2 83.0 84.9 92.9 88.7 95.0 90.1 89.4ADJ ADP ADV AUX CCONJ DET INTJ NOUN NUM PART PRON PROPN PUNCT SCONJ SYM VERBXACCbam41.077.072.082.091.00.091.090.095.097.082.0100.071.025.083.00.090.7bbj71.080.067.089.084.085.00.082.086.078.091.092.0100.088.086.085.6ewe72.083.057.094.089.0 100.091.091.087.090.093.0100.084.013.082.088.7fon91.088.069.075.094.096.091.090.089.095.091.0100.051.089.090.4hau86.080.071.096.089.084.00.094.098.095.076.098.099.086.096.0 62.092.9ibo95.089.056.098.076.079.00.070.095.00.098.095.0100.06.00.081.079.2kin86.099.091.00.0100.099.099.0 100.084.098.097.0100.097.00.099.00.098.4lug71.096.072.090.090.076.094.093.094.015.094.0100.089.092.091.6luo73.088.069.087.069.082.089.096.086.042.089.0100.094.0 100.086.00.088.2mos64.083.072.091.093.084.091.093.094.083.090.0100.095.092.091.2nya74.079.056.025.077.081.020.092.086.012.073.086.099.06.089.083.1pcm78.097.074.086.098.092.095.098.090.086.091.098.086.045.091.091.1sna51.094.044.087.089.083.095.096.00.078.092.099.058.060.094.089.4swa95.086.065.082.095.056.097.098.086.051.097.0100.091.095.00.093.1tsn57.080.082.042.053.078.017.094.097.062.076.091.099.018.00.095.00.082.4twi55.082.068.052.087.093.00.086.077.021.082.092.0100.09.00.087.084.8wol0.094.081.094.096.090.022.091.090.098.092.096.0100.085.062.094.092.9xho73.069.047.017.088.054.00.087.0 100.080.095.0100.057.00.090.088.3yor84.092.082.099.097.097.095.094.083.095.096.0100.098.095.00.095.1zul68.026.072.021.067.082.00.091.099.081.099.0100.091.0 100.091.0 96.090.0AVE69.283.168.469.186.479.015.990.893.469.779.092.899.768.033.890.4 19.889.4", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Tag", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": ". Several parameter-efficient approaches have been proposed Zero-shot cross-lingual transfer results using FT-Eval, LT-SFT and MAD-X. Average over 20 languages. Experiments performed using AfroXLMR-base. Evaluation metric is Accuracy.", "figure_data": "Accuracy45 50 55 60 65 7048.9 FT-Eval 60.5 61.448.4 LT-SFT5052.6 MAD-X63.46645.661.764.949.652.553.163.56748.164.865.756.766.868.830 40 35afrara 36.9engfra Source Languages pcm 32.8ronwoleng-ron-wolFigure 1:", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Cross-lingual transfer to MasakhaPOS . Zero-shot Evaluation using FT-Eval, LT-SFT, and MAD-X, with ron, eng, and wol as source languages. Experiments are based on AfroXLMR-base. Non-Bantu Niger-Congo languages highlighted with gray . 
AVG* excludes pcm and wol from the average since they are source languages.", "figure_data": "Methodbambbj ewefon hauibokinlugluo mosnya pcmsna swatsntwiwolxhoyorzul AVG AVG*eng as a source languageFT-Eval52.1 31.9 47.8 32.5 67.1 74.5 63.9 57.8 38.4 45.3 59.0 82.1 63.7 56.9 49.4 35.9 35.9 45.9 63.3 48.8 52.651.9LT-SFT67.9 57.6 67.9 55.5 69.0 76.3 64.2 61.0 74.5 70.3 59.4 82.4 64.6 56.9 49.5 52.1 78.2 45.9 65.3 49.8 63.461.5MAD-X62.9 58.5 68.7 55.8 67.0 77.8 70.9 65.7 73.0 71.8 70.1 83.2 69.8 61.2 49.8 53.0 75.2 57.1 66.9 60.9 66.064.5ron as a source languageFT-Eval46.5 30.5 37.6 30.9 67.3 77.7 73.3 56.9 36.7 40.6 62.2 78.9 66.3 61.0 55.8 35.7 33.8 49.6 63.5 56.353.152.7LT-SFT60.6 57.0 64.9 60.4 67.5 77.4 68.2 58.5 70.2 67.9 58.2 78.1 64.6 59.7 57.4 55.7 81.9 46.3 64.8 51.2 63.561.7MAD-X63.5 62.2 66.6 61.8 66.5 80.0 73.5 62.7 76.5 71.8 66.0 83.7 71.1 64.5 61.2 53.5 79.5 48.6 69.5 57.8 67.065.4wol as a source languageFT-Eval40.8 36.5 39.8 37.4 55.1 58.6 49.2 51.8 35.1 44.9 49.0 51.6 53.8 42.9 45.0 38.4 88.6 46.0 52.5 45.548.145.7LT-SFT (N)64.4 64.3 69.8 63.0 67.0 79.7 63.7 64.0 74.1 72.2 56.5 72.7 67.7 53.0 51.3 56.2 92.5 46.0 69.8 47.7 64.862.8MAD-X (N)46.6 41.8 47.2 37.8 53.9 51.8 41.0 39.0 46.5 44.0 38.3 40.2 44.3 38.8 44.6 40.1 85.6 39.2 46.4 36.0 45.243.2MAD-X (N+W) 61.7 63.6 68.9 63.1 66.8 77.0 67.8 69.1 73.7 71.3 63.2 75.1 68.9 55.8 50.7 54.9 90.4 49.6 70.0 51.7 65.763.8multi-source: eng-ron-wolFT-Eval44.2 36.3 39.3 39.3 69.4 78.5 70.6 59.2 35.5 46.8 60.9 81.4 65.8 58.5 53.8 38.8 89.1 48.8 65.2 53.556.753.6LT-SFT67.4 64.6 70.0 64.2 70.4 81.1 68.7 63.9 76.4 73.9 58.8 83.0 69.6 57.3 52.7 57.2 93.1 45.8 69.8 48.3 66.864.4MAD-X66.2 65.5 70.3 64.9 69.1 82.3 73.1 68.0 75.1 74.2 69.2 83.9 69.4 62.6 53.6 55.2 90.1 52.3 70.8 59.4 68.866.7", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "POS annotation agreements at the sentence level for 13 out of the 20 focus languages. ¯, kw, Îw, gw, ky, Îy, gy, sh, ts yes, 2 tones no", "figure_data": "No. of Latin Letters", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": ", nq, ph, hh, ny, gq, hl, bh, nj, ch, ngc, ngq, th, ngx, kl, ntsh, sh, kh, tsh, ng, nk, gx, xh, gc, mb, dl, nc, qh ", "figure_data": "yes, 2 tones noSVOagglutinativestrong prefixingactive, 17, zhYorùbá (yor)25 c, q, v, x, ze . , gb, s . , o .yes, 3 tones yesSVOisolatinglittle affixationvestigial, 2isiZulu (zul)55 -nx, tsyes, 3 tones noSVOagglutinativestrong prefixingactive, 17", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Linguistic Characteristics of the Languages", "figure_data": "No. agreedagreedNo. agreedagreedLang. annotation annotation (%) Lang. 
annotation annotation (%)bam1,09177.9 pcm1,07376.6ewe61644.0 tsn1,05824.4hau1,07977.1 twi1,30693.2kin1,12780.5 xho1,37898.4lug93766.9 yor1,05975.6luo56440.3 zul90564.6mos82949.2", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Number of sentences with agreed annotations and their percentages", "figure_data": "LanguageData Source# Train/# dev/ # testAfrikaans (afr)UD_Afrikaans-AfriBooms1,315/ 194/ 425Arabic (ara)UD_Arabic-PADT6,075/ 909/ 680English (eng)UD_English-EWT12,544/ 2001/ 2077French (fra)UD_French-GSD14,450/ 1,476/ 416Naija (pcm)UD_Naija-NSC7,279/ 991/ 972Romanian (ron) UD_Romanian-RRT8,043/ 752/ 729Wolof (wol)UD_Wolof-WTB1,188/ 449/ 470", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Data Splits for UD POS datasets used as source languages for cross-lingual transfer.", "figure_data": "", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" } ]
Cheikh M Bamba; David Ifeoluwa Adelani; Peter Nabende; Jesujoba O Alabi; Thapelo Sindane; Happy Buzaaba; Shamsuddeen Hassan Muhammad; Chris Chinenye Emezue; Perez Ogayo; Anuoluwapo Aremu; Catherine Gitau; Derguene Mbaye; Jonathan Mukiibi; Blessing Sibanda; Bonaventure F P Dossou; Andiswa Bukula; Rooweither Mabuya; Allahsera Auguste Tapo; Edwin Munkoh-Buabeng; Victoire Memdjokam Koagne; Ouoba Fatoumata; Kabore; Amelia Taylor; Kalipe † Godson; Tebogo Macucwa; Vukosi Marivate; Tajuddeen Gwadabe; Elvis Tchiaze Mboning; Ikechukwu Onyenwe; Gratien Atindogbe; Tolulope Anu Adelani; Idris Akinade; Olanrewaju Samuel; Marien Nahimana; Théogène Musabeyezu; Emile Niyomutabazi; Ester Chimhenga; Kudzai Gotosa; Patrick Mizha; Apelete Agbolo; Seydou Traore; Chinedu Uchechukwu; Aliyu Yusuf; Muhammad Abdullahi; Dietrich Klakow; Masakhane Nlp
[ { "authors": "David Adelani; Jesujoba Alabi; Angela Fan; Julia Kreutzer; Xiaoyu Shen; Machel Reid; Dana Ruiter; Dietrich Klakow; Peter Nabende; Ernie Chang; Tajuddeen Gwadabe; Freshia Sackey; F P Bonaventure; Chris Dossou; Colin Emezue; Michael Leong; Shamsuddeen Beukman; Guyo Muhammad; Oreen Jarso; Andre Yousuf; Gilles Niyongabo Rubungo; Eric Hacheme; Muhammad Umair Peter Wairagala; Benjamin Nasir; Tunde Ajibade; Yvonne Ajayi; Jade Gitau; Mohamed Abbott; Millicent Ahmed; Anuoluwapo Ochieng; Perez Aremu; Jonathan Ogayo; Fatoumata Mukiibi; Godson Ouoba Kabore; Derguene Kalipe; Mbaye; Auguste Allahsera; Victoire Tapo; Edwin Memdjokam Koagne; Valencia Munkoh-Buabeng; Idris Wagner; Ayodele Abdulmumin; Happy Awokoya; Blessing Buzaaba; Andiswa Sibanda; Sam Bukula; Manthalu", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "a. A few thousand translations go a long way! leveraging pre-trained models for African news translation", "year": "2022" }, { "authors": "David Adelani; Graham Neubig; Sebastian Ruder; Shruti Rijhwani; Michael Beukman; Chester Palen-Michel; Constantine Lignos; Jesujoba Alabi; Shamsuddeen Muhammad; Peter Nabende; M Cheikh; Andiswa Bamba Dione; Rooweither Bukula; Mabuya; F P Bonaventure; Blessing Dossou; Happy Sibanda; Jonathan Buzaaba; Godson Mukiibi; Derguene Kalipe; Amelia Mbaye; Fatoumata Taylor; Chris Kabore; Anuoluwapo Chinenye Emezue; Perez Aremu; Catherine Ogayo; Edwin Gitau; Victoire Munkoh-Buabeng; Memdjokam Koagne; Auguste Allahsera; Tebogo Tapo; Vukosi Macucwa; Mboning Marivate; Tajuddeen Tchiaze Elvis; Tosin Gwadabe; Orevaoghene Adewumi; Joyce Ahia; Neo Nakatumba-Nabende; Ignatius Lerato Mokono; Chiamaka Ezeani; Chukwuneke; Oluwaseun Mofetoluwa; Gilles Adeyemi; Idris Quentin Hacheme; Odunayo Abdulmumin; Oreen Ogundepo; Tatiana Yousuf; Dietrich Moteu; Klakow", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "MasakhaNER 2.0: Africa-centric transfer learning for named entity recognition", "year": "2022" }, { "authors": "David Ifeoluwa Adelani; Jade Abbott; Graham Neubig; D' Daniel; Julia Souza; Constantine Kreutzer; Chester Lignos; Happy Palen-Michel; Shruti Buzaaba; Sebastian Rijhwani; Stephen Ruder; Israel Mayhew; Shamsuddeen H Abebe Azime; Chris Chinenye Muhammad; Joyce Emezue; Perez Nakatumba-Nabende; Aremu Ogayo; Catherine Anuoluwapo; Derguene Gitau; Jesujoba Mbaye; Seid Alabi; Tajuddeen Muhie Yimam; Ignatius Rabiu Gwadabe; Ezeani; Andre Rubungo; Jonathan Niyongabo; Verrah Mukiibi; Iroro Otiende; Davis Orife; Samba David; Tosin Ngom; Paul Adewumi; Mofetoluwa Rayson; Gerald Adeyemi; Emmanuel Muriuki; Chiamaka Anebi; Nkiruka Chukwuneke; Eric Odu; Samuel Peter Wairagala; Clemencia Oyerinde; Tobius Siro; Temilola Saul Bateesa; Yvonne Oloyede; Victor Wambui; Deborah Akinode; Maurice Nabagereka; Ayodele Katusiime; Awokoya; Mboup Mouhamadane; Dibora Gebreyohannes; Henok Tilaye; Kelechi Nwaike; Degaga Wolde; Abdoulaye Faye; Blessing Sibanda; Orevaoghene Ahia; F P Bonaventure; Kelechi Dossou; Thierno Ogueji; Diop Ibrahima; Abdoulaye Diallo; Adewale Akinfaderin; Tendai Marengereke; Salomey Osei", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b2", "title": "MasakhaNER: Named entity recognition for African languages", "year": "2021" }, { "authors": "David Ifeoluwa Adelani; Marek Masiak; Jesujoba Israel Abebe Azime; Atnafu Oluwadara Alabi; Christine Lambebo Tonja; Odunayo Mwase; Ogundepo; F P Bonaventure; Akintunde Dossou; Doreen Oladipo; Chris Chinenye Nixdorf; Sana 
Emezue; K Blessing; Davis Sibanda; Lolwethu David; Jonathan Ndolela; Tunde Mukiibi; Tatiana Moteu Oluwaseyi Ajayi; Brian Ngoli; Abraham Toluwase Odhiambo; Nnaemeka C Owodunni; Obiefuna; Hassan Shamsuddeen; Saheed Muhammad; Mesay Salahudeen Abdullahi; Tajuddeen Gemeda Yigezu; Idris Gwadabe; Abdulmumin; Taye Mahlet; Oluwabusayo Bame; Iyanuoluwa Olufunke Awoyomi; Tolulope Shode; Anu Adelani; Abdulganiy Habiba; Abdul-Hakeem Kailani; Adetola Omotayo; Afolabi Adeeko; Anuoluwapo Abeeb; Olanrewaju Aremu; Clemencia Samuel; Wangari Siro; Onyekachi Kimotho; Chinedu E Raphael Ogbu; Chiamaka I Mbonu; Samuel Chukwuneke; Jessica Fanijo; Ojo; F Oyinkansola; Tadesse Awosan; Kebede Guge; Toadoum Sakayo; Pamela Sari; Freedmore Nyatsine; Oreen Sidume; Mardiyyah Yousuf; Ussen Oduwole; Kanda Kimanuka; Thina Patrick Tshinu; Siyanda Diko; Abdulmejid Nxakama; Sinodos Tuni Johar; Muhidin Gebre; Shafie Mohamed; Fuad Mire Abdi Mohamed; Moges Hassan; Evrard Ahmed Mehamed; Pontus Ngabire; Stenetorp", "journal": "", "ref_id": "b3", "title": "Masakhanews: News topic classification for african languages", "year": "2023" }, { "authors": "O Jesujoba; David Alabi; Marius Ifeoluwa Adelani; Dietrich Mosbach; Klakow", "journal": "International Committee on Computational Linguistics", "ref_id": "b4", "title": "Adapting pretrained language models to African languages via multilingual adaptive fine-tuning", "year": "2022" }, { "authors": "Alan Ansell; Edoardo Ponti; Anna Korhonen; Ivan Vulić", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Composable sparse fine-tuning for crosslingual transfer", "year": "2022" }, { "authors": "Ekaterina Aplonova; Francis Tyers", "journal": "", "ref_id": "b6", "title": "Towards a dependency-annotated treebank for bambara", "year": "2017" }, { "authors": "Mikel Artetxe; Sebastian Ruder; Dani Yogatama", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "On the cross-lingual transferability of monolingual representations", "year": "2020" }, { "authors": "Anta Cheikh; Michele Babou; Loporcaro", "journal": "Journal of African Languages and Linguistics", "ref_id": "b8", "title": "Noun classes and grammatical gender in wolof", "year": "2016" }, { "authors": "Adams Bodomo; Charles Marfo", "journal": "", "ref_id": "b9", "title": "The morphophonology of noun classes in dagaare and akan", "year": "2002" }, { "authors": "Joan Bresnan; Sam A Mchombo", "journal": "Language", "ref_id": "b10", "title": "Topic, pronoun, and agreement in chiche ŵa", "year": "1987" }, { "authors": "Ronald Cardenas; Ying Lin; Ji Heng; Jonathan May", "journal": "", "ref_id": "b11", "title": "A grounded unsupervised universal part-ofspeech tagger for low-resource languages", "year": "2019" }, { "authors": "Emmanuel Chabata", "journal": "Lexikos", "ref_id": "b12", "title": "The shona corpus and the problem of tagging", "year": "2000" }, { "authors": "Chung Hyung Won; Thibault Fevry; Henry Tsai; Melvin Johnson; Sebastian Ruder", "journal": "", "ref_id": "b13", "title": "Rethinking embedding coupling in pre-trained language models", "year": "2021" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Guy De Pauw; Naomi Maajabu; Peter Waiganjo Wagacha", "journal": "European 
Language Resources Association (ELRA", "ref_id": "b15", "title": "A knowledge-light approach to luo machine translation and part-of-speech tagging", "year": "2010" }, { "authors": "Martijn Wietse De Vries; Malvina Wieling; Nissim", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Make the best of cross-lingual transfer: Evidence from POS tagging with over 100 languages", "year": "2022" }, { "authors": "Xolani Delman", "journal": "", "ref_id": "b17", "title": "Development of Part-of-speech Tagger for Xhosa", "year": "2016" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "M Cheikh; Dione Bamba", "journal": "", "ref_id": "b19", "title": "Developing universal dependencies for wolof", "year": "2019" }, { "authors": "Jonas Cheikh M Bamba Dione; Sina Kuhn; Zarrieß", "journal": "", "ref_id": "b20", "title": "Design and development of part-of-speechtagging resources for wolof (niger-congo, spoken in senegal)", "year": "2010" }, { "authors": "F P Bonaventure; Atnafu Dossou; Oreen Lambebo Tonja; Salomey Yousuf; Abigail Osei; Iyanuoluwa Oppong; Oluwabusayo Shode; Chris C Olufunke Awoyomi; Emezue", "journal": "", "ref_id": "b21", "title": "Afrolm: A selfactive learning-based multilingual pretrained language model for 23 african languages", "year": "2022" }, { "authors": "Tom Güldemann", "journal": "De Gruyter Mouton", "ref_id": "b22", "title": "Quotative Indexes in African Languages. A Synchronic and Diachronic Survey", "year": "2008" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "", "ref_id": "b23", "title": "Parameter-efficient transfer learning for NLP", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b24", "title": "", "year": "" }, { "authors": "Olájídé Ishola; Daniel Zeman", "journal": "European Language Resources Association", "ref_id": "b25", "title": "Yorùbá dependency treebank (YTB)", "year": "2020" }, { "authors": "Mariya Koleva", "journal": "", "ref_id": "b26", "title": "Towards adaptation of nlp tools for closely-related bantu languages: Building a partof-speech tagger for zulu", "year": "2013" }, { "authors": "Adenike Lawal", "journal": "Studies in African Linguistics", "ref_id": "b27", "title": "Yoruba pe and ki verbs or complementizers", "year": "1991" }, { "authors": "Yu-Hsiang Lin; Chian-Yu Chen; Jean Lee; Zirui Li; Yuyan Zhang; Mengzhou Xia; Shruti Rijhwani; Junxian He; Zhisong Zhang; Xuezhe Ma; Antonios Anastasopoulos; Patrick Littell; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Choosing transfer languages for cross-lingual learning", "year": "2019" }, { "authors": "Gabofetswe Malema; Boago Okgetheng; Moffat Motlhanka", "journal": "International Journal on Natural Language Computing", "ref_id": "b29", "title": "Setswana part of speech tagging", "year": "2017" }, { "authors": "Gabofetswe Malema; Boago Okgetheng; Bopaki Tebalo; Moffat Motlhanka; Goaletsa Rammidi", "journal": "", "ref_id": "b30", "title": "Complex setswana parts of speech tagging", "year": "2020" }, { "authors": "Fiona Mclaughlin", "journal": "", "ref_id": "b31", "title": "Is there an adjective class in wolof. 
Adjective classes: A cross-linguistic typology", "year": "2004" }, { "authors": "Josh Meyer; David Adelani; Edresson Casanova; Alp Öktem; Daniel Whitenack; Julian Weber; Kabongo Salomon; Elizabeth Kabenamualu; Iroro Salesky; Colin Orife; Perez Leong; Chris Chinenye Ogayo; Jonathan Emezue; Salomey Mukiibi; Osei; Agbolo Apelete; Victor Akinode; Bernard Opoku; Olanrewaju Samuel; Jesujoba Alabi; Shamsuddeen Hassan; Muhammad ", "journal": "", "ref_id": "b32", "title": "BibleTTS: a large, high-fidelity, multilingual, and uniquely African speech corpus", "year": "2022" }, { "authors": "Shamsuddeen Hassan; Muhammad ; Idris Abdulmumin; Abinew Ali Ayele; Nedjma Ousidhoum; David Ifeoluwa Adelani; Seid Muhie Yimam; Ibrahim Sa; ; Ahmad; Meriem Beloucif; Saif Mohammad; Sebastian Ruder", "journal": "", "ref_id": "b33", "title": "Afrisenti: A twitter sentiment analysis benchmark for african languages", "year": "2023" }, { "authors": "Shamsuddeen Hassan; Muhammad ; David Ifeoluwa Adelani; Sebastian Ruder; Ibrahim Sa'id Ahmad; Idris Abdulmumin; Shehu Bello; Monojit Bello; Chris Choudhury; Saheed Chinenye Emezue; Anuoluwapo Salahudeen Abdullahi; Alípio Aremu; Pavel Jorge; Brazdil", "journal": "European Language Resources Association", "ref_id": "b34", "title": "NaijaSenti: A Nigerian Twitter sentiment corpus for multilingual sentiment analysis", "year": "2022" }, { "authors": "Joakim Nivre; Marie-Catherine De Marneffe; Filip Ginter; Yoav Goldberg; Jan Hajic; Christopher D Manning; Ryan Mcdonald; Slav Petrov; Sampo Pyysalo; Natalia Silveira", "journal": "", "ref_id": "b35", "title": "Universal dependencies v1: A multilingual treebank collection", "year": "2016" }, { "authors": "Andre Rubungo; Qu Niyongabo; Julia Hong; Li Kreutzer; Huang", "journal": "International Committee on Computational Linguistics", "ref_id": "b36", "title": "KINNEWS and KIRNEWS: Benchmarking cross-lingual text classification for Kinyarwanda and Kirundi", "year": "2020" }, { "authors": "Marta Nllb-Team; James Ruiz Costa-Jussà; Cross; Maha Onur Ccelebi; Kenneth Elbayad; Kevin Heafield; Elahe Heffernan; Janice Kalbassi; Daniel Lam; Jean Licht; Anna Maillard; Skyler Sun; Guillaume Wang; Alison Wenzek; Bapi Youngblood; Loïc Akula; Gabriel Mejia Barrault; Prangthip Gonzalez; John Hansanti; Semarley Hoffman; Jarrett; Ram Kaushik; Dirk Sadagopan; Shannon L Rowe; C Spruit; Pierre Tran; Necip Andrews; Shruti Fazil Ayan; Sergey Bhosale; Angela Edunov; Cynthia Fan; Vedanuj Gao; Francisco Goswami; Philipp Guzm'an; Alexandre Koehn; Christophe Mourachko; Safiyyah Ropers; Holger Saleem; Jeff Schwenk; Wang", "journal": "", "ref_id": "b37", "title": "No language left behind: Scaling human-centered machine translation", "year": "2022" }, { "authors": "", "journal": "Routledge", "ref_id": "b38", "title": "The Bantu Languages. Routledge Language Family Series", "year": "2006" }, { "authors": "Perez Ogayo; Graham Neubig; Alan W Black", "journal": "", "ref_id": "b39", "title": "Building African Voices", "year": "2022" }, { "authors": "Kelechi Ogueji; Yuxin Zhu; Jimmy Lin", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Small data? no problem! 
exploring the viability of pretrained multilingual language models for lowresourced languages", "year": "2021" }, { "authors": "Chinedu Ikechukwu E Onyenwe; Mark Uchechukwu; Hepple", "journal": "Association for Computational Linguistics and Dublin City University", "ref_id": "b41", "title": "Part-of-speech tagset and corpus development for igbo, an african", "year": "2014" }, { "authors": " Olasope O Oyelaran", "journal": "Studies in African Linguistics", "ref_id": "b42", "title": "On the scope of the serial verb construction in yoruba", "year": "1982" }, { "authors": "Chester Palen-Michel; June Kim; Constantine Lignos", "journal": "European Language Resources Association", "ref_id": "b43", "title": "Multilingual open text release 1: Public domain news in 44 languages", "year": "2022" }, { "authors": "", "journal": "Language Science Press", "ref_id": "b44", "title": "Diversity in African languages", "year": "2017" }, { "authors": "Slav Petrov; Dipanjan Das; Ryan Mcdonald", "journal": "European Language Resources Association (ELRA", "ref_id": "b45", "title": "A universal part-of-speech tagset", "year": "2012" }, { "authors": "Jonas Pfeiffer; Ivan Vulić; Iryna Gurevych; Sebastian Ruder", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer", "year": "2020" }, { "authors": "Jonas Pfeiffer; Ivan Vulić; Iryna Gurevych; Sebastian Ruder", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "UNKs everywhere: Adapting multilingual language models to new scripts", "year": "2021" }, { "authors": "Maria Edoardo; Goran Ponti; Olga Glavaš; Qianchu Majewska; Ivan Liu; Anna Vulić; Korhonen", "journal": "", "ref_id": "b48", "title": "XCOPA: A multilingual dataset for causal commonsense reasoning", "year": "2020" }, { "authors": "Sandy Ritchie; You-Chi Cheng; Mingqing Chen; Rajiv Mathews; Daan Van Esch; Bo Li; Khe Chai; Sim ", "journal": "", "ref_id": "b49", "title": "Large vocabulary speech recognition for languages of africa: multilingual modeling and self-supervised learning", "year": "2022" }, { "authors": "A Adedjouma; Sèmiyou; Mamoud A John Or Aoga; Igue", "journal": "Research Journal of Computer and Information Technology Sciences", "ref_id": "b50", "title": "Part-of-speech tagging of yoruba standard, language of niger-congo family", "year": "2012" }, { "authors": "Kathleen Siminyu; Godson Kalipe; Davor Orlic; Jade Z Abbott; Vukosi Marivate; Sackey Freshia; Prateek Sibal; Bhanu Bhakta Neupane; David Ifeoluwa Adelani; Amelia Taylor; Jamiil Toure Ali; Kevin Degila; Momboladji Balogoun; Ibrahima Thierno; Davis Diop; Chayma David; Hatem Fourati; Malek Haddad; Naski", "journal": "", "ref_id": "b51", "title": "Ai4d -african language program", "year": "2021" }, { "authors": "Aminu Tukur; Kabir Umar; Muhammad", "journal": "", "ref_id": "b52", "title": "Parts-of-speech tagging of hausa-based texts using hidden markov model", "year": "2020" }, { "authors": "Valentin Vydrin", "journal": "Rhema", "ref_id": "b53", "title": "Where corpus methods hit their limits: the case of separable adjectives in bambara", "year": "2018" }, { "authors": "Wm E Welmers", "journal": "University of California Press", "ref_id": "b54", "title": "African language structures", "year": "2018" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Remi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; 
Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander Lhoest; Rush", "journal": "", "ref_id": "b55", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "", "journal": "MAD", "ref_id": "b56", "title": "", "year": "" } ]
[]
10.18653/v1/N19-1388
2023-10-22
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b6", "b29", "b39", "b8", "b38", "b9", "b27", "b30", "b19", "b12", "b11", "b35", "b39", "b24", "b2" ], "table_ref": [], "text": "Multilingual models confer the benefit of facilitating cross-lingual learning; however, they also grapple with the issue of language interference (Conneau et al., 2020;Wang et al., 2020a;Shaham et al., 2022). Recent studies aim to alleviate negative language interference through the introduction of language-specific (LS) modules (Zhang et al., 2020;Fan et al., 2020;Zhang et al., 2021;Fan et al., 2021;Pires et al., 2023). In this setup, each language batch is processed through its designated module rather than a shared module. Although this approach is promising and barely inflates the number of FLOPs like Mixture-of-Experts (MoE) (Shazeer et al., 2017;Lepikhin et al., 2021), 2 the number of parameters becomes difficult to manage and sometimes impractical when working with a large variety of languages. This is because the fundamental element forming LS or MoE modules is typically the full-rank weight matrix derived from a densely connected layer, which causes a rapid increase in the number of parameters with a large number of languages or experts. 3 In this paper, we first scrutinize the parameter efficiency of language-specific modules from the perspective of using fewer parameters. Consequently, a necessary question arises (RQ1): can we approximate the original dense weight matrix using substantially fewer parameters? To answer this question, we propose novel and parameter-efficient method, Language-Specific computational cost may only come from communication among devices (such as ALLToALL) or gate routing. 3 Although MoE employs a routing mechanism to keep the number of experts smaller than the number of languages, the parameter cost remains substantial.\nMatrix Synthesis (LMS), which can achieve similar performance to switch transformer even with three to four times smaller LS parameters (as shown in Figure 1).\nThen, we further investigate parameter efficiency from the perspective of knowledge density in each LS module. Given recent discoveries that the performance improvement of sparsely activated models diminishes with an increase in the number of experts (Hoffmann et al., 2022;Gao et al., 2022;Xu et al., 2023), we hypothesize that knowledge in these experts (or LS modules) is over-estimated. Hence, we propose another question (RQ2): Could a single shared module encapsulate the same level of knowledge as language-specific modules? In addressing this question, we introduce the Fuse Distillation (FD) method to examine the feasibility of condensing the multilingual knowledge into a single module.\nOur main contributions are summarized as follows:\n• We propose the parameter-efficient and lightweight LMS method, which substantially outperforms previous LS methods or MoE with fewer than or the same number of parameters, e.g., +1.73 BLEU over Switch Transformer on OPUS-100 multilingual translation.\n• We introduce FD to condense multilingual knowledge from LS modules into a shared module. 
FD is able to use only 2M more parameters (1% increase) to achieve the 65% of performance gains from Switch Transformer which use 760M more parameters (314% increase) during inference.\n• LMS and FD show strong generalization performance among multiple tasks, including multilingual machine translation (MMT) (Zhang et al., 2020), multilingual named-entity recognition (MNER) (Pan et al., 2017), and multilingual question answering (MQA) (Artetxe et al., 2020)." }, { "figure_ref": [], "heading": "Lightweight LS Modules", "publication_ref": [], "table_ref": [], "text": "In this section, we address RQ1 by constructing LS modules with significantly fewer parameters." }, { "figure_ref": [], "heading": "Language-Specific Matrix Synthesis", "publication_ref": [ "b0", "b13", "b13" ], "table_ref": [], "text": "Language-specific modules are typically composed of linear projections, whose weights are fullrank matrices in previous studies. We propose the Language-specific Matrix Synthesis (LMS) method to form low-rank matrices to approximate the full-rank ones. This is inspired by the concept of \"intrinsic dimension\" in pre-trained language models (Aghajanyan et al., 2021;Hu et al., 2021) and \"intrinsic rank\" in trainable matrices, leading to the idea that features are learned in a subspace. Specifically, as shown in Figure 2, our LS matrix is derived from the multiplication of an LS 'vertical' matrix with an LS 'flat' matrix. Formally speaking, let W ∈ R r×c be a weight matrix in the model and we want to build parallel LS matrices which have the same size. Hence, for each language l i , i ∈ {1, 2, • • • , L} with L being the number of languages, there exists an LS vertical matrix\nW l i v ∈ R r×d and an LS flat matrix W l i f ∈ R d×c (d ≪ min(r, c\n)) that we use to approximate the full-rank matrix. Here, we propose two synthesis methods: language-wise and pair-wise synthesis.\nFigure 2: The difference between pair-and languagewise synthesis. Language-wise synthesis constructs a low-rank matrix using both the vertical and flat matrices derived from the same language. Conversely, pairwise synthesis formulates the matrix by combining the vertical matrix from the source language with the flat matrix from the target language.\nLanguage-Wise Synthesis Most multilingual tasks, such as conventional multilingual questionanswering, are characterized by a languagemonolithic nature: a single example only pertains to a single language, and examples from different languages build the multilingual data. Under such circumstances, a naive way to assemble a language-specific matrix for a given language, l i , is straightforwardly using its corresponding vertical and flat matrices, such that W l i = W l i v W l i f . Pair-Wise Synthesis Cross-lingual tasks like MMT can also be accomplished using languagewise synthesis, wherein the encoder uses the source language matrix and the decoder uses the target language matrix. However, we posit that this is not the optimal strategy for MMT tasks due to the lack of learning bilingual information. Motivated by this, we introduce a pair-wise synthesis method to accommodate the bilingual context in each example in MMT. In this strategy, the language-specific matrix is a composition of the vertical matrix from the source language l i and the flat matrix from the target language l j :\nW l i →l j = W l i v W l j\nf . The difference between the language-wise and pairwise synthesis approaches is depicted in Figure 2. 
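To make the distinction concrete, the following minimal PyTorch sketch (our illustration, not the authors' released code; the sizes, tensor names, and helper function are hypothetical) synthesizes the low-rank LS matrix under either mode for a batch with source language l_i and target language l_j:

```python
import torch

# Illustrative sizes only: L languages, an r x c projection, rank d.
L, r, c, d = 4, 2048, 512, 32

W_v = 0.02 * torch.randn(L, r, d)  # per-language "vertical" matrices (small random Gaussian init)
W_f = torch.zeros(L, d, c)         # per-language "flat" matrices (zero init, so W_ls starts at 0)

def synthesize_ls_matrix(src: int, tgt: int, pair_wise: bool = True) -> torch.Tensor:
    """Build the r x c language-specific matrix W_ls.

    language-wise: W_ls = W_v[src] @ W_f[src]
    pair-wise:     W_ls = W_v[src] @ W_f[tgt]
    """
    flat = W_f[tgt if pair_wise else src]
    return W_v[src] @ flat         # (r, d) @ (d, c) -> (r, c)
```

The synthesized matrix is not used on its own; it is added to the dense weight of the corresponding projection, as formalized in Equation 1.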
In Section 5, we will demonstrate that the pair-wise synthesis approach is more effective.\nAfter deriving a language-specific matrix, we incorporate it into the original full-rank matrix, as opposed to performing an isolated forward pass of the model like MoE and conventional LS methods. This approach stems from our hypothesis that the employment of low-rank matrices alone may not sufficiently facilitate the learning of features. Therefore, given an input x i associated with a source language l i and a target language l j (l i and l j are the same for language-monolithic tasks), our modified forward pass yields the output x o :\nx o = (W + W l i →l j )x i = (W + W l i v W l j f )x i .\n(1) 2.2 Where to Implement?\nWe primarily focus on incorporating languagespecific matrices generated using the LMS method into the linear projection of each feedforward network (FFN) layer in every transformer layer. Recall from earlier that r and c are the number of rows and columns in the matrix, and L is the number of languages. Thus, the total number of language-specific parameters added is given by 2L • N • d • (c + r), where N represents the number of layers. We also conduct an ablation study to examine the performance when implementing LMS in attention layers in Section 6. For initialization, we employ a random Gaussian distribution for vertical matrices and zeros for flat matrices suggested by Hu et al. (2021).\n3 Can We Fuse Multilingual Knowledge in A Single Module?\nIn this section, we introduce Fuse Distillation (FD) and use a preliminary experiment to answer RQ2: whether we can condense the multilingual knowledge from language-specific modules into a single module." }, { "figure_ref": [ "fig_1" ], "heading": "Fuse Distillation", "publication_ref": [ "b17", "b12", "b11", "b35" ], "table_ref": [], "text": "Let us first consider a language-(or task-) level MoE (Kudugunta et al., 2021), where we replace a single FFN layer with L FFN modules. L is the number of languages, as defined previously.\nThe slight difference from the original design is we discard the routing gate and make each expert language-specific, i.e., an expert only serves batches in its corresponding language. Given recent findings that model improvements diminish with an increasing number of experts (Hoffmann et al., 2022;Gao et al., 2022;Xu et al., 2023), we hypothesize that information contained in experts is sparse and can be condensed into a shared module.\nTo fuse knowledge from L FFN layers to the shared one, we propose the following training scheme and name this method Fuse Distillation: We first add an additional shared FFN parallel to an existing model with L FFN layers as shown in Figure 3. During training, each batch undergoes two forward passes and one backward pass. In the first forward pass, the batch is processed through its language-specific FFN module; in the second pass, the batch is routed through the shared FFN. To fuse the language-specific knowledge contained within the L FFN modules into the shared FFN module, a distillation loss between the outputs from the two forward passes is also incorporated:\nL f d = KL(g(p l ) ∥ p s ).\n(2) where p l denotes the probability output for the LS pass, and p s represents the shared pass output. The function g(•) signifies that gradients will not be traced back, so only the shared module learns from LS modules but LS ones do not learn from this loss. 
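A minimal PyTorch sketch of this distillation term (our own illustration; the variable names are hypothetical and the actual implementation may differ) is:

```python
import torch
import torch.nn.functional as F

def fuse_distillation_loss(logits_ls: torch.Tensor, logits_shared: torch.Tensor) -> torch.Tensor:
    """L_fd = KL( g(p_l) || p_s ) from Eq. 2.

    logits_ls:     output of the first (language-specific) forward pass
    logits_shared: output of the second (shared-module) forward pass
    Detaching the LS distribution plays the role of g(.), so gradients
    from this loss reach only the shared module.
    """
    p_ls = F.softmax(logits_ls, dim=-1).detach()
    log_p_shared = F.log_softmax(logits_shared, dim=-1)
    # F.kl_div(input=log q, target=p) computes KL(p || q)
    return F.kl_div(log_p_shared, p_ls, reduction="batchmean")
```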
The backward pass also involves optimizing the model by minimizing the Cross-Entropy loss (CE) between the target and predicted values (the regular training loss). Thus, the total loss is:\nL = 1 2 (CE(y ∥ p l ) + CE(y ∥ p s )) + L f d ,(3)\nwhere y denotes gold labels.\nThen, during the inference stage, we discard the LS modules. The model only forward passes the shared FFN for inference. To evaluate whether the shared FFN has effectively learned all LS information, we conduct a comparison between its results and those obtained via the routing through LS modules instead." }, { "figure_ref": [], "heading": "Preliminary Experiments", "publication_ref": [ "b22", "b25", "b28", "b23" ], "table_ref": [], "text": "Our preliminary experiments are conducted under three settings:\n(1) Naive MMT: A basic multilingual translation model is trained without any modifications.\n(2) FD: This setting utilizes our proposed fuse distillation method.\n(3) FD-LS: We train the model with the FD method, but during the inference stage, the input is processed through its language-specific FFN module instead of the shared module as the original language-level MoE did.\nWe carry out our experiments using the IWSLT benchmarks, focusing on the many-to-many translation model paradigm. Following Lin et al. (2021); Xu et al. (2022), we collect 8 Englishcentric language pairs from the IWSLT'14 dataset, with sizes ranging from 89K to 169K sentences. We train all methods with the same number of steps and leave detailed training settings in Appendix A. We report sacreBLEU scores (Papineni et al., 2002;Post, 2018) with the FLORES-200 tokenizer (NLLB Team et al., 2022)." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "Overview results of these 4 settings are shown in Table 1. The reported scores are the average of both xx→en and en→xx directions. As anticipated, after applying language-specific modules for each FFN layer, FD-LS has considerable enhancements over the naive MMT (+1.50 BLEU gains). Importantly, after discarding LS modules, FD only performs slightly worse than FD-LS (+1.17 vs. +1.50) with much fewer parameters for inference (48M vs. 149M). This observation underscores the feasibility of condensing multilingual knowledge into a single FFN module, thereby reducing the need of a large number of LS parameters for inference." }, { "figure_ref": [ "fig_2" ], "heading": "Combining LMS and FD", "publication_ref": [ "b14" ], "table_ref": [], "text": "We have shown the success of multilingual information condensation by fuse distillation. We are interested in further reducing the parameters needed by utilizing the language-specific matrix synthesis method during inference, so we then attempt to incorporate the FD method within LMS. Similar to Section 3.1, apart from the LS vertical and flat matrices, we introduce shared vertical and flat matrices, denoted as W shared v and W shared f , respectively. To employ the fuse distillation method, each batch is required to undergo two forward passes. The initial pass navigates through the LS matrix\nW + W l i v W l j\nf , while the subsequent pass traverses the shared matrix W + W shared v W shared f . These two passes generate two respective outputs, p l and p s . Given the common parameter W shared across both paths, we utilize symmetric KL divergence (Jiang et al., 2020) for distillation, as opposed to the traditional KL divergence:\nL ′ f d = 1 2 (KL(p l ∥ p s ) + KL(p s ∥ p l )). 
(4)\nThus, the backward pass optimizes both the standard prediction loss and the fuse distillation Table 1: Average BLEU on IWSLT'14 many-to-many translation. Our proposed FD is able to fuse the majority of knowledge into a single module (+1.17 vs. +1.50) with the same parameters as the naive model during inference. loss.\nIn Figure 4, we provide a comprehensive comparison of space complexity for generating extra LS (or expert) modules, among conventional LS modules, Mixture-of-Experts, and our proposed methods. Notably, our methods demonstrate substantial reductions in parameter usage during both training and inference." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We evaluate our LMS and LMS+FD methods using three tasks: MMT, MNER, and MQA. Similar to Section 3.2, we have two routing options for the LMS+FD method during inference time: 1) evaluating the model by passing the shared route (denoted as LMS+FD-Share, the default setting), or 2) passing the language-specific module (denoted as LMS+FD-LS). We present results for both routes to show the performance difference between using the condensed module and the original LS modules. Considering the computational cost for MMT, we run all methods once with the same random seed. For the other two tasks, we run experiments with 3 different random seeds and report the average scores. For ease of implementation, we build homogeneous batches (i.e., a batch only containing sentences in one language or one language direction) and only activate the corresponding LS module.4 " }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b38", "b10" ], "table_ref": [], "text": "We compare our approaches against two strong baselines that incorporate additional parameters to mitigate language interference. CLSR: The first baseline is Conditional Language-Specific Routing (CLSR) (Zhang et al., 2021), which employs LS linear projections following FFN or attention layer. Following their best settings, we set the budget p = 0.3 for LS routing. The original setting used shared LS projections across all encoder or decoder sublayers. We also consider a non-shared version, where each sublayer has its own LS projection, and denote it as CLSR*.\nSwitch Transformer: We also consider Switch Transformer (Fedus et al., 2021) as the second strong baseline, which uses similar FLOPs as our methods. 5 We use 16 experts for every two layers with a gate balance loss with a weight of 0.01." }, { "figure_ref": [], "heading": "Multilingual Machine Translation", "publication_ref": [ "b39", "b16", "b28", "b23", "b39", "b39" ], "table_ref": [ "tab_1" ], "text": "Data and Training settings We concentrate on the many-to-many translation setting, with results reported from two benchmarks. The first is the English-centric IWSLT'14 dataset, as aforementioned in Section 3.2. Additionally, we examine the OPUS-100 dataset (Zhang et al., 2020), which encompasses 100 languages in total, including 94 development/test language pairs. We preprocess the data by sentencepiece (Kudo and Richardson, 2018), establishing a vocabulary size of 32K for the IWSLT'14 dataset and 64K for the OPUS-100 dataset. We utilize transformer small and transformer big for IWSLT'14 and OPUS-100, respectively. We fix the training steps for all methods for a fair comparison. 
For IWSLT'14, we use d = 32 as the rank for low-rank matrices.\nFor OPUS-100, we consider three settings: (i) d = 64 to match the parameter size of the Switch Transformer, (ii) d = 16 to match the parameter size of CLSR, and (iii) d = 4 for very through a single module in each expert layer.\nlightweight LS model construction. The default LMS setting for MMT tasks is pair-wise unless otherwise specified. We discuss more training details in Appendix A.\nEvaluation We report results in terms of sacreBLEU (Post, 2018), tokenized by FLORES-200 tokenizer (NLLB Team et al., 2022), and win ratio (WR) (Zhang et al., 2020) which is the proportion of language pairs on which our method beats the baseline. For IWSLT'14, we report the scores averaged by xx→en and en→xx directions.\nFor OPUS-100, we split the 94 test language pairs into three groups based on their training data size suggested by Zhang et al. (2020): high-resource (> 0.9M, 45 languages), low-resource (< 0.1M, 21 languages) and medium-resource (others, 28 languages), and report the averaged scores in each category. We use beam search with a width of 5 and use a length penalty of 1.\nLMS performance: Light and Effective LS Module The primary results for IWSLT'14 and OPUS-100 are presented in Table 2 Language-Wise or Pair-Wise? We compare language-and pair-wise synthesis in both IWSLT'14 and OPUS-100 (d = 64) datasets. On average, pair-wise synthesis outperforms languagewise synthesis by 0.27 BLEU points on IWSLT'14 (+1.05 vs. +0.78). Moreover, the pair-wise method (+3.60 and +3.35) also shows superior performance on the OPUS-100 dataset compared with the language-wise one (+2.09 and + 2.09). Notably, pair-wise synthesis with d = 16 surpassed the performance of language-wise synthesis with d = 64, even though the latter has 4 times more extra parameters. Hence, this discovery strongly advocates for the use of pair-wise synthesis over the language-wise approach.\nFD performance: Can FD Fuse 95 Languages? On the IWSLT'14 8-language MMT dataset, we observe negligible differences between LMS and LMS+FD (+1.05 vs. +0.88), suggesting successful condensation of information from various language-specific modules into the shared module. In the 95-language (94 languages plus English) scenario of OPUS-100, FD with a dimensionality of 16 utilizes only an additional 2M parameters (less than 1% increase compared to the 242M naive model) to attain 65% of the performance improvements from Switch Transformer (+1.13 vs. +1.75 on average), which requires 760M additional parameters (a 314% increase). While FD may not condense all multilingual information due to restricted parameter capacity, its parameter efficiency is commendable. " }, { "figure_ref": [], "heading": "Multilingual Named-Entity Recognition", "publication_ref": [ "b24" ], "table_ref": [], "text": "Data and Settings We evaluate our methods on Wikiann Named-Entity Recognition (Pan et al., 2017) dataset. We randomly select 24 languages to conduct experiments. The model architecture is based on pre-trained XLM-R base , attached with a feed-forward token-level classifier. We set the dropout rate as 0.1 and run 20 epochs for all methods. We set d = 32 for low-rank matrices and report F1 scores." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "The overall results are shown in Table 4. When applying LMS to each FFN layer for 24 languages, the model size increases by only 70M, while yielding a 0.55 F1 improvement. 
After implementing LMS+FD, the performance improves by 0.67 with the LS route and achieves a 0.33 gain with the shared route, which requires only an additional 3M parameters. Full results are shown in Appendix B." }, { "figure_ref": [], "heading": "Multilingual Question Answering", "publication_ref": [ "b2" ], "table_ref": [], "text": "Data and Settings We pick 6 languages from TyDiQA (Typologically Diverse Question Answering)-Gold Passage to conduct the MQA experiments (Artetxe et al., 2020). Following Xu and Murray (2022), the representations of subwords in XLM-R base are input to a span classification head; a linear layer computing the answer's start and end. We set d = 32 for low-rank matrices, dropout rate = 0.1, and run 20 epochs." }, { "figure_ref": [ "fig_0" ], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "The overall results are shown in Table 5. Upon the application of LMS and LMS+FD, all methods exhibit improved performance with a slight increase in parameters. Notably, LMS+FD-Share outperforms LMS+FD-LS. This suggests that FD may be more effective in fusing knowledge when the number of languages is relatively small. Full results are shown in Appendix C. 6 Ablation Study 6.1 Is LMS Parameter-Efficient?\nHere, we examine the parameter efficiency of the LMS method, i.e., whether an increase in extra parameters yields a proportional enhancement in model performance. We conduct experiments with d ranging from 4 to 60 in increments of 8 to observe the resulting performance variations. For comparison, we examine the Switch Transformer with 4, 8, 12, 16 experts to assess its parameter efficiency. We focus on the MMT task using the OPUS-100 dataset. Due to computational demands, we limit experiments to randomly selected 15 languages from OPUS-100, designated as OPUS-15. We leave training details in Appendix D. We report the average BLEU gains over all translation directions in Figure 1. The plot reveals that the LMS curve is steeper compared to that of the Switch Transformer, indicating a higher parameter efficiency for our method, i.e., it achieves greater model performance with fewer additional parameters. Compared with a 16-expert Switch Transformer, LMS with d = 52 yields similar performance by using 3.7 times smaller parameters (51M vs. 189M). Numeric results are in Appendix E." }, { "figure_ref": [], "heading": "Applying LMS to The Attention Layer", "publication_ref": [], "table_ref": [], "text": "In our default design, the LMS is solely applied to FFN layers. We are interested in assessing the potential benefits of extending LMS to the attention layer (in each K, Q, V, output projection). We consider three model variants: (1) LMS applied only to FFN layers (default design), (2) LMS applied only to the attention layers, and (3) LMS applied to both FFN and attention layers. We conduct experiments on OPUS-15, with a fixed rank value of d = 20.\nWe show the averaged BLEU of all translation directions of the three designs in parameters. Moreover, applying LMS to both FFN and attention layers results in a marginal improvement over its application solely to FFN layers. This outcome suggests that LS information is primarily situated in FFN layers, aligning with the previous findings of Wang et al. (2020b)." 
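To put the parameter-efficiency gap in concrete terms, the short script below recomputes approximate extra-parameter budgets for this OPUS-15 ablation. It assumes a 12-layer Transformer base with FFN dimension 2048 and model dimension 512 (the configuration described in Appendix D), LS matrices for the 15 sampled languages plus English, and Switch experts counted beyond the original dense FFN; these are our assumptions for illustration, and the printed figures only roughly reproduce the counts reported in Appendix E.

```python
# Back-of-the-envelope extra-parameter accounting for the OPUS-15 ablation
# (our own estimate; exact numbers depend on implementation details).
L = 16            # assumption: 15 sampled languages + English
N = 12            # encoder + decoder layers of Transformer base
r, c = 2048, 512  # FFN inner dimension and model dimension

def lms_extra(d: int) -> int:
    # Two projections per FFN layer, each approximated by an (r x d)(d x c)
    # pair per language: 2 * L * N * d * (r + c), cf. Section 2.2.
    return 2 * L * N * d * (r + c)

def switch_extra(num_experts: int) -> int:
    # Experts are placed every two layers -> N // 2 expert layers; each
    # additional expert duplicates the two FFN projections (2 * r * c params).
    return (num_experts - 1) * 2 * r * c * (N // 2)

print(f"LMS, d = 52:        ~{lms_extra(52) / 1e6:.0f}M extra parameters")
print(f"Switch, 16 experts: ~{switch_extra(16) / 1e6:.0f}M extra parameters")
```

These rough figures line up with the roughly 3.7-times gap discussed above (51M vs. 189M).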
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b39", "b8", "b38", "b9", "b39", "b18", "b7", "b31", "b27", "b30", "b19", "b10", "b35", "b38", "b20", "b32", "b3", "b26", "b5", "b4" ], "table_ref": [], "text": "Language-Specific Modules To mitigate language interference, previous studies incorporate language-specific modules into models, such as additional language-aware linear projections (Zhang et al., 2020;Fan et al., 2020;Zhang et al., 2021;Fan et al., 2021), LS layer normalization (Zhang et al., 2020). Feed-Forward Networks (Kwon and Chung, 2023), or even entire languagedependent transformer layers (Escolano et al., 2021;Wang and Zhang, 2022;Pires et al., 2023). Similar to LS modules, Mixture-of-Experts (MoE) are also able to reduce language interference (Shazeer et al., 2017;Lepikhin et al., 2021;Fedus et al., 2021;Xu et al., 2023). However, the parameter count of LS (or expert) drastically increases when scaling to numerous languages. Zhang et al. (2021) address this issue by sharing all LS modules across all encoder or decoder layers. However, this does not fundamentally resolve the problem, given that the complexity of constructing LS modules remains unaltered and that different layers may need to learn varying types of LS information.\nLightweight Modules Our proposed techniques draw inspiration from another research line, lightweight fine-tuning, wherein the model undergoes fine-tuning on a parameter subset significantly smaller than that of the original model, such as prefix tuning (Li and Liang, 2021), prompt tuning (Lester et al., 2021), multitask prompt tuning (Wang et al., 2023), LoRA (Hu et al., 2021). In the multilingual machine translation setting, previous studies use language-pair adapters (Bapna and Firat, 2019) to fine-tune a specific direction. This approach also extends to languagewise adapters (Philip et al., 2020), languagefamily adapters (Chronopoulou et al., 2023), hyperadapters (Baziotis et al., 2022) to facilitate the cross-lingual learning. In light of the efficient lightweight modules, we propose LMS to help LS modules scale to hundreds of languages." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "The construction of language-specific modules (or experts) using full-rank matrices tends to be parameter-intensive and inefficient, especially as the number of languages (or experts) increases. To address this, we have introduced the Language-Specific Matrix Synthesis (LMS) method that approximates the original full-rank matrix. Notably, pair-wise synthesis, a variant of the LMS methods, exhibits commendable performance in MMT tasks. Further, we have proposed the Fuse Distillation (FD) approach to condense multilingual information into a shared module, thereby further diminishing parameter requirements during inference. Our methods outperform CLSR and Switch Transformer in MMT tasks and also demonstrate their effectiveness in MNER and MQA tasks." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b19" ], "table_ref": [], "text": "One limitation of our LMS method is that it necessitates the construction of homogeneous batches, i.e., batches containing sentences exclusively in one language or language direction. 
However, this limitation could potentially be addressed by implementing ALLToALL communications amongst devices, a strategy that is already widely employed in Mixture of Experts (MoE) models (Lepikhin et al., 2021), which is a topic we intend to explore in future research. In each forward pass of an FFN layer, we need an additional step to multiply two small matrices, creating the low-rank large matrix. The additional cost of this operation is negligible, as the computational complexity of the FLOPs/tok for a Feedforward linear projection, given an input dimension c and output dimension r, is O(r • c), while the complexity for constructing the low-rank matrix with rank d is O(d • (r + c)). For example, in our ablation study, when r = 2048, c = 512, and d = 20, the difference in computational load can be 2048×512 20×(512+2048) ≈ 20 times less. In terms of actual training time, no significant differences were observed; the discrepancy was less than 1 second per 100 updates. Additionally, a potentially effective strategy to enhance multilingual information encapsulation in FD could involve using a larger shared module relative to other lightweight LS modules. This could be an intriguing avenue for future research." }, { "figure_ref": [], "heading": "A Training Details for IWSLT'14 and OPUS-100", "publication_ref": [ "b1", "b16" ], "table_ref": [], "text": "To balance the training data, we also over-sample low-resource languages with a temperature of T = 5 (Aharoni et al., 2019) for the OPUS-100 data and T = 2 for the IWSLT'14 data. We preprocess the data by sentencepiece (Kudo and Richardson, 2018), establishing a vocabulary size of 32K for the IWSLT'14 dataset and 64K for the OPUS-100 dataset. We pre-pend a special language id symbol at the beginning of the source sentence to indicate the target language. We build homogeneous batches (i.e., a batch only containing sentences in one language direction) and only activate the corresponding language-specific matrix. We set the dropout rate as 0.1 for both datasets. For the IWSLT'14 dataset, we fix the training steps at 150K with 8K warm-up steps for all methods, with a batch size of 4096 tokens.\nFor OPUS, we fix the training steps at 100K with 8K warm-up steps for all methods, with a batch size of 4096 tokens but accumulating gradients 4 times. We train all models on 4 RTX 6000 GPUs. For the IWSLT'14 dataset, we employ the transformer small model (with an FFN dimension of 1024 and an embedding dimension of 512), while the transformer big model (with an FFN dimension of 4096 and an embedding dimension of 1024) is utilized for training the OPUS-100 dataset. The maximum learning rate is 0.0005. The optimizer is Adam (Kingma and Ba, 2014) with inverse_sqrt learning rate scheduler and weight decay of 0. We use beam search with a width of 5 and use a length penalty of 1." }, { "figure_ref": [], "heading": "B Full Results for MNER", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "We show the full results of MNER in Table 7." }, { "figure_ref": [], "heading": "C Full Results for MQA", "publication_ref": [], "table_ref": [ "tab_12" ], "text": "We show the full results of MQA in Table 8." }, { "figure_ref": [], "heading": "D Training Details for The Ablation Study", "publication_ref": [ "b16" ], "table_ref": [], "text": "We randomly pick 15 languages from the OPUS-100 data to build a smaller 15-language data (OPUS-15) for the ablation study: eu, pt, bg, sk, zh, sl, de, hr, nb, ga, rw, as, fy, mr, se. 
We conduct the ablation study under the many-to-many translation settings. To balance the training data, we sample the data with a temperature of T = 5.\nWe preprocess the data by sentencepiece (Kudo and Richardson, 2018), establishing a vocabulary size of 32K vocabulary. we fix the training steps at 50K with 8K warm-up steps for all methods, with a batch size of 4096 tokens. We employ the transformer base model (with an FFN dimension of 2048 and an embedding dimension of 512) for training the OPUS-15 dataset. The other settings are the same as Appendix A." }, { "figure_ref": [ "fig_0" ], "heading": "E Numeric Results for The Ablation Study", "publication_ref": [], "table_ref": [], "text": "Figure 1 shows the averaged BLEU over all directions. Here, We show the detailed numeric results in Figure 9. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank anonymous reviewers for their insightful feedback. We also extend our gratitude to Lingfeng Shen, Hieu Hoang, Young Jin Kim, Hany Hassan Awadalla, Stephen Rawls, and Amr Sharaf for their valuable suggestions. This work was supported in part by IARPA BETTER (#2019-19051600005). The views and conclusions contained in this work are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, or endorsements of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. This work is also supported in part by an Amazon Initiative for Artificial Intelligence (AI2AI) Faculty Research Award." } ]
Incorporating language-specific (LS) modules or Mixture-of-Experts (MoE) is a proven way to boost the performance of multilingual models, but scaling these approaches to hundreds of languages or experts quickly becomes unmanageable in terms of parameters. We present Language-Specific Matrix Synthesis (LMS), a novel method that addresses this issue. LMS builds LS modules from parameter-efficient, lightweight low-rank matrices, reducing the number of added parameters while outperforming existing methods, e.g., +1.73 BLEU over Switch Transformer on OPUS-100 multilingual translation. Additionally, we introduce Fuse Distillation (FD) to condense multilingual knowledge from multiple LS modules into a single shared module, improving inference and storage efficiency. Our approach demonstrates superior scalability and performance compared to state-of-the-art methods.
Condensing Multilingual Knowledge with Lightweight Language-Specific Modules
[ { "figure_caption": "Figure 1 :1Figure 1: We show the BLEU gains between the LMS method and the Switch Transformer as the model's parameters increase in our multilingual translation ablation study. The LMS method notably outperforms the Switch Transformer with similar extra LS (expert) parameter counts, achieving comparable performance even with four to five times fewer parameters.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: We utilize a language-level MoE architecture to verify the feasibility of fusing multilingual knowledge from all language-specific modules into a single shared module. During training, each batch goes through the LS module in the first forward pass and goes through the shared module in the second pass. Then, we conduct distillation between two outputs to condense the knowledge into the shared module. For inference, we discard the LS module and only use the shared module.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Suppose we incorporate additional language-specific (LS) linear projections into a layer. We compare the space complexity of the extra LS parameters (or experts) needed across all methods for both training and inference phases. Let's denote L = 15 as the number of languages, r = 4096 as the output dimension, c = 1024 as the input dimension, E = 8 represents the number of experts for Mixture-of-Experts (MoE), and d = 32 signifies the rank for low-rank matrices. The number adjacent to the dashed line is the number of parameters calculated based on the given sample numbers. In this case, one can observe that the Language-Specific Matrix Synthesis (LMS) requires a significantly lower quantity of LS parameters compared to other methods during training, and fuse distillation (FD) demands a substantially reduced number of additional parameters during the inference stage.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Overall BLEU results of on IWSLT'14 many-to-many translation. LMS outperforms all baselines. 
At inference, LMS+FD-Share utilizes extra 1M parameters to exceed baselines that enlarge the model size 2 or 3 times.", "figure_data": "Methodsardeesfaheitnlplavg.#paramsTraining InferenceNaive MMT25.03 32.59 39.98 18.76 33.39 34.00 36.71 22.37 30.3548M48MSwitch Transformer +0.28 +0.40 +0.45 +0.04 +0.60 +0.59 +0.34 +0.67 +0.42149M149MCLSR+0.00 +0.48 +0.51 -0.23 +0.31 +0.50 +0.42 +0.30 +0.2853M53MCLSR*+0.66 +0.87 +1.16 +0.53 +0.99 +1.00 +0.87 +0.94 +0.88105M105MLMS, lang-wise+0.48 +0.53 +0.88 +0.83 +0.86 +0.91 +0.81 +0.91 +0.7858M58MLMS+0.87 +1.08 +1.04 +0.62 +1.37 +1.20 +1.04 +1.16 +1.0558M58MLMS+FD-Share+0.82 +0.93 +1.06 +0.34 +1.23 +0.92 +0.87 +0.83 +0.8860M49MLMS+FD-LS+1.23 +1.34 +1.44 +0.77 +1.51 +1.36 +1.24 +1.15 +1.2660M58MMethodsen→xxxx→en#paramshighmedlowallWR (%) highmedlowallWR (%) Training InferenceNaive MMT23.89 31.17 29.76 27.37-29.40 31.85 31.49 30.60-242M242MSwitch Transformer+1.87 +3.29 +3.51 +2.66100+1.18 +1.15 -0.31 +0.84831002M1002MCLSR+0.02 +0.00 +0.01 +0.0252+1.33 +2.00 +2.71 +1.8391443M443MLMS, lang-wise, d = 64 +2.12 +2.28 +1.77 +2.0995+1.85 +2.34 +2.30 +2.0994989M989MLMS, d = 64+3.60 +3.82 +3.32 +3.6099+2.75 +3.74 +4.16 +3.3595989M989MLMS+FD-Share, d = 64 +0.49 +0.75 +1.29 +0.7488+0.64 +1.52 +2.08 +1.2298996M250MLMS+FD-LS, d = 64+1.72 +2.03 +2.60 +2.01100+1.64 +2.82 +4.03 +2.5299996M996MLMS, d = 16+2.45 +2.62 +2.56 +2.5399+1.75 +2.68 +3.40 +2.3996429M429MLMS+FD-Share, d = 16 +0.54 +1.13 +2.20 +1.0994+0.81 +1.26 +1.85 +1.1794431M244MLMS+FD-LS, d = 16+1.28 +1.84 +2.74 +1.77100+1.35 +2.25 +3.53 +2.10100431M431MLMS, d = 4+1.72 +2.05 +2.31 +1.9599+1.33 +1.80 +1.71 +1.5593289M289M", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The overall MNER results (F1 score) between baseline and our three proposed methods.", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The overall MQA results (F1 score) between baseline and our three proposed methods.", "figure_data": "", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "LMS applied only to attention layers yields inferior performance compared to LMS applied only to FFN layers with a similar number of extra", "figure_data": "Methodsavg. BLEU WR (%) #paramsNaive MMT28.05-61MLMS, ffn only (default)+2.1010080MLMS, att only+1.3210077MLMS, att+ffn+2.1410096M", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "The average BLEU gains with three different LMS designs with a fixed rank d = 20.", "figure_data": "", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": ".56 94.7 91.59 88.25 89.64 76.79 82.42 92.60 73.22 96.65 90.47 90.63 LMS 90.47 92.76 94.87 92.95 88.45 89.62 80.4 83.15 92.88 75.92 97.00 90.69 90.87 LMS-FD-Share 90.67 92.79 94.91 92.29 87.98 89.74 80.01 82.61 93.05 73.18 96.84 90.61 91.24 LMS-FD-LS 90.90 93.15 95.13 93.05 88.25 89.87 80.75 83.33 93.17 74.04 96.94 90.78 .42 93.75 77.32 92.71 93.56 92.46 93.84 92.07 86.59 84.20 89.75 96% LMS-FD-Share 94.88 92.31 93.65 77.78 92.39 93.40 92.41 93.79 92.07 85.67 84.33 89.53 88 LMS-FD-LS 95.03 92.63 93.83 77.99 92.67 93.75 92.67 94.02 92.22 86.88 84.35 89.87 100%", "figure_data": "MethodsazptmsafkkarqutevimytlfrhiNaive NER90.12 9291.54roeutrzhethunlidelheenavg. 
WR (%)Naive NER94.90 92.17 93.49 77.26 92.06 93.24 92.18 93.64 92.01 86.23 83.97 89.20-LMS95.01 92", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Full results for the NMER task. We report F1 scores. .18 82.76 63.70 81.90 75.89 LMS+FD-LS 78.95 73.47 78.80 84.27 61.90 81.35 76.46 LMS+FD-Share 79.08 73.44 78.86 84.34 62.15 81.29 76.53", "figure_data": "Methodsbnenfiidkoswavg.Naive MQA77.69 70.36 78.26 83.00 61.60 80.97 75.31LMS77.171.7 78", "figure_id": "tab_11", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Full results for the MQA task. We report F1 scores.", "figure_data": "Methodsen→xxxx→enextra #paramshighmedlowallWR (%) highmedlowallWR (%)TrainingNaive MMT20.94 42.3 22.72 26.99-25.45 37.25 27.95 29.1--Switch Transformer, E = 421.94 45.00 25.76 28.8510026.21 39.35 29.12 30.3010038MSwitch Transformer, E = 822.36 45.11 27.47 29.4510026.37 40.02 29.26 30.599388MSwitch Transformer, E = 12 22.66 45.50 27.19 29.6510026.52 40.32 29.55 30.81100138MSwitch Transformer, E = 16 23.05 46.25 28.61 30.3510026.82 40.33 30.31 31.12100189MLMS, d = 421.61 40.55 24.24 27.198726.16 38.52 29.21 30.071004MLMS, d = 1222.20 44.10 25.12 28.6310026.56 39.40 28.65 30.4010012MLMS, d = 2022.57 45.19 25.85 29.2610026.86 39.89 30.34 31.0310020MLMS, d = 2822.82 43.56 26.13 29.019327.07 39.88 30.27 31.1310028MLMS, d = 3623.10 43.89 26.3 29.289327.24 40.07 30.31 31.2710036MLMS, d = 4423.32 43.61 26.52 29.379327.30 40.53 30.81 31.5310043MLMS, d = 5223.36 45.05 26.64 29.809327.36 40.75 30.72 31.6010051MLMS, d = 6023.50 45.63 26.94 30.0910027.51 40.88 31.20 31.8110059M", "figure_id": "tab_12", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "The numeric results for the Figure1.", "figure_data": "", "figure_id": "tab_13", "figure_label": "9", "figure_type": "table" } ]
Haoran Xu; Weiting Tan; Stella Li; Yunmo Chen; Benjamin Van Durme; Philipp Koehn; Kenton Murray
[ { "authors": "Armen Aghajanyan; Sonal Gupta; Luke Zettlemoyer", "journal": "", "ref_id": "b0", "title": "Intrinsic dimensionality explains the effectiveness of language model finetuning", "year": "2021" }, { "authors": "Roee Aharoni; Melvin Johnson; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Massively multilingual neural machine translation", "year": "2019" }, { "authors": "Mikel Artetxe; Sebastian Ruder; Dani Yogatama", "journal": "", "ref_id": "b2", "title": "On the Cross-lingual Transferability of Monolingual Representations", "year": "2020" }, { "authors": "Ankur Bapna; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Simple, scalable adaptation for neural machine translation", "year": "2019" }, { "authors": "Christos Baziotis; Mikel Artetxe; James Cross; Shruti Bhosale", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Multilingual machine translation with hyper-adapters", "year": "2022" }, { "authors": "Alexandra Chronopoulou; Dario Stojanovski; Alexander Fraser", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Language-family adapters for low-resource multilingual neural machine translation", "year": "2023" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Carlos Escolano; Marta R Costa-Jussà; A R José; Mikel Fonollosa; Artetxe", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Multilingual machine translation: Closing the gap between shared and language-specific encoder-decoders", "year": "2021" }, { "authors": "Angela Fan; Shruti Bhosale; Holger Schwenk; Zhiyi Ma; Ahmed El-Kishky; Siddharth Goyal; Mandeep Baines; Onur Celebi; Guillaume Wenzek; Vishrav Chaudhary", "journal": "", "ref_id": "b8", "title": "Beyond english-centric multilingual machine translation", "year": "2020" }, { "authors": "Angela Fan; Shruti Bhosale; Holger Schwenk; Zhiyi Ma; Ahmed El-Kishky; Siddharth Goyal; Mandeep Baines; Onur Celebi; Guillaume Wenzek; Vishrav Chaudhary", "journal": "The Journal of Machine Learning Research", "ref_id": "b9", "title": "Beyond english-centric multilingual machine translation", "year": "2021" }, { "authors": "William Fedus; Barret Zoph; Noam Shazeer", "journal": "", "ref_id": "b10", "title": "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity", "year": "2021" }, { "authors": "Ze-Feng Gao; Peiyu Liu; Wayne Xin Zhao; Zhong-Yi Lu; Ji-Rong Wen", "journal": "", "ref_id": "b11", "title": "Parameter-efficient mixture-of-experts architecture for pre-trained language models", "year": "2022" }, { "authors": "Jordan Hoffmann; Sebastian Borgeaud; Arthur Mensch; Elena Buchatskaya; Trevor Cai; Eliza Rutherford; Diego De Las; Lisa Anne Casas; Johannes Hendricks; Aidan Welbl; Clark", "journal": "", "ref_id": "b12", "title": "An empirical analysis of compute-optimal large language model training", "year": "2022" }, { "authors": "J Edward; Phillip Hu; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b13", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": 
"Haoming Jiang; Pengcheng He; Weizhu Chen; Xiaodong Liu; Jianfeng Gao; Tuo Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "SMART: Robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization", "year": "2020" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b15", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Taku Kudo; John Richardson", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018" }, { "authors": "Sneha Kudugunta; Yanping Huang; Ankur Bapna; Maxim Krikun; Dmitry Lepikhin; Minh-Thang Luong; Orhan Firat", "journal": "", "ref_id": "b17", "title": "Beyond distillation: Task-level mixture-of-experts for efficient inference", "year": "2021" }, { "authors": "Yoohwan Kwon; Soo-Whan Chung", "journal": "IEEE", "ref_id": "b18", "title": "Mole: Mixture of language experts for multi-lingual automatic speech recognition", "year": "2023" }, { "authors": "Dmitry Lepikhin; Hyoukjoong Lee; Yuanzhong Xu; Dehao Chen; Orhan Firat; Yanping Huang; Maxim Krikun; Noam Shazeer; Zhifeng Chen", "journal": "", "ref_id": "b19", "title": "{GS}hard: Scaling giant models with conditional computation and automatic sharding", "year": "2021" }, { "authors": "Brian Lester; Rami Al-Rfou; Noah Constant", "journal": "", "ref_id": "b20", "title": "The power of scale for parameter-efficient prompt tuning", "year": "2021" }, { "authors": "Lisa Xiang; Percy Li; Liang", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Prefix-tuning: Optimizing continuous prompts for generation", "year": "2021" }, { "authors": "Zehui Lin; Liwei Wu; Mingxuan Wang; Lei Li", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Learning language specific sub-network for multilingual machine translation", "year": "2021" }, { "authors": "Marta R Nllb Team; James Costa-Jussà; Onur Cross; Maha Çelebi; Kenneth Elbayad; Kevin Heafield; Elahe Heffernan; Janice Kalbassi; Daniel Lam; Jean Licht; Maillard", "journal": "", "ref_id": "b23", "title": "No language left behind: Scaling human-centered machine translation", "year": "2022" }, { "authors": "Xiaoman Pan; Boliang Zhang; Jonathan May; Joel Nothman; Kevin Knight; Heng Ji", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Crosslingual name tagging and linking for 282 languages", "year": "2017" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Jerin Philip; Alexandre Berard; Matthias Gallé; Laurent Besacier", "journal": "", "ref_id": "b26", "title": "Monolingual adapters for zero-shot neural machine translation", "year": "2020" }, { "authors": "Telmo Pessoa Pires; Robin M Schmidt; Yi-Hsiu Liao; Stephan Peitz", "journal": "", "ref_id": "b27", "title": "Learning language-specific layers for multilingual machine translation", "year": "2023" }, { "authors": "Matt Post", "journal": "WMT", "ref_id": "b28", "title": "A call for clarity in reporting bleu scores", "year": "2018" }, { "authors": "Uri Shaham; Maha Elbayad; Vedanuj Goswami; Omer Levy; Shruti Bhosale", "journal": "", "ref_id": "b29", 
"title": "Causes and cures for interference in multilingual translation", "year": "2022" }, { "authors": "Noam Shazeer; Azalia Mirhoseini; * Krzysztof Maziarz; Andy Davis; Quoc Le; Geoffrey Hinton; Jeff Dean", "journal": "", "ref_id": "b30", "title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer", "year": "2017" }, { "authors": "Qian Wang; Jiajun Zhang", "journal": "", "ref_id": "b31", "title": "Parameter differentiation based multilingual neural machine translation", "year": "2022" }, { "authors": "Zhen Wang; Rameswar Panda; Leonid Karlinsky; Rogerio Feris; Huan Sun; Yoon Kim", "journal": "", "ref_id": "b32", "title": "Multitask prompt tuning enables parameter-efficient transfer learning", "year": "2023" }, { "authors": "Zirui Wang; Zachary C Lipton; Yulia Tsvetkov; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "On negative interference in multilingual models: Findings and a meta-learning treatment", "year": "2020" }, { "authors": "Zirui Wang; Zachary C Lipton; Yulia Tsvetkov", "journal": "", "ref_id": "b34", "title": "On negative interference in multilingual models: Findings and a meta-learning treatment", "year": "2020" }, { "authors": "Haoran Xu; Maha Elbayad; Kenton Murray; Jean Maillard; Vedanuj Goswami", "journal": "", "ref_id": "b35", "title": "Towards being parameter-efficient: A stratified sparsely activated transformer with dynamic capacity", "year": "2023" }, { "authors": "Haoran Xu; Philipp Koehn; Kenton Murray", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "The importance of being parameters: An intradistillation method for serious gains", "year": "2022" }, { "authors": "Haoran Xu; Kenton Murray", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Por qué não utiliser alla språk? mixed training with gradient optimization in few-shot cross-lingual transfer", "year": "2022" }, { "authors": "Biao Zhang; Ankur Bapna; Rico Sennrich; Orhan Firat", "journal": "", "ref_id": "b38", "title": "Share or not? learning to schedule language-specific capacity for multilingual translation", "year": "2021" }, { "authors": "Biao Zhang; Philip Williams; Ivan Titov; Rico Sennrich", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Improving massively multilingual neural machine translation and zero-shot translation", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 305.78, 271.56, 218.13, 29.41 ], "formula_id": "formula_0", "formula_text": "W l i v ∈ R r×d and an LS flat matrix W l i f ∈ R d×c (d ≪ min(r, c" }, { "formula_coordinates": [ 3, 176.08, 308.6, 85.83, 15.82 ], "formula_id": "formula_1", "formula_text": "W l i →l j = W l i v W l j" }, { "formula_coordinates": [ 3, 76.32, 532.8, 195.73, 16.65 ], "formula_id": "formula_2", "formula_text": "x o = (W + W l i →l j )x i = (W + W l i v W l j f )x i ." }, { "formula_coordinates": [ 3, 362.76, 572.51, 105.04, 18 ], "formula_id": "formula_3", "formula_text": "L f d = KL(g(p l ) ∥ p s )." }, { "formula_coordinates": [ 3, 314.21, 730.51, 210.93, 24.43 ], "formula_id": "formula_4", "formula_text": "L = 1 2 (CE(y ∥ p l ) + CE(y ∥ p s )) + L f d ,(3)" }, { "formula_coordinates": [ 4, 458.96, 594.85, 62.69, 15.82 ], "formula_id": "formula_5", "formula_text": "W + W l i v W l j" }, { "formula_coordinates": [ 4, 321.12, 715.49, 204.02, 24.43 ], "formula_id": "formula_6", "formula_text": "L ′ f d = 1 2 (KL(p l ∥ p s ) + KL(p s ∥ p l )). (4)" } ]
[ { "figure_ref": [ "fig_2" ], "heading": "Introduction", "publication_ref": [ "b1", "b4", "b21", "b37", "b35", "b19", "b0", "b13", "b28", "b18", "b10", "b20", "b2", "b22", "b17", "b16", "b13", "b28" ], "table_ref": [], "text": "Large language models (LLMs) have demonstrated exceptional performance in various NLP tasks, utilizing in-context learning to eliminate the need for task-specific fine-tuning (Brown et al., 2020;Chowdhery et al., 2022;OpenAI, 2023). Such models are typically trained on extensive datasets, capturing a wealth of world or domain-specific knowledge within their parameters.\nDespite these achievements, LLMs exhibit certain shortcomings, particularly when confronted with complex reasoning and knowledge-intensive tasks (Zhang et al., 2023;Yu et al., 2023). One prominent drawback is their propensity to hallucinate content, generating information not grounded by world knowledge, leading to untrustworthy outputs and a diminished capacity to provide accurate information (Yu et al., 2022b;Manakul et al., 2023;Alkaissi and McFarlane, 2023). Another limitation of LLMs is the quality and scope of the knowledge they store. The knowledge embedded within an LLM may be incomplete or out-of-date due to the reliability of the sources in the pre-training corpus (Lazaridou et al., 2022;Shi et al., 2023). The vastness of the information landscape exacerbates this issue, making it difficult for models to maintain a comprehensive and up-to-date understanding of world facts. Moreover, LLMs cannot \"memorize\" all world information, especially struggling with the long tail of knowledge from their training corpus (Mallen et al., 2022;Kandpal et al., 2022). This inherent limitation compels them to balance the storage and retrieval of diverse and rare knowledge against focusing on more frequently encountered information, leading to potential inaccuracies when addressing questions related to less common topics or requiring nuanced understanding.\nExisting methods for enhancing the factuality of language models involve adjusting model outputs based on human feedback, followed by reinforcement learning-based fine-tuning (Nakano et al., 2021;Campos and Shern, 2022;Ouyang et al., 2022;Liu et al., 2023). While this approach simulates human-to-human task learning environments, fine-tuning LLMs, can be exceedingly costly due to the exponential growth in LLM size and the necessity for annotators to provide extensive feedback. Over-reliance on positively-rated data may limit the model's ability to identify and rectify negative attributes or errors, potentially hampering its capacity to generalize to unseen data and novel scenarios. Furthermore, once LLMs are fine-tuned, they are unable to receive real-time feedback during inference or facilitate immediate error correction.\nIn this paper, we aim to provide automatic feedback in a plug-and-play manner without the need for fine-tuning LLMs. We explore two primary research questions: First, can we employ a retrieval method to provide feedback on individual generated outputs without relying on human annotators? Second, can we integrate the feedback to refine previous outputs in a plug-and-play manner, circumventing the expensive fine-tuning of language models? With regards to the two questions posed, we propose a novel pipeline for improving language model inference through automatic retrieval feedback, named REFEED, in a plug-and-play framework. 
Specifically, the language model generates initial outputs, followed by a retrieval model using the original query and generated outputs as a new query to retrieve relevant information from large document collections like Wikipedia. The retrieved information enables the language model to reconsider the generated outputs and refine them, potentially producing a new output (though it may remain the same if no changes are made).\nNotably, compared to retrieve-then-read methods (Lewis et al., 2020;Lazaridou et al., 2022;Shi et al., 2023), our method benefits from more relevant documents retrieved from the corpus to directly elucidate the relationship between query and outputs. Besides, without generating the initial output, the supporting document cannot be easily retrieved due to the lack of lexical or semantic overlap with the question. We discuss the comparison in more detail in related work and experiments.\nTo further enhance our proposed REFEED pipeline, we introduce two innovative modules within this framework. Firstly, we diversify the initial generation step to produce multiple output candidates, enabling the model to determine the most reliable answer by analyzing the diverse set of retrieved documents. Secondly, we adopt an ensemble approach that merges language model outputs before and after retrieval feedback using a perplexity ranking method, as the retrieval feedback may occasionally mislead the language model (refer to the case study in Figure 5 for details).\nOverall, our main contributions can be summarized as follows:\n1. We propose a novel pipeline utilizing retrieval feedback, named REFEED to improve large language models in a plug-and-play manner.\n2. We design two novel modules to advance the REFEED pipeline, specifically diversifying generation outputs and ensembling initial and postfeedback answers.\n3. Our experiments on three challenging knowledge-intensive tasks demonstrate that REFEED can achieve state-of-the-art performance under the few-shot setting." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Solving Knowledge-intensive Tasks via", "publication_ref": [ "b11", "b25", "b26", "b7", "b9", "b29", "b28", "b3", "b14", "b11", "b35", "b30", "b28" ], "table_ref": [], "text": "Retrieve-then-Read Pipeline\nMainstream methods for solving knowledgeintensive NLP tasks employ a retrieve-then-read model pipeline. Given an input query, a retriever is employed to search a large evidence corpus (e.g., Wikipedia) for relevant documents that may contain the answer. Subsequently, a reader is used to scrutinize the retrieved documents and predict an answer. Recent research has primarily focused on improving either the retriever (Karpukhin et al., 2020;Qu et al., 2021;Sachan et al., 2022) or the reader (Izacard and Grave, 2021;Yu et al., 2022a;Ju et al., 2022), or training the entire system endto-end (Singh et al., 2021;Shi et al., 2023). Early retrieval methods largely employed sparse retrievers, such as BM25 (Chen et al., 2017). Recently, ORQA (Lee et al., 2019) and DPR (Karpukhin et al., 2020) have revolutionized the field by using dense contextualized vectors for document indexing, resulting in superior performance compared to traditional approaches. Recently, several work proposed to replace the retrieval model with a large language model as retriever, owing to the powerful knowledge memorization capabilties (Yu et al., 2023;Sun et al., 2023). 
However, these methods may still be prone to hallucination issues and are unable to access up-to-date information. Notably, compared to retrieve-then-read pipelines like Re-PLUG (Shi et al., 2023), our method benefits from more relevant documents retrieved from the corpus to directly elucidate the relationship between query and outputs. Additionally, without generating the initial output, the text supporting the output cannot be easily identified due to the lack of lexical or semantic overlap with the question." }, { "figure_ref": [], "heading": "Aligning Language Model with Instructions via Human Feedback", "publication_ref": [ "b20", "b2", "b22", "b17", "b27", "b22", "b17" ], "table_ref": [], "text": "Human feedback plays a crucial role in evaluating language model performance, addressing accuracy, fairness, and bias issues, and offering insights for model improvement to better align with human expectations. Recognizing the significance of integrating human feedback into language models, researchers have developed and tested various humanin-the-loop methodologies (Nakano et al., 2021;Campos and Shern, 2022;Ouyang et al., 2022;Liu et al., 2023;Scheurer et al., 2023). Instruct-GPT (Ouyang et al., 2022) was a trailblazer in this domain, utilizing reinforcement learning from human feedback (RLHF) to fine-tune GPT-3 to adhere to a wide range of written instructions. It trained a reward model (RM) on this dataset to predict the preferred model output based on human labelers' preferences. The RM then served as a reward function, and the supervised learning baseline was fine-tuned to maximize this reward using the PPO algorithm. Liu et al. (2023) proposed converting all forms of feedback into sentences, which were subsequently used to fine-tune the model. This approach leveraged the language comprehension capabilities of language models. Although these approaches have shown promising results in enhancing language model performance on specific tasks, they also present several significant limitations. These methods rely on human-annotated data and positively-rated model generations for fine-tuning pre-trained language models, which can involve considerable costs and time investments. Moreover, by relying exclusively on positively-rated data, the model's capacity to identify and address negative attributes or errors may be limited, consequently reducing its generalizability to novel and unseen data." }, { "figure_ref": [], "heading": "Comparative Analysis of Concurrent Related Work", "publication_ref": [ "b6", "b23", "b6", "b23" ], "table_ref": [], "text": "In light of rapid advancements in the field, numerous concurrent works have adopted a similar philosophy, employing automated feedback to enhance language model performance (He et al., 2023;Peng et al., 2023). While these studies jointly validate the efficacy and practicality of incorporating retrieval feedback, it is crucial to emphasize the differences between our work and these contemporary investigations. He et al. (2023) initially demonstrated that retrieval feedback could bolster faithfulness in chain-of-thought reasoning. However, their study is confined to commonsense reasoning tasks, and the performance improvement observed is not substantial. In contrast, our work predominantly targets knowledge-intensive NLP tasks, wherein external evidence assumes a more critical role in providing valuable feedback to augment model performance. Peng et al. (2023) proposed utilizing ChatGPT to generate feedback based on retrieved evidence. 
In comparison, our research demonstrates that retrieved documents can be directly employed as feedback to refine language model outputs, significantly enhancing the efficiency of this method. Simultaneously, building on the fundamental retrieval feedback concept, we introduce two novel modules, i.e., diversifying generation outputs and ensembling initial and post-feedback answers. In comparison to existing research, our proposed REFEED methodology offers a distinctive contribution to the ongoing discourse. As we persist in exploring this avenue of inquiry, we foresee future studies refining these techniques, ultimately achieving even greater performance gains." }, { "figure_ref": [], "heading": "Proposed Method", "publication_ref": [], "table_ref": [], "text": "In this section, we provide an in-depth description of our innovative plug-and-play retrieval feedback (REFEED) pipeline, specifically designed to tackle a variety of knowledge-intensive tasks ( §3.1). The pipeline operates by initially prompting a language model (e.g., InstructGPT) to generate an answer in response to a given query, followed by the retrieval of documents from extensive document collections, such as Wikipedia. Subsequently, the pipeline refines the initial answer by incorporating the information gleaned from the retrieved documents. Besides, we introduce two novel modules based on our REFEED framework. The first module aims to diversify the initial generation step, producing multiple output candidates. This enables the model to identify the most reliable answer by examining the broad range of retrieved documents. The second module employs an ensemble approach that combines language model outputs from both before and after the retrieval feedback process. This is achieved using a perplexity ranking method, which mitigates the risk of retrieval feedback inadvertently misleading the language model." }, { "figure_ref": [], "heading": "Proposed Method: REFEED", "publication_ref": [ "b1", "b22" ], "table_ref": [], "text": "Background. Traditional large language models, such as GPT-3.5 based architectures, have primarily focused on encoding an input query and predicting the corresponding output answer (Brown et al., 2020;Ouyang et al., 2022). In this pro-" }, { "figure_ref": [], "heading": "Step-1 (Generate an initial answer): Q Â", "publication_ref": [], "table_ref": [], "text": "Step-2 (Retrieve documents): Q, Â D1, … Dn\nStep-3 (Refine the previous answer): Q, Â, D1, … Dn A\n[few-shot demos here] Question: Who is the new defense against the dark arts teacher? The answer is ____." }, { "figure_ref": [], "heading": "Severus Snape", "publication_ref": [], "table_ref": [], "text": "Who is the new defense against the dark arts teacher? Severus Snape\nRefer to the following passage answer the question. Question: Who is the new defense against the dark arts teacher? Passage: Neville Longbottom (Fictional Character) … The answer is ____.\nGPT 3.5 BM25 GPT 3.5" }, { "figure_ref": [], "heading": "Amycus Carrow", "publication_ref": [ "b15", "b13", "b28" ], "table_ref": [], "text": "Figure 1: REFEED operates by initially prompting a large language model to generate an answer in response to a given query, followed by the retrieval of documents from extensive document collections. Subsequently, the pipeline refines the initial answer by incorporating the information gleaned from the retrieved documents.\ncess, the question q, when combined with a text prompt, serves as input to the model, which then generates the answer. 
This can be represented as p(a|q, θ), where θ denotes the pre-trained model parameters. In practical scenarios, the maximum a posteriori estimation (MAP) serves as the final answer, as illustrated by â = argmax a p(a|q, θ).\nHowever, this direct approach to eliciting answers from large language models often leads to suboptimal performance. This is because it does not fully exploit the wealth of supplementary world knowledge available to the model (Levine et al., 2022). To address this limitation, recent research has explored methods to improve model performance by incorporating an additional auxiliary variable, corresponding to a retrieved document (d). This extension modifies the model formulation to p(a|q) = i p(a|d i , q)p(d i |q), marginalizing over all possible documents. In practice, it is infeasible to compute the sum over all possible documents (d) due to the vast number of potential sources. Consequently, the most common approach involves approximating the sum over d using the k highest ranked documents, and providing all these documents as part of the input. We assume, w.l.o.g., that these documents are d 1 , . . . , d k , yielding p(a|q) = k i=1 p(a|d i , q)p(d i |q). This technique is referred to as the retrieve-then-read pipeline (Lazaridou et al., 2022;Shi et al., 2023)." }, { "figure_ref": [], "heading": "Basic Pipeline", "publication_ref": [], "table_ref": [], "text": "Contrary to traditional methods mentioned above, REFEED is designed to offer feedback via re-trieval targeted specifically to individually generated outputs. It can be formulated as p(a|q) = i p(a|d i , q, a)p(d i | a, q)p( a|q), where a represents the initial output, a is the final output, and d i is conditioned not only on q but also on a. Thus, d i is intended to provide feedback specifically on a as the output, rather than providing general information to the query q. As in the case of the retrieve-and-read pipeline, we retain only the top k = 10 highest ranked documents: p(a|q) = k i=1 p(a|d i , q, a)p(d i | a, q)p( a|q). This method enables a smooth integration of feedback to refine previous outputs in a plug-andplay fashion, eliminating the need for costly finetuning. REFEED takes advantage of a collection of relevant documents retrieved from an extensive textual corpus, facilitating the direct elucidation of relationships between queries and outputs. Additionally, without generating an initial output, it becomes difficult to retrieve text that supports the output due to the absence of lexical or semantic overlap with the question. Essentially, REFEED functions by first prompting a large language model to produce an answer in response to a given query, followed by the retrieval of documents from external sources. The pipeline then refines the initial answer by incorporating information obtained from the retrieved documents. The three-step process is illustrated in Figure 1 and outlined below.\nStep 1: Generate an Initial Answer. In this initial step, our primary objective is to prompt a language model to generate an answer based on the given question. To achieve this, various decoding strategies can be employed, e.g., greedy decoding and sampling methods. In our experiments, we opted for greedy decoding due to its simplicity and reproducibility, allowing more consistent performance across multiple runs. This step is essential for establishing a foundation upon which the following steps can build and refine the initial answer.\nStep 2: Retrieve Supporting Documents. 
The second step in our pipeline involves utilizing a retrieval model (e.g., BM25) to acquire a set of document from an extensive document collection, such as Wikipedia. In our experiments, we retrieve top-10 documents, which offers a balanced trade-off between computational efficiency for step 3 inference and the inclusion of sufficient information. The primary goal of this step is to identify relevant information that can corroborate or refute the relationship between the question and the initially generated answer. By extracting pertinent information from these vast sources, the pipeline can effectively leverage external knowledge to improve the accuracy and reliability of the generated response.\nStep 3: Refine the Previous Answer. The final step of our pipeline focuses on refining the previously generated answer by taking into account the document retrieved in step 2. During this stage, the language model evaluates the retrieved information and adjusts the initial answer accordingly, ensuring that the response is more accurate. This refinement process may involve rephrasing, expanding, or even changing the answer based on the newfound knowledge. By incorporating the insights gleaned from the retrieved document, the refined answer is better equipped to address the initial query comprehensively and accurately, resulting in an improved overall performance. This step is critical for bridg-ing the gap between the initial generation and the wealth of external knowledge, ultimately producing a high-quality, well-informed output." }, { "figure_ref": [ "fig_2" ], "heading": "Enhanced Modules", "publication_ref": [], "table_ref": [], "text": "Diverse Answer Generation. Diversity in answer generation plays a pivotal role in the first step of our REFEED pipeline. Rather than merely generating a single answer with the highest probability, we implement sampling methods to produce a set of potential answers. This approach fosters diversity in the generated outputs and enables a more comprehensive retrieval feedback based on diverse answers. As a result, a more refined and accurate final response is produced that effectively addresses the given question.\nTo elaborate, we input the question q along with a text prompt into the model, which subsequently samples multiple distinct answers, denoted as p(a j |q, θ). We then utilize the n generated answers as input queries for the retrieval process, i.e., [q, a 1 ], • • • , [q, a n ]. This stage is realized by multiple decoding passes, wherein the input query is fed into the language model with nucleus sampling. This strategy increases the probability of obtaining a more diverse set of retrieved documents encompassing a broader spectrum of relevant information. Formally, it can be represented as p(a|q) = i,j p(a|d i,j , q, a j )p(d i,j | a j , q)p( a j |q).\nConsidering the limitations on the number of documents, or say context length of language model input, that can be employed in STEP 3, we merge all retrieved documents (across different a j ), rank them based on query-document similarity scores, and retain only the top-k documents for further processing. In our experiments, we use k = 10, which offers a balanced trade-off between computational efficiency and the inclusion of di-verse information. Furthermore, to account for the possibility of different queries retrieving identical documents, we perform de-duplication to ensure that the set of retrieved documents remains diverse and relevant. 
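As a rough illustration of the diverse answer generation and document merging procedure just described, a minimal sketch follows. Here generate_fn and retrieve_fn are assumed placeholder interfaces for the sampling language model and the retriever (e.g., BM25), and the answer-augmented query format is illustrative rather than the exact one used in the paper.

```python
from typing import Callable, List, Tuple

def diverse_retrieval_feedback(
    question: str,
    generate_fn: Callable[[str], str],                       # one sampled answer per call
    retrieve_fn: Callable[[str], List[Tuple[str, float]]],   # returns (document, score) pairs
    n_answers: int = 4,
    top_k: int = 10,
) -> Tuple[List[str], List[str]]:
    """Steps 1-2 with diverse generation: sample n answers, retrieve with
    [question; answer_j] as the query, then merge, de-duplicate, and keep
    the top-k documents by query-document similarity score."""
    answers = [generate_fn(question) for _ in range(n_answers)]

    scored_docs = {}
    for ans in answers:
        query = f"{question} {ans}"                          # answer-augmented retrieval query
        for doc, score in retrieve_fn(query):
            # De-duplicate: keep the best score observed for each document.
            if doc not in scored_docs or score > scored_docs[doc]:
                scored_docs[doc] = score

    ranked = sorted(scored_docs, key=scored_docs.get, reverse=True)[:top_k]
    return answers, ranked
```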
Lastly, when computing the final answer, we provide all n generated answers as well as the aforementioned top-k documents as part of the prompt. Formally, this can be represented as By incorporating diversity in answer generation in STEP 1, we effectively broaden the potential answer space, facilitating the exploration of a wider variety of possible solutions. p(a|q) = k i=1 n j=1 p(a|d i,j , q, a j )p(d i,j | a j , q)p( a j |q). Ensembling Initial and Post-Feedback Answers. Retrieval feedback serves as a crucial component in obtaining relevant information to validate the accuracy of initially generated answers. Nonetheless, there may be instances where the retrieved documents inadvertently mislead the language model, causing a correct answer to be revised into an incorrect one (see examples in Figure 5). To address this challenge, we introduce an ensemble technique that considers both the initial answers and the revised answers post-retrieval feedback, ultimately improving the overall generation performance.\nIn ensemble process, we utilize average negative log-likelihood to rank the generated answers before (i.e., LL before (a|q) = 1 t t i=1 p(x i |x <i , q)) and after incorporating retrieved (i.e., LL after (a|q) = 1 t t i=1 p(x i |x <i , q, a, d)). If the log-likelihood of an answer before retrieval feedback is higher than that after retrieval feedback, we retain the initially generated answer. On the other hand, if the log-likelihood is lower after retrieval feedback, we choose the refined answer. This strategy allows for a more informed assessment of the trustworthiness of answer before and after retrieval feedback, ensuring a more accurate final response." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b12", "b8", "b33", "b5", "b11", "b7", "b24", "b38", "b11", "b11", "b26", "b35", "b24" ], "table_ref": [], "text": "In this section, we conduct comprehensive experiments on three knowledge-intensive NLP tasks, including single-hop QA (i.e., NQ (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017)), multihop QA (i.e., HotpotQA (Yang et al., 2018)) and dialogue system (i.e., WoW (Dinan et al., 2019)). In single-hop QA datasets, we employ the same splits as previous approaches (Karpukhin et al., 2020;Izacard and Grave, 2021). With regard to the HotpotQA and WoW datasets, our approach involves the usage of dataset splits provided by the KILT challenge (Petroni et al., 2021).\nTo thoroughly assess the performance of our model, we employ a variety of evaluation metrics, taking into consideration the professional standards established in the field. For the evaluation of open-domain QA, we use exact match (EM) and F1 score for evaluating model performance (Zhu et al., 2021). For EM score, an answer is deemed correct if its normalized form -obtained through the normalization procedure delineated by (Karpukhin et al., 2020) -corresponds to any acceptable answer in the provided list. Similar to EM score, F1 score treats the prediction and ground truth as bags of tokens, and compute the average overlap between the prediction and ground truth answer. Besides, we also incorporate Recall@K (R@K) as an intermediate evaluation metric, which is calculated as the percentage of top-K retrieved or generated documents containing the correct answer. This metric has been widely adopted in previous research (Karpukhin et al., 2020;Sachan et al., 2022;Yu et al., 2023), thereby establishing its credibility within the field. 
When evaluating open-domain dialogue systems, we adhere to the guidelines set forth by the KILT benchmark (Petroni et al., 2021), which recommends using a combination of F1 and Rouge-L (R-L) scores as evaluation metrics. This approach ensures a comprehensive and rigorous assessment of our model's performance, aligning with professional standards and best practices." }, { "figure_ref": [], "heading": "Backbone Language Model", "publication_ref": [], "table_ref": [], "text": "Codex: OpenAI Codex, i.e., code-davinci-002, a sophisticated successor to the GPT-3 model, has undergone extensive training utilizing an immense quantity of data. This data comprises not only natural language but also billions of lines of source code obtained from publicly accessible repositories, such as those found on GitHub. As a result, the Codex model boasts unparalleled proficiency in generating human-like language and understanding diverse programming languages. Text-davinci-003: Building on the foundation laid by previous InstructGPT models, OpenAI's text-davinci-003 represents a significant advancement in the series. This cutting-edge model showcases considerable progress in multiple areas, including the ability to generate superior quality written content, an enhanced capacity to process and execute complex instructions, and an expanded capability to create coherent, long-form narratives. After careful consideration, we ultimately decided against employing ChatGPT and GPT-4 as the foundational models for our project. The primary reason for this decision is OpenAI's announcement that both models will be subject to ongoing updates in their model parameters 1 . These continual modifications would lead to nonreproducible experiments, potentially compromising the reliability of our research outcomes." }, { "figure_ref": [], "heading": "Models", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Baseline Methods", "publication_ref": [ "b22", "b35", "b13", "b28" ], "table_ref": [], "text": "In our comparative analysis, we assess our proposed model against two distinct groups of baseline methodologies. The first group encompasses closed-book models, including Instruct-1 https://platform.openai.com/docs/models/gpt-3-5 GPT (Ouyang et al., 2022) and GenRead (Yu et al., 2023), which operate without the assistance of any external supporting documents during the inference process. Each of these baseline methods adheres to a uniform input format, specifically utilizing the structure: [prompt words; question].\nThe second group of models adheres to a retrieve-then-read pipeline (Lazaridou et al., 2022;Shi et al., 2023), which entails a two-stage process. In the initial stage, a retriever component is employed to identify and extract a select number of relevant documents pertaining to a given question from an extensive corpus, such as Wikipedia. Subsequently, a reader component is tasked with inferring a conclusive answer based on the content gleaned from the retrieved documents. Similar to the first group, all baseline methods within this sec- ond group adhere to a standardized input format, which is defined as: [prompt words; passage; question]. As RePLUG are not a open-source model, we implemented it independently, which may result in slightly different outcomes compared to the performance reported in their respective papers." 
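For concreteness, the two baseline input formats described above can be sketched as simple prompt builders. The template wording mirrors the examples shown in Figure 1, but the exact strings are illustrative assumptions rather than the verbatim prompts used in the experiments.

```python
from typing import List

def closed_book_prompt(question: str, demos: str = "") -> str:
    # Closed-book group: [prompt words; question]
    return f"{demos}Question: {question} The answer is ____."

def retrieve_then_read_prompt(question: str, passages: List[str], demos: str = "") -> str:
    # Open-book group: [prompt words; passage; question]
    context = "\n".join(f"Passage: {p}" for p in passages)
    return (f"{demos}Refer to the following passage answer the question.\n"
            f"Question: {question}\n{context}\nThe answer is ____.")
```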
}, { "figure_ref": [], "heading": "Experimental Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Zero/Few-shot Question Answering and Dialogue Evaluation", "publication_ref": [], "table_ref": [ "tab_0", "tab_1" ], "text": "In the zero-shot setting, there is no training question-answer pairs and conversational inputoutput pairs for the models. Consequently, all models are expected to generate answers solely based on the input test question provided, without the benefit of prior training data to guide their responses.\nFor the purposes of our experiments, we utilized text-davinci-003 as the backbone model due to its remarkable performance in zero-shot scenarios. This model excels in situations where no training question-answer pairs are available, as it is adept at generating accurate and relevant answers based solely on the input test question. As demonstrated in Table 1, our proposed REFEED outperforms baseline method by effectively leveraging retrieval feedback. In particular, REFEED exhibits a significant improvement in EM scores by +7.7 on two open-domain QA benchmarks in comparison to the original text-davinci-003. We also observe a similar trend in the context of multi-hop QA tasks and dialogue systems, in which our proposed REFEED consistently surpasses the baseline model.\nOn the other hand, when juxtaposed with methods that directly retrieve or generate documents, our proposed REFEED demonstrates a markedly superior performance. This can be attributed to the fact that alternative methods often struggle to retrieve relevant passages when there is an absence of lexical overlap between the query and the source text. Our proposed REFEED offers a more robust and accurate solution for knowledge-intensive tasks, outpacing baseline methods across various benchmarks and experimental settings.\nIn the few-shot setting, as shown in Table 2, we observed a similar pattern to the zero-shot setting, further reinforcing the effectiveness of our proposed REFEED . This consistency across various settings underscores the model's versatility and adaptability, illustrating its potential to deliver superior results across a wide range of questionanswering and dialogue evaluation tasks." }, { "figure_ref": [ "fig_1" ], "heading": "Ablation Study on Ensemble Method and Diverse Generation", "publication_ref": [ "b31" ], "table_ref": [ "tab_2", "tab_2" ], "text": "Ensemble Method: As shown in Table 3, it is evident that the performance of ReFeed declines by an average of 0.8 EM score across three QA datasets when the ensemble method is not employed. This finding highlights the importance of implementing an ensemble strategy both prior to and following retrieval feedback in order to bolster the model's predictive accuracy and overall effectiveness. The ensemble method effectively utilizes the language model's inherent beliefs in conjunction with the retrieval feedback. It is worth noting that in several instances, the language model is already capable of determining the correct answer to a given question. However, the inclusion of retrieved documents may inadvertently mislead the model. 
The ensemble method addresses this issue by selecting the predicted answer with a higher log-probability, thus mitigating the negative impact of the retrieved documents on the model's overall performance.\nDiverse Generation: As shown in Table 3, the performance of ReFeed experiences a decline by an average of 1.1 EM score across three QA datasets when diverse generation is not utilized. This observation underscores the significance of incorporating diverse generation, as it can lead to multiple, distinct answers, thereby resulting in a more diverse set of documents retrieved during subsequent stages. Diverse generation plays a crucial role in enhancing the retrieval process by diversifying the range of retrieved documents. Consequently, this increased coverage improves the overall quality and relevance of the information obtained during retrieval. As shown in Figure 4, the use of diverse generation through sampling techniques brings a positive improvement on the answer hit ratio, which is a consistent finding with that in the self-consistency paper (Wang et al., 2023). Incorporating diverse generation into the REFEED framework offers several benefits, including the ability to explore a wider array of potential answers and the capacity to retrieve more comprehensive and diverse documents. With this approach, the model is better equipped to handle complex questions, ultimately leading to more accurate predictions and improved performance across various applications." }, { "figure_ref": [], "heading": "Analysis on Chain-of-thought Reasoning on Multi-hop QA", "publication_ref": [ "b32" ], "table_ref": [ "tab_3" ], "text": "Chain-of-thought reasoning entails the generation of a sequence of intermediate reasoning steps, as described in recent literature (Wei et al., 2022). We demonstrate that ReFeed can be effectively integrated with chain-of-thought reasoning to address complex tasks, as opposed to merely relying on the language model to generate answers directly.\nAs illustrated in Table 4, we implemented ReFeed in conjunction with chain-of-thought reasoning by generating intermediate reasoning steps prior to arriving at the final answer. Following this, we utilized the answer to retrieve documents for feedback and subsequently generated another chain-ofthought reasoning to refine the previously generated response. This approach led to a significant improvement on complex QA scenarios in the Hot-potQA, when compared to employing straightforward QA prompts.\nTo summarize, our proposed ReFeed methodology can be seamlessly integrated with chain-ofthought reasoning, thereby showcasing their complementary nature. The successful combination of ReFeed and chain-of-thought reasoning enables the model to handle more intricate tasks and exhibits its potential for tackling real-world challenges that demand complex problem-solving capabilities." }, { "figure_ref": [ "fig_2" ], "heading": "Case Study", "publication_ref": [], "table_ref": [], "text": "In our case studies, we present three examples depicted in Figure 5 to illustrate the impact of retrieval feedback on the language model's ability to refine answers. The initial two instances showcase favorable outcomes, as the language model effectively refines answers into accurate ones by utilizing retrieval feedback. 
Conversely, the third instance is unfavorable, as the response is misguided by the documents after retrieval, culminating in an inaccurate response.\nIn the first instance, both the question-answering (QA) prompt with text-davinci-003 and the retrieve-and-read model produce \"June 1, 2018\" as the answer, which is erroneous. Upon investigating the retrieved document, it discloses that the film's release date was amended to \"May 18, 2018\". However, the document (Deadpool 2) is not retrieved when solely employing the query for retrieval. This occurs because, although the generated answer \"June 1, 2018\" is inaccurate, it still augments the lexical overlap between the retrieval query and the relevant document; the query alone lacks this overlap, complicating the retriever's capacity to pinpoint the accurate information. In the subsequent instance, both the QA prompt with text-davinci-003 and the retrieve-and-read model generate \"Tom Waits\" as the response, which is incorrect. The retrieved document elucidates that \"Tom Waits\" is the composer, rather than the vocalist. This distinction diminishes the generation likelihood of the name, enabling the model to produce the accurate answer, \"Steve Earle\", following retrieval feedback. It is crucial to emphasize that this informative document is retrieved irrespective of whether the generated answer is employed as a component of the query for retrieval. In the last instance, both the QA prompt with text-davinci-003 and the retrieve-and-read model can generate the accurate answer. Regrettably, when incorporating the generated answer as a component of the query for retrieval, a document containing extraneous information is retrieved. This document is not retrieved in the retrieve-then-read pipeline. Unfortunately, this document misdirects the language model, ultimately yielding an inaccurate answer." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In conclusion, this paper presents a novel pipeline, REFEED, designed to improve large language models' performance in a plug-and-play framework, effectively addressing the challenges arising from knowledge-intensive tasks. By employing a retrieval method to provide automatic feedback on generated outputs and integrating this feedback to refine the outputs without the need for expensive fine-tuning, REFEED offers a practical and efficient solution. We introduce two innovative modules within the REFEED pipeline: diverse answer generation and an ensemble approach. These two modules further enable REFEED to produce more reliable and accurate answers by considering a wider array of retrieved documents and mitigating the risk of misleading retrieval feedback. Our extensive experiments on four challenging knowledge-intensive benchmarks demonstrate the effectiveness of REFEED in achieving state-of-the-art performance under the few-shot setting. We believe that by continuing to refine and optimize the REFEED pipeline, we can unlock its full potential and expand its applicability across a diverse range of scenarios and applications." } ]
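Bringing the pieces together, the following is a compact end-to-end sketch of the three-step loop with the log-likelihood ensemble summarized above. The generate and retrieve callables are assumed interfaces (an LLM call returning an answer together with its average token log-likelihood, and a retriever such as BM25), and the prompt strings are illustrative, not the exact ones used by the authors.

```python
from typing import Callable, List, Tuple

def refeed_answer(
    question: str,
    generate: Callable[[str], Tuple[str, float]],  # (answer, avg. token log-likelihood)
    retrieve: Callable[[str], List[str]],           # top documents for a query
) -> str:
    # Step 1: initial answer from the query alone (closed-book prompt).
    initial, ll_before = generate(f"Question: {question} The answer is ____.")

    # Step 2: retrieval feedback with the answer-augmented query [q; a_hat].
    docs = retrieve(f"{question} {initial}")

    # Step 3: refine the answer conditioned on the retrieved documents.
    context = "\n".join(f"Passage: {d}" for d in docs)
    refined, ll_after = generate(
        "Refer to the following passage answer the question.\n"
        f"Question: {question}\n{context}\nThe answer is ____.")

    # Ensemble: keep whichever answer the model assigns the higher likelihood.
    return initial if ll_before > ll_after else refined
```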
Large language models (LLMs) exhibit remarkable performance across various NLP tasks. However, they often generate incorrect or hallucinated information, which hinders their practical applicability in real-world scenarios. Human feedback has been shown to effectively enhance the factuality and quality of generated content, addressing some of these limitations. However, this approach is resource-intensive, involving manual input and supervision, which can be time-consuming and expensive. Moreover, it cannot be provided during inference, further limiting its practical utility in dynamic and interactive applications. In this paper, we introduce REFEED, a novel pipeline designed to enhance LLMs by providing automatic retrieval feedback in a plug-and-play framework without the need for expensive fine-tuning. REFEED first generates initial outputs, then utilizes a retrieval model to acquire relevant information from large document collections, and finally incorporates the retrieved information into the in-context demonstration for output refinement, thereby addressing the limitations of LLMs in a more efficient and cost-effective manner. Experiments on four knowledge-intensive benchmark datasets demonstrate that our proposed REFEED could improve by over +6.0% under the zero-shot setting and +2.5% under the few-shot setting, compared to baselines without using retrieval feedback.
Improving Language Models via Plug-and-Play Retrieval Feedback
[ { "figure_caption": "Figure 2 :Figure 3 :23Figure", "figure_data": "", "figure_id": "fig_0", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Recall@K on test sets, measured as the percentage of top-K documents that contain the answer. The \"Baseline\" refers to direct retrieval based on the input query, where the \"REFEED-O\" represents generating only one answer, and the \"REFEED-D\" represents diverse answer generation.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Question:Figure 5 :5Figure 5: Case Studies. The first two examples illustrate how utilizing retrieval feedback leads to the generation of correct answers, while the final example demonstrates a negative outcome where the language model is misled by the retrieved document, resulting in an incorrect response.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "REFEED achieves SoTA performance on three zero-shot knowledge intensive NLP tasks. The backbone model is Text-Davinci-003 (TD-003), which is trained to follow human instructions.", "figure_data": "NQTriviaQAHotpotQAWoWEMF1EMF1EMF1F1R-L*close book methods without using retrieverTD-003 (Ouyang et al., 2022) 29.9 35.4 65.8 73.2 26.0 28.2 14.2 13.3GenRead (Yu et al., 2023)32.5 42.0 66.2 73.9 36.4 39.9 14.7 13.5*open book methods with using retrieverRetrieve-then-Read31.7 41.2 61.4 67.4 35.2 38.0 14.6 13.4REFEED (Ours)39.6 48.0 68.9 75.2 41.5 45.1 15.1 14.0ModelsEMNQF1TriviaQA EM F1HotpotQA EM F1WoW F1 R-LBackbone Language Model: Text-Davinci-003 (TD-003)*close book methods without using retrieverTD-003 (Ouyang et al., 2022) 36.5 46.3 71.2 76.5 31.2 37.5 14.1 13.3GenRead (Yu et al., 2023)38.2 47.3 71.4 76.8 36.6 47.5 14.7 14.1*open book methods with using retrieverRetrieve-then-Read34.3 45.6 66.5 70.6 35.2 46.8 14.5 13.8REFEED (Ours)40.1 50.0 71.8 77.2 41.5 54.2 15.1 14.3Backbone Language Model: Code-Davinci-002 (Codex)*close book methods without using retrieverCodex (Ouyang et al., 2022)41.6 52.8 73.3 79.2 32.5 42.8 16.9 14.7GenRead (Yu et al., 2023)44.2 55.2 73.7 79.6 37.5 48.8 17.2 15.1*open book methods with using retrieverRetrieve-then-Read43.9 54.9 75.5 81.7 41.5 53.7 17.0 14.9REFEED (Ours)46.4 57.0 76.6 82.7 43.5 56.5 17.6 15.5", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "REFEED achieved SoTA performance on three few-shot knowledge intensive NLP tasks.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation Study. Our proposed ensemble method and diversifying generation in ReFeed can improve model performance on four benchmark datasets. The backbone model is Code-Davinci-002 (Codex).", "figure_data": "ModelsEMNQF1TriviaQA EM F1HotpotQA EM F1WoW F1 R-LREFEED (Ours)46.4 57.0 76.6 82.7 43.5 56.5 17.6 15.5⊢ w/o diversifying generation45.1 56.2 75.9 82.1 42.1 54.8 17.0 14.8⊢ w/o ensemble before & after 45.5 56.5 76.1 82.4 42.5 55.3 17.1 14.9", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "REFEED can be applied to chain-of-thought (CoT) reader as well, on multi-step reasoning task.", "figure_data": "ModelsHotpotQA EM F1No Retriever, QA Prompt32.5 42.8No Retriever, CoT Prompt35.0 46.8Retrieve-Read with CoT Prompt 42.1 54.8REFEED with CoT Prompt44.2 57.4", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" } ]
Wenhao Yu; Zhihan Zhang; Zhenwen Liang; Meng Jiang; Ashish Sabharwal
[ { "authors": "Hussam Alkaissi; I Samy; Mcfarlane", "journal": "Cureus", "ref_id": "b0", "title": "Artificial hallucinations in chatgpt: implications in scientific writing", "year": "2023" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Jon Ander; Campos ; Jun Shern", "journal": "", "ref_id": "b2", "title": "Training language models with language feedback", "year": "2022" }, { "authors": "Danqi Chen; Adam Fisch; Jason Weston; Antoine Bordes", "journal": "", "ref_id": "b3", "title": "Reading wikipedia to answer opendomain questions", "year": "2017" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b4", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Emily Dinan; Stephen Roller; Kurt Shuster; Angela Fan; Michael Auli; Jason Weston", "journal": "", "ref_id": "b5", "title": "Wizard of wikipedia: Knowledge-powered conversational agents", "year": "2019" }, { "authors": "Hangfeng He; Hongming Zhang; Dan Roth", "journal": "", "ref_id": "b6", "title": "Rethinking with retrieval: Faithful large language model inference", "year": "2023" }, { "authors": "Gautier Izacard; Edouard Grave", "journal": "", "ref_id": "b7", "title": "Leveraging passage retrieval with generative models for open domain question answering", "year": "2021" }, { "authors": "Mandar Joshi; Eunsol Choi; Daniel S Weld; Luke Zettlemoyer", "journal": "", "ref_id": "b8", "title": "Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension", "year": "2017" }, { "authors": "Mingxuan Ju; Wenhao Yu; Tong Zhao; Chuxu Zhang; Yanfang Ye", "journal": "", "ref_id": "b9", "title": "Grape: Knowledge graph enhanced passage reader for open-domain question answering", "year": "2022" }, { "authors": "Nikhil Kandpal; Haikang Deng; Adam Roberts; Eric Wallace; Colin Raffel", "journal": "", "ref_id": "b10", "title": "Large language models struggle to learn long-tail knowledge", "year": "2022" }, { "authors": "Vladimir Karpukhin; Barlas Oguz; Sewon Min; Patrick Lewis; Ledell Wu; Sergey Edunov; Danqi Chen; Wen-Tau Yih", "journal": "", "ref_id": "b11", "title": "Dense passage retrieval for opendomain question answering", "year": "2020" }, { "authors": "Tom Kwiatkowski; Jennimaria Palomaki; Olivia Redfield; Michael Collins; Ankur Parikh; Chris Alberti; Danielle Epstein; Illia Polosukhin; Jacob Devlin; Kenton Lee", "journal": "TACL", "ref_id": "b12", "title": "Natural questions: A benchmark for question answering research", "year": "2019" }, { "authors": "Angeliki Lazaridou; Elena Gribovskaya; Wojciech Stokowiec; Nikolai Grigorev", "journal": "", "ref_id": "b13", "title": "Internetaugmented language models through few-shot prompting for open-domain question answering", "year": "2022" }, { "authors": "Kenton Lee; Ming-Wei Chang; Kristina Toutanova", "journal": "", "ref_id": "b14", "title": "Latent retrieval for weakly supervised open domain question answering", "year": "2019" }, { "authors": "Yoav Levine; Itay Dalmedigos; Ori Ram; Yoel Zeldes; Daniel Jannai; Dor Muhlgay; Yoni Osin; Opher Lieber; Barak Lenz; Shai Shalev-Shwartz", "journal": "", "ref_id": 
"b15", "title": "Standing on the shoulders of giant frozen language models", "year": "2022" }, { "authors": "Patrick Lewis; Ethan Perez; Aleksandra Piktus; Fabio Petroni; Vladimir Karpukhin; Naman Goyal; Heinrich Küttler; Mike Lewis; Wen-Tau Yih; Tim Rocktäschel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b16", "title": "Retrieval-augmented generation for knowledge-intensive nlp tasks", "year": "2020" }, { "authors": "Hao Liu; Carmelo Sferrazza; Pieter Abbeel", "journal": "", "ref_id": "b17", "title": "Languages are rewards: Hindsight finetuning using human feedback", "year": "2023" }, { "authors": "Alex Mallen; Akari Asai; Victor Zhong; Rajarshi Das; Hannaneh Hajishirzi; Daniel Khashabi", "journal": "", "ref_id": "b18", "title": "When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories", "year": "2022" }, { "authors": "Potsawee Manakul; Adian Liusie; Mark Jf Gales", "journal": "", "ref_id": "b19", "title": "Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models", "year": "2023" }, { "authors": "Reiichiro Nakano; Jacob Hilton; Suchir Balaji; Jeff Wu; Long Ouyang; Christina Kim; Christopher Hesse; Shantanu Jain; Vineet Kosaraju; William Saunders", "journal": "", "ref_id": "b20", "title": "Webgpt: Browser-assisted questionanswering with human feedback", "year": "2021" }, { "authors": " Openai", "journal": "", "ref_id": "b21", "title": "", "year": "2023" }, { "authors": "Long Ouyang; Jeff Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "", "ref_id": "b22", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Baolin Peng; Michel Galley; Pengcheng He; Hao Cheng; Yujia Xie; Yu Hu; Qiuyuan Huang; Lars Liden; Zhou Yu; Weizhu Chen", "journal": "", "ref_id": "b23", "title": "Check your facts and try again: Improving large language models with external knowledge and automated feedback", "year": "2023" }, { "authors": "Fabio Petroni; Aleksandra Piktus; Angela Fan; Patrick Lewis; Majid Yazdani; Nicola De Cao; James Thorne; Yacine Jernite; Vladimir Karpukhin; Jean Maillard", "journal": "", "ref_id": "b24", "title": "Kilt: a benchmark for knowledge intensive language tasks", "year": "2021" }, { "authors": "Yingqi Qu; Yuchen Ding; Jing Liu; Kai Liu; Ruiyang Ren; Wayne Xin Zhao; Daxiang Dong; Hua Wu; Haifeng Wang", "journal": "", "ref_id": "b25", "title": "Rocketqa: An optimized training approach to dense passage retrieval for opendomain question answering", "year": "2021" }, { "authors": "Devendra Singh Sachan; Mike Lewis; Dani Yogatama; Luke Zettlemoyer; Joelle Pineau; Manzil Zaheer", "journal": "", "ref_id": "b26", "title": "Questions are all you need to train a dense passage retriever", "year": "2022" }, { "authors": "Jérémy Scheurer; Jon Ander Campos; Tomasz Korbak; Jun Shern Chan; Angelica Chen; Kyunghyun Cho; Ethan Perez", "journal": "", "ref_id": "b27", "title": "Training language models with language feedback at scale", "year": "2023" }, { "authors": "Weijia Shi; Sewon Min; Michihiro Yasunaga; Minjoon Seo; Rich James; Mike Lewis; Luke Zettlemoyer; Wen-Tau Yih", "journal": "", "ref_id": "b28", "title": "Replug: Retrievalaugmented black-box language models", "year": "2023" }, { "authors": "Devendra Singh; Siva Reddy; Will Hamilton; Chris Dyer; Dani Yogatama", "journal": "Advances in Neural Information Processing 
Systems", "ref_id": "b29", "title": "End-to-end training of multi-document reader and retriever for opendomain question answering", "year": "2021" }, { "authors": "Zhiqing Sun; Xuezhi Wang; Yi Tay; Yiming Yang; Denny Zhou", "journal": "", "ref_id": "b30", "title": "Recitation-augmented language models", "year": "2023" }, { "authors": "Xuezhi Wang; Jason Wei; Dale Schuurmans; Quoc Le; Ed Chi; Denny Zhou", "journal": "", "ref_id": "b31", "title": "Self-consistency improves chain of thought reasoning in language models", "year": "2023" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b32", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Zhilin Yang; Peng Qi; Saizheng Zhang; Yoshua Bengio; William Cohen; Ruslan Salakhutdinov; Christopher D Manning", "journal": "", "ref_id": "b33", "title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering", "year": "2018" }, { "authors": "Donghan Yu; Chenguang Zhu; Yuwei Fang; Wenhao Yu; Shuohang Wang; Yichong Xu; Xiang Ren; Yiming Yang; Michael Zeng", "journal": "", "ref_id": "b34", "title": "a. Kg-fid: Infusing knowledge graph in fusion-in-decoder for opendomain question answering", "year": "2022" }, { "authors": "Wenhao Yu; Dan Iter; Shuohang Wang; Yichong Xu; Mingxuan Ju; Soumya Sanyal; Chenguang Zhu; Michael Zeng; Meng Jiang", "journal": "", "ref_id": "b35", "title": "Generate rather than retrieve: Large language models are strong context generators", "year": "2023" }, { "authors": "Wenhao Yu; Chenguang Zhu; Zaitang Li; Zhiting Hu; Qingyun Wang; Ji Heng; Meng Jiang", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b36", "title": "A survey of knowledge-enhanced text generation", "year": "2022" }, { "authors": "Zhuosheng Zhang; Aston Zhang; Mu Li; Alex Smola", "journal": "", "ref_id": "b37", "title": "Automatic chain of thought prompting in large language models", "year": "2023" }, { "authors": "Fengbin Zhu; Wenqiang Lei; Chao Wang; Jianming Zheng; Soujanya Poria; Tat-Seng Chua", "journal": "", "ref_id": "b38", "title": "Retrieving and reading: A comprehensive survey on open-domain question answering", "year": "2021" } ]
[]
10.18653/v1/2021.wat-1.22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b12", "b1", "b11", "b5", "b7", "b15", "b17", "b9", "b3", "b4" ], "table_ref": [ "tab_0" ], "text": "We release Sāmayik, an English-Sanskrit parallel dataset, that covers the contemporary usage of Sanskrit, written in prose. Sāmayik is a Sanskrit term that translates to the \"sayings of the contemporary world\". Sāmayik consists of 52,961 parallel sentence pairs, collected from five different sources. These are spoken content that covers contemporary world affairs, interpretation of literary works, pedagogical content, etc.\n'Itihāsa' currently forms the largest parallel machine translation corpus in English-Sanskrit (Aralikatte et al., 2021). This Sanskrit-English dataset comprises 93,000 pairs of verses in Sanskrit along with their corresponding English translations. These Sanskrit verses belong to Rāmāyaṇa and Mahābhārata written in the poetry form and belong to the classical era literature.\nSanskrit is estimated to have around 30 million extant manuscripts fit for digitization. Moreover, it has more than two million active speakers (McCartney, 2019;Chandramouli, 2011). Despite its rich heritage, Sanskrit remains classified as a low-resource language with no more than one million monolingual sentences available in the digitized form (Hellwig, 2010(Hellwig, -2021;;Maheshwari et al., 2022). The available digitized corpora for Sanskrit are vastly diverse not just in terms of the domains and chronology they span, but also in terms of the usage, stylistic features, the underlying syntax (Hellwig, 2009), and even the typological charac-*Outcome of research while pursuing PhD at IIT Bombay. Correspondence: {ayusham, ganesh}@cse.iitb.ac.in teristics such as word order (Krishna et al., 2021;Tubb and Boose, 2007).\nSentence constructions in Sanskrit follow relatively free word order. Here, sentences written in verse form have to adhere to prescribed meter patterns as per prosody. Hence, word order need not adhere to a fixed word-order pattern. However, sentences written in prose tend to form Subject-Object-Verb (SOV) ordering. Ithihāsa and other monolingual available corpora predominantly represent content written in poetry form. Content written in prose is generally underrepresented in available digitized corpora in Sanskrit, especially those written in the contemporary era. To bridge this gap we release Sāmayik.\nSāmayik is a parallel Sanskrit-English dataset encompassing multiple contemporary corpora, providing a comprehensive representation of the contemporary usage of Sanskrit. In Section 2, we provide a detailed description of each source included in our dataset, and Table 1 presents an overview of the statistics for each source. The latest corpus in our collection contains content as recent as 2022, from an ongoing podcast series 'Mann Ki Baat'. The oldest corpus in our collection is the English-Sanskrit Bible, where the Sanskrit translation was performed in 1851 and it forms less than 14% of the overall dataset. The Sanskrit component in the rest of the corpora is composed either in the latter half of the twentieth century or in the current century.\nIn addition to our dataset, we release three benchmarks by adapting pre-trained models for neural machine translation in English-Sanskrit and vice-versa. Here, we adapt four pre-trained multi-lingual seq2seq models for the task, namely ByT5 (Xue et al., 2022), mBART (Liu et al., 2020), In-dicBART (Dabre et al., 2022), and Indictrans (Gala et al., 2023). 
Except Indictrans, none of the models were exposed to Sanskrit during their pretraining stage where Indictrans is a multi-lingual translation model including English and Sanskrit." }, { "figure_ref": [], "heading": "Sāmayik", "publication_ref": [ "b2" ], "table_ref": [], "text": "Sāmayik is an English-Sanskrit machine translation dataset, consisting of 52,941 sentences from five different corpora. The aim of the dataset is to include translation pairs containing Sanskrit prose written in the modern era. We hired several professional English and Sanskrit linguistic experts for the purpose of translation and alignment for the development of the dataset. The educational qualifications of the experts range from a Master's degree to a Ph.D. The experience of the experts ranged from 3-20 years, with the more experienced ones assigned the job of translation while junior members were assigned the job of sentence alignment. The experts were paid as per the norms laid out by the norms set by the Government of India. Below, we give a brief description of each of the datasets involved and the steps involved in processing these sentences. 1. Bible -The New Testament: We release the New Testament of the Bible aligned with its corresponding English version. We use the Sanskrit version released by Calcutta Baptist Missionaries, originally published in 18511 . The New Testament contains 7,838 sentences from 260 chapters. Each verse is generally indexed by the book name, and chapter name followed by the verse number. For the English version of the Bible, we rely on Christodouloupoulos and Steedman (2015) where the English sentences also follow the same indexing form. Given the one-to-one correspondences at the sentence level for both English and Sanskrit sentences, the mapping was straightforward. We finally obtained a total of 7,838 parallel sentences. Further, three fluent speakers of both English and Sanskrit have verified the alignments for of 100 sentences, randomly sampled from the corpus." }, { "figure_ref": [], "heading": "Mann ki Baat (MKB)", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "2 -MKB is an ongoing monthly radio podcast hosted by the Prime Minister of India, which resumed its broadcast in 2014. Each episode is an address to the nation discussing social, cultural, and contemporary topics including conversation with individuals. Sanskrit translations by experts, albeit unofficial, are avail-able in public domain3 . We use these expert translations and manually align Sanskrit sentences with official English transcripts from 25 episodes. Additionally, these Sanskrit translations are further verified by 3 in-house language experts. The MKB English-Sanskrit corpus Sāmayik release consists of 4,047 sentences with a total of 47,843 words.\n3. Gītā Sopānaṁ-Gītā Sopānaṁis a book published by 'Samskrita Bharati' in 2009 for teaching Sanskrit to beginners. It consists of a total of 6130 sentences. As observable in Table 1, the count of unique words is just 6465 for these 6130 sentences. Gītā Sopānaṁis a self-learning book targeted at beginners and enables them to learn Sanskrit through stories. It often contains simple and small sentences with a focus on learning the grammar instead of expanding vocabulary. We perform in-house translation of the work to English sentences with the help of 4 language experts wellversed in both English and Sanskrit. Given the expert-level annotations, we only gather one translation per Sanskrit sentence. 
In summary, each expert annotated around 1500 sentences.\n4. Spoken Tutorials4 -Spoken Tutorial project is a large corpus of video tutorials for training students to use open-source software. These tutorials are created by domain experts and translated into several languages by expert translators. We scraped5 videos and transcripts from their website for which both English and the corresponding Sanskrit translations are available. We extracted transcripts of 254 videos where each video is an of average 10 minutes in duration. The transcripts are manually created and, therefore, do not require additional sentence segmentation. The alignment between the English and the corresponding sentences for each transcript was performed manually with the help of 5 linguistic experts. We ask experts to align English and Sanskrit sentences from the transcripts and merge sentences if one-toone correspondence is not present. Each expert aligned around 5,000 sentences. The final corpus contains 23,835 sentences comprising 237,449 words. " }, { "figure_ref": [], "heading": "NIOS -The", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Systems", "publication_ref": [ "b9", "b8", "b3", "b17", "b18", "b11", "b4" ], "table_ref": [], "text": "1. mBART (Liu et al., 2020): is a multilingual pretrained seq2seq model trained using similar objective as employed in BART (Lewis et al., 2020). We employ mbart-large-50-many-to-manymmt, trained on a large multilingual corpus of 50 languages, for our experiments. The vocabulary size of the pre-trained model is 250K and maximum sequence length of 1024 with 610M parameters.\n2. IndicBART (Dabre et al., 2022) is a multilingual pretrained seq2seq model with 244M parameters trained using the pre-training objective of BART. IndicBART was trained using corpora from Indic languages and English. While different Indic languages use different scripts, these are losslessly converted to Devanagari before tokenization during its pretraining. Hence, we use the Devanagari script for encoding Sanskrit, and Roman script for English.\n3. ByT5 (Xue et al., 2022) is a token free pretrained seq2seq model following the pre-training objective of mT5 (Xue et al., 2021). However, here it is a token-free model that uses a fixed 256-byte value in Unicode as its vocabulary. From prior work (Maheshwari et al., 2022), we observe that the use of the Devanagari script in Unicode to encode content in Sanskrit leads to the best results. We use a base version of ByT5 in our experiments which consists of 582M parameters where UTF-8 bytes are directly fed into the model without any 6 https://www.nios.ac.in/ online-course-material/ indian-knowledge-tradition.aspx text pre-processing. 4. IndicTrans (Gala et al., 2023) is a multi-lingual translation model trained on 22 Indic languages including Sanskrit. The multi-lingual model is trained with the English-Sanskrit bi-text pairs. The NLLB corpora of 3M sentences were filtered to remove noisy sentence pairs. The sentence pairs were filtered using margin-based scoring that finds the closest semantic match between the pairs of source and target sentences. Finally, the model is trained with a dataset size of 244,367 English-Sanskrit bi-lingual sentence pairs. The model is trained with the transformer architecture comprising of 18 encoder and 18 decoder layers with the feedforward dimension of 8192, and 16 attention heads. 
The model uses sub-word tokenization with the maximum vocab size of 32K for English-Sanskrit and 128K for Sanskrit-English models and parameter count of 1.1B. We fine-tune the English-Sanskrit and Sanskrit-Eng model with Sāmayik corpus." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b13", "b14", "b7", "b16" ], "table_ref": [ "tab_1" ], "text": "Metrics: We evaluate the performance of the models on both BLEU (Papineni et al., 2002) and ChrF (Popović, 2015). BLEU is a word-level ngram precision-based metric whereas ChrF is a character-level n-gram F-score. Here, given that Sanskrit is a morphologically rich language with more than 1,400 possible inflected forms (Krishna et al., 2021), we believe ChrF can be indicative of capturing morpho-syntactic aspects. Data: Due to the relatively low availability of the data, we consider 90% of sentence pairs from the four corpora NIOS, Spoken Tutorial (ST), Gita Sopanam (GS), and Bible as our training set and the rest as our in-domain evaluation set. The evaluation set is equally split into development and test set. To evaluate the performance of our model on a completely different domain test set, we reserve Mann Ki Baat(MKB) as an out-of-domain test set, implying that MKB was not included in the training data. Implementation Details: All models are finetuned from their pre-trained checkpoints using HuggingFace Transformers (Wolf et al., 2020) 2). Performance reported on the Google Translate, NLLB and Indictrans (Vanilla) (below double horizontal line) refers to the evaluation with pre-trained models.\nat 512 token lengths and set the batch size to 128. We use the standard cross entropy loss with label smoothing of 0.1 and AdamW optimizer (Loshchilov and Hutter). All model are trained for a maximum of 30 epochs with a batch size of 16, learning rate of 1e-3, label smoothing factor of 0.1 and weight decay of 1e-4. For IndicTrans, the learning rate is set to 1e-4, dropout is 0.2 and maximum tokens per batch of 1024 and patience of early stopping was set to 5." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_1", "tab_2" ], "text": "Table 2 shows the performance of all four systems on the in-domain test set. These systems are finetuned on the in-domain training data. As it can be observed from the table, different models perform the best depending on the direction of the translation. Here, mBART reports the best results for English-Sanskrit (En-Sa) translation, whereas Indictrans performs the best for Sanskrit-English (Sa-En) translation. Despite being pre-trained on significant amount of English-Sanskrit parallel corpus, Indictrans reports lower scores for En-Sa direction. However, model reports better scores on Sa-En direction. We hypothesise this can be attributed to the high morphological characteristics of the Sanskrit language which prevents fair evaluation using existing metrics. We perform a zero-shot evaluation using MKB, on our out-of-domain test data, not just on the four systems fine-tuned on Sāmayik, but also on three publicly available systems, namely Google translate (GT) service7 , NLLB-200 1.3B variant8 , and IndicTrans with no fine-tuning. As shown in Table 3, GT outperforms all other systems by a considerable margin in the Sa-En direction, though a significant drop is observed in the En-Sa direction. 
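As an illustration of the evaluation protocol described in the experimental setup above, the following is a minimal sketch of corpus-level BLEU and ChrF scoring. The paper does not state which toolkit was used, so the choice of sacrebleu and the file names in the usage comment are assumptions.

```python
import sacrebleu

def evaluate(hypotheses, references):
    # Corpus-level BLEU (word n-gram precision) and ChrF (character n-gram
    # F-score), the two metrics reported in the experiments; both expect a
    # list of system outputs and a list of reference streams.
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    chrf = sacrebleu.corpus_chrf(hypotheses, [references])
    return bleu.score, chrf.score

# Hypothetical usage with one reference per system output:
# hyps = [line.strip() for line in open("mkb.test.hyp")]
# refs = [line.strip() for line in open("mkb.test.ref")]
# print(evaluate(hyps, refs))
```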
However, for the En-Sa direction, our fine-tuned version of mBART performs the best in terms of BLEU and our fine-tuned version of Indictrans in terms of ChrF.\nThe considerable drop in performance on the indomain dataset may be attributed to the vocabulary diversity generally observed in Sanskrit corpora. Sanskrit corpora tend to have a long tail of rare words within the corpus. Further, owing to high lexical productivity both with compounding and derivation, these corpora tend to have a diverse vocabulary for one another. Both the long tail of rare words and rich compounding are challenging for models, similar to the current NMT models, that rely on distributional semantics.\n'Contemporyness' forms a key factor for the corpora in Sāmayik. Table 4 shows the performance on MKB in En-Sa translation on the mBART and IndicBART, fine-tuned using an alternate publicly available dataset 'Itihasa'. Here, Itihasa has nearly double the number of training instances than Sāmayik. In spite of it, models fine-tuned on Sā-Bible -The New Testament 1. The book of the generation of Jesus Christ, the son of David, the son of Abraham." }, { "figure_ref": [], "heading": "1.", "publication_ref": [], "table_ref": [], "text": "इब्राह mayik outperform that on Itihasa for MKB. Further, model trained with the combination of Itihasa and Sāmayik report marginally better scores on En-Sa than the models trained only using SāmayikOṅ the contrary in Sa-En, Sāmayik reports better results than the combination of two datasets. We find that systems trained using our dataset has significantly higher BLEU and ChrF scores reinforcing the need for a corpus that follows contemporary content in Sanskrit." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We release a novel dataset, Sāmayik comprising of around 53,000 sentences for English-Sanskrit translation. Unlike existing datasets, Sāmayik emphasizes on contemporary prose writing and is curated from five diverse domains including instruction material, radio-podcast, etc. We also release a set of strong baselines built on four multilingual pre-trained models. We empirically demonstrate that models trained using our dataset achieve better performance than models trained on existing datasets, as well as pre-trained models incorporated with a Sanskrit corpus." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank following translators and reviewers towards development of the corpus. 1. Dr. Dinesh Joshi 2. Dr. Vasudev Aital 3. Shruti Sharma 4. Ashwini PN 5. Atmarama Bhat K Ayush Maheshwari was supported by a fellowship from the Ekal Foundation during his Ph.D. at IIT Bombay. Ashim Gupta is supported by the Bloomberg Data Science Ph.D. Fellowship. This work is a result of the funding by the IKS Division of the Ministry of Education (MoE), Government of India to the IKS Projects (AICTE/IKS/RFP1/2021-22/05). Ganesh Ramakrishnan is also grateful to the National Language Translation Mission (NLTM): Bhashini project by Government of India and IIT Bombay Institute Chair Professorship for their support and sponsorship." } ]
We release Sāmayik, a dataset of around 53,000 parallel English-Sanskrit sentences written in contemporary prose. Sanskrit is a classical language that is still in use and has a rich documented heritage. However, due to the limited availability of digitized content, it remains a low-resource language. Existing Sanskrit corpora, whether monolingual or bilingual, have predominantly focused on poetry and offer limited coverage of contemporary written materials. Sāmayik is curated from a diverse range of domains, including language instruction material, textual teaching pedagogy, and online tutorials, among others. It stands out as a unique resource that specifically caters to the contemporary usage of Sanskrit, with a primary emphasis on prose writing. Translation models trained on our dataset demonstrate statistically significant improvements when translating out-of-domain contemporary corpora, outperforming models trained on older classical-era poetry datasets. Finally, we also release benchmark models by adapting four multilingual pre-trained models for translation between English and Sanskrit: three of them have not previously been exposed to Sanskrit, while one is a multilingual pre-trained translation model that includes English and Sanskrit. The dataset and source code are available at https://github.com/ayushbits/saamayik.
Sāmayik: A Benchmark and Dataset for English-Sanskrit Translation
[ { "figure_caption": "Number of sentences, words, unique words and average word length for different corpus in Sāmayik.available in both English and Sanskrit 6 . Each course consists of multiple topics accessible in the form of PDF files. We use PDF parsers to convert PDF content in text format, without loss of information. We hired a team of five English and Sanskrit linguistic experts who aligned the sentences from the corresponding text files. NIOS contains 11,356 parallel sentences with 105,178 total words and 30,966 unique words.", "figure_data": "National Institute of Open School-ing (NIOS) is a national-level board of educationin India established in 1989. NIOS prints self-instructional study materials for various subjectsup to the senior secondary education level. Weobtained the study materials from the Indian knowl-edge tradition courses offered by NIOS, which are", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results for different models on the indomain test set for En-Sa and Sa-En direction.", "figure_data": ".", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results for out-of-domain test set, namely, Mann Ki Baat (MKB) for En-Sa and Sa-En directions. We omit mBART due to poor performance on in-domain test split (refer Table", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "ByT5 reports the", "figure_data": "ModelIndicBARTmBARTDatasetBLEU ChrF BLEU ChrFItihasa4.6164.316.7Sāmayik6.323.27.322.3Itihasa + Sāmayik6.922.46.821.6Table 4: Comparison between existing dataset Iti-hasa, Sāmayik and Itihasa + Sāmayik on MKB out-of-domain testset for English-Sanskrit translationdirection. The score difference between Itihasa(1st row) and Sāmayik(2nd row) are statisticallysignificant at p<0.05.second best results for En-Sa, though on an aver-age it requires more than five times the sequencelength than that of the other models. Here, ByT5reports a sequence length of 156.99, as against30 for the model with the next longest sequencelength, mBART. The disparity in sequence lengtharises out of the tokenizers used in ByT5 which isat a Unicode byte level against the subword tok-enizers used in the other models.", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Samples from different subsets of the Sāmayik.", "figure_data": "यीशुख्रीमः स ानो दायू द् त पू र् पु रुषवं शश्रे णी ।स ानो", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" } ]
Ayush Maheshwari; Ashim Gupta; Amrith Krishna; Atul Kumar Singh; Ganesh Ramakrishnan; G Anil Kumar; Jitin Singla
[ { "authors": "Rahul Aralikatte; Miryam De Lhoneux; Anoop Kunchukuttan; Anders Søgaard", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Itihasa: A large-scale corpus for Sanskrit to English translation", "year": "2021" }, { "authors": " Chandramouli", "journal": "Government of India", "ref_id": "b1", "title": "Census of india 2011. Provisional Population Totals", "year": "2011" }, { "authors": "Christos Christodouloupoulos; Mark Steedman", "journal": "Language resources and evaluation", "ref_id": "b2", "title": "A massively parallel corpus: the bible in 100 languages", "year": "2015" }, { "authors": "Raj Dabre; Himani Shrotriya; Anoop Kunchukuttan; Ratish Puduppully; Mitesh Khapra; Pratyush Kumar", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "IndicBART: A pretrained model for indic natural language generation", "year": "2022" }, { "authors": "Jay Gala; Chitale; A K Raghavan; Sumanth Doddapaneni; Varun Gumma; Aswanth Kumar; Janki Nawale; Anupama Sujatha; Ratish Puduppully; Vivek Raghavan", "journal": "", "ref_id": "b4", "title": "Indictrans2: Towards high-quality and accessible machine translation models for all 22 scheduled indian languages", "year": "2023" }, { "authors": "Oliver Hellwig", "journal": "Springer", "ref_id": "b5", "title": "Extracting dependency trees from sanskrit texts", "year": "2009-01-15" }, { "authors": "Oliver Hellwig", "journal": "", "ref_id": "b6", "title": "Dcs -the digital corpus of sanskrit", "year": "2010" }, { "authors": "Amrith Krishna; Bishal Santra; Ashim Gupta; Pavankumar Satuluri; Pawan Goyal", "journal": "Computational Linguistics", "ref_id": "b7", "title": "A Graph-Based Framework for Structured Prediction Tasks in Sanskrit", "year": "2021" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Yinhan Liu; Jiatao Gu; Naman Goyal; Xian Li; Sergey Edunov; Marjan Ghazvininejad; Mike Lewis; Luke Zettlemoyer", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b9", "title": "Multilingual denoising pre-training for neural machine translation", "year": "2020" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b10", "title": "Decoupled weight decay regularization", "year": "" }, { "authors": "Ayush Maheshwari; Nikhil Singh; Amrith Krishna; Ganesh Ramakrishnan", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "A benchmark and dataset for post-OCR text correction in Sanskrit", "year": "2022" }, { "authors": "Patrick Mccartney", "journal": "", "ref_id": "b12", "title": "Sustainably-speaking yoga: Comparing sanskrit in the 2001 and 2011 indian censuses", "year": "2019" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b13", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Maja Popović", "journal": "", "ref_id": "b14", "title": "chrf: character n-gram fscore for automatic mt evaluation", "year": "2015" }, { "authors": "A Gary; Emery R Tubb; Boose", "journal": "American Institute of Buddhist Studies", "ref_id": "b15", "title": "Scholastic Sanskrit", "year": "2007" }, { "authors": 
"Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz", "journal": "", "ref_id": "b16", "title": "Transformers: Stateof-the-art natural language processing", "year": "2020" }, { "authors": "Linting Xue; Aditya Barua; Noah Constant; Rami Al-Rfou; Sharan Narang; Mihir Kale; Adam Roberts; Colin Raffel", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b17", "title": "ByT5: Towards a token-free future with pre-trained byteto-byte models", "year": "2022" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "", "ref_id": "b18", "title": "mt5: A massively multilingual pre-trained text-to-text transformer", "year": "2021" } ]
[]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12" ], "table_ref": [], "text": "The topic structure explains the topical relationship between two consecutive text units (e.g., paragraphs in a discourse, turns in a dialogue). As one of the essential dialogue analysis tasks, dialogue topic shift detection refers to detecting whether a topic shift has occurred in the response of a dialogue, which can help dialogue systems to change topics and actively guide the dialogue. Since this task can help variant models to understand dialogue topics, it is of great benefit for many downstream tasks, such as response generation [1] and reading comprehension [2,3]. It also can help those real-time applications to generate topics that perform well in dialogue scenarios due to its shift response [4,5,6]. The goal of dialogue topic shift detection is to identify the topic of dialogue by taking into account the current response information in real time. This task is similar to dialogue topic segmentation [7]. However, dialogue topic shift detection is more challenging than dialogue topic segmentation. In dialogue topic segmentation, all utterances are visible to each other, allowing the model to access the whole content of the dialogue after the responses have been given. However, dialogue topic shift detection is a real-time task and cannot access future utterances. For example in Fig. 1, there is a dialogue with two topics (e.g., favorite animal and weakness). The task of topic segmentation is to split this dialogue into two blocks, which can access all six utterances in this dialogue. The task of topic shift detection is to predict whether the next utterance will change the topic, based on all existing utterances. If we want to predict whether the topic is shifted during 𝑢𝑢 1 and 𝑢𝑢 2 , we can only access two utterances, i.e., 𝑢𝑢 1 and 𝑢𝑢 2 .\nFig. 1. An example of the topic structure in a dialogue of six utterances (i.e., 𝑢𝑢 1 -𝑢𝑢 6 ) where each block refers to a topic.\nThe majority of prior research on topic segmentation has focused on enhancing the model and disregarding the comprehensive exploration of information. As a result of insufficient training data, unsupervised techniques remain the prevailing choice for segmenting dialogue topics [8]. On the contrary, dialogue topic shift detection is a relatively new task in the field of dialogue topics. Although those topic segmentation models can be adapted in topic shift detection, the absence of future utterances makes it harder to distinguish the topic shift between utterances.\nOnly a few studies focused on dialogue topic shift detection [9][10][11]. Current studies on dialogue topics shift detection only focus on extracting surface semantic information using pre-trained models, without delving into deeper topic information. These models struggle to comprehend natural dialogues that involve randomness. To address this issue, we employ a prompt-based approach to extract dialogue information at multiple-granularity.\nMoreover, classification and generative models can offer complementary benefits. While classification models tend to perform well in large categories due to their limited search space, generative models can incorporate prior knowledge to better understand small categories, leading to more natural language expressions with clearer explanations. 
Inspired by this trend, we combine classification and generation to enhance our final classification, while the generation model comprehends the conversation topics at three different levels of granularity. Specifically, we first match the original topic labels to the relevant target sentences. Then, we perform a thorough keyword extraction for each topic block and use these keywords to identify the topics within the block. Lastly, we apply semantic role labeling (SRL) on the dialogue sentences to create the target sentences. We annotated a Chinese Natural Topic Dialogue corpus CNTD in previous work [12] based on NaturalConv [13]. Experimental results on our annotated corpus CNTD dataset and the publicly available English TIAGE dataset show that the proposed model outperforms the baselines." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b8", "b13", "b10", "b14", "b15", "b10", "b11", "b16", "b17", "b18", "b19", "b8" ], "table_ref": [], "text": "We first briefly introduce the relevant dialogue topic corpus, then summarize the existing methods for dialogue topic detection, and finally introduce the related research on Prompt. For English, Xie et al. [9] annotated the TIAGE corpus consisting of 500 dialogues with 7861 turns based on PersonaChat [14]. Xu et al. [11] built a dataset including 711 dialogues by joining dialogues from existing multi-turn dialogue datasets: Mul-tiWOZ Corpus [15], and Stanford Dialog Dataset [16]. Both corpora are either small or limited to a particular domain, and neither applies to the study of the natural dialogue domain. For Chinese, Xu et al. [11] annotated a dataset including 505 phone records of customer service on banking consultation. However, this corpus is likewise restricted to a few specialized domains while natural dialogues are more complicated. Therefore, we annotated a Chinese Natural Topic Dialogue corpus CNTD in previous work [12], which contains 1308 natural conversations from six different domains. And we developed a benchmark on the response-unkown dialogue topic detection task. Current studies on dialogue topics shift detection only focus on extracting surface semantic information using pre-trained models, without delving into deeper topic information.\nThe field of detecting topic shifts in dialogue is still in its infancy and has received limited attention thus far. As we mentioned above, dialogue topic shift detection is similar to topic segmentation, we first discuss the related work in this area. Historically, due to the lack of training data, early studies in dialogue topic segmentation utilized unsupervised methods relying on word co-occurrence statistics [17] or sentence topic distributions [18] to determine sentence similarity between conversational turns and identify changes in thematic or semantic content. However, with the advent of large-scale corpora such as Wikipedia, supervised methods for monologic topic segmentation have gained popularity, especially those using neural-based approaches [19,20]. These supervised techniques have become the favored choice among researchers due to their improved performance and efficiency.\nDialogue topic shift detection is strongly different from dialogue topic segmentation. For the dialogue topic shift detection task, Xie et al. [9] are the first to define this task and predicted the topic shift based on the T5 model. In general, the dialogue topic shift detection task is still a challenge, as it can only rely on the context information of the dialogue. 
In this paper, based on a classification module, we use a generation module to further mine information from conversations, and joint training to facilitate real-time topic shift detection. " }, { "figure_ref": [ "fig_0" ], "heading": "Model", "publication_ref": [], "table_ref": [], "text": "Our framework is shown in Fig. 2. To enhance the performance of topic detection in the classification module, we introduce a method that mines dialogue information from various granularities through the generation module. Our model is comprised of two sub-modules: the classification module and the generation module. Our classification module includes a classification layer at the end of the encoding layer. Moreover, we consider the task of dialogue topic shift detection as a text-generation task to achieve a more thorough understanding of the topics. Unlike previous studies, we employ a generative model from three granularities to produce the target sentences. Finally, the generated results are mapped to the relevant relations to determine the relation types." }, { "figure_ref": [], "heading": "Classification Module", "publication_ref": [], "table_ref": [], "text": "Let 𝐶𝐶 = {𝑑𝑑𝑢𝑢 1 , … , 𝑑𝑑𝑢𝑢 𝑖𝑖 , … , 𝑑𝑑𝑢𝑢 𝑛𝑛-1 } represents a set of existing utterances, where 𝑛𝑛 -1 is the number of the existing utterances, and 𝑑𝑑𝑢𝑢 𝑖𝑖 is the i-th utternace. Let 𝑅𝑅 = {𝑑𝑑𝑢𝑢 𝑛𝑛 } represents a response utterance after 𝐶𝐶. Finally, the set of all known utterances includes 𝐶𝐶 and 𝑅𝑅, which can be denoted as 𝐷𝐷𝐷𝐷 = {𝐶𝐶, 𝑅𝑅}.\nWe need to learn a model 𝑓𝑓: 𝐷𝐷𝐷𝐷 → 𝑌𝑌(𝑅𝑅) to classify the response utterance 𝑅𝑅 (i.e., 𝑑𝑑𝑢𝑢 𝑛𝑛 ) into the predefined categories 𝑌𝑌 = {0,1}, which is the ground-truth label (0 denotes non-shift and 1 denotes shift).\nSimilar to the input of traditional classification models, we first convert 𝐷𝐷𝐷𝐷 into a string 𝐷𝐷 as follows.\n𝐷𝐷 =<\\𝑠𝑠 > 𝑑𝑑𝑢𝑢 1 <\\𝑠𝑠 >. . . <\\𝑠𝑠 > 𝑑𝑑𝑢𝑢 𝑛𝑛 <\\𝑠𝑠 >(1)\nwhere 𝑑𝑑𝑢𝑢 1 = {𝑤𝑤 , 1 1 . . . , 𝑤𝑤 1 𝑗𝑗 } and 𝑑𝑑𝑢𝑢 𝑛𝑛 = {𝑤𝑤 , 𝑛𝑛 1 . . . , 𝑤𝑤 𝑛𝑛 𝑘𝑘 } denote the sequence of tokens of 𝑑𝑑𝑢𝑢 1 and 𝑑𝑑𝑢𝑢 𝑛𝑛 , respectively. Then, we feed 𝐷𝐷 to the encoder stack (i.e., the encoder of T5) to obtain the encoder's hidden state 𝐻𝐻𝐻𝐻𝑑𝑑𝑑𝑑𝐻𝐻𝑛𝑛 𝐸𝐸 as follows.\n𝐻𝐻𝐻𝐻𝑑𝑑𝑑𝑑𝐻𝐻𝑛𝑛 𝐸𝐸 = 𝑇𝑇5 -Encoder(𝐷𝐷) (2)\nSince the state at the <\\𝑠𝑠 > position is not used in T5, we use the Endpoint of the span of the utterance combining the left (𝐻𝐻 𝑖𝑖 ) and right (𝐻𝐻 𝑖𝑖+1 ) hidden states to represent the corresponding 𝑑𝑑𝑢𝑢 𝑖𝑖 , as shown in Eq. 3. Then, we feed the output 𝑉𝑉 𝑠𝑠 𝑖𝑖 of the span representation layer into the classification layer to judge the topic (T) of the last 𝑑𝑑𝑢𝑢 as follows, where we choose BiLSTM as our classification layer.\n𝑉𝑉 𝑠𝑠 𝑖𝑖 = Con(𝐻𝐻 𝑖𝑖 ,H 𝑖𝑖+1 ) (3) 𝑉𝑉 𝑠𝑠 = [𝑉𝑉 𝑠𝑠 1 ,...,V 𝑠𝑠 𝑛𝑛 ](4)\n𝑃𝑃 = 𝐶𝐶𝐶𝐶𝐶𝐶𝑠𝑠𝑠𝑠𝐻𝐻𝑓𝑓𝐻𝐻𝐻𝐻𝐶𝐶(𝑉𝑉 𝑠𝑠 )(5)" }, { "figure_ref": [], "heading": "Generation Module", "publication_ref": [], "table_ref": [], "text": "At the decoding stage, we think that solely relying on label data to build the target sentence doesn't fully utilize the generation model's comprehension abilities. There is no additional relevant information in the current corpus. Hence, we pre-process it following previous work. Furthermore, to uncover more information from dialogues, we break it down into two additional levels based on the information already present in the corpus. The dialogue is divided into topic-level and turn-level, considering the topic distribution and speaker information, respectively. 
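The classification module described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the checkpoint name, the way utterance span endpoints are passed in, and the use of the last BiLSTM state for the response utterance are assumptions on top of what the text specifies (T5 encoder, endpoint concatenation, BiLSTM, binary shift/non-shift prediction).

```python
import torch
import torch.nn as nn
from transformers import MT5EncoderModel

class TopicShiftClassifier(nn.Module):
    """Sketch of the classification module: T5 encoder, endpoint span
    representations per utterance (Eqs. 3-4), BiLSTM and linear head (Eq. 5)."""

    def __init__(self, model_name="google/mt5-base", hidden=768):
        super().__init__()
        self.encoder = MT5EncoderModel.from_pretrained(model_name)
        self.bilstm = nn.LSTM(2 * hidden, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)  # shift vs. non-shift

    def forward(self, input_ids, attention_mask, span_starts, span_ends):
        # span_starts / span_ends: [batch, n_utterances] token positions that
        # bound each utterance in the "<\s> du_1 <\s> ... <\s> du_n <\s>" input.
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        batch_idx = torch.arange(h.size(0)).unsqueeze(-1)
        left = h[batch_idx, span_starts]           # H_i
        right = h[batch_idx, span_ends]            # H_{i+1}
        spans = torch.cat([left, right], dim=-1)   # V_s^i = Con(H_i, H_{i+1})
        out, _ = self.bilstm(spans)                # BiLSTM over [V_s^1, ..., V_s^n]
        return self.head(out[:, -1])               # prediction for the response
```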
Before initiating the model training, we employ various methods to extract relevant information from dialogues. We describe the details of data pre-processing in the following. Based on the pre-processing results, we formulate the target sentences accordingly. Our model is then segmented into three levels of granularity. Firstly, we translate the original topic labels into the target sentences. Secondly, we analyze the topic information between topic blocks and identified keywords for each topic block which are used to form the target sentences. Lastly, we apply semantic role labeling (SRL) on the dialogue sentences to create the target sentences. In this module, the weights of decoders from different granularities are shared. Label-level Prompt First, we apply the prompt learning to the original topic labels in the dialogue topic corpus. Specifically, we design a target sentence based on the traditional prompting methods and take into account the definition of the task and the semantics of the labels. The target sentence is as follows." }, { "figure_ref": [], "heading": "Chinese: 相对上文,当前话语的话题 [MASK]。", "publication_ref": [ "b20" ], "table_ref": [], "text": "English: Relative to the above, the topic of the current discourse has [MASK]. Topic-level Prompt In dialogue, a topic consists of several utterances, which are around the same topic. To enhance the model's performance, we aimed to provide topic information for each topic in addition to building target sentences using label information. However, the corpus does not have such labels. Hence, we utilized keywords extracted for each extracted topic as the topic information for all sentences in the topic's block. Specifically, we merged turns in the same topic block as input, eliminating speaker information. Moreover, the current topic and the previous topic are used to generate topic information, considering the contextual information. Besides, the response utterance is included in the current topic block to utilize the response information.\nWe created templates to construct target sentences and filled them with topic information, using keyBERT [21] for keyword extraction in each dialogue topic block. For the final set of candidate tuples, we apply these rules for filtering as follows.\n• We use the confidence values from keyBERT to choose the top keywords.\n• As the best keywords of consecutive blocks may overlap, we prioritize fulfilling the block with the highest-ranked topic of the turn, and then the next block chooses the next best keyword from the candidate tuple. • For those topic blocks with no candidate tuple, we manually select the keywords for those blocks.\nThe specific form of the target sentence is as follows, where T2 is the current topic block and T1 is the previous topic block." }, { "figure_ref": [], "heading": "Chinese: 前文在谈论[MASK],而当前在谈论[MASK],因此对话话题[MASK]。", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "English: Since the previous text is talking about [MASK] and the current text is talking about [MASK], the topic of conversation has [MASK]. Turn-level Prompt", "publication_ref": [], "table_ref": [], "text": "We think the key information for a topic shift is also hinted at various turns besides that obtained at the topic level. To better detect topic shift scenarios, it would be beneficial for the model to acquire semantic information of utterances, such as semantic roles. However, this information is absent in the dialogue topic corpus. 
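The keyword extraction behind the topic-level prompt above can be illustrated with a short keyBERT sketch. This is an assumed rendering of the described step (merging the turns of a topic block and taking confidence-ranked keyword candidates); the de-duplication across consecutive blocks and the manual fallback are omitted. The sentence-transformers model name is the one reported later in the experimental settings.

```python
from keybert import KeyBERT

# Backbone reported in the experimental settings for Chinese.
kw_model = KeyBERT(model="paraphrase-multilingual-MiniLM-L12-v2")

def block_keyword_candidates(block_utterances, top_n=5):
    # Merge all turns of one topic block (speaker information dropped) and let
    # keyBERT rank candidate keywords by confidence; the filtering rules above
    # then pick one keyword per block from these candidates.
    merged = " ".join(block_utterances)
    return kw_model.extract_keywords(merged, top_n=top_n)  # list of (keyword, score)
```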
Hence, we applied the SRL tool LTP1 for Chinese and AllenNLP2 for English to extract all possible tuples for turns (utterances). Then we use the following three rules to filter out those redundant tuples as follows.\n• Streamlining core semantics. Except for the elements mentioned in Rule 3, we remove the unimportant elements from the extracted candidate tuples. • Reducing semantic overlapping. Since the long nested sentences lead to semantic overlapping, we remove the small tuples that are included in the larger candidate tuples. Finally, considering that the final result is too long to invalidate the target sentence, we limit the length range of the final information to 10 or fewer. • Extracting key information. We tend to select the role A1 (i.e., patient) in SRL as the result in the sentence, followed by the attribute Predicate. And for a few semantically incomplete tuples (i.e., the tuples do not contain A0 and A1 in SRL), we manually determine the final information based on the content of the turn. The specific form of the target sentence is as follows, where S1 and S2 refer to the created sentence of the previous turn and the current turn, respectively." }, { "figure_ref": [], "heading": "Chinese: 前文在谈论[MASK],而当前在谈论[MASK],因此对话话题[MASK]。", "publication_ref": [], "table_ref": [ "tab_0", "tab_0" ], "text": "English: Since the previous text is talking about [MASK] Table 1 shows examples of label-level, topic-level, and turn-level target sentences (i.e., template) in English. These target sentences are constructed to prompt the model to predict the information in the corresponding position of [MASK] and finally determine whether the response (i.e., turn2 in Table 1) is the beginning of a new topic.\nFinally, we use the final constructed target sentence T in the same style as the encoding layer as follows.\n𝑇𝑇 𝑙𝑙 = 𝑊𝑊 𝑇𝑇 <\\𝑠𝑠 > (𝐶𝐶 ∈ {𝐿𝐿𝐶𝐶𝐿𝐿𝐻𝐻𝐶𝐶, 𝑇𝑇𝑇𝑇𝑇𝑇𝐻𝐻𝑇𝑇, 𝑇𝑇𝑢𝑢𝐶𝐶𝑛𝑛})\nwhere 𝑊𝑊 𝑇𝑇 = {𝑤𝑤 𝑇𝑇 1 , … , 𝑤𝑤 𝑇𝑇 𝑛𝑛 } denotes the sequence of tokens of the target sentence. Then we feed them to the decoding layer to obtain the decoder hidden state 𝐻𝐻 𝑙𝑙 as follows.\n𝐻𝐻 𝑙𝑙 = 𝑇𝑇5 -Encoder(𝑇𝑇 𝑙𝑙 )(7)\nFinally, we use a linear layer generator with softmax to produce the predicted target sentence, where the last [MASK] of the target sentence will be predicted as \"SHIFT'' or \"NON-SHIFT''." }, { "figure_ref": [], "heading": "Model Training", "publication_ref": [], "table_ref": [], "text": "We learned the above two modules together. The loss function of the classifier ( 𝐿𝐿 𝐶𝐶𝑙𝑙𝐶𝐶𝑠𝑠𝑠𝑠 ) and the multi-granularity generator( 𝐿𝐿 𝐿𝐿𝐶𝐶𝐿𝐿𝐿𝐿𝑙𝑙 , 𝐿𝐿 𝑇𝑇𝑇𝑇𝑇𝑇𝑖𝑖𝑇𝑇 , 𝐿𝐿 𝑇𝑇𝑇𝑇𝑇𝑇𝑛𝑛 ) is the crossentropy loss, and the total loss (Loss) is the sum of both losses, as follows.\n𝐿𝐿𝑇𝑇𝑠𝑠𝑠𝑠 = 𝐿𝐿 𝐶𝐶𝑙𝑙𝐶𝐶𝑠𝑠𝑠𝑠 + 𝐿𝐿 𝐿𝐿𝐶𝐶𝐿𝐿𝐿𝐿𝑙𝑙 + 𝐿𝐿 𝑇𝑇𝑇𝑇𝑇𝑇𝑖𝑖𝑇𝑇 + 𝐿𝐿 𝑇𝑇𝑇𝑇𝑇𝑇𝑛𝑛(8)" }, { "figure_ref": [], "heading": "EXPERIMENTATION", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets and Experimental Settings", "publication_ref": [ "b21", "b20" ], "table_ref": [], "text": "We evaluate our model on two datasets, the Chinese CNTD and English TIAGE.\nFollowing previous work, we used the same dataset partitioning on English TIAGE and Chinese CNTD. Based on the dataset of CNTD and TIAGE, we extract (context, response) pairs from each dialogue as input and the label of response as a target for the response-known task. In our experiments, every utterance except the first utterance of the dialogue can be considered as a response. 
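A small, hypothetical helper illustrating the (context, response) pair extraction just described, where every utterance after the first acts once as the response and all preceding utterances form its context.

```python
def make_pairs(utterances, shift_labels):
    # utterances: the turns of one dialogue, in order
    # shift_labels[i]: 1 if utterance i starts a new topic, else 0
    pairs = []
    for i in range(1, len(utterances)):  # every utterance but the first is a response
        pairs.append({
            "context": utterances[:i],
            "response": utterances[i],
            "label": shift_labels[i],
        })
    return pairs
```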
As for evaluation in all experiments of this paper, we report Precision (P), Recall (R), and Macro-F1 scores.\nOur experiments are all performed on 3090Ti and use Pytorch and Huggingface as deep learning frameworks, with 2 BiLSTM layers in encoding for both English and Chinese. For Chinese, we use mt5(base) [22] as our T5 model, which is pre-trained on the mC4 corpus, covering 101 languages. It is a T5(base) model with 12 encoder layers and 12 decoder layers. Besides, the model file used by keyBERT [21] is paraphrase-multilingual-MiniLM-L12-v2 for Chinese and paraphrase-MiniLM-L6-v2 for English. For each experiment, we set the batch size to 2 and the number of training epochs to 20. In addition, we used the warm-up strategy as well as the AdamW optimizer and set the decay factor to 0.01." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [ "b8", "b22", "b8", "b23", "b23", "b24" ], "table_ref": [ "tab_1", "tab_2" ], "text": "In the task of dialogue topic shift detection, Xie et al. [9] is the only work that established a benchmark using the T5 model on TIAGE. Due to the similarity of this task to topic segmentation, we also attempted to utilize the hierarchical model along with the pre-trained model as our baseline for topic shift detection. While in the pre-trained model, T5 is considered the SOTA model. Hence, we conduct the following baselines for comparison: 1) RoEBRTa [23], an improvement on BERT; 2) T5 [9], a modification of the Transformer structure for various NLP tasks as Text-to-Text tasks; 3) Hier-BERT [24], a hierarchical structure based on the Transformer model; 4) BERT+BiLSTM [24], a combination of BERT for text encoding and a bi-directional LSTM for deep bi-directional language representation; 5) BERT [25], a bidirectional encoder based on Transformer for text encoding;\nThe results are presented in Table 2, which indicate that the pre-trained models show inconsistent performance during the experiments, with RoBERTa exhibiting the poorest results and T5 having the highest performance with a noteworthy F1 score of 81.1. Nevertheless, Compared to a single pre-trained model, it is evident that both hier-BERT and BERT+BiLSTM, which incorporate a hierarchical structure, attain improved performance, recording F1 scores of 81.7 and 82.4, respectively.\nThe results of the experiments suggest that models incorporating a hierarchical structure provide more consistent results in the task of dialog topic detection. Moreover, our model (Ours) further outperforms the best baseline BERT+BiLSTM significantly (p < 0.01), with a 3.0 improvement in F1-score. This result verifies the effectiveness of our proposed model. In addition, we also evaluate our model and the baselines in English TIAGE as shown in Table 3. Compared with BERT, both the hierarchical structure models Hier-BERT and BERT+BiLSTM can obtain better performance. However, different from the results in Chinese, T5 is better than the other three baselines in English. Our proposed model outperforms the best baseline T5 significantly with a 2.3 improvement in F1-score. This result further verifies the effectiveness of our proposed model." }, { "figure_ref": [], "heading": "Ablation Study on Classification and Generation", "publication_ref": [], "table_ref": [ "tab_3", "tab_3", "tab_5" ], "text": "We statistically analyzed the performance of the classification and generation modules in our proposed model. 
The results are shown in Table 4, where cla and gen indicate the classification module and the generation module, respectively. As shown in Table 4, it is surprising that cla and gen achieve the same precision, recall, and F1 score. This indicates that the generation model can achieve equivalent performance to the classification model. Our proposed model combining classification and generation cla + gen) is better than cla and gen. This result shows the combination of classification and generation is also an effective way for dialogue topic shift detection and these two models can interact and promote each other. The results are shown in Table 5.In the case of the single-level prompt, all the results of label-level, topic-level, and turn-level prompts are better than the basic T5, especially the topic level. This indicates that all three-level prompts are effective for dialogue topic shift detection. Moreover, the performance of the topic level reaches 83.9 in the F1 value and gains the highest improvement (+1.7). It demonstrates that the key information from the topic block has more effective topic information to enhance the model to distinguish different topic shift situations. In addition, it can be noted that both the combination of the label-level and Topiclevel prompt (Label + Topic) and the combination of the label-level and Turn-level prompt (Label + Turn) will harm the performance, in comparison with the singlelevel prompt. This indicates that the information of Label and Topic/Turn is partly crossed and even has a negative impact. It may also be caused by the different forms of target sentences at different granularities. In the case of only two granularities, the different forms of target sentences interact with each other leading to a degradation of performance. In the case of three granularities, the model is dominated by the second form of target sentences, so the performance can be improved. On the contrary, the combination of the Topic-level and Turn-level prompt (Topic + Turn) is better than the single-level prompts Topic and Turn. This indicates that these two prompts can promote each other. Moreover, if we combine all three prompts (Label + Topic + Turn), it can improve the F1 score in comparison with the above combinations." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce a prompt-based model with multi-granularity to detect the topic shift in dialogues, which consists of a classification module and a generation module based on T5. Experimental results on our annotated Chinese dataset CNTD and the publicly available English TIAGE dataset show that the proposed model outperforms the baselines. Further experiments show that the information extracted at different levels of granularity effectively helps the model comprehend the conversation topics. However, when analyzing and observing the information we extracted at different granularities, it is clear that this key information existence of errors and noise. Our future work will focus on improving the reliability of dialogue information mining, and also explore the finer granularity of topic shift scenarios." }, { "figure_ref": [], "heading": "6", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors would like to thank the three anonymous reviewers for their comments on this paper. 
This research was supported by the National Natural Science Foundation of China (Nos. 62276177, and 61836007), and Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD)." } ]
The goal of dialogue topic shift detection is to identify whether the current topic in a conversation has changed or needs to change. Previous work focused on detecting topic shifts with pre-trained models that encode the utterances, failing to delve into the various levels of topic granularity in the dialogue or to understand the dialogue content. To address these issues, we take a prompt-based approach to fully extract topic information from dialogues at multiple granularities, i.e., label, turn, and topic. Experimental results on our annotated Chinese Natural Topic Dialogue dataset CNTD and the publicly available English TIAGE dataset show that the proposed model outperforms the baselines. Further experiments show that the information extracted at different levels of granularity effectively helps the model comprehend the conversation topics.
Multi-Granularity Prompts for Topic Shift Detection in Dialogue
[ { "figure_caption": "Fig. 2 .2Fig. 2. Model architecture, which contains a classification module (left) and a generation module (right). We add the classification layer after the encoder to form the classification module. And the original decoder is used as the generation module to mine the conversation information by multiple-granularity, where 𝑇𝑇 denotes the target sentence and 𝑃𝑃 denotes the classification result.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Examples of label-level, topic-level, and turn-level target sentences, where T1 and T2 are the key information of the previous topic block Block1 and the current block Block2 respectively, S1 and S2 are the key information of Sentence1 and Sentence2 respectively, and Label represents the topic shift information. For different positions of the [MASK], we comment the corresponding hints in parentheses.", "figure_data": "and the current text is talk-", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results of the baselines and ours on CNTD (p < 0.01).", "figure_data": "ModelPRF1BERT82.979.280.8RoBERTa84.475.478.6T583.079.781.1BERT+BiLSTM82.882.082.4Hier-BERT85.679.081.7Ours85.783.884.7", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results of the baselines and ours on TIAGE (p < 0.01).", "figure_data": "ModelPRF1BERT68.565.466.6T576.572.273.9BERT+BiLSTM75.870.872.7Hier-BERT73.869.671.2Ours73.877.276.2", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results of the classification and generation on CNTD.", "figure_data": "ModelPRF1gen83.881.182.3cla83.881.182.3gen+cls(Ours)85.783.884.74.4", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Ablation experiments at different levels of granularity on CNTD.", "figure_data": "ModelPRF1T584.580.582.2+Label82.981.982.4+Topic85.482.683.9+Turn82.682.982.7+Label+Topic84.582.383.4+Label+Turn83.381.782.5+Topic+Turn86.183.184.4+Label+Topic+Turn85.783.884.7", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" } ]
Jiangyi Lin; Yaxin Fan; Xiaomin Chu; Peifeng Li; Qiaoming Zhu
[ { "authors": "Shuyang Dai; Guoyin Wang; Sunghyun Park; Sungjin Lee", "journal": "", "ref_id": "b0", "title": "Dialogue response generation via contrastive latent representation learning", "year": "2021" }, { "authors": "Jiaqi Li; Ming Liu; Zihao Zheng; Heng Zhang; Bing Qin; Min-Yen Kan; Ting Liu", "journal": "IEEE", "ref_id": "b1", "title": "Dadgraph: A discourse-aware dialogue graph neural network for multiparty dialogue machine reading comprehension", "year": "2021" }, { "authors": "Yiyang Li; Hai Zhao", "journal": "", "ref_id": "b2", "title": "Self-and pseudo-self-supervised prediction of speaker and keyutterance for multi-party dialogue reading comprehension", "year": "2021" }, { "authors": "Asma Ghandeharioun; Judy Hanwen Shen; Natasha Jaques; Craig Ferguson; Noah Jones; Agata Lapedriza; Rosalind Picard", "journal": "", "ref_id": "b3", "title": "Approximating interactive human evaluation with self-play for open-domain dialog systems", "year": "2019" }, { "authors": "Arash Einolghozati; Sonal Gupta; Mrinal Mohit; Rushin Shah", "journal": "", "ref_id": "b4", "title": "Improving robustness of task oriented dialog systems", "year": "2019" }, { "authors": "Bing Liu; Gokhan Tür; Dilek Hakkani-Tür; Pararth Shah; Larry Heck", "journal": "", "ref_id": "b5", "title": "Dialogue learning with human teaching and feedback in end-to-end trainable task-oriented dialogue systems", "year": "2018" }, { "authors": "Linzi Xing; Giuseppe Carenini", "journal": "", "ref_id": "b6", "title": "Improving unsupervised dialogue topic segmentation with utterance-pair coherence scoring", "year": "2021" }, { "authors": "A Marti; Hearst", "journal": "Computational linguistics", "ref_id": "b7", "title": "Text tiling: Segmenting text into multi-paragraph subtopic passages", "year": "1997" }, { "authors": "Huiyuan Xie; Zhenghao Liu; Chenyan Xiong; Zhiyuan Liu; Ann Copestake", "journal": "", "ref_id": "b8", "title": "Tiage: A benchmark for topic-shift aware dialog modeling", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. 
Res", "ref_id": "b9", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Yi Xu; Hai Zhao; Zhuosheng Zhang", "journal": "", "ref_id": "b10", "title": "Topic-aware multi-turn dialogue modeling", "year": "2021" }, { "authors": "Jiangyi Lin; Yaxin Fan; Feng Jiang; Xiaomin Chu; Peifeng Li", "journal": "", "ref_id": "b11", "title": "Topic shift detection in chinese dialogues: Corpus and benchmark", "year": "2023" }, { "authors": "Xiaoyang Wang; Chen Li; Jianqiao Zhao; Dong Yu", "journal": "", "ref_id": "b12", "title": "Naturalconv: A chinese dialogue dataset towards multi-turn topic-driven conversation", "year": "2021" }, { "authors": "Saizheng Zhang; Emily Dinan; Jack Urbanek; Arthur Szlam; Douwe Kiela; Jason Weston", "journal": "", "ref_id": "b13", "title": "Personalizing dialogue agents: I have a dog, do you have pets too?", "year": "2018" }, { "authors": "Tsung-Hsien Pawełbudzianowski; Bo-Hsiang Wen; Inigo Tseng; Stefan Casanueva; Ultes; Milica Osman Ramadan; Gasic", "journal": "", "ref_id": "b14", "title": "Multiwoz-a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling", "year": "2018" }, { "authors": "Mihail Eric; Lakshmi Krishnan; Francois Charette; Christopher D Manning", "journal": "", "ref_id": "b15", "title": "Key-value retrieval networks for task-oriented dialogue", "year": "2017" }, { "authors": "Jacob Eisenstein; Regina Barzilay", "journal": "", "ref_id": "b16", "title": "Bayesian unsupervised topic segmentation", "year": "2008" }, { "authors": "Lan Du; Wray Buntine; Mark Johnson", "journal": "", "ref_id": "b17", "title": "Topic segmentation with a structured topic model", "year": "2013" }, { "authors": "Pinkesh Badjatiya; J Litton; Manish Kurisinkel; Vasudeva Gupta; Varma", "journal": "", "ref_id": "b18", "title": "Attentionbased neural text segmentation", "year": "2018" }, { "authors": "Sebastian Arnold; Rudolf Schneider; Philippe Cudré-Mauroux; Felix A Gers; Alexander Loser", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b19", "title": "Sector: A neural model for coherent topic segmentation and classification", "year": "2019" }, { "authors": "Maarten Grootendorst", "journal": "", "ref_id": "b20", "title": "Keybert: Minimal keyword extraction with bert", "year": "2020" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "", "ref_id": "b21", "title": "mt5: A massively multilingual pre-trained text-to-text transformer", "year": "2021" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b22", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Michal Lukasik; Boris Dadachev; Kishore Papineni; Goncalo Simoes", "journal": "", "ref_id": "b23", "title": "Text segmentation by cross segment attention", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton ; Lee Kristina; Toutanova ", "journal": "", "ref_id": "b24", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" } ]
[ { "formula_coordinates": [ 5, 197.88, 150.56, 272.84, 11.12 ], "formula_id": "formula_0", "formula_text": "𝐷𝐷 =<\\𝑠𝑠 > 𝑑𝑑𝑢𝑢 1 <\\𝑠𝑠 >. . . <\\𝑠𝑠 > 𝑑𝑑𝑢𝑢 𝑛𝑛 <\\𝑠𝑠 >(1)" }, { "formula_coordinates": [ 5, 226.44, 228.32, 244.28, 11.48 ], "formula_id": "formula_1", "formula_text": "𝐻𝐻𝐻𝐻𝑑𝑑𝑑𝑑𝐻𝐻𝑛𝑛 𝐸𝐸 = 𝑇𝑇5 -Encoder(𝐷𝐷) (2)" }, { "formula_coordinates": [ 5, 249.83, 315.39, 220.88, 32.76 ], "formula_id": "formula_2", "formula_text": "𝑉𝑉 𝑠𝑠 𝑖𝑖 = Con(𝐻𝐻 𝑖𝑖 ,H 𝑖𝑖+1 ) (3) 𝑉𝑉 𝑠𝑠 = [𝑉𝑉 𝑠𝑠 1 ,...,V 𝑠𝑠 𝑛𝑛 ](4)" }, { "formula_coordinates": [ 5, 248.51, 357.08, 222.21, 11.12 ], "formula_id": "formula_3", "formula_text": "𝑃𝑃 = 𝐶𝐶𝐶𝐶𝐶𝐶𝑠𝑠𝑠𝑠𝐻𝐻𝑓𝑓𝐻𝐻𝐻𝐻𝐶𝐶(𝑉𝑉 𝑠𝑠 )(5)" }, { "formula_coordinates": [ 7, 240.96, 658.88, 229.76, 11.48 ], "formula_id": "formula_5", "formula_text": "𝐻𝐻 𝑙𝑙 = 𝑇𝑇5 -Encoder(𝑇𝑇 𝑙𝑙 )(7)" }, { "formula_coordinates": [ 8, 206.4, 259.15, 264.31, 11.12 ], "formula_id": "formula_6", "formula_text": "𝐿𝐿𝑇𝑇𝑠𝑠𝑠𝑠 = 𝐿𝐿 𝐶𝐶𝑙𝑙𝐶𝐶𝑠𝑠𝑠𝑠 + 𝐿𝐿 𝐿𝐿𝐶𝐶𝐿𝐿𝐿𝐿𝑙𝑙 + 𝐿𝐿 𝑇𝑇𝑇𝑇𝑇𝑇𝑖𝑖𝑇𝑇 + 𝐿𝐿 𝑇𝑇𝑇𝑇𝑇𝑇𝑛𝑛(8)" } ]
10.18653/v1/2021.emnlp-main.468
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b38", "b40", "b37", "b33" ], "table_ref": [], "text": "Multi-task learning (MTL) is a machine learning paradigm where multiple learning tasks are optimized simultaneously, exploiting commonalities and differences across them (Caruana, 1997). MTL is expected to outperform single-task learning (STL) as it utilizes more training data and enables inter-task knowledge sharing (Ruder, 2017). However, MTL may also bring about multi-task conflict and negative transfer. Empirically, in many MTL systems, only a small portion of tasks benefit from MT joint training while others suffer from negative transfer (Stickland and Murray, 2019;Raffel et al., 2020;Peng et al., 2020). Therefore, it is still an open question when MTL will work.\n1 https://github.com/EdisonNi-hku/MTL4Finance." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b12", "b27", "b46", "b42", "b34", "b20", "b28", "b3", "b27", "b12", "b29", "b38", "b40", "b1" ], "table_ref": [], "text": "TSA↓ SC↑ GPT-3 Zero-Shot 0.3700 77.69% GPT-3 Few-Shot 0.3128 80.37% FinBERT Fine-Tune 0.2054 86.61%\nTable 1: Performance comparison of our method (Fin-BERT Fine-Tune) and GPT-3 (text-davinci-003) baselines. We report the rooted mean square error (↓) on the task of target-based sentiment analysis (TSA) (Cortis et al., 2017) and accuracy (↑) on sentiment classification (SC) (Malo et al., 2013). See GPT-3 prompts and settings in Appendix B.\nMTL systems have two components: MTL algorithms and the tasks included for aggregation. Recent progress in MTL has shown that appropriate MTL algorithms (e.g., architecture and optimization) can mitigate negative transfers (Yu et al., 2020;Wang et al., 2021;Pfeiffer et al., 2021;Karimi Mahabadi et al., 2021;Mao et al., 2022;Ponti et al., 2023, inter alia). However, it is still unclear when MTL works from the perspective of the relations between tasks and skills to be aggregated for better performance in a practical setting.\nTo understand this, we conduct a practical case study on Financial NLP. We choose Financial NLP mainly because (1) Financial NLP tasks are hard: GPT-3 (Brown et al., 2020) does not perform well on financial tasks (see Table 1), though it is a good zero/few-shot learner in general domains; and (2) Financial NLP datasets typically address different skills (e.g., quantitative reasoning, and sentiment analysis), and have a limited data size (Malo et al., 2013;Cortis et al., 2017;Lamm et al., 2018a;Mariko et al., 2020;Chen et al., 2019aChen et al., , 2020, inter alia), inter alia). Therefore, it is promising to aggregate Financial NLP tasks using MTL, which not only compiles and augments the small datasets, but also benefits the difficult tasks through relevant information transfer and comprehensive reasoning. However, no previous work explores the benefits of arXiv:2305.14007v1 [cs.CL] 23 May 2023 aggregating Financial NLP resources using MTL. Particularly, we explore the following hypotheses about when MTL works: H1. When various skills are included: Intuitively, positive transfers are likely to happen among tasks regarding the same skill. However, diversified skills might benefit the MTL system through implicit data augmentation, attention focusing, and feature eavesdropping (Ruder, 2017). Our empirical results also show that skill diversity benefits MTL.\nH2. 
When the aggregated tasks are well related:\nWe find that the close relation (measured qualitatively and quantitatively) among Financial NLP tasks explains why diversified skills help each other, and contributes to the success of MTL.\nH3. When the aggregation size matches shared capacity: Too many objectives may exhaust the MTL shared capacity and cause interference among tasks (Stickland and Murray, 2019). We find that excessive aggregation size in a limited capacity model restricts the performance of some tasks. Thus aggregation size should be appropriate for the shared capacity.\nTo facilitate exploration of H1 and H2, we survey existing Financial NLP resources and propose Fin-DATA (Financial Data And Tasks Aggregation), a collection of Financial NLP tasks covering various financial text understanding skills. To check H3, we propose SPAL-FinBERT (Shared Parallel Attention Layer with FinBERT), an MTL architecture based on pre-trained FinBERT (Araci, 2019), but is highly parameter-efficient -with 99.8% fewer trainable parameters but outperforming the vanilla FinBERT MTL on several tasks. Our contributions include 1. We conduct a case study on Financial NLP to explore what properties of task aggregation lead to the success of MTL.\n2. We survey and aggregate several existing Financial NLP tasks and datasets, illustrating that MTL can be a cheap and efficient improvement for Financial NLP performance.\n3. We propose SPAL-FinBERT, a parameterefficient MTL architecture with good performance. This model may also have broader use cases in other settings." }, { "figure_ref": [], "heading": "Background & Related Work", "publication_ref": [ "b23", "b0", "b2", "b11", "b43", "b39", "b30", "b11", "b0", "b32" ], "table_ref": [], "text": "Previous work mainly focuses on two categories of MTL practice: MTL as pre-training and MTL as auxiliary training.\nMTL as pre-training: Besides unsupervised pretraining, supervised data can also be utilized for pre-training in an MTL manner (i.e., an intermediate training stage) to improve the model's multiaspect intelligence and generalizability to unseen tasks. Such an approach has been shown beneficial for various pre-trained models, including encoderonly models (Liu et al., 2019;Aghajanyan et al., 2021), encoder-decoder models (Aribandi et al., 2022;Chung et al., 2022), and large language models (Wei et al., 2022;Sanh et al., 2021;Min et al., 2022;Chung et al., 2022). Aghajanyan et al. (2021) show that MTL pre-training does not work with small-scale task aggregation. More recent analysis shows that aggregating related tasks transfers better to a known target task (Padmakumar et al., 2022)." }, { "figure_ref": [], "heading": "MTL as auxiliary training:", "publication_ref": [ "b40", "b33", "b37", "b31" ], "table_ref": [], "text": "Instead of training a target task alone, we can jointly train it with other auxiliary tasks to improve its performance in an MTL manner (i.e., the final training stage). However, this approach does not work in most cases, especially when multiple skills are aggregated (e.g., GLUE) (Stickland and Murray, 2019;Peng et al., 2020;Raffel et al., 2020;Mueller et al., 2022 " }, { "figure_ref": [], "heading": "FinDATA Compilation", "publication_ref": [], "table_ref": [], "text": "We compile FinDATA, a task aggregation on Financial NLP, to facilitate the case study. We first set the desiderata, and then survey existing Financial NLP tasks to select those that meet these criteria." 
}, { "figure_ref": [], "heading": "Desiderata", "publication_ref": [], "table_ref": [], "text": "Diversified skills: We are interested in the importance of skill diversity and task-relatedness in MTL. Therefore, included tasks should cover as many Financial NLP skills as possible. If multiple tasks correspond to the same skill (e.g., sentiment analysis), we prefer smaller ones that are more worth aggregating and less likely to dominate. Some tasks can have closer relation than others (e.g., corresponding to similar skills)." }, { "figure_ref": [], "heading": "Aligned form of input:", "publication_ref": [], "table_ref": [], "text": "To enable joint training, we prefer tasks with sentences or paragraphs as inputs, instead of phrases, tables, or full reports." }, { "figure_ref": [], "heading": "Financial NLP", "publication_ref": [ "b27", "b44", "b12", "b8", "b7", "b48", "b10", "b29", "b14", "b25", "b26", "b19", "b35" ], "table_ref": [], "text": "The most prevalent Financial NLP task is sentiment analysis on financial tweets or news, as it directly contributes to automatic decision-making tools in the financial market. There are two types of financial sentiment analysis, the first of which defines sentiment analysis as a coarse-grained classification problem. Given a piece of financial news, the system only needs to classify its sentiment into positive, negative, or neutral. Most of the financial sentiment analysis are in this form, for example, Financial Phrase Bank (Malo et al., 2013), and Stock-Sen (Xing et al., 2020). The other instantiation of financial sentiment analyses has more fine-grained labels: Cortis et al. (2017) assigns different sentiment scores from -1 to 1 to different targets in financial news. Numbers are ubiquitous in all forms of financial text (e.g. news, tweets, and reports). Hence, many tasks and datasets are proposed for number semantics and numeracy. For example, FinNum shared task of recent years proposed several datasets focusing on financial number type understanding and number attachment (Chen et al., 2018(Chen et al., , 2019a(Chen et al., , 2020)). Chen et al. (2019b) further proposed Numeracy-600K for number magnitude understanding. Zhu et al. (2021) proposed TAT-QA, a Question Answering(QA) benchmark financial hybrid (tabular and text) data. Similarly, Chen et al. (2021) proposed FinQA, another QA benchmark on financial hybrid data emphasizing numeracy skills. Some datasets provide financial natural language understanding (NLU) skills other than sentiment and numbers. For instance, Lamm et al. (2018a) proposed a dataset for analogy parsing originally, which contains financial semantic role annotations and thus can be used for semantic role labeling (SRL). (Mariko et al., 2020) detects causal effect in financial news.\nNot all financial NLP tasks are sentence-level. Many tasks take entire documents as inputs, for example, narrative summarization (El-Haj et al., 2020) and table of content prediction (Maarouf et al., 2021) on financial reports. Some other tasks focus on financial concepts (phrases) (Maarouf et al., 2020;Kang et al., 2021;Pontes et al., 2022) instead of complete sentences. Appendix D covers more details regarding mentioned datasets." 
}, { "figure_ref": [], "heading": "FinDATA", "publication_ref": [ "b27", "b12", "b7", "b22", "b29" ], "table_ref": [ "tab_1" ], "text": "Based on our survey and desiderata, the following 4 Financial NLP skills are selected:\nFinancial sentiment analysis is a prevalent skill in the Financial NLP domain, analyzing financial news' and investors' sentiment toward particular financial objects. We select two tasks for this skill:\n(1) Financial Phrasebank sentiment classification (SC, Malo et al., 2013): given a financial news headline, classifying it into positive, negative, or neutral; and (2) SemEval-2017 target-based sentiment analysis (TSA, Cortis et al., 2017): predicting a sentiment score between -1 and 1 w.r.t. a financial news headline and a target company.\nFinancial number understanding is another important Financial NLP skill, as numbers are ubiquitous in all forms of financial text (e.g., news, tweets, and reports). We select two tasks for this skill: (1) FinNum-3 number classification (NC) (Chen et al., 2020): given a report paragraph and a target number, classifying it into monetary, percentage, temporal, and so on; and (2) FinNum-2 number attachment detection (NAD) (Chen et al., 2019a): given a financial tweet, a target number, and a cash tag, predicting whether the number is attached (i.e., related) to the cash tag.\nFinancial semantic role labeling (FSRL) is a skill aiming at understanding the quantitative semantic roles (Lamm et al., 2018b) Detection (CD, Mariko et al., 2020).\nAll our datasets are in English. Other details of included tasks can be found in Table 2. We present several examples for each FinDATA dataset in Appendix J." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Multi-Task Learning Systems", "publication_ref": [], "table_ref": [], "text": "We consider various MTL systems in the form of shared encoder + one-layer task-specific prediction headers. The MTL problem is formulated as follows:\nWe are given a joint dataset of multiple tasks D = {(X t , Y t )} t∈T where X t , Y t denotes the training corpus and labels of task t; and T denotes the task collection. We are also given a pre-trained encoder (e.g., FinBERT) f θ E (•) and task-specific prediction headers h θt (•), which are parameterized by θ = (θ E , {θ t } t∈T ). The training loss for multitask fine-tuning:\nL(θ, D) = t∈T w t • l t (h θt (f θ E (X t )), Y t ) (1)\nWhere l t denotes the loss function for task t, and w t denotes the sampling weight of task t. The generic architecture is illustrated in Figure 1. During training, a task is sampled for each training step, and the corresponding prediction header and the shared encoder are updated (e.g., the TSA example in Figure 1)." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b37" ], "table_ref": [ "tab_1" ], "text": "We fine-tune all MTL and STL models on corresponding data for 40 epochs. (Our MTL batching scheme is described in Appendix C.) For STL, we evaluate the model every 50 steps and save the checkpoint with the best validation score. For MTL, we evaluate every 200 steps, saving and reporting the best checkpoint for each task independently following the setting of Raffel et al. (2020) (i.e., each task can be viewed as the target task with others being auxiliary tasks). We follow the evaluation metrics in Table 2 to select the best checkpoints and report the test performance. All MTL and STL results are averaged over random seeds from 1 to 5 with standard deviations attached. 
Appendix C contains more details about data preprocessing, hyperparameters, and GPU usage." }, { "figure_ref": [], "heading": "Pre-trained Model Selection & STL Baselines:", "publication_ref": [ "b1", "b45", "b24", "b17", "b27", "b1", "b45", "b17", "b13", "b1" ], "table_ref": [ "tab_2" ], "text": "Existing financial pre-trained models (Araci, 2019;Yang et al., 2020;Liu et al., 2020;Hazourli, 2022) are usually compared on the Financial PhraseBank dataset (Malo et al., 2013). Such comparison is suboptimal because (1) Financial PhraseBank sentiment analysis has no official test set. Existing work separates test sets on their own, making the scores less comparable across different work; and\n(2) the models are not compared on benchmarks other than sentiment analysis. Therefore, we compare financial pre-trained models on all FinDATA tasks to select the best one.\nSTL results on all publicly available financial pre-trained models (P-FinBERT (Araci, 2019), Y-FinBERT (Yang et al., 2020), and FinancialBERT (Hazourli, 2022)) and BERT (Devlin et al., 2019) are presented in the first half of Table 3. P-FinBERT (Araci, 2019) outperforms other pre-trained models. Therefore, we use P-FinBERT in all subsequent experiments." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "In this section, we analyze the hypotheses about when aggregating multiple skills with MTL works." }, { "figure_ref": [], "heading": "H1: Skill Diversity", "publication_ref": [ "b1" ], "table_ref": [ "tab_2" ], "text": "To verify the hypothesis that skill diversity benefits MTL, we compare the MTL results on full FinDATA and its subsets that ablate one skill or focus on one skill. Specifically, ablating a skill results in four subsets: w/o financial semantic role labeling, w/o causality detection, w/o sentiment analysis, and w/o number understanding. Focusing on a single skill results in two subsets: only sentiment analysis and only number understanding. We use FinBERT (Araci, 2019) as the shared encoder. The results are shown in Table 3. It can be observed that (1) skill diversity benefits MTL: the best MTL scores of all tasks are obtained by mixing several different skills while concentrating on sentiment/number understanding skills (w/o Sentiment and w/o Number) leads to a performance drop on corresponding tasks; and (2) ablating FSRL decreases the performance of all other tasks, illustrating that FSRL positively transfers to all other skills. Therefore, positive transfers can happen between different skills. Including skills other than the target skill in MTL is a potential way to benefit target performance." }, { "figure_ref": [], "heading": "H2: Task Relatedness", "publication_ref": [ "b31", "b16", "b41", "b16", "b38", "b46", "b42", "b31" ], "table_ref": [ "tab_2" ], "text": "Similar to FinDATA, GLUE also aggregates multiple NLU skills. However, GLUE MTL usually leads to a performance drop on most tasks (according to Stickland and Murray (2019) only RTE is improved; and according to Mueller et al. (2022), 3 out of 9 tasks are improved) while FinDATA MTL increases the scores of 4 out of 6 included tasks. Therefore, we hypothesize that FinDATA tasks are more closely related than GLUE tasks though they all cover diverse skills. We measure the relatedness among FinDATA tasks qualitatively and quantitatively:\nQualitative Analysis: Many tasks relate to each other explicitly: (1) SC and TSA: though they have different annotations, both of them predict financial sentiment. 
(2) FSRL and NC: \"date\" is one of the classes in NC, while FSRL helps to understand the semantic role of time numbers. These explicit transfers can be probed by different output headers of an MTL system: for an input sentence, the MTL system outputs predictions corresponding to different tasks, where the non-target headers' predictions may interpret the target prediction (Geva et al., 2021). In Appendix I, we illustrate these explicit transfers by listing examples of the prediction header's outputs.\nQuantitative Analysis: Vu et al. (2020) propose task and text embedding to measure the similarity between task objectives and texts. This embedding algorithm facilitates high-level knowledge sharing in the MTL architecture proposed by Karimi Mahabadi et al. ( 2021), which achieves superior performance. Therefore, we use these metrics to quantify the relatedness among the tasks aggregated in our MTL systems. We follow Vu et al.'s (2020) calculation setting, except that we use FinBERT instead of BERT: we first calculate task and text embeddings of FinDATA and GLUE tasks. Then we compute the cosine similarity scores among embeddings.\nFigure 2a shows the heatmap of task embedding similarity scores, indicating that FinDATA tasks are more closely clustered than GLUE tasks, illustrating why FinDATA MTL leads to more improvements than GLUE MTL. Another observation is that TSA has the lowest similarity scores with other FinDATA tasks, which possibly explains why it is not improved by MTL in Table 3. Figure 2b presents the heatmap of text embedding similarity, where financial and general data are well separated with high in-domain similarity.\nHowever, the similarity scores are symmetric metrics and thus fail to explain some asymmetric transfers (which is also observed in previous work (Geva et al., 2021)). For example, FSRL has a moderate level of text and task similarity to other tasks, but its performance is not enhanced by MTL while it boosts the performance of others. A possible explanation is that financial semantic understanding skill (provided by FSRL) is a necessary ability for other FinDATA tasks, but the skills covered by other tasks are not necessary for FSRL. Therefore, the joint training does not benefit FSRL.\nWe further analyze whether gradient similarities interpret task-relatedness and MTL transferability since many previous works attribute the negative transfer among aggregated tasks to gradient conflicts (Ruder, 2017;Yu et al., 2020;Wang et al., 2021;Mueller et al., 2022). However, our findings in Appendix F show that gradient con- flicts/similarities are not good measurements.\nIn conclusion, the degree of task-relatedness serves as a significant predictor of the MTL outcome, and can be roughly measured through quantitative and qualitative means. To better explain asymmetric transfer and analyze the inter-task relations in a finer grain, it is essential to develop asymmetric measurements. We reserve that exploration for future work." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "H3: Matched Aggregation Size with Shared Capacity", "publication_ref": [ "b18" ], "table_ref": [ "tab_3" ], "text": "We hypothesize that having too many tasks sharing limited model capacity might cause interference among tasks and result in poor MTL performance. Therefore, given a fixed pre-trained model, the task aggregation size should be appropriate for the shared capacity to achieve the best MTL practice.\nSection 6.1 shows that the task combination significantly influences the MTL performance. 
Altering the task aggregation may introduce unwanted positive or negative transfers. Therefore, to verify this hypothesis, we control the task aggregation (stick to FinDATA instead of adding other tasks) and reduce the shared capacity to simulate the scenario where task aggregation may exhaust the shared capacity.\nTo enable altering shared capacity, we propose SPAL-FinBERT, an architecture that both leverages a pre-trained model and has tunable shared capacity. Figure 3 illustrates the architecture. The FinBERT layers are frozen while the parallel attention layers (PALs, Stickland and Murray, 2019) are trainable. Different from original task-specific PALs, ours are shared across different tasks. Thus, we call them shared PALs (SPALs). The design is similar to Adapters (Houlsby et al., 2019): both consists of light-weighted trainable structures and a frozen pre-trained model. We choose PAL as the shared trainable structure because it has a more complicated structure than an adapter which might benefit multi-task knowledge sharing (Adapters are usually for STL). We can easily change the shared capacity by setting the SPAL hidden size to any multiple of 12 (the number of self-attention heads).\nWe run FinDATA MTL with SPAL hidden size from 12 to 816. The smallest and the largest trainable shared capacity are roughly 228K3 (0.2% of FinBERT parameters) and 47M (42.7% of Fin-BERT parameters). The results are shown in Fig- ure 4. We surprisingly find that the aggregated tasks are not equally sensitive to the change of shared capacity: negative transfer towards CD grows while the shared capacity becomes limited. However, Some tasks are not significantly restricted by the limited shared capacity: SC and NC even achieve the best scores with relatively small shared capacity.\nTo verify that aggregating too many tasks in limited capacity overwhelms CD, we gradually ablating tasks from the MTL system with minimal shared capacity. Table 4 presents the results. The CD performance gradually improves when we decrease the aggregation size (although the task combination can be a confounder for the CD performance). The highest score is achieved when only aggregating two tasks.\nTherefore, to achieve better MTL practice, the aggregation size should be appropriate for the shared capacity to avoid overwhelming tasks like CD. These tasks are sensitive to capacity sharing. Including too many auxiliary objectives might exhaust the shared capacity, distracting the MTL system from these tasks. Other tasks (e.g., SC and NC) might be more tolerant for capacity sharing, thus allowing larger-scale MTL auxiliary training." }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "Efficiency of SPAL-FinBERT", "publication_ref": [ "b38", "b47" ], "table_ref": [], "text": "Another observation of Figure 4 is that SPAL-FinBERT outperforms vanilla FinBERT with much fewer trainable parameters. In the most impressive case, SPAL-FinBERT outperforms vanilla Fin-BERT on four tasks with 99.8% fewer trainable parameters (when the SPAL hidden size is 12). One possible reason behind our model's performance is that the frozen FinBERT provides a strong regularization and thus reduces the representation variance of each layer. Such generalized representations are more likely to be favored by different tasks and thus benefit MTL (Ruder, 2017). 
To verify this explanation, we compare the representation generalizations of SPAL-FinBERT and FinBERT MTL systems.\nRepresentation generalization: Intuitively, representation generalization measures how similar an MTL system represents data of different tasks. We first compute the representations for all tasks, models, and layers, following the formula:\nR t l,M = 1 |D t | (xt ,yt )∈Dt M l (x t )(2)\nwhere R t l,M denotes task t's representation generated by layer l of MTL model M; D t denotes the dataset of task t; and (x t , y t ) denotes the data points. Then, we compute the cosine similarity score between all task representation pairs (R t 1 l,M , R t 2 l,M ), averaging the similarity scores to measure the representation generalization of model M layer l : where C denotes combination, T denotes the task collection, and cossim denotes cosine similarity.\nG l,M = 1 C 2 |T| t 1 ,t 2 ∈T cossim(R t 1 l,M , R t 2 l,M )(3)\nFigure 5 shows the representation generalization for two MTL systems at different training steps.\nFor simplicity, only higher layers' results (layer 7 to 12) are presented as they are modified more by fine-tuning (Zhou and Srikumar, 2022) and related more to the output. It can be observed that SPAL-FinBERT generates more generalized representations than FinBERT in all shown layers (especially for the highest ones).\nAnother observation is that representation generalization decreases when the training step increases. One possible explanation for this downward trend is that the MTL system is trying to learn taskspecific knowledge (especially in higher layers) as multi-task fine-tuning continues.\nIn Appendix H, we further use an ablation experiment and a probing experiment to show the contribution of the frozen FinBERT and the neces-sity of freezing." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "Suggestions for MTL practice: Based on the results of our case study, we recommend the following practices for future MTL: (1) aggregate not only the target skill but also other related skills;\n(2) select tasks for aggregation based on both their qualitative and quantitative relatedness to the target task; and (3) check if the target task is sensitive to capacity sharing, excluding redundant (e.g., distantly related) tasks to avoid distracting the target task.\nAggregating multiple skills with MTL is a potential way for better Financial NLP practice: Financial NLP tasks are more complicated than those in the general domain, and many of them suffer from a lack of data. Obtaining new Financial NLP data is expensive since such annotation usually requires domain expertise. Our results show that aggregating Financial NLP tasks using MTL can be a practical and relatively cheap way to improve their performance: SC, NC, NAD, and CD are improved by up to 1.45, 0.64, 1.09, and 0.68 percentage points accordingly through MTL auxiliary training (contributed by different MTL systems). In Appendix G, we also show that MTL pre-training with Financial NLP tasks can improve the model's generalizability to unseen tasks. Therefore, future research and practice in Financial NLP may consider MTL as a potential way to achieve better performance.\nOther possible questions: We address some other possible questions that might be of interest to our readers in Appendix A." 
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we conduct a case study on Financial NLP to analyze when aggregating multiple skills with MTL works from a perspective of task relations and skills to be included. We propose a parameter-efficient MTL architecture SPAL-FinBERT. Our empirical analyses point out potential directions to improve task aggregation for future MTL practice: (1) considering diversified nontarget skills that might be supportive; (2) filtering tasks with their relatedness; and (3) caring whether capacity sharing overwhelms the target task. We also show that aggregating resources through MTL can be a cheap and efficient way to improve Financial NLP performance." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b15" ], "table_ref": [ "tab_2" ], "text": "Firstly, the transferability between different tasks within an MTL system is not well measured in current work. We also find such transferability is asymmetric and thus hard to quantify using symmetric measurements such as cosine similarity between task embeddings or gradients: for example, TSA positively transfers to SC, but SC negatively transfers to TSA (see the \"Only Sentiment\" row in Table 3); FSRL positively transfer to all other tasks, but other tasks negatively affect FSRL. Future work may consider exploring better indicators that address the asymmetry of task transferability (e.g., similar to inter-task affinity scores (Fifty et al., 2021) in the CV domain).\nSecondly, some of the conclusions drawn from our case study only point in a vague direction for future MTL practice. For example, we find that some tasks are more sensitive to capacity sharing in Section 6.3. Therefore, aggregating an excessive number of tasks with those tasks might overwhelm them. However, it is hard to determine exactly each task's sensitivity to capacity sharing and the optimal number of aggregated tasks without some trials on different task combinations. Future work may explore why some tasks are easily overwhelmed by capacity sharing and propose methods to identify them.\nThirdly, in this work, we analyze the influence of multiple factors on MTL performance. However, the factors are usually entangled and confound each other. For example, we decrease the number of tasks aggregated with CD to show that too large aggregation overwhelms CD in a limited shared capacity. But the tasks included (a confounder for MTL performance) are also changed. Future work may conduct rigorous causal analyses, exploring how much each factor affects MTL performance." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "Data Privacy and Bias: All datasets used in this research are published in previous studies and publicly available: datasets for TSA, SC, FSRL, CD, and Numeracy-600K can be downloaded from the internet, while datasets for NC, NAD, and Stock-Sen require signing corresponding agreements and requesting from the authors.\nLicenses: TSA is under Apache License 2.0; SC is under CC BY-NC-SA 3.0; CD and StockSen are under CC BY 4.0; and Numercay-600K, NC, and NAD are under CC BY-NC-SA 4.0. The license of FSRL data is not explicitly specified, but the author allows data usage with a proper citation in their GitHub repository.\nMost of the datasets are widely used in the Financial NLP domain (e.g., shared tasks). We also manually checked for offensive content in the data. 
There is no data bias against certain demographics with respect to these datasets." }, { "figure_ref": [], "heading": "Reproducibility:", "publication_ref": [], "table_ref": [], "text": "We make all of our code public on GitHub. For data, we include links to request NC, NAD, and StockSen, and provide data splits for TSA, SC, FSRL, CD, and Numeracy-600K. We also provide detailed instructions to reproduce all the experiment results on GitHub.\nPotential Use: The potential use of this study is to improve future practice in MTL and the Financial NLP domain." }, { "figure_ref": [], "heading": "A Possible Questions and Answers", "publication_ref": [], "table_ref": [], "text": "A.1 Are H1 and H2 proposed in our work conflict goals?\nH1 encourages aggregating diverse skills for better MTL practice, while H2 suggests that the relatedness among tasks is also important, which seems contrary to H1. However, these goals do not conflict with each other because: (1) skill diversity does not imply distant inter-task relationships and vice versa (e.g., NC is well related to SC and CD in Figure 2a though they correspond to different skills). ( 2) It is possible to achieve skill diversity and good task-relatedness simultaneously: realworld MTL practice can first consider the target skill and other skills that might be supportive or in the same domain. Then select the tasks that are (qualitatively or quantitatively) closely related to the target task to achieve better MTL performance.\nA.2 Our work mainly addresses Financial NLP. Are the conclusions generalizable to other domains?\nAlthough we only provide analyses in Financial NLP, the heuristics for MTL practice are generic for other domains. For H1, non-target skills in the same domain are potentially helpful as various skills are based on similar data. For H2, the qualitative and quantitative analyses for task-relatedness are domain agnostic, meaning that we can select the most related tasks from those with diversified skills. For H3, continuously increasing the aggregation size will finally reach a threshold that overwhelms some tasks if the shared capacity is fixed." }, { "figure_ref": [], "heading": "B GPT-3 Prompts", "publication_ref": [], "table_ref": [], "text": "In Table 1 we present the GPT-3 zero-shot and fewshot performance on two Financial NLP tasks. We use the official API provided by OpenAI4 to access GPT-3. We choose the GPT-3 checkpoint Davinci-003 to conduct the experiments (completion mode, max token 5, temperature 0). The example prompts we use for TSA and SC are illustrated in Table 5." }, { "figure_ref": [], "heading": "C Experimental Details", "publication_ref": [], "table_ref": [], "text": "MTL Batching Scheme: During MTL, we first randomly batchify training data of all tasks. Then, we randomly mix the mini-batches and pass them to the MTL data loader. This method is equivalent to the temperature-based batch sampling scheme of Karimi Mahabadi et al. ( 2021) where temperature T = 1 (i.e., each task is sampled proportional to its data size). We choose T = 1 as FinDATA tasks are not highly unbalanced in data size.\nData Preprocessing: SC and FSRL are in nature text classification and token classification tasks. Thus we use the raw texts from their datasets as inputs. NC, NAD, and TSA are text classification tasks, but they also require target companies or target numbers as inputs. Therefore, we use \"|COM-PANY|\" to denote target companies and \"<NUM-BER>\" to denote target numbers in input texts. 
CD is originally a span prediction task. For simplicity, we model it as a token classification task by converting the span labels to BIO tags (i.e., beginning and ending cause/effect spans to \"B-CAUSE I-CAUSE...\" and \"B-EFFECT I-EFFECT...\").\nHyperparameters: All models are fine-tuned with a initial learning rate of 0.00005, warm up steps of 500, and weight decay of 0.01. Batches sizes we used for TSA, SC, NC, NAD,FSRL,and CD are 16,16,24,32,16, and 16 correspondingly. For the prediction header, we use a single feed-forward layer followed by Softmax." }, { "figure_ref": [], "heading": "Evaluation Metrics Selection and Reporting:", "publication_ref": [ "b12" ], "table_ref": [], "text": "The evaluation metrics are used not only for test-ing but also for best checkpoint selection during validation. Therefore, we report single metrics for all results to reflect MTL's effect on each task. We choose Accuracy for SC, NC, and NAD since these datasets have no severe label imbalance. For simplicity, we equivalently model CD, which is originally a span prediction task, as a token classification task, and use Accuracy as the metric. TSA is officially measured with cosine similarity (Cortis et al., 2017). We find RMSE, as a regular metric for regression tasks, has a high correlation with cosine similarity (see Table 6). Therefore, RMSE is suitable for TSA measurement. Besides, we avoid reporting average scores across tasks like related work because it makes no sense to average RMSE with Accuracy and F1 scores.\nEvaluation Tools: We use sklearn 1.0.2 for sequence classification evaluation, and seqeval 1.2.2 for token classification evaluation.\nGPU Usage: Experiments are trained on NVIDIA RTX2080 GPUs. A single run of STL experiments takes 4 to 16 GPU hours (4 GPU hours for the small datasets; 16 GPU hours for the large ones).\nA single run of MTL experiments takes 16 to 96 GPU hours (16 GPU hours for the smallest subsets of FinDATA, e.g., SC and TSA; 96 GPU hours for full FinDATA)." }, { "figure_ref": [], "heading": "D Financial NLP Datasets", "publication_ref": [], "table_ref": [], "text": "The detailed information of Financial NLP datasets discussed in Section 3.2 is shown in Table 7. We only cover English datasets, and include the English subset for those multilingual datasets (e.g., FNS and FinTOC). Most of the datasets have less than 10K data points in total, with fewer samples for training. Some data sizes are even fewer than 2K." }, { "figure_ref": [], "heading": "E Dataset Splits", "publication_ref": [], "table_ref": [], "text": "For " }, { "figure_ref": [ "fig_5", "fig_6" ], "heading": "F Gradient Analysis", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We are curious whether gradient similarities reflect task-relatedness. Furthermore, do gradient conflicts/similarities interpret why some task aggregation works better than others? During MTL, we record each task's gradient (averaged over the whole training set) every 2000 training steps. Then we calculate the pairwise cosine similarity between the gradients of all task pairs.\nGradient similarity fails to reflect taskrelatedness: Figure 6 shows the gradient similarity within sentiment and number tasks (intra-skill gradient similarity), and the average pairwise gradient similarity in-between the sentiment and number tasks (inter-skill gradient similarity). 
It can be observed that intra-skill gradients are not significantly more similar than inter-skill gradients, indicating that gradient similarity might not be a good measurement for task-relatedness.\nGradient similarity does not indicate transferability within task aggregation: Figure 7 shows the average pairwise gradient similarity of two MTL systems with different task aggregation: one is trained on full FinDATA, and the other ablates FSRL. Although ablating FSRL leads to worse scores on all tasks (see Table 3), the gradient conflict of \"w/o FSRL\" is not significantly higher than full FinDATA. Therefore, gradient conflicts/similarities are not a good indicator of task aggregation quality. " }, { "figure_ref": [], "heading": "G MTL Pre-training & Unseen task Generalization", "publication_ref": [ "b0", "b20", "b36" ], "table_ref": [], "text": "MTL pre-training may increase the model's generalizability to unseen tasks (Aghajanyan et al., 2021;Karimi Mahabadi et al., 2021;Ponti et al., 2023), which might be extremely helpful when there is a shortage in target training data (a few-shot setting). Therefore, we test the few-shot generalizability of our MTL systems on two unseen tasks: StockSen and Numeracy-600K. StockSen is a binary (positive or negative) sentiment classification dataset on financial tweets. Numeracy-600K classifies numbers into one of seven magnitudes. It has two subtasks on different domains (financial news and market comment). We first train the models on FinDATA for 2000 steps. Then we resume the shared encoder and fine-tune it on the target unseen task for 10 epochs, reporting the best checkpoint's score. We use a few-shot setting (randomly sample 400 training and 400 validation data points) for unseen tasks to stimulate the lack of training data in the target task. For test sets, we split (with a random seed of 42) 60K samples (10% of data) for Numeracy-600K and 6.2K samples (official development set) for StockSen. The results are shown in Table 8. In all tasks, the MTL-pre-trained system beats vanilla FinBERT when generalizing to unseen tasks. " }, { "figure_ref": [ "fig_7", "fig_8" ], "heading": "H Importance of Freezing Pretrained Model", "publication_ref": [], "table_ref": [], "text": "To illustrate the importance of freezing the pretrained model, we first compare SPAL-FinBERT (SPAL hidden size = 204) with an ablation setting where FinBERT is not frozen. The comparison is shown in Table 9, where unfreezing FinBERT compromises the MTL performance drastically on most tasks (CD prefers larger shared capacity and thus benefits from unfreezing).\nThen we add weighting parameters to probe the frozen FinBERT's contribution to the layer outputs. Figure 8 shows the probing architecture. We add probing parameters a and b, which weigh the frozen FinBERT output and the SPAL output. After MTL, the contribution of each structure can be measured by the final (softmaxed) weights. The results are shown in Figure 9. In all layers except the last layer, the frozen FinBERT layers contribute more to the output than PALs, illustrating the importance of the frozen part." }, { "figure_ref": [ "fig_10" ], "heading": "I Task-relatedness Examples", "publication_ref": [], "table_ref": [], "text": "Through MT fine-tuning, the shared encoder understands an input sentence from comprehensive aspects that positively transfer to each other. To probe the explicit transfer, we analyze the nontarget output headers' outputs to illustrate that the inputs are understood comprehensively. 
For example, Figure 10a shows that the FSRL header correctly identifies the semantic role of \"2018\" in an NC input. Such time awareness may benefit NC " }, { "figure_ref": [], "heading": "J FinDATA Examples", "publication_ref": [ "b12", "b29" ], "table_ref": [], "text": "In this section we provide 10 examples for each FinDATA task: TSA: (Cortis et al., 2017) • Between 50% and 75% of today's workers are covered by such plans, up from 5% five years ago.\n• Cary Computer, which currently employs 241 people, said it expexts a work force of 450 by the end of 1990.\n• Colgate-Palmolive advanced 1 5/8 to 63 after saying it was comfortable with analysts' projections that third-quarter net income from continuing operations would be between 95 cents and $1.05 a share, up from 69 cents a year ago.\n• In addition, CMS reported third-quarter net of $68.2 million, or 83 cents a share, up from $66.8 million, or 81 cents a share, a year ago.\n• Chateau Yquem, the leading Sauternes, now goes for well over $100 a bottle for a lighter vintage like 1984; the spectacularly rich 1983 runs $179.\n• For the nine months, Arco reported net income of $1.6 billion, or $8.87 a share, up 33% from $1.2 billion, or $6.56 a share a year earlier.\n• Citing its reduced ownership in the Lyondell Petrochemical Co., Atlantic Richfield reported that net income slid 3.1% in the third quarter to $379 million, or $2.19 a share, from $391 million, or $2.17 a share, for the comparable period last year.\n• Quarter revenue was $232.6 million, up 12% from $206 million last year.\n• Life insurers fared similarly, with Legal & General advancing 3 to 344, although Prudential fell 2 to 184 1/2.\nCD: (Mariko et al., 2020) we use blue to denote causes, and red to denote effects:\n• Florida is unique in that it also draws a large proportion of higher net-worth individualsmore than 85 percent of its net inflow of income came from people earning at least sixfigures.\n• CLICK HERE TO GET THE FOX BUSI-NESS APP Data from the U.S. Census Bureau showed that while Florida received more movers than any other state last year, New York's outflows to the Sunshine State were the highest -63,772 people.\n• New York had the third-largest outflows of any state, with 452,580 people moving out within the past year. Individuals earning $650,000 can save more than $69,700 in taxes per year by moving from New York to Florida.\n• The stock increased 1.02% or $0.23 during the last trading session, reaching $22.69. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [ "b7", "b44" ], "table_ref": [], "text": "We sincerely thank the authors of Chen et al. (2019a), Chen et al. (2020), andXing et al. (2020) for granting us access to their proposed datasets for research." 
}, { "figure_ref": [], "heading": "Author Contributions", "publication_ref": [ "b27", "b44", "b12", "b12", "b14", "b10", "b48", "b8", "b7", "b29", "b26", "b19", "b25" ], "table_ref": [], "text": "Jingwei Ni designed the project and the storyline, and conducted the MTL analyses and the survey in Financial NLP.\nZhijing Jin helped design the storyline and provided essential suggestions on what experiments and analyses are important.\nQian Wang contributed to the financial background of the storyline, collected the first version of FinDATA, and gave insights on what skills are important from a financial perspective.\nMrinmaya Sachan and Markus Leippold guided the project and substantially contributed to the storyline and experiment design.\nEveryone contributed to writing the paper. (Malo et al., 2013) sentiment classification 4,837 Financial news StockSen (Xing et al., 2020) sentiment classification 20,675 Financial tweets SemEval-2017 task-5-1 (Cortis et al., 2017) target-based sentiment analysis 2,510 Financial tweets SemEval-2017 task-5-2 (Cortis et al., 2017) target-based sentiment analysis 1,647 Financial news FNS (El-Haj et al., 2020) Summarization 12,796 UK annual report FinQA (Chen et al., 2021) Numeracy question answering 8,281 Earning reports TAT-QA (Zhu et al., 2021) Tabular question answering 16,552 Financial reports FinNum-1 (Chen et al., 2018) Number classification 8,868 Financial tweets FinNum-2 (Chen et al., 2019a) Number attachement 10,340 Financial tweets FinNum-3 (Chen et al., 2020) Number classification 9,528 Analyst reports Numeracy-600K subtask-1 (Chen et al., 2019b) Number magnitude prediction 600,000 Market comments Numeracy-600K subtask-2 (Chen et al., 2019b) Number magnitude prediction 600,000 Financial news TAP (Lamm et al., 2018a) Quantitative SRL 1,100 Financial news FinCausal (Mariko et al., 2020) Causal effect detection 1,126 Financial news FinSim-2 (Maarouf et al., 2020) Financial concept understanding 199 (concepts) -FinSim-3 (Kang et al., 2021) Financial concept understanding 1,394 (concepts) -FinTOC (Maarouf et al., 2021) TOC extraction 72 (documents) Financial prospectuses Table 9: comparison between SPAL-FinBERT (SPAL hidden size = 204) with frozen and unfrozen FinBERT. Metrics reported for FinDATA tasks are the same as Table 3." } ]
Multi-task learning (MTL) aims at achieving a better model by leveraging data and knowledge from multiple tasks. However, MTL does not always work -sometimes negative transfer occurs between tasks, especially when aggregating loosely related skills, leaving it an open question when MTL works. Previous studies show that MTL performance can be improved by algorithmic tricks. However, what tasks and skills should be included is less well explored. In this work, we conduct a case study in Financial NLP where multiple datasets exist for skills relevant to the domain, such as numeric reasoning and sentiment analysis. Due to the task difficulty and data scarcity in the Financial NLP domain, we explore when aggregating such diverse skills from multiple datasets with MTL can work. Our findings suggest that the key to MTL success lies in skill diversity, relatedness between tasks, and choice of aggregation size and shared capacity. Specifically, MTL works well when tasks are diverse but related, and when the size of the task aggregation and the shared capacity of the model are balanced to avoid overwhelming certain tasks. 1
When Does Aggregating Multiple Skills with Multi-Task Learning Work? A Case Study in Financial NLP
[ { "figure_caption": "Figure 1 :1Figure 1: An illustration of MTL system with shared encoder and task-specific prediction headers.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2: Heatmaps of cosine similarity between TaskEmbs and TextEmbs. FinDATA tasks are highlighted in red on both axes.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Visualization of a SPAL-FinBERT layer.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: FinDATA MTL results with different shared capacities. The solid blue lines \"-\" denote the average SPAL-FinBERT MTL results of 5 random seeds and their standard deviations. The dashed red lines \"--\" denote the STL results. The dashed green lines \"--\" denote the vanilla FinBERT MTL results.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Representation generalization of SPAL-FinBERT and vanilla FinBERT at different training steps, measured with a SPAL hidden size of 204 (PAL setting recommended by Stickland and Murray (2019)).", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure6: Intra-skill and inter-skill average gradient cosine similarity. All gradient similarities are measured on an MTL system including all FinDATA tasks (with a random seed of 1).", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Average gradient similarities of two MTL systems: full FinDATA and ablating FSRL. Both are trained for 40 epochs, recording gradients every 2000 steps.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: The contribution probing architecture where a and b denote the attention parameters; w and 1 -w denote the weights after softmax.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Contributions of the frozen part and the PAL to the layer output in each layer.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "(a) Input a sentence from NC test set, where the target number is 2018. 
FSRL (non-target) header's output shows that time awareness is injected.(b) Input a sentence from SC test set, TSA (non-target) header's output shows that sentiment analysis skill is enhanced by TSA.", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Some explicit positive transfer examples.", "figure_data": "", "figure_id": "fig_10", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "FSRL output: [('now', 'O'), ('before', 'O'), ('i', 'O'), ('turn', 'O'), ('it', 'O'), ('over', 'O'), ('to', 'O'), ('carroll', 'O'), ('just', 'O'), ('a', 'O'), ('few', 'O'), ('comments', 'O'), ('on', 'O'), ('our', 'I-QUANT'), ('improved', 'I-QUANT'), ('2018', 'I-QUANT'), ('outlook', 'I-QUANT'), ('and', 'O'), ('some', 'O'), ('early', 'O'), ('thoughts', 'I-QUANT'), ('on', 'I-QUANT'), ('<', 'I-TIME'), ('2019', 'I-TIME'), ('.', 'I-TIME'), ('>', 'I-TIME')...", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Statistics of all FinDATA tasks and datasets. We report the sizes of train, development, and test splits. If there is no official test or development set, we split the training set by ourselves (more details in Appendix E).", "figure_data": "such as quantity, value,location, date, theme, etc. We include Lamm et al.'s(2018a) dataset 2 for this skill.Causality understanding aims at understandingthe causal relationship between financial facts. Forthis skill, we include FinCausal 2020 Causality", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The first half (STL in the method column) shows the performance of various STL baselines. We use the best-performing one, P-FinBERT, as the backbone of our MTL model. The second half (MTL in the method column) shows the MTL results on FinDATA and its subsets. The bold numbers denote the best scores obtained in all settings. The dashed underline numbers denote the best STL baselines. The underlined numbers denote the best scores obtained with MTL.", "figure_data": "MethodSTL Model or MTL SubsetSentiment TSA↓SCNCNumber NAD-FSRL-CDBERT-cased0.2320±0.0082 86.57±0.886.19±1.1 85.43±0.871.30±3.576.73±1.0BERT-uncased0.2069±0.0027 86.08±0.687.09±0.6 85.69±0.370.89±1.176.70±0.7STLFinancialBERT0.2500±0.0062 84.96±0.683.53±0.9 85.90±0.367.52±1.675.59±1.1Y-FinBERT0.2275±0.0061 85.62±1.286.55±0.6 85.66±0.665.45±2.574.75±1.3P-FinBERT0.2054±0.0057 86.61±0.587.67±0.6 85.74±0.572.66±3.377.12±0.8Full FinDATA0.2151±0.0089 87.06±1.187.51±0.7 86.52±0.469.88±1.577.80±0.8w/o FSRL0.2156±0.0099 85.91±1.487.41±0.8 86.11±0.6-76.53±0.8w/o CD0.2077±0.0032 86.36±0.887.49±0.4 85.63±0.671.32±1.8-MTLw/o Sentiment--87.79±1.0 86.49±0.570.60±3.078.40±1.0w/o Number0.2083±0.0046 86.49±1.2--71.08±2.678.26±1.0Only Sentiment0.2159±0.0120 86.69±1.1----Only Number--87.25±1.0 85.70±0.5--", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "MTL results on SPAL-FinBERT with minimal shared capacity (SPAL hidden size = 12). Gradually decreasing the number of aggregated tasks improves CD performance in general.", "figure_data": "", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "No dead time 4 Thanksgiving $BTC.X $LTC.X $|ETH.X| $DASH.X $XRP.X $BCH.X $TSLA $MNKD Label: unattached • 2nd TP for $|JDST| is 94.05 according to my algo. Take it to the bank. Gold headed <4> major intermediate bottom b4 spike in Jan 2018. 
$JNUG Label: unattached • 2nd TP for $|JDST| is 94.05 according to my algo. Take it to the bank. Gold headed 4 major intermediate bottom b4 spike in Jan |2018|. $JNUG Label: unattached • $|BABA| hit +$<3>pre-market -Futures up 100 -$BSTI Big Buying @ close after Rebound Holds 2nd day Heading Back to $13 Label: attached • $|BABA| hit +$3 pre-market -Futures up <100>-$BSTI Big Buying @ close after Rebound Holds 2nd day Heading Back to $13 Label: attached • $|BABA| hit +$3 pre-market -Futures up 100 -$BSTI Big Buying @ close after Rebound Holds <2>nd day Heading Back to $13 Label: attached • $|BABA| hit +$3 pre-market -Futures up 100 -$BSTI Big Buying @ close after Rebound Holds 2nd day Heading Back to $<13> Label: attached FSRL: (Lamm et al., 2018a) we use different colors to denote different semantic roles: purple for WHOLE, red for THEME, blue for MANNER, forestgreen for VALUE, orange for TIME, goldenrod for QUANT, pink for AGENT, cyan for SOURCE, and sepia for CAUSE. For a detailed definition of each semantic role, please refer to Lamm et al. (2018b). • Commodities: Dow Jones futures index 129.72, off 0.15; spot index 130.16, up 0.91.", "figure_data": "• Circulation revenue has increased by 5% in to the year-over-year improvement. Non-continued strong momentum of the LEAP en-Finland and 4% in Sweden in 2008. Label: GAAP operating expenses of $4.2 billion in-gine program up 56% versus the prior year.positive creased 4% year-over-year primarily drivenMilitary engine orders were up 69% driven byby higher R&D expense reflecting increased • The changes will take effect on 1 January investments in early drug development. Taken 2010, and they are not estimated to have an together we earned $1.11 per share on a non-impact on the number of employees. Label: GAAP basis up 4% excluding exchange. Note neutral that our GAAP EPS loss of $0.02 reflects thethe F414 and service orders grew 7%. Rev-enues of $8.5 billion grew <21>%. Equipment revenues were up 13% on higher commercial engines partially offset by lower military vol-ume. Label: relativecharge of $2.35 billion related to the formation • F-Secure Internet Security 2010 is a security of the strategic oncology collaboration with service for surfing the web, online banking AstraZeneca announced earlier in the quarter. and shopping, e-mail, and other online activi-ties. Label: neutral Label: absolute• We'll release 2 new movies from Pixar in fis-cal <2018>. We're thrilled with the early re-and we're also looking forward to the summer action to Coco which opens at Thanksgiving• Earnings per share (EPS) were EUR0.03, up NAD: (Chen et al., 2019a) target numbers and cashrelease of The Incredibles 2. Label: datefrom the loss of EUR0.083. Label: positive • Production capacity will increase from 36000 tags are indicated by \"< >\" and \"| |\" correspond-ingly:• I would like to remind you that some of the statements that we make during today'sto 85000 tonnes per year and the raw material will continue to be recycled paper and board. Label: positive NC: (Chen et al., 2020) the targeted numbers are • $|XXII| Scott Gottlieb, Commissioner of FDA speech transcript from November <3>rd, less than 2 months left in year then. Label: at-tached• Drugmaker Shire to buy |Baxalta| for $32 bil-lion after 6-month pursuit. Label: 0.75 call may be considered forward-looking state-ments within the meaning of the safe harbor provision of the U.S. Private Securities Litiga-tion Reform Act of <1995>. 
Label: otherenclosed by \"< >\": • Finally we experienced roughly $<104> mil-lion of hurricane-related expenses in the quar-• $|DPW| that was quite a roller coaster. Glad it ended well. Should see <5>in 7 days Label: attached ter for items like people-cost increased secu-rity in our affected stores and storm damage. So while our year-over-year sales growth was positively impacted by the hurricanes our op-erating profit was negatively impacted by $51 million. Label: money • Took me <5>minutes to conclude: #Snooze-fest \\ud83d \\ude34\\ud83d \\ude34 \\ud83d \\ude34 \\ud83d \\ude34 Advancers 6 to Declin-ers 5 NYSE + NASDAQ $|SPY| $QQQ $DIA $IWM Label: unattached• Centrica extends gas deals with Gazprom, |Statoil|. Label: 0.239 • |Aggreko| 2015 Profit Declines -Quick Facts. Label: -0.441 • |HSBC| shakes up board with two new busi-ness chiefs, three departures. Label: -0.074 SC (Malo et al., 2013): • We ended 2017 with franchised restaurants representing <92>% of our total restaurant base up from 81% 3 years ago. As a result franchise margins now comprise more than 80% of our total restaurant margin dollars. For the fourth quarter franchise margin dollars increased across all segments reflecting sales-driven performance and the shift to a more heavily franchised system. Label: absolutethe target companies are • NYSE owner |ICE| considers offer for LSE. enclosed by \"| |\": Label: 0.096 • NYSE owner ICE considers offer for |LSE|. • In Asia we expect to acquire 51% of our Philippines bottler from Coca-Cola FEMSA sulting in a minimal structural impact in our actions should roughly offset each other re-Southeast Asian bottlers. These <2>trans-is now comprised primarily of Southwest and part of our Bottling Investments Group which during the fourth quarter. This will become a • Take moment <2>note $Crypto Superiority trades 24/7 365• The business to be divested generates consol-idated net sales of EUR 60 million annually and currently has some 640 employees. Label: neutral • Svyturys-Utenos Alus, which is controlled by the Nordic group Baltic Beverages Holding (BBH), posted a 6.1 percent growth in beer money billion in share repurchase capacity. Label: authorization giving us approximately $18 an additional $<10> billion share repurchase of 2019. In addition, the board has approved • Today we announced that we will increase our quarterly dividend by 15% or by $0.07 to $0.55 per share beginning in the first quarterLabel: 0.396 • |Diageo| sales disappoint as currency and com-P&L in 2019. Label: money Label: quan-tity_absolutesales for January-September to 101.99 million liters. Label: positive • Looking back on 2017 I could not be more proud of our team and all they have ac-paratives leave bitter taste. Label: -0.545 • AB InBev attacks |SABMiller| bid rebuffal. • From a capital allocation perspective year-to-date we have generated $6.3 billion of free cash flow returned $<8.6>billion to sharehold-Label: -0.158 ers including $2.8 billion in dividends andcomplished. As I look to our <50>th year • The Department Store Division's sales fell by I'm more optimistic and confident than I've 8.6% to EUR 140.2 mn. Label: negative ever been about Intel's future. Label: quan-• Production capacity will rise gradually from tity_absolute• Are ARM Holdings plc, |Domino's Pizza Group plc| and ASOS plc 3 must-have growth $5.8 billion in buybacks repurchasing 117 mil-lion shares. Label: money stocks?. Label: 0.063 • Next on Aviation which had another great170,000 tonnes to 215,000 tonnes. 
Label: pos-itive • Non-GAAP gross margin was <76>% in the quarter an increase of roughly 70 basis • Rautalinko was resposnible also for Mobility points versus the third quarter of 2016. Fa-• Drugmaker |Shire| to buy Baxalta for $32 bil-quarter. Orders of $8.8 billion were up 12%.Services, and his job in this division will be vorable product mix driven by KEYTRUDAlion after 6-month pursuit. Label: 0.437 Equipment orders grew 20% driven by thecontinued by Marek Hintze. Label: neutral and ZEPATIER was the largest contributor", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Investors sentiment increased to 1.25 in Q2 2019. Its up 0.38, from 0.87 in 2019Q1. It increased, as 23 investors sold SBRA shares while 68 reduced holdings. • It also reduced its holding in Qualcomm Inc. (NASDAQ:QCOM) by 24,294 shares in the quarter, leaving it with 158,167 shares, and cut its stake in Wells Fargo& Co (New) (NYSE:WFC). • Investors sentiment decreased to 1.02 in 2019 Q2. Its down 0.11, from 1.13 in 2019Q1. It worsened, as 43 investors sold WY shares while 242 reduced holdings.", "figure_data": "• (NASDAQ:SBRA) has declined 1.62% sinceSeptember 21, 2018 and is downtrending. Ithas underperformed by 1.62% the S&P500.• Weyerhaeuser Company (NYSE:WY) has de-clined 25.53% since September 21, 2018 andis downtrending. It has underperformed by25.53% the S&P500.• After $0.46 actual EPS reported by SabraHealth Care REIT, Inc. for the previous quar-ter, Wall Street now forecasts 2.17% EPSgrowth.", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" } ]
Jingwei Ni; Zhijing Jin; Qian Wang; Mrinmaya Sachan; Markus Leippold
[ { "authors": "Armen Aghajanyan; Anchit Gupta; Akshat Shrivastava; Xilun Chen; Luke Zettlemoyer; Sonal Gupta", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Muppet: Massive multi-task representations with pre-finetuning", "year": "2021" }, { "authors": "Dogu Araci", "journal": "", "ref_id": "b1", "title": "Finbert: Financial sentiment analysis with pre-trained language models", "year": "2019" }, { "authors": "Vamsi Aribandi; Yi Tay; Tal Schuster; Jinfeng Rao; Huaixiu Steven Zheng; Sanket Vaibhav Mehta; Honglei Zhuang; Q Vinh; Dara Tran; Jianmo Bahri; Jai Ni; Kai Gupta; Sebastian Hui; Donald Ruder; Metzler", "journal": "", "ref_id": "b2", "title": "Ext5: Towards extreme multi-task scaling for transfer learning", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b4", "title": "", "year": "" }, { "authors": "Rich Caruana", "journal": "Machine Learning", "ref_id": "b5", "title": "Multitask learning", "year": "1997" }, { "authors": "Chung-Chi Chen; Hen-Hsen Huang; Hsin-Hsi Chen", "journal": "Association for Computing Machinery", "ref_id": "b6", "title": "a. Numeral attachment with auxiliary tasks", "year": "2019" }, { "authors": "Chung-Chi Chen; Hen-Hsen Huang; Hsin-Hsi Chen", "journal": "Association for Computing Machinery", "ref_id": "b7", "title": "Numclaim: Investor's fine-grained claim detection", "year": "2020" }, { "authors": "Chung-Chi Chen; Hen-Hsen Huang; Yow-Ting Shiue; Hsin-Hsi Chen", "journal": "", "ref_id": "b8", "title": "Numeral understanding in financial tweets for fine-grained crowd-based forecasting", "year": "2018" }, { "authors": "Chung-Chi Chen; Hen-Hsen Huang; Hiroya Takamura; Hsin-Hsi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Numeracy-600K: Learning numeracy for detecting exaggerated information in market comments", "year": "2019" }, { "authors": "Zhiyu Chen; Wenhu Chen; Charese Smiley; Sameena Shah; Iana Borova; Dylan Langdon; Reema Moussa; Matt Beane; Ting-Hao Huang; Bryan Routledge; William Yang; Wang ", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "FinQA: A dataset of numerical reasoning over financial data", "year": "2021" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Sharan Chowdhery; Gaurav Narang; Adams Mishra; Vincent Yu; Yanping Zhao; Andrew Huang; Hongkun Dai; Slav Yu; Ed H Petrov; Jeff Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b11", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Keith Cortis; André Freitas; Tobias Daudert; Manuela Huerlimann; Manel Zarrouk; Siegfried Handschuh; Brian Davis", "journal": "Association for Computational Linguistics", 
"ref_id": "b12", "title": "SemEval-2017 task 5: Finegrained sentiment analysis on financial microblogs and news", "year": "2017" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Mahmoud El-Haj; Ahmed Abura'ed; Marina Litvak; Nikiforos Pittaras; George Giannakopoulos", "journal": "COLING", "ref_id": "b14", "title": "The financial narrative summarisation shared task (FNS 2020)", "year": "2020" }, { "authors": "Christopher Fifty; Ehsan Amid; Zhe Zhao; Tianhe Yu; Rohan Anil; Chelsea Finn", "journal": "", "ref_id": "b15", "title": "Efficiently identifying task groupings for multi-task learning", "year": "2021" }, { "authors": "Mor Geva; Uri Katz; Aviv Ben-Arie; Jonathan Berant", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "What's in your head? Emergent behaviour in multi-task transformer models", "year": "2021" }, { "authors": "Ahmed Hazourli", "journal": "", "ref_id": "b17", "title": "Financialbert -a pretrained language model for financial text mining", "year": "2022" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "", "ref_id": "b18", "title": "Parameter-efficient transfer learning for nlp", "year": "2019" }, { "authors": "Juyeon Kang; Ismail El Maarouf; Sandra Bellato; Mei Gan", "journal": "", "ref_id": "b19", "title": "FinSim-3: The 3rd shared task on learning semantic similarities for the financial domain", "year": "2021" }, { "authors": "Rabeeh Karimi Mahabadi; Sebastian Ruder; Mostafa Dehghani; James Henderson", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Parameterefficient multi-task fine-tuning for transformers via shared hypernetworks", "year": "2021" }, { "authors": "Matthew Lamm; Arun Chaganty; Christopher D Manning; Dan Jurafsky; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Textual analogy parsing: What's shared and what's compared among analogous facts", "year": "2018" }, { "authors": "Matthew Lamm; Arun Tejasvi Chaganty; Dan Jurafsky; Christopher D Manning; Percy Liang", "journal": "", "ref_id": "b22", "title": "Qsrl : A semantic role-labeling schema for quantitative facts", "year": "2018" }, { "authors": "Xiaodong Liu; Pengcheng He; Weizhu Chen; Jianfeng Gao", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Multi-task deep neural networks for natural language understanding", "year": "2019" }, { "authors": "Zhuang Liu; Degen Huang; Kaiyu Huang; Zhuang Li; Jun Zhao", "journal": "", "ref_id": "b24", "title": "FinBERT: A pre-trained financial language representation model for financial text mining", "year": "2020" }, { "authors": "Ismail El Maarouf; Juyeon Kang; Abderrahim Ait Azzi; Sandra Bellato; Mei Gan; Mahmoud El-Haj", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "The financial document structure extraction shared task (FinTOC2021)", "year": "2021" }, { "authors": "Ismail El Maarouf; Youness Mansar; Virginie Mouilleron; Dialekti Valsamou-Stanislawski", "journal": "", "ref_id": "b26", "title": "The FinSim 2020 shared task: Learning semantic representations for the financial domain", "year": "2020" }, { "authors": "Pekka Malo; Ankur 
Sinha; Pyry Takala; J Pekka; Jyrki Korhonen; Wallenius", "journal": "", "ref_id": "b27", "title": "Good debt or bad debt: Detecting semantic orientations in economic texts", "year": "2013" }, { "authors": "Yuren Mao; Zekai Wang; Weiwei Liu; Xuemin Lin; Pengtao Xie", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "MetaWeighting: Learning to weight tasks in multi-task learning", "year": "2022" }, { "authors": "Dominique Mariko; Hanna Abi-Akl; Estelle Labidurie; Stephane Durfort; Hugues De Mazancourt; Mahmoud El-Haj", "journal": "COLING", "ref_id": "b29", "title": "The financial document causality detection shared task (FinCausal 2020)", "year": "2020" }, { "authors": "Sewon Min; Mike Lewis; Luke Zettlemoyer; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "MetaICL: Learning to learn in context", "year": "2022" }, { "authors": "David Mueller; Nicholas Andrews; Mark Dredze", "journal": "", "ref_id": "b31", "title": "Do text-to-text multi-task learners suffer from task conflict?", "year": "2022" }, { "authors": "Vishakh Padmakumar; Leonard Lausen; Miguel Ballesteros; Sheng Zha; He He; George Karypis", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Exploring the role of task transferability in largescale multi-task learning", "year": "2022" }, { "authors": "Yifan Peng; Qingyu Chen; Zhiyong Lu", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "An empirical study of multi-task learning on BERT for biomedical text mining", "year": "2020" }, { "authors": "Jonas Pfeiffer; Aishwarya Kamath; Andreas Rücklé; Kyunghyun Cho; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "AdapterFusion: Non-destructive task composition for transfer learning", "year": "2021" }, { "authors": "Linhares Elvys; Mohamed Pontes; Jose G Benjannet; Antoine Moreno; Doucet", "journal": "", "ref_id": "b35", "title": "Using contextual sentence analysis models to recognize esg concepts", "year": "2022" }, { "authors": "Maria Edoardo; Alessandro Ponti; Yoshua Sordoni; Siva Bengio; Reddy", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "Combining parameterefficient modules for task-level generalisation", "year": "2023" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b37", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Sebastian Ruder", "journal": "", "ref_id": "b38", "title": "An overview of multi-task learning in deep neural networks", "year": "2017" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen H Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Teven Le Scao; Arun Raja; Manan Dey; M Saiful; Canwen Bari; Urmish Xu; Shanya Thakker; Eliza Sharma; Taewoon Szczechla; Gunjan Kim; Chhablani; V Nihal; Debajyoti Nayak; Jonathan Datta; Mike Chang; Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Févry; Alan Fries; Ryan Teehan; Stella Biderman; Leo Gao; Tali Bers; Thomas Wolf; Alexander M Rush", "journal": "", "ref_id": "b39", "title": "Multitask prompted training enables zero-shot task 
generalization", "year": "2021" }, { "authors": "Asa ; Cooper Stickland; Iain Murray", "journal": "", "ref_id": "b40", "title": "BERT and pals: Projected attention layers for efficient adaptation in multi-task learning", "year": "2019" }, { "authors": "Tu Vu; Tong Wang; Tsendsuren Munkhdalai; Alessandro Sordoni; Adam Trischler; Andrew Mattarella-Micke; Subhransu Maji; Mohit Iyyer", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Exploring and predicting transferability across NLP tasks", "year": "2020" }, { "authors": "Zirui Wang; Yulia Tsvetkov; Orhan Firat; Yuan Cao", "journal": "", "ref_id": "b42", "title": "Gradient vaccine: Investigating and improving multi-task optimization in massively multilingual models", "year": "2021" }, { "authors": "Jason Wei; Maarten Bosma; Vincent Zhao; Kelvin Guu; Adams Wei Yu; Brian Lester; Nan Du; Andrew M Dai; Quoc V Le", "journal": "", "ref_id": "b43", "title": "Finetuned language models are zero-shot learners", "year": "2022" }, { "authors": "Frank Xing; Lorenzo Malandri; Yue Zhang; Erik Cambria", "journal": "International Committee on Computational Linguistics", "ref_id": "b44", "title": "Financial sentiment analysis: An investigation into common mistakes and silver bullets", "year": "2020" }, { "authors": "Yi Yang; Mark Christopher Siy; Allen Uy; Huang", "journal": "", "ref_id": "b45", "title": "Finbert: A pretrained language model for financial communications", "year": "2020" }, { "authors": "Tianhe Yu; Saurabh Kumar; Abhishek Gupta; Sergey Levine; Karol Hausman; Chelsea Finn", "journal": "", "ref_id": "b46", "title": "Gradient surgery for multi-task learning", "year": "2020" }, { "authors": "Yichu Zhou; Vivek Srikumar", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "A closer look at how fine-tuning changes BERT", "year": "2022" }, { "authors": "Fengbin Zhu; Wenqiang Lei; Youcheng Huang; Chao Wang; Shuo Zhang; Jiancheng Lv; Fuli Feng; Tat-Seng Chua", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 85.63, 484.55, 204.24, 21.4 ], "formula_id": "formula_0", "formula_text": "L(θ, D) = t∈T w t • l t (h θt (f θ E (X t )), Y t ) (1)" }, { "formula_coordinates": [ 7, 345.66, 586.43, 179.48, 29.26 ], "formula_id": "formula_1", "formula_text": "R t l,M = 1 |D t | (xt ,yt )∈Dt M l (x t )(2)" }, { "formula_coordinates": [ 7, 313.78, 746.76, 211.36, 29.47 ], "formula_id": "formula_2", "formula_text": "G l,M = 1 C 2 |T| t 1 ,t 2 ∈T cossim(R t 1 l,M , R t 2 l,M )(3)" } ]
10.1145/3580305.3599303
2023-05-24
[ { "figure_ref": [ "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b20", "b22", "b31", "b9", "b28", "b13", "b31", "b31", "b9", "b35", "b1", "b13", "b43", "b15", "b50", "b51", "b13", "b1" ], "table_ref": [], "text": "Machine Learning (ML) has proven to be successful in a wide range of tasks such as image classification, natural language processing, and time series forecasting. In a supervised learning setup practitioners need to design a sequence of choices comprising algorithms that transform the data (e.g. imputation, scaling) and produce an estimation (e.g. through a classifier or regressor). Unfortunately, manually configuring the design choices is a tedious and error-prone task. The field of AutoML aims at researching methods for automatically discovering the optimal design choices of ML pipelines [21,23]. As a result, Pipeline Optimization [32] or pipeline synthesis [10,29] is the primary open challenge of AutoML.\nPipeline Optimization (PO) techniques need to capture the complex interaction between the algorithms of a Machine Learning (ML) pipeline and their hyperparameter configurations. Previous work demonstrates that the pipeline search can be automatized and achieve state-of-the-art predictive performance [14,32]. Some of these approaches include Evolutionary Algorithms [32], Reinforcement Learning [10,36] or Bayesian Optimization [2,14,44]. Additionally, transfer learning has been shown to improve decisively PO by transferring efficient pipelines evaluated on other similar datasets [16,51,52].\nUnfortunately, no prior method uses Deep Learning to encapsulate the interaction between pipeline components. Existing techniques train traditional models as performance predictors on the concatenated hyperparameter space of all algorithms, such as Random Forests [14], or Gaussian Processes with additive kernels [2]. In this paper, we hypothesize that we need Deep Learning, not only at the basic supervised learning level, but also at a meta-level for capturing the interaction between ML pipeline components (e.g. the deep interactions of the hyperparameters of preprocessing, augmentation, and modeling stages).\nAs a result, we introduce DeepPipe, a neural network architecture for embedding pipeline configurations on a latent space. Such deep representations are combined with Gaussian Processes (GP) for tuning pipelines with Bayesian Optimization (BO). We exploit the knowledge of the hierarchical search space of pipelines by mapping the hyperparameters of every algorithm through per-algorithm encoders to a hidden representation, followed by a fully connected network that receives the concatenated representations as input. An illustration of the mechanism is presented in Figure 1. Additionally, we show that meta-learning this network through evaluations on auxiliary tasks improves the PO quality. Experiments on three large-scale meta-datasets show that our method achieves new stateof-the-art Pipeline Optimization.\nOur contributions are as follows:\n• We introduce DeepPipe, a surrogate for BO that achieves state-of-the-art performance when optimizing a pipeline for a new dataset through transfer learning.\n• We present a novel and modular architecture that applies different encoders per stage and yields better generalization in low meta-data regimes, i.e. few/no auxiliary tasks. 
• We conduct extensive evaluations against seven baselines on three large meta-datasets, and we further compare against rival methods in OpenML datasets to assess their performances under time constraints. • We demonstrate that our pipeline representation helps achieve state-of-the-art results in optimizing pipelines for fine-tuning deep computer vision networks." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b5", "b39", "b4", "b40", "b41", "b49", "b25", "b34", "b45", "b48", "b14", "b7", "b17", "b47", "b33", "b40", "b34", "b46", "b48", "b13", "b23", "b23", "b13", "b43", "b37", "b8", "b9", "b35", "b31", "b30", "b42", "b26", "b1", "b28", "b16", "b1", "b13", "b12", "b50", "b51", "b15" ], "table_ref": [], "text": "Hyperparameter Optimization (HPO) has been well studied over the past decade [6]. Techniques relying on Bayesian Optimization (BO) employ surrogates to approximate the response function of Machine Learning models, such as Gaussian Processes [40], Random Forests [5] or Bayesian neural networks [41,42,50]. Further improvements have been achieved by applying transfer learning, where existing evaluations on auxiliary tasks help pre-training or meta-learning the surrogate. In this sense, some approaches use pretrained neural networks with uncertainty outputs [26,35,46,49], or ensembles of Gaussian Processes [15]. Deep Kernels propose combining the benefits of stochastic models such as Gaussian Processes with neural networks [8,18,48]. Follow-up work has applied this combination for training fewshot classifiers [34]. In the area of Hyperparameter Optimization, a successful option is to combine the output layer of a deep neural network with a Bayesian linear regression [41]. Related studies [35] extended this idea by pre-training the Bayesian network with auxiliary tasks. Recent work proposed using non-linear kernels, such as the Matérn kernel, on top of the pre-trained network to improve the performance of BO [47,49]. However, to the best of our knowledge, we are the first to apply Deep Kernels for optimizing pipelines.\nFull Model Selection (FMS) is also referred to as Combined Algorithm Selection and Hyperparameter optimization (CASH) [14,24]. FMS aims to find the best model and its respective hyperparameter configuration [24]. A common approach is to use Bayesian Optimization with surrogates that can handle conditional hyperparameters, such as Random Forest [14], tree-structured Parzen estimators [44], or ensembles of neural networks [38].\nPipeline Optimization (PO) is a generalization of FMS where the goal is to find the algorithms and their hyperparameters for different stages of a Machine Learning Pipeline. Common approaches model the search space as a tree structure and use reinforcement learning [9,10,36], evolutionary algorithms [32], or Hierarchical Task Networks [31] for searching pipelines. Other approaches use Multi-Armed Bandit strategies to optimize the pipeline, and combine them with Bayesian Optimization [43] or multi-fidelity optimization [27]. Alaa and van der Schaar [2] use additive kernels on a Gaussian Process surrogate to search pipelines with BO that groups the algorithms in clusters and fit their hyperparameters on independent Gaussian Processes, achieving an effectively lower dimensionality per input. 
By formulating the Pipeline Optimization as a constrained optimization problem, Liu et al [29] introduce an approach based on the alternating direction method of multipliers (ADMM) [17].\nTransfer Learning for Pipeline Optimization and CASH leverages information from previous (auxiliary) task evaluations. A few approaches use dataset meta-features to warm-start BO with good configurations from other datasets [2,14]. As extracting metafeatures demands computational time, follow-up works find a portfolio based on these auxiliary tasks [13]. Another popular approach is to use collaborative filtering with a matrix of pipelines vs task evaluations to learn latent embeddings of pipelines. OBOE obtains the embeddings by applying a QR decomposition of the matrix on a time-constrained formulation [51]. By recasting the matrix as a tensor, Tensor-OBOE [52] finds latent representations via the Tucker decomposition. Furthermore, Fusi et al. [16] apply probabilistic matrix factorization for finding the latent pipeline representations. Subsequently, they use the latent representations as inputs for Gaussian Processes and explore the search space using BO. However, these methods using matrix factorization obtain latent representations of the pipelines that neglect the interactions of the hyperparameters between the pipeline's components." }, { "figure_ref": [], "heading": "PRELIMINARIES 3.1 Pipeline Optimization", "publication_ref": [ "b51" ], "table_ref": [], "text": "The pipeline of a ML system consists of a sequence of 𝑁 stages (e.g. dimensionality reducer, standardizer, encoder, estimator [52]). At each stage 𝑖 ∈ {1 . . . 𝑁 } a pipeline includes one algorithm1 from a set of 𝑀 𝑖 choices (e.g. the estimator stage can include the algorithms {SVM, MLP, RF}). Algorithms are tuned through their hyperparameter search spaces, where 𝜆 𝑖,𝑗 denotes the configuration of the 𝑗-th algorithm in the 𝑖-th stage. Furthermore, let us denote a pipeline 𝑝 as the set of indices for the selected algorithm at each stage, i.e. 𝑝 := (𝑝 1 , . . . , 𝑝 𝑁 ), where 𝑝 𝑖 ∈ {1 . . . 𝑀 𝑖 } represents the index of the selected algorithm at the 𝑖-th pipeline stage. The hyperparameter configuration of a pipeline is the unified set of the configurations of all the algorithms in a pipeline, concretely 𝜆(𝑝) := 𝜆 1,𝑝 1 , . . . , 𝜆 𝑁 ,𝑝 𝑁 , 𝜆 𝑖,𝑝 𝑖 ∈ Λ 𝑖,𝑝 𝑖 . Pipeline Optimization demands finding the optimal pipeline 𝑝 * and its optimal configuration 𝜆(𝑝 * ) by minimizing the validation loss of a trained pipeline on a dataset D as shown in Equation 1.\n𝑝 * , 𝜆 𝑝 * = arg min 𝑝 ∈ {1...𝑀 1 } ו••× {1...𝑀 𝑁 }, 𝜆 (𝑝 ) ∈Λ 1,𝑝 1 ו••×Λ 𝑁 ,𝑝 𝑁 L val 𝑝, 𝜆(𝑝), D(1)\nFrom now we will use the term pipeline configuration for the combination of a sequence of algorithms 𝑝 and their hyperparameter configurations 𝜆(𝑝), and denote it simply as 𝑝 𝜆 := (𝑝, 𝜆(𝑝))." }, { "figure_ref": [], "heading": "Bayesian Optimization", "publication_ref": [ "b1", "b13", "b15", "b21", "b37", "b0", "b39" ], "table_ref": [], "text": "Bayesian optimization (BO) is a mainstream strategy for optimizing ML pipelines [2,14,16,22,38]. Let us start with defining a history of 𝑄 evaluated pipeline configurations as H = {(𝑝 (1) 𝜆 , 𝑦 (1) \nWe fit a surrogate iteratively using the observed configurations and their response in BO. Posteriorly, its probabilistic output is used to query the next configuration to evaluate by maximizing an acquisition function [40]. 
A common choice for the acquisition is Expected Improvement, defined as:\nEI(𝑝 𝜆 |H ) = E max 𝑦 min-𝜇 (𝑝 𝜆 ) , 0(3)\nwhere 𝑦 min is the best-observed response in the history H and 𝜇 is the posterior of the mean predicted performance given by the surrogate, computed using Equation 2. A common choice as a surrogate is Gaussian Processes, but for Pipeline Optimization we introduce DeepPipe." }, { "figure_ref": [], "heading": "DEEP-PIPE: BO WITH DEEP PIPELINE CONFIGURATIONS", "publication_ref": [], "table_ref": [], "text": "To apply BO to Pipeline Optimization (PO) we must define a kernel function that computes the similarity of pipeline configurations, i.e." }, { "figure_ref": [], "heading": "𝑘 𝑝", "publication_ref": [ "b1", "b13", "b31", "b37", "b47", "b48", "b18", "b47", "b48", "b36" ], "table_ref": [], "text": "(𝑞)\n𝜆 , 𝑝(ℓ )\n𝜆 ; 𝜃 = ?. Prior work exploring BO for PO use kernel functions directly on the raw concatenated vector space of selected algorithms and their hyperparameters [2] or use surrogates without dedicated kernels for the conditional search space [14,32,38].\nHowever, we hypothesize that these approaches cannot capture the deep interaction between pipeline stages, between algorithms inside a stage, between algorithms across stages, and between different configurations of these algorithms. In order to address this issue we propose a simple, yet powerful solution to PO: learn a deep embedding of a pipeline configuration and apply BO with a deep kernel [48,49]. This is done by DeepPipe, which searches pipelines in a latent space using BO with Gaussian Processes. We use a neural network 𝜙 (𝑝 𝜆 ; 𝜃 ) : dom(𝑝 𝜆 ) → R 𝑍 with weights 𝜃 to project a pipeline configuration to a 𝑍 -dimensional space. Then, we measure the pipelines' similarity in this latent space as 𝑘 𝜙 (𝑝 (𝑞) 𝜆 ; 𝜃 ), 𝜙 (𝑝 (ℓ ) 𝜆 ; 𝜃 ) using the popular Matérn 5/2 kernel [19]. Once we compute the parameters of the kernel similarity function, we can obtain the GP's posterior and conduct PO with BO as specified in Section 3.2.\nIn this work, we exploit existing deep kernel learning machinery [48,49] to train the parameters 𝜃 of the pipeline embedding neural network 𝜙, and the parameters 𝛾 of the kernel function 𝑘, by maximizing the log-likelihood of the observed validation losses 𝑦 of the evaluated pipeline configurations 𝑝 𝜆 . The objective function for training a deep kernel is the log marginal likelihood of the Gaussian Process [37] with covariance matrix entries\n𝑘 𝑞,ℓ = 𝑘 𝜙 (𝑝 (𝑞) 𝜆 ; 𝜃 ), 𝜙 (𝑝 (ℓ ) 𝜆 ; 𝜃 ) ." }, { "figure_ref": [ "fig_0" ], "heading": "Pipeline Embedding Network", "publication_ref": [], "table_ref": [], "text": "The main piece of the puzzle is: How to define the pipeline configuration embedding 𝜙?\nOur DeepPipe embedding is composed of two parts (i) per-algorithm neural network encoders, and (ii) a pipeline aggregation network. A visualization example of our DeepPipe embedding architecture is provided in Figure 1. We define an encoder 𝜉 (𝑖,𝑗 ) for the hyperparameter configurations of each 𝑗-th algorithm, in each 𝑖-th stage, as a plain multi-layer perceptron (MLP). 
Every encoder, parameterized by weights 𝜃 enc (𝑖,𝑗 ) , maps the algorithms' configurations to a 𝐿 𝑖 -dimensional vector space:\n𝜉 (𝑖,𝑗 ) 𝜆 𝑖,𝑗 ; 𝜃 enc 𝑖,𝑗 = MLP 𝜆 𝑖,𝑗 ; 𝜃 enc 𝑖,𝑗 , 𝜉 (𝑖,𝑗 ) : Λ 𝑖,𝑗 → R 𝐿 𝑖(4)\nFor a pipeline configuration 𝑝 𝜆 , represented with the indices of its algorithms 𝑝, and the configuration vectors of its algorithms 𝜆(𝑝), we project all the pipeline's algorithms' configurations to their latent space using the algorithm-specific encoders. Then, we concatenate their latent encoder vectors, where our concatenation notation is R 𝐿 𝑖 ⊕ R 𝐿 𝑘 := R 𝐿 𝑖 +𝐿 𝑘 . Finally, the concatenated representation is embedded to a final R 𝑍 space via an aggregation MLP 𝜓 with parameters 𝜃 aggr as denoted below:\n𝜙 (𝑝 𝜆 ) := 𝜓 𝜉 (1,𝑝 1 ) (𝜆 1,𝑝 1 ) ⊕ • • • ⊕ 𝜉 (𝑁 ,𝑝 𝑁 ) (𝜆 𝑁 ,𝑝 𝑁 ) | 𝜃 aggr 𝜓 : R 𝑖 𝐿 𝑖 → R 𝑍 (5)\nWithin the 𝑖-th stage, only the output of one encoder is concatenated, therefore the output of the Selector corresponds to the active algorithm in the 𝑖-th stage and can be formalized as 𝜉 (𝑖,𝑝 𝑖 ) (𝜆 𝑖,𝑝 𝑖 ) = 𝑀 𝑖 𝑗=1 I( 𝑗 = 𝑝 𝑖 ) • 𝜉 (𝑖,𝑗 ) 𝜆 𝑖,𝑗 , where I denotes the indicator function. Having defined the embedding 𝜙 in Equations 4-5, we can plug it into the kernel function, optimize it minimizing the negative loglikelihood of the GP with respect to 𝜃 = {𝜃 enc , 𝜃 aggr }, and conduct BO as in Section 3.2. In Appendix C, we discuss further how the different layers allow DeepPipe to learn the interactions among components and stages." }, { "figure_ref": [], "heading": "Meta-learning our pipeline embedding", "publication_ref": [ "b33", "b48", "b48" ], "table_ref": [], "text": "In many practical applications, there exist computed evaluations of pipeline configurations on previous datasets, leading to the possibility of transfer learning for PO. Our DeepPipe can be easily meta-learned from such past evaluations by pre-training the pipeline embedding network. Let us denote the meta-dataset of pipeline evaluations on 𝑇 datasets (a.k.a. auxiliary tasks) as 1) , 𝑦 (𝑡,1) ), . . . , (𝑝 𝜆 (𝑡,𝑄 𝑡 ) , 𝑦 (𝑡,𝑄 𝑡 ) )}, 𝑡 ∈ {1, . . . ,𝑇 }, where 𝑄 𝑡 is the number of existing evaluations for the 𝑡-th dataset. As a result, we meta-learn our method's parameters to minimize the meta-learning objective of Equation 6. This objective function corresponds to the negative log-likelihood of the Gaussian Processes using DeepPipe's extracted features as input to the kernel [34,49].\nH 𝑡 = {(𝑝 𝜆 (𝑡,\narg min 𝛾,𝜃 𝑇 ∑︁ 𝑡 =1 𝑦 (𝑡 ) T 𝐾 (𝑡 ) (𝜃, 𝛾) -1 𝑦 (𝑡 ) + log 𝐾 (𝑡 ) (𝜃, 𝛾)(6)\nThe learned parameters are used as initialization for the surrogate. We sample batches from the meta-training tasks and make gradient steps that maximize the marginal log-likelihood in Equation 6, similar to previous work [49]. The training algorithm for the surrogate is detailed in Algorithm 1. Every epoch, we perform the following operations for every task 𝑡 ∈ 1...𝑇 : (i) Draw a set of 𝑏 observations (pipeline configuration and performance), (ii) Compute the negative log marginal likelihood (our loss function) as in Equation 6, (iii) compute the gradient of the loss with respect to the DeepPipe parameters and (iv) update DeepPipe parameters. Additionally, we apply Early Convergence by monitoring the performance on the validation meta-dataset. " }, { "figure_ref": [], "heading": ";", "publication_ref": [], "table_ref": [], "text": "When a new pipeline is to be optimized on a new dataset (task), we apply BO (see Algorithm 2). Every iteration we update the surrogate by fine-tuning the kernel parameters. 
However, the parameters of the MLP layers 𝜃 can be also optimized, as we did in Experiment 1, in which case the parameters were randomly initialized." }, { "figure_ref": [], "heading": "EXPERIMENTS 5.1 Meta-Datasets", "publication_ref": [ "b15", "b15", "b51", "b32", "b19", "b11", "b32", "b38", "b0", "b10" ], "table_ref": [], "text": "A meta-dataset is a collection of pipeline configurations and their respective performance evaluated in different tasks (i.e. datasets).\nIn our experiments, we use the following meta-datasets.\nPMF contains 38151 pipelines (after filtering out pipelines with only NaN entries), and 553 datasets [16]. Although not all the pipelines were evaluated in all tasks (or datasets), it still has a total of 16M evaluations. The pipeline search space has 2 stages (preprocessing and estimator) with 2 and 11 algorithms respectively. Following the setup in the original paper [16], we take 464 tasks for meta-training and 89 for meta-test. As the authors do not specify a validation meta-dataset, we sample randomly 15 tasks out of the meta-training dataset.\nTensor-OBOE provides 23424 pipelines evaluated on 551 tasks [52]. It contains 11M evaluations, as there exist sparse evaluations of pipelines and tasks. The pipelines include 5 stages: Imputator (1 algorithm), Dimensionality-Reducer (3 algorithms), Standardizer (1 algorithm), Encoder (1 algorithm), and Estimator (11 algorithms). We assign 331 tasks for meta-training, 110 for meta-validation, and 110 for meta-testing. ZAP is a benchmark that evaluates deep learning pipelines on fine-tuning state-of-the-art computer vision tasks [33]. The metadataset contains 275625 evaluated pipeline configurations on 525 datasets and 525 different Deep Learning pipelines (i.e. the best pipeline of a dataset was evaluated also on all other datasets). From the set of datasets, we use 315 for meta-training, 45 for metavalidation and 105 for meta-test, following the protocol of the original paper.\nIn addition, we use OpenML datasets. It comprises 39 curated datasets [20] and has been used in previous work for benchmarking [12]. Although this collection of datasets does not contain pipeline evaluations like the other three meta-datasets, we use it for evaluating the Pipeline Optimization in time-constrained settings [33].\nInformation about the search space of every meta-dataset is clarified in Appendix I, and the splits of tasks per meta-dataset are found in Appendix J. All the tasks in the meta-datasets correspond to classification. We use the meta-training set for Pipeline Optimization (PO) methods using transfer learning or meta-learning, and the meta-validation set for tuning some of the hyper-parameters of the PO methods. Finally, we assess their performance on the meta-test set.\nMeta-Datasets Preprocessing. We obtained the raw data for the meta-datasets from the raw repositories of PMF [39] , TensorOBOE [1] and ZAP [11]. PMF and ZAP repositories provide an accuracy matrix, while Tensor-OBOE specifies the error erorr. Moreover, the pipelines configurations are available in different formats for every meta-dataset, e.g. JSON or YAML. Therefore, we firstly convert all the configurations into a tabular format, and the performance matrices are converted to accuracies. Then, we proceed with the following steps: 1) One-Hot encode the categorical hyperparameters, 2) apply a log transformation 𝑥 𝑛𝑒𝑤 = ln(𝑥) to the hyperparameters whose value is greater than 3 standard deviations, 3) scale all the values to be in the range [0,1]." 
}, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b5", "b15", "b38", "b50", "b51", "b50", "b37", "b37", "b14", "b3", "b39", "b40", "b27", "b21", "b13", "b1", "b6", "b31" ], "table_ref": [], "text": "We assess the performance of DeepPipe by comparing it with the following set of baselines, which comprises transfer and non-transfer methods.\nRandom Search (RS) selects pipeline configurations by sampling randomly from the search space [6].\nProbabilistic Matrix Factorization (PMF) uses a surrogate model that learns shallow latent representation for every pipeline using the performance matrix of meta-training tasks [16]. We follow the setting for the original PMF for AutoML implementation [39].\nOBOE also uses matrix factorization for optimizing pipelines, but they aim to find fast and informative algorithms to initialize the matrix [51]. We use the settings provided by the authors.\nTensor-OBOE formulates PO as a tensor factorization, where the rank of the tensor is equal to 1 + 𝑁 , for 𝑁 being the number of stages in the pipeline [52]. We use the setting provided by the original implementation [51]. We do not evaluate TensorOBOE on the ZAP and PMF meta-datasets because their performance matrix do not factorize into a tensor.\nFactorized Multilayer Perceptron (FMLP) creates an ensemble of neural networks with a factorized layer [38]. The inputs of the neural network are the one-hot encodings of the algorithms and datasets, in addition to the algorithms' hyperparameters. We use 100 networks with 5 neurons and ReLU activations as highlighted in the author's paper [38].\nRGPE builds an ensemble of Gaussian Processes using auxiliary tasks [15]. The ensemble weights the contributions of every base model and the new model fits the new task. We used the implementation from Botorch [4].\nGaussian Processes (GP) are a standard and strong baseline in hyperparameter optimization [40]. In our experiments, we used Matérn 5/2 kernel.\nDNGO uses neural networks as basis functions with a Bayesian linear regressor at the output layer [41]. We use the implementation provided by Klein and Zela [28], and its default hyperparameters.\nSMAC uses Random Forest with 100 trees for predicting uncertainties [22], with minimal samples leaf and split equal to 3. They have proven to handle well conditional search spaces [14].\nAutoPrognosis [2] uses Structured Kernel Learning (SKL) and meta-learning for optimizing pipelines. We also compare AutoPrognosis against the meta-learned DeepPipe by limiting the search space of classifiers to match the classifiers on the Tensor-OBOE meta-dataset 2 . Additionally, we compare SKL with our non-metalearned DeepPipe version using the default strategy for searching the additive kernels. For these experiments, we use the implementation in the respective author's repository [7].\nTPOT is an AutoML system that conducts PO using evolutionary search [32]. We use the original implementation but adopted the search space to fit the Tensor-OBOE meta-dataset (see Appendix I)." }, { "figure_ref": [], "heading": "Experimental Setup for DeepPipe", "publication_ref": [ "b15", "b29" ], "table_ref": [], "text": "The encoders and the aggregation layers are Multilayer Perceptrons with ReLU activations. We keep an architecture that is proportional to the input size, such that the number of neurons in the hidden layers for the encoder of algorithm 𝑗-th in 𝑖-th stage with |Λ 𝑖,𝑗 | hyperparameters is 𝐹 • |Λ 𝑖,𝑗 |, given an integer factor 𝐹 . 
The output dimension of the encoders of the 𝑖-th stage is defined as 𝐿 𝑖 = max 𝑗 |Λ 𝑖,𝑗 |. The number of total layers (i.e. encoder and aggregation layers) is fixed to 4 in all experiments, thus the number In all experiments (except Experiment 1), we meta-train the surrogate following Algorithm 1 for 10000 epochs with the Adam optimizer and a learning rate of 10 -4 , batch size 1000, and the Matérn kernel for the Gaussian Process. During meta-testing, when we perform BO to search for a pipeline, we fine-tune only the kernel parameters 𝛾 for 100 gradient steps. In the non-transfer experiments (Experiment 1) we tuned the whole network for 10000 iterations, while the rest of the training settings are similar to the transfer experiments. In Experiment 5 we fine-tune the whole network for 100 steps when no encoders are used. Otherwise, we fine-tune only the encoder associated with the omitted estimator and freeze the rest of the network. We ran all experiments on a CPU cluster, where each node contains two Intel Xeon E5-2630v4 CPUs with 20 CPU cores each, running at 2.2 GHz. We reserved a total maximum memory of 16GB. Further details on the architectures for each search space are specified in Appendix F. Finally, we use the Expected Improvement as an acquisition function for DeepPipe and all the baselines. Initial Configurations. All the baselines use the same five initial configurations, i.e. 𝐼 = 5 in Algorithm 2. For the experiments with the PMF-Dataset, we choose these configurations with the same procedure as the authors [16], where they use dataset meta-features to find the most similar auxiliary task to initialize the search on the test task. Since we do not have meta-features for the Tensor-OBOE meta-dataset, we follow a greedy initialization approach [30]. This was also applied to the ZAP-Dataset. Specifically, we select the bestperforming pipeline configuration by ranking their performances on the meta-training tasks. Subsequently, we iteratively choose four additional configurations that minimize 𝑡 ∈Tasks r𝑡 , where r𝑡 = min 𝑝 ∈ X 𝑟 𝑡,𝑝 , given that 𝑟 𝑡,𝑝 is the rank of the pipeline 𝑝 on task 𝑡. Additional details on the setup can be found in our source code 3 ." }, { "figure_ref": [], "heading": "Research Hypotheses and Associated Experiments", "publication_ref": [ "b4", "b36", "b40", "b21", "b1", "b15", "b37", "b50", "b51", "b14", "b2", "b31", "b50", "b51", "b21", "b15", "b24", "b44" ], "table_ref": [], "text": "We describe the different hypotheses and experiments for testing the performance of DeepPipe. Hypothesis 1: DeepPipe outperforms standard PO baselines. Experiment 1: We evaluate the performance of DeepPipe when no meta-training data is available. We compare against four baselines: Random Search (RS) [5], Gaussian Processes (GP) [37], DNGO [41], SMAC [22] and SKL [2]. We evaluate their performances on the aforementioned PMF, Tensor-OBOE and ZAP meta-datasets. In Experiments 1 and 2 (below), we select 5 initial observations to warm-start the BO, then we run 95 additional iterations.\nHypothesis 2: Our meta-learned DeepPipe outperforms stateof-the-art transfer-learning PO methods.\nExperiment 2: We compare our proposed method against baselines that use auxiliary tasks (a.k.a. meta-training data) for improving the performance of Pipeline Optimization: Probabilistic Matrix Factorization (PMF) [16], Factorized Multilayer Perceptron (FMLP) [38], OBOE [51] and Tensor OBOE [52]. Moreover, we compare to RGPE [15], an effective baseline for transfer HPO [3]. 
We evaluate the performances on the PMF and Tensor-OBOE meta-datasets.\nHypothesis 3: DeepPipe leads to strong any-time results in a time-constrained PO problem.\nExperiment 3: Oftentimes practitioners need AutoML systems that discover efficient pipelines within a small time budget. To test the convergence speed of our PO method we ran it on the aforementioned OpenML datasets for a budget of 10 minutes, and also 1 hour. We compare against five baselines: (i) TPOT [32] adapted to the search space of Tensor-OBOE (see Appendix I), (ii) OBOE and Tensor-OBOE [51,52] using the time-constrained version provided by the authors, (iii) SMAC [22], and (iv) PMF [16]. The last three had the same five initial configurations used to warm-start BO as detailed in Experiment 1. Moreover, they were pre-trained with the 3 The code is available in this repository: https://github.com/releaunifreiburg/DeepPipe Tensor-OBOE meta-dataset and all the method-specific settings are the same as in Experiment 2. We also compared DeepPipe execution time with AutoPrognosis [25], and report the performances after 50 and 100 BO iterations.\nHypothesis 4: Our novel encoder layers of DeepPipe enable an efficient PO when the pipeline search space changes, i.e. when developers add a new algorithm to an ML system. Experiment 4: A major obstacle to meta-learning PO solutions is that they do not generalize when the search space changes, especially when the developers of ML systems add new algorithms. Our architecture quickly adapts to newly added algorithms because only an encoder sub-network for the new algorithm should be trained. To test the scenario, we ablate the performance of five versions of DeepPipe and try different settings when we remove a specific algorithm (an estimator) either from meta-training, meta-testing, or both.\nHypothesis 5: The encoders in DeepPipe introduce an inductive bias where latent representation vectors of an algorithm's configurations are co-located and located distantly from the representations of other algorithms' configurations. Formally, given three pipelines 𝑝 (𝑙 ) , 𝑝 (𝑚) , 𝑝 (𝑛) if 𝑝 (𝑙 ) ) -𝜙 (𝑝 (𝑚) )|| < ||𝜙 (𝑝 (𝑚) ) -𝜙 (𝑝 (𝑛) )|| with higher probability when using encoder layers, given that 𝑝 (𝑛) 𝑖 is the index of the algorithm in the 𝑖-th stage. Furthermore, we hypothesize that the less number of tasks during pre-training, the more necessary this inductive bias is.\n(𝑙 ) 𝑖 = 𝑝 (𝑚) 𝑖 , 𝑝 (𝑙 ) 𝑖 ≠ 𝑝 (𝑛) 𝑖 then ||𝜙 (𝑝\nExperiment 5: We sample 2000 pipelines of 5 estimation algorithms on the TensorOBOE dataset. Subsequently, we embed the pipelines using a DeepPipe with 0, 1, and 2 encoder layers, and weights 𝜃 , initialized such that 𝜃 𝑖 ∈ 𝜃 are independently identically distributed 𝜃 𝑖 ∼ N (0, 1). Finally, we visualize the embeddings with T-SNE [45] and compute a cluster metric to assess how close pipelines with the same algorithm are in the latent space:\nE 𝑝 (𝑙 ) ,𝑝 (𝑚) ,𝑝 (𝑛) (I(||𝜙 (𝑝 (𝑙 ) ) -𝜙 (𝑝 (𝑚) )|| < ||𝜙 (𝑝 (𝑚) ) -𝜙 (𝑝 (𝑛) )||)).\nTo test the importance of the inductive bias vs the number of pretraining tasks, we ablate the performance of DeepPipe for different percentages of pre-training tasks (0.5%, 1%, 5%, 10%, 50%, 100%) under different values of encoder layers." }, { "figure_ref": [], "heading": "RESULTS", "publication_ref": [ "b24" ], "table_ref": [ "tab_2", "tab_3", "tab_4", "tab_8", "tab_4", "tab_8" ], "text": "We present the results for Experiments 1 and 2 in Figures 2 and3, respectively. 
In both cases, we compute the ranks of the classification accuracy achieved by the discovered pipelines of each technique, averaged across the meta-testing datasets. The shadowed lines correspond to the 95% confidence intervals. Additional results showing the mean regret are included in Appendix E. In Experiment 1 (standard/non-transfer PO) DeepPipe achieved the best performance for both meta-datasets, whereas SMAC attained the second place.\nIn Experiment 2 DeepPipe strongly outperforms all the transferlearning PO baselines in all meta-datasets. Given that DeepPipe yields state-of-the-art PO results on both standard and transferlearning setups, we conclude that our pipeline embedding network computes efficient representations for PO with Bayesian Optimization. In particular, the results on the ZAP meta-dataset indicate the efficiency of DeepPipe in discovering state-of-the-art Deep Learning pipelines for computer vision. We discuss additional ablations and comparisons in Appendix E.\nExperiment 3 conducted on the OpenML datasets shows that DeepPipe performs well under restricted budgets, as reported in Table 1. We present the values for the average rank and the average number of observed pipelines after 10 and 60 minutes. Additionally, Table 2 shows the number of pipelines observed by AutoPrognosis and DeepPipe during the execution, demonstrating that DeepPipe manages to explore a relatively high number of pipelines while attaining the best performance. Although our method does not incorporate any direct way to handle time constraints, it outperforms other methods that include heuristics for handling a quick convergence, such as OBOE and Tensor-OBOE.\nAdditionally, we compare DeepPipe with the AutoPrognosis 2.0 library [25] on the Open ML datasets, where we run both methods for 50 and 100 BO iterations (𝐸 𝐵𝑂 ). We report the average and standard deviation for rank, accuracy, and time. DeepPipe achieves the best average rank, i.e. a lower average rank than AutoPrognosis. This is complemented by having the highest average accuracy. Interestingly, our method is approximately one order of magnitude faster than AutoPrognosis. We note this is due to the time overhead introduced by their Gibbs sampling strategy for optimizing the structured kernel, whereas our approach uses gradient-based optimization.\nFurthermore, the results reported in Tables 3 and4 for Experiment 4 indicate that our DeepPipe embedding quickly adapts to incrementally-expanding search spaces, e.g. when the developers of an ML system add new algorithms. In this circumstance, existing transfer-learning PO baselines do not adapt easily, because they assume a static pipeline search space. As a remedy, we propose that when a new algorithm is added to the system after meta-training, we train only a new encoder from scratch (randomly initialized) for that new algorithm. Additionally, the meta-learned parameters for the other encoders and the aggregation layer are frozen. In this experiment, we run our method on variants of the search space when one algorithm at a time is introduced to the search space (for instance an estimator, e.g. MLP, RF, etc., is not known during meta-training, but added new to the meta-testing).\nIn Tables 3 and4 (in Appendix), we report the results in Experiment 4 by providing the values of the average rank among five different configurations for DeepPipe. We compare among meta-trained versions (denoted by ✓in the column MTd.) 
that omit specific estimators during meta-training (MTr.=✓), or during meta-testing (MTe.=✓). We also account for versions with one encoder layer denoted by ✓in the column Enc.\nThe best in all cases is the meta-learned model that did not omit the estimator (i.e. algorithm known and prior evaluations with that algorithm exist). Among the versions that omitted the estimator in the meta-training set (i.e. algorithm added new), the best configuration was the DeepPipe which fine-tuned a new encoder for that algorithm (line Enc=✓, MTd.=✓, MTr.=✓, MTe.=✗). This version of DeepPipe performs better than ablations with no encoder layers (i.e. only aggregation layers 𝜙), or the one omitting the algorithm during meta-testing (i.e. pipelines that do not use the new algorithm at all). The message of the results is simple: If we add a new algorithm to an ML system, instead of running PO without meta-learning (because the search space changes and existing transfer PO baselines are not applicable to the new space), we can use a meta-learned DeepPipe and only fine-tune an encoder for a new algorithm." }, { "figure_ref": [ "fig_3", "fig_5" ], "heading": "On the Inductive Bias and Meta-Learning", "publication_ref": [], "table_ref": [], "text": "The results of Experiment 5 on effect of the inductive bias introduced by the encoders are presented in Figure 4. The pipelines with the same active algorithm in the estimation stage, but with different hyperparameters, lie closer in the embedding space created by a random initialized DeepPipe, forming compact clusters characterized by the defined cluster metric (value below the plots). We formally demonstrate in Appendix H that, in general, a single encoder layer is creating more compact clusters than a fully connected linear layer.\nIn another experiment, we assess the performance of DeepPipe with different network sizes and meta-trained with different percentages of meta-training tasks: 0.5%, 1%, 5%, 10%, 50%, and 100%. As we use the Tensor-OBOE meta-dataset, this effectively means that we use 1, 3, 16, 33, 165, and 330 tasks respectively. We ran the experiment for three values of 𝐹 . The presented scores are the average ranks among the three DeepPipe configurations (row-wise). The average rank is computed across all the meta-test tasks and across 100 BO iterations.\nThe results reported in Figure 5 indicate that deeper encoders achieve a better performance when a small number of meta-training tasks is available. In contrast, shallower encoders are needed if more meta-training tasks are available. Apparently the deep aggregation layers 𝜙 already capture the interaction between the hyperparameter configurations across algorithms when a large meta-dataset of evaluated pipelines is given. The smaller the meta-data of evaluated pipeline configurations, the more inductive bias we need to implant in the form of per-algorithm encoders." }, { "figure_ref": [ "fig_6", "fig_7" ], "heading": "Visualizing the learned embeddings", "publication_ref": [], "table_ref": [], "text": "We are interested in visualizing how the pipeline's representations cluster in the embedding space. Therefore, we train a DeepPipe with 2-layer encoders, 2 aggregation layers, 20 output size, and 𝐹 = 8. To project the 20-dimensional embeddings into 2 dimensions, we apply TSNE (T-distributed Stochastic Neighbor Embedding). As plotted in Figure 6, the pipelines with the same estimator and dimensionality reducer are creating clusters. 
Note that embeddings of the same algorithms are forming clusters and capturing the similarity between other algorithms. The groups in this latent space are also indicators of performance on a specific task. In Figure 7 we show the same pipeline's embeddings with a color marker indicating its accuracy on two meta-testing tasks. Top-performing pipelines (yellow color) are relatively close to each other in both tasks and build up regions of good pipelines. These groups of good pipelines are different for every task, which indicates that there is not a single pipeline that works for all tasks. Such results demonstrate how DeepPipe maps the pipelines to an embedding space where it is easier to assess the similarity between pipelines and therefore to search for well-performing pipelines." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we have shown that efficient Machine Learning pipeline representations can be computed with deep modular networks. Such representations help discover more accurate pipelines compared to the state-of-art approaches because they capture the interactions of the different pipelines algorithms and their hyperparameters via meta-learning and/or the architecture. Moreover, we show that introducing per-algorithm encoders helps in the case of limited meta-trained data, or when a new algorithm is added to the search space. Overall, we demonstrate that our method Deep-Pipe achieves the new state-of-the-art in Pipeline Optimization. Future work could extend our representation network to model more complex use cases such as parallel pipelines or ensembles of pipelines. " }, { "figure_ref": [], "heading": "A POTENTIAL NEGATIVE SOCIETAL IMPACTS", "publication_ref": [], "table_ref": [], "text": "The meta-training is the most demanding computational step, thus it can incur in high energy consumption. Additionally, DeepPipe does not handle fairness, so it may find pipelines that are biased by the data." }, { "figure_ref": [], "heading": "B LICENCE CLARIFICATION", "publication_ref": [ "b38", "b0", "b10" ], "table_ref": [], "text": "The results of this work (code, data) are under license BSD-3-Clause license. Both the PMF [39], Tensor-OBOE [1] and ZAP [11] datasets hold the same license." }, { "figure_ref": [ "fig_9" ], "heading": "C DISCUSSION ON THE INTERACTIONS AMONG COMPONENTS", "publication_ref": [], "table_ref": [], "text": "The encoder and aggregation layers capture interactions among the pipeline components and therefore are important to attain good performance. These interactions are reflected in the features extracted by these layers, i.e. the pipeline representations obtained by DeepPipe. These representations lie on a metric space that captures relevant information about the pipelines and which can be used on the kernel for the Gaussian Process. Using the original input space does not allow the extraction of rich representations. To test this idea, we meta-train four versions of DeepPipe with and without encoder and aggregation layers on our TensorOBOE meta-train set and then test on the meta-test split. In Figure 8, we show that the best version is obtained when using both encoder (Enc.) and aggregation (Agg.) layers (green line), whereas the worst version is obtained when using the original input space, i.e. no encoder and no aggregation layers. Having an encoder helps more than otherwise, thus it is important to capture interactions among hyperparameters in the same stage. 
As having an aggregation layer is better than not, capturing interactions among components from different stages is important." }, { "figure_ref": [], "heading": "D ARCHITECTURAL IMPLEMENTATION", "publication_ref": [], "table_ref": [], "text": "DeepPipe's architecture (encoder layers + aggregated layers) can be formulated as a Multilayer Perceptron (MLP) comprising three" }, { "figure_ref": [], "heading": "Input Layer", "publication_ref": [], "table_ref": [], "text": "Encoder Layer" }, { "figure_ref": [], "heading": "Selection and Concatenation", "publication_ref": [], "table_ref": [], "text": "Aggregation Layer" }, { "figure_ref": [], "heading": "Learneable Weights", "publication_ref": [], "table_ref": [], "text": "Weights set to zero Weights set to one parts (Figure 9). The first part of the network that builds the layers with encoders is implemented as a layer with masked weights. We connect the input values corresponding to the hyperparameters 𝜆 (𝑖,𝑗 ) of the 𝑗-th algorithm of the 𝑖-th stage to a fraction of the neurons in the following layer, which builds the encoder. The rest of the connections are dropped. The second part is a layer that selects the output of the encoders associated with the active algorithms (one per stage) and concatenates their outputs (Selection & Concatenation). The layer's connections are fixed to be either one or zero during forward and backward passes. Specifically, they are one if they connect outputs of active algorithms' encoders, and zero otherwise. The last part, an aggregation layer, is a fully connected layer that learns interactions between the concatenated output of the encoders. By implementing the architecture as an MLP instead of a multiplexed list of components(e.g. with a module list in PyTorch), faster forward and backward passes are obtained. We only need to specify the selected algorithms in the forward pass so that the weights in the Encoder Layer are masked and the ones in the Selection & Concatenation are accordingly set. After this implementation, notice that DeepPipe can be interpreted as an MLP with sparse connections. Further details on the architecture are discussed in Appendix F." }, { "figure_ref": [ "fig_11", "fig_3" ], "heading": "E ADDITIONAL RESULTS", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "In this section, we present further results. Firstly, we show an ablation of the factor that determines the number of hidden units (𝐹 ) in Figure 10. It shows that 𝐹 = 8 attains the best performance after exploring 100 pipelines in both datasets. Additionally, we present the average regret for the ablation of 𝐹 , and the results of Experiment 1 and 2 in Figures 11, 12 and 13 respectively. The average regret is defined as 𝑦 𝑚𝑎𝑥 -𝑦 * , where 𝑦 𝑚 𝑎𝑥 is the maximum accuracy possible within the task and 𝑦 * is the maximum observed accuracy. Table 4 presents the extended results of omitting estimators in the PMF Dataset. From these, we draw the same conclusion as in the same paper: having encoders help to obtain better performance when a new algorithm is added to a pipeline.\nWe carry out an ablation to understand the difference between the versions of Deep Pipe with/without encoder and with/without transfer-learning using ZAP Meta-dataset. As shown in Figure 14, the version with transfer learning and one encoder performs the best, thus, highlighting the importance of encoders in transfer learning our DeepPipe surrogate. 
" }, { "figure_ref": [ "fig_1" ], "heading": "F ARCHITECTURE DETAILS", "publication_ref": [], "table_ref": [], "text": "The input to the kernel has a dimensionality of 𝑍 =20. We fix it, to be the same as the output dimension for PMFs. The number of neurons per layer, as mentioned in the main paper, depends on 𝐹 . Consider an architecture with with no encoder layers and ℓ 𝑎 aggregation layers, and hyperparameters Λ 𝑖,𝑗 , 𝑖 ∈ {1 . . . 𝑁 }, 𝑗 ∈ {1 . . . 𝑀 𝑖 } (following the notation in section 4.1) with 𝐿 𝑖 = max 𝑗 |Λ 𝑖,𝑗 |, then the number of weights (omitting biases for the sake of simplicity) will be:\n∑︁ 𝑖,𝑗 |Λ 𝑖,𝑗 | • 𝐹 • ∑︁ 𝑖 𝐿 𝑖 + (ℓ 𝑎 -1) 𝐹 • ∑︁ 𝑖 𝐿 𝑖 2(7)\nIf the architecture has ℓ 𝑒 encoder layers and ℓ 𝑎 aggregation layers, then the number of weights is given by: In the search space for PMF, we group the algorithms related to Naive Bayers (MultinomialNB, BernoulliNB, GaussianNB) in a single encoder. In this search space, we also group LDA and QDA. In the search space of TensorOboe, we group GaussianNB and Perceptron as they do not have hyperparameters. Given these considerations, we can compute the input size and the weights per search space as function of ℓ 𝑎 , ℓ 𝑒 , 𝐹 as follows:\n(i) Input size:\n# Input size (PMF) = ∑︁ 𝑖,𝑗 |Λ 𝑖,𝑗 | = 72 # Input (TensorOboe) = ∑︁ 𝑖,𝑗 |Λ 𝑖,𝑗 | = 37 # Input (ZAP) = ∑︁ 𝑖,𝑗 |Λ 𝑖,𝑗 | = 35(9)\n(ii) Number of weights for architecture without encoder layers:\n# Weights (PMF) = 720 • 𝐹 + 256 • (ℓ 𝑎 -1) • 𝐹 2 # Weights (TensorOboe) = 444 • 𝐹 + 144 • (ℓ 𝑎 -1) • 𝐹 2 # Weights (ZAP) = 1085 • 𝐹 + 961 • (ℓ 𝑎 -1) • 𝐹 2(10)\n(iii) Number of weights for architecture with encoder layers:\n# Weights (PMF) = 886 • 𝐹 + (1376 • (ℓ 𝑒 -1) + 256 • ℓ 𝑎 ) • 𝐹 2 # Weights (TensorOboe) = 161 • 𝐹 + (271 • (ℓ 𝑒 -1) + 144 • ℓ 𝑎 ) • 𝐹 2 # Weights (ZAP) = 35 • 𝐹 + (965 • (ℓ 𝑒 -1) + 961 • ℓ 𝑎 ) • 𝐹 2(11)\nAccording the previous formulations, Figure 15 shows how many parameters (only weights) the MLP has given a specific value of F and of encoder layers. We fix the total number of layers to four. Notice that the difference in the number of parameters between an architecture with 1 and 2 encoder layers is small in both search spaces. Notice that we associate algorithms with no hyperparameters to the same encoder in our experiments (Appendix I). Moreover, we found that adding the One-Hot-Encoding of the selected algorithms per stage as an additional input is helpful. Therefore, the input dimensionality of the aggregated layers is equal to the dimension after concatenating the encoder's output 𝐹 • 𝑖 (𝑄 𝑖 + 𝑀 𝑖 )." }, { "figure_ref": [], "heading": "G ABBREVIATIONS", "publication_ref": [], "table_ref": [ "tab_3", "tab_4" ], "text": "(i) Abbreviations in Table 2: 1) ET: ExtraTrees, 2) GBT: Gradient Boosting, 3) Logit: Logistict Regression 4) MLP: Multilayer Perceptron 5) RF: Random Forest, 6) lSVM: Linear Support Vector Machine, 7) kNN: k Nearest Neighbours, 8) DT: Decision Trees, 9) AB: AdaBoost, 10) GB/PE= Gaussian Naive Bayes/Perceptron.\n(ii) Abbreviations in Table 3: 1) ET: ExtraTrees, 2) RF: Random Forest , 3) XGBT: Extreme Gradient Boosting, 4) kNN: K-Nearest Neighbours, 5) GB: Gradient Boosting, 6) DT: Decision Trees, 7) Q/LDA: Quadratic Discriminant Analysis/ Linear Discriminant Analysis, 8) NB: Naive Bayes." 
}, { "figure_ref": [], "heading": "H THEORETICAL INSIGHT OF HYPOTHESIS 5", "publication_ref": [], "table_ref": [], "text": "Here, we formally demonstrate that the DeepPipe with encoder layers is grouping hyperparameters from the same algorithm in the latent space, better than DeepPipe without encoders, formulated on Corollary H.4, which is supported by Proposition H.3. Lemma H.1. Given 𝒘 ∈ R 𝑀 , a vector of weights with independent and identically distributed components 𝑤 𝑖 ∈ {𝑤 1 , ..., 𝑤 𝑀 } such that 𝑤 𝑖 ∼ 𝑝 (𝑤), the expected value of the square of the norm E 𝑝 (𝑤 ) (||𝒘 || 2 ) is given by 𝑀 • (𝜇 2 𝑤 +𝜎 2 𝑤 ), where 𝜇 𝑤 and 𝜎 𝑤 are the mean and standard deviation of 𝑝 (𝑤) respectively.\nProof.\nE 𝑝 (𝑤 ) ||𝒘 || 2 = E 𝑝 (𝑤 ) 𝑀 ∑︁ 𝑖=1 𝑤 2 𝑖 (12) = 𝑀 ∑︁ 𝑖=1 E 𝑝 (𝑤 ) (𝑤 2 𝑖 )(13)\n= 𝑀 ∑︁ 𝑖=1 𝜇 2 𝑤 + 𝜎 2 𝑤 (14) = 𝑀 • (𝜇 2 𝑤 + 𝜎 2 𝑤 )(15)\n□ Lemma H.2. Consider a linear function with scalar output 𝑧 = 𝒘 𝑇 𝒙 where 𝒘 ∈ R 𝑀 ×1 is the vector of weights with components 𝑤 𝑖 , 𝑖 ∈ {1, ..., 𝑀 }, 𝒙 ∈ R 𝑀 ×1 are the input features. Moreover, consider the weights are independently and identically distributed 𝑤 𝑖 ∼ 𝑝 (𝑤). The expected value of the norm of the output is given by\nE 𝑝 (𝑤 ) ||𝒘 𝑇 𝒙 || 2 = (𝜇 2 𝑤 + 𝜎 2 𝑤 ) • ||𝒙 || 2 + 𝜇 2 𝑤 • 𝑀 𝑖=1 𝑖 -1 𝑗=1 𝑥 𝑖 • 𝑥 𝑗 .\nProof.\nE 𝑝 (𝑤 ) (𝒘 𝑇 𝒙) 2 ( 16)\n= E 𝑝 (𝑤 ) 𝑀 ∑︁ 𝑖=1 𝑤 𝑖 • 𝑥 𝑖 2 (17) = E 𝑝 (𝑤 ) 𝑀 ∑︁ 𝑖=1 (𝑤 𝑖 • 𝑥 𝑖 ) 2 + 𝑀 ∑︁ 𝑖=1 𝑖 -1 ∑︁ 𝑗=1 𝑤 𝑖 • 𝑤 𝑗 • 𝑥 𝑖 • 𝑥 𝑗 (18) = 𝑀 ∑︁ 𝑖=1 E 𝑝 (𝑤 ) (𝑤 2 𝑖 ) • 𝑥 2 𝑖 + 2 • 𝑀 ∑︁ 𝑖=1 𝑖 -1 ∑︁ 𝑗=1 E 𝑝 (𝑤 ) (𝑤 𝑖 • 𝑤 𝑗 ) • 𝑥 𝑖 • 𝑥 𝑗 (19)(20)\nSince 𝑤 𝑖 , 𝑤 𝑗 are independent then\nE 𝑝 (𝑤 ) (𝑤 𝑖 • 𝑤 𝑗 ) = E 𝑝 (𝑤 ) (𝑤 𝑖 ) • E 𝑝 (𝑤 ) (𝑤 𝑗 ) = 𝜇 2 𝑤 .\nMoreover, with a slight abuse in notation, we denote 𝑀 𝑖=1 𝑖 -1 𝑗=1 𝑥 𝑖 • 𝑥 𝑗 = 𝒙 ⊗ 𝒙. Given lemma H.1, we obtain:\nE 𝑝 (𝑤 ) (𝒘 𝑇 𝒙) 2 = (𝜇 2 𝑤 + 𝜎 2 𝑤 ) • ||𝒙 || 2 + 2 • 𝜇 2 𝑤 • 𝒙 ⊗ 𝒙 = 𝐷 𝑤 (𝒙)(21)\nwhere 𝐷 𝑤 (•) is introduced as an operation to simplify the notation. □ Proposition H.3. Consider two vectors 𝒙 ′ , x ∈ R 𝑀 , and two weight vectors ŵ and 𝒘 ′ , ŵ𝑇 x ∈ R, 𝒘 ′𝑇 𝒙 ′ ∈ R, such that the weights are iid. Then\nE 𝑝 (𝑤 ) ( ŵ𝑇 x -𝒘 ′𝑇 𝒙 ′ ) 2 > E 𝑝 (𝑤 ) ( ŵ𝑇 x -ŵ𝑇 𝒙 ′ ) 2 .\nProof. Using lemma H.2 and decomposition the argument within square:\nE 𝑝 (𝑤 ) (( ŵ𝑇 x -𝒘 ′𝑇 𝒙 ′ ) 2 ) (23) = E 𝑝 (𝑤 ) ( ŵ𝑇 x) 2 + (𝒘 ′𝑇 𝒙 ′ ) 2 -2 • ŵ𝑇 x • 𝒘 ′𝑇 𝒙 ′ (24) = 𝐷 𝑤 ( x) + 𝐷 𝑤 (𝒙 ′ ) -2 • E 𝑝 (𝑤 ) ( ŵ𝑇 x • 𝒘 ′𝑇 𝒙 ′ ) (25) = 𝐷 𝑤 ( x) + 𝐷 𝑤 (𝒙 ′ ) -2 • E 𝑝 (𝑤 ) ( M ∑︁ 𝑖=1 ŵ𝑖 • x𝑖 𝑀 ′ ∑︁ 𝑗=1 𝑤 𝑗 ′ • 𝑥 𝑗 ′ ) (26) = 𝐷 𝑤 ( x) + 𝐷 𝑤 (𝒙 ′ ) -2 • E 𝑝 (𝑤 ) ( M ∑︁ 𝑖=1 𝑀 ′ ∑︁ 𝑗=1 𝑤 𝑗 ′ • 𝑥 𝑗 ′ • ŵ𝑖 • x𝑖 ) (27) = 𝐷 𝑤 ( x) + 𝐷 𝑤 (𝒙 ′ ) -2 • M ∑︁ 𝑖=1 𝑀 ′ ∑︁ 𝑗=1 E 𝑝 (𝑤 ) (𝑤 𝑗 ′ • ŵ𝑖 ) • 𝑥 𝑗 ′ • x𝑖 (28) Since ŵ and𝒘 ′ are independent, then E 𝑝 (𝑤 ) (𝑤 𝑗 ′ • ŵ𝑖 ) = E 𝑝 (𝑤 ) (𝑤 𝑗 ′ )• E 𝑝 (𝑤 ) ( ŵ𝑖 ) = 𝜇 2 𝑤 . Thus, E 𝑝 (𝑤 ) ( ŵ𝑇 x -𝒘 ′𝑇 𝒙 ′ ) 2 = 𝐷 𝑤 ( x) + 𝐷 𝑤 (𝒙 ′ ) -2 • 𝜇 2 𝑤 • M ∑︁ 𝑖=1 𝑀 ′ ∑︁ 𝑗=1 𝑥 𝑗 ′ • x𝑖(29)\nWhen computing E 𝑝 (𝑤 ) ( ŵ𝑇 x -ŵ𝑇 𝒙 ′ ) \n□ Corollary H.4. A random initialized DeepPipe with encoder layers induces an assumption that two hyperparameter configurations of an algorithm should have more similar performance than hyperparameter configurations from different algorithms.\nProof. Given two hyperparameter configurations 𝜆 (𝑙 ) , 𝜆 (𝑚) from an algorithm, and a third hyperparameter configuration 𝜆 (𝑛) from a different algorithm, every random initialized encoder layer from DeepPipe maps the hyperparameters 𝜆 (𝑙 ) , 𝜆 (𝑚) to latent dimensions 𝑧 (𝑙 ) , 𝑧 (𝑚) that are closer to each other than to 𝑧 (𝑛) , i.e. 
the expected distance among the output of the encoder layer will be E 𝑝 (𝑤 ) (||𝑧 𝑙 -𝑧 𝑚 ||) < E 𝑝 (𝑤 ) (||𝑧 𝑙 -𝑧 𝑛 ||) based on Proposition H.3. Since DeepPipe uses a kernel such that 𝜅 (𝒙, 𝒙 ′ ) = 𝜅 (𝒙 -𝒙 ′ ), their similarity will increase, when the distance between two configurations decreases. Thus, according to Equation 2, they will have correlated performance. □" }, { "figure_ref": [], "heading": "I META-DATASET SEARCH SPACES", "publication_ref": [], "table_ref": [], "text": "We detail the search space composition in Tables 5 (PMF), 6 (Ten-sorOBOE) and 7 (ZAP). We specify the stages, algorithms, hyperparameters, number of components per stage 𝑀 𝑖 , the number of hyperparameters per algorithm |𝜆 𝑖,𝑗 |, and the maximum number of hyperparameters found in an algorithm per stage 𝑄 𝑖 .\nFor the ZAP meta-dataset, we defined a pipeline with two stages: (i) Architecture, which specifies the type or architecture used (i.e. ResNet18, EfficientNet-B0, EfficientNet-B1, EfficientNet-B2), and (ii) Optimization-related Hyperparameters that are shared by all the architectures. " }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant number 417962828 and grant INST 39/963-1 FUGG (bwForCluster NEMO). In addition, Josif Grabocka acknowledges the support of the BrainLinks-BrainTools Center of Excellence, and the funding of the Carl Zeiss foundation through the ReScaLe project." } ]
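As a small numerical illustration of Proposition H.3 and Corollary H.4 from Appendix H (our own check, not part of the paper's analysis): we draw two configuration vectors with non-negative entries, as hyperparameters normalized to [0, 1] would be, which gives them a positive inner product and hence makes the stated inequality hold in expectation; sampling iid weights then shows that the shared-weight (same-encoder) case yields a smaller expected squared difference than the independent-weight (different-encoder) case.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 16                                   # dimensionality of a hyperparameter configuration
x_hat = rng.uniform(0.0, 1.0, size=M)    # two configurations with non-negative entries
x_prime = rng.uniform(0.0, 1.0, size=M)

mu_w, sigma_w, n = 0.0, 0.5, 200_000     # iid weight distribution p(w) and number of samples
w_shared = rng.normal(mu_w, sigma_w, size=(n, M))   # same encoder weights for both inputs
w_other  = rng.normal(mu_w, sigma_w, size=(n, M))   # independent weights for x_prime

d_same_encoder = ((w_shared @ x_hat) - (w_shared @ x_prime)) ** 2   # Eq. (30)-style term
d_diff_encoder = ((w_shared @ x_hat) - (w_other  @ x_prime)) ** 2   # Eq. (23)-style term

print(d_same_encoder.mean(), "<", d_diff_encoder.mean())
# The first expectation is smaller: configurations passed through the same (shared-weight)
# encoder end up closer in the latent space, which is the content of Corollary H.4.
```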
Automated Machine Learning (AutoML) is a promising direction for democratizing AI by automatically deploying Machine Learning systems with minimal human expertise. The core technical challenge behind AutoML is optimizing the pipelines of Machine Learning systems (e.g. the choice of preprocessing, augmentations, models, optimizers, etc.). Existing Pipeline Optimization techniques fail to explore deep interactions between pipeline stages/components. As a remedy, this paper proposes a novel neural architecture that captures the deep interaction between the components of a Machine Learning pipeline. We propose embedding pipelines into a latent representation through a novel per-component encoder mechanism. To search for optimal pipelines, such pipeline embeddings are used within deep-kernel Gaussian Process surrogates inside a Bayesian Optimization setup. Furthermore, we meta-learn the parameters of the pipeline embedding network using existing evaluations of pipelines on diverse collections of related datasets (a.k.a. meta-datasets). Through extensive experiments on three large-scale meta-datasets, we demonstrate that pipeline embeddings yield state-of-the-art results in Pipeline Optimization.
Deep Pipeline Embeddings for AutoML
[ { "figure_caption": "Figure 1 :1Figure 1: An example architecture for DeepPipe on a search space with two stages {Preprocessing, Classification}.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 : 515DeepPipe Meta-Training Input: Learning rate 𝜂, meta-training data with 𝑇 tasks H = 𝑡 =1..𝑇 H (𝑡 ) , number of epochs 𝐸, batch size 𝑏 Output: Parameters 𝛾 and 𝜃 = {𝜃 agg , 𝜃 enc } 1 Initialize 𝛾 and 𝜃 at random; 2 for 1, ..., 𝐸 do 3 for 𝑡 ∈ {1, ...,𝑇 } do 4 Sample batch B = {(𝑝 (𝑡,𝑖 ) 𝜆 , 𝑦 (𝑡,𝑖 ) )} 𝑖=1,...,𝑏 ∼ H (𝑡 ) ; Compute negative log-likelihood L on B. (Eq. 6);", "figure_data": "", "figure_id": "fig_1", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 2 :Figure 3 :23Figure 2: Comparison of DeepPipe vs. standard PO methods (Experiment 1). Shaded lines indicate 95% confidence interval.", "figure_data": "", "figure_id": "fig_2", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Embeddings of Pipelines produced by a random initialized DeepPipe (after applying T-SNE). The color indicates the active algorithm in the Estimation stage of Tensor-OBOE Meta-Dataset.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Comparison of the average rank for DeepPipe with a different number of encoders under different percentages of meta-train data. The total number of layers is always the same.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Pipeline embeddings produced by a meta-learned DeepPipe using Tensor-OBOE meta-dataset. We define color markers for estimators (left) and dimensionality reducers (right).", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Pipeline embeddings produced by a meta-learned DeepPipe using Tensor-OBOE meta-dataset. The color indicates the accuracy level of every pipeline on two different meta-testing tasks.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Average rank for DeepPipe with and without encoder and and aggregation layers.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 : 1 .91Figure 9: Example of the Implementation of DeepPipe as MLP. 𝜆 (𝑘 ) 𝑖,𝑗 indicates the 𝑘-th hyperparameter of the 𝑗-th algorithm in the 𝑖-th stage. In this architecture, the first stage has two algorithms, thus two encoders. The algorithm 1 is active for stage 1. The second stage has only one algorithm.", "figure_data": "", "figure_id": "fig_10", "figure_label": "91", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Comparison of different 𝐹 values in DeepPipe (Rank).", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Comparison of different 𝐹 values in DeepPipe (Regret).", "figure_data": "", "figure_id": "fig_12", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Comparison of DeepPipe vs. 
non transfer-learning PO methods in Experiment 1 (Regret)", "figure_data": "", "figure_id": "fig_13", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Number of weights in the MLP for a given value of 𝐹 and encoder layers.", "figure_data": "", "figure_id": "fig_14", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "𝜆 , 𝑞 ∈ {1 . . . 𝑄 }. Such validation loss is approximated with a surrogate model, typically a Gaussian process (GP) regressor. We measure the similarity between pipelines via a kernel function 𝑘 : dom (𝑝 𝜆 ) × dom (𝑝 𝜆 ) → R >0 parameterized with 𝜃 , and represent similarities as a matrix 𝐾", "figure_data": "), . . . ,(𝑝(𝑄 ) 𝜆 , 𝑦 (𝑄 ) )}, where 𝑦 (𝑞) ∼ N (𝑓 (𝑝(𝑞) 𝜆 ), 𝜎 2 𝑞 ) is a probabilistic mod-eling of the validation loss 𝑓 (𝑝(𝑞) 𝜆 ) achieved with the 𝑞-th eval-uated pipeline configuration 𝑝(𝑞)′𝑞,ℓ :=𝑘 (𝑝(𝑞) 𝜆 , 𝑝(ℓ )", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "𝜃 agg ← 𝜃 agg -𝜂∇ 𝜃 agg L; 𝑖=1,...,𝐼 , meta-learned surrogate with parameters 𝜃 and 𝛾, number of surrogate updates 𝐸 𝑇 𝑒𝑠𝑡 , BO iterations 𝐸 𝐵𝑂 , search space of pipelines P, new task or dataset D Output: Pipeline Configuration 𝑝 * 𝜆 1 Function FineTune (H, 𝛾, 𝜂, 𝐸 𝑇 𝑒𝑠𝑡 ):", "figure_data": "5end6return 𝛾", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Average Rank and Number of Observed Pipelines (# Pips.) on OpenML Datasets after Experiment 3. DeepPipe 2.74 ± 0.12 94 ± 128 2.89 ± 0.13 356 ± 379", "figure_data": "Method10 Mins.1 HourRank# Pips.Rank# Pips.TPOT3.20 ± 0.1945 ± 463.35 ± 0.1970 ± 41T-OBOE 4.38 ± 0.1784 ± 574.36 ± 0.20178 ± 69OBOE3.99± 0.19120 ± 704.08 ± 0.21 467 ± 330SMAC3.24± 0.1681 ± 1153.16 ± 0.14 452 ± 637PMF3.04± 0.15 126 ± 197 2.93 ± 0.15 523 ± 663", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison AutoPrognosis (AP) vs DeepPipe (DP)", "figure_data": "E BO Alg.RankAcc.Time (Min.)50AP DP1.558 ± 0.441 1.441 ± 0.441 0.869 ± 0.111 0.863 ± 0.114161 ± 105 15 ± 25100AP DP1.513 ± 0.469 1.486 ± 0.469 0.873 ± 0.097 0.871 ± 0.095308 ± 186 37 ± 90of encoders ℓ", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Average rank among DeepPipe variants for newly-added algorithms (Tensor-OBOE)", "figure_data": "Omitted inOmitted EstimatorEnc. MTd.MTr. MTe.ETGBTLogitMLPRFlSVMKNNDTABGB/PE✓✓✓✓3.2398 3.1572 3.0503 3.1982 3.4135 3.3589 3.2646 3.2863 3.1580 3.3117✓✗✓✗3.5319 3.0934 3.6362 3.4780 3.4712 3.3829 3.6312 3.3691 3.6333 3.4642✓✓✗✗2.5582 2.6773 2.7086 2.5761 2.6485 2.6938 2.6812 2.5596 2.5936 2.5546✗✓✓✗2.9247 3.0743 2.8802 3.0423 2.6691 2.8026 2.7408 2.9161 2.9214 2.8689✓✓✓✗2.7455 2.9978 2.7248 2.7054 2.7978 2.7619 2.6822 2.8688 2.6938 2.8007", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "F=402123450 Encoder Depth 1 2", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "F=602.12345012", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Average rank among DeepPipe variants for newly-added algorithms (PMF)", "figure_data": "Omitted inOmitted EstimatorEnc. MTd.MTr. 
MTe.ETRFXGBTKNNGBDTQ/LDANB✓✓✓✓3.1527 3.1645 3.2109 3.2541 3.2874 3.2741 3.1911 3.0263✓✗✓✗3.2462 3.3208 3.2592 3.3180 3.2376 3.2249 3.3557 3.3993✓✓✗✗2.5710 2.5996 2.4011 2.5947 2.6301 2.5664 2.6252 2.6214✗✓✓✗3.0464 2.8550 3.0850 2.8845 2.9397 3.0316 2.9530 3.0596✓✓✓✗2.9838 3.0601 3.0439 2.9486 2.9051 2.9029 2.8750 2.8934Tensor-OBOE Meta-DatasetPMF Meta-DatasetZAP Meta-DatasetAverage Regret0.02 0.03 0.040.15 0.20 0.250.002 0.0040.010.105 20406080 1005 20406080 1005 20406080 100No. of Explored PipelinesNo. of Explored PipelinesNo. of Explored PipelinesDeepPipeT-OBOEFMLPRSPMFOBOERGPEFigure 13: Comparison of Regret in DeepPipe vs. transfer-learning PO methods in Experiment 2 (Regret)", "figure_id": "tab_8", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "|Λ 𝑖,𝑗 | • (𝐹 • 𝐿 𝑖 ) + (ℓ 𝑒 -1) ∑︁ 𝑖 𝑀 𝑖 • (𝐹 • 𝐿 𝑖 ) 2 + ℓ 𝑎 𝐹 • ∑︁ In other words, the aggregation layers have 𝐹 • 𝑖 𝐿 𝑖 hidden neurons, whereas every encoder from the 𝑖-th stage has 𝐹 •𝑙 𝑖 neurons per layer. The input sizes are 𝑖,𝑗 |Λ 𝑖,𝑗 | and |Λ 𝑖,𝑗 | for both cases respectively. The specific values for |Λ 𝑖,𝑗 | and 𝐿 𝑖 per search space are specified in Appendix I.", "figure_data": "2∑︁𝐿 𝑖(8)𝑖,𝑗𝑖Ablation of EncodersAblation of Transfer2.4Average Rank1.8 2.0 2.2Average Rank2.00 2.25 2.50 2.751.65 204060801005 20406080100No. of Explored PipelinesNo. of Explored PipelinesOne EncoderRSDeepPipe TransferGPNo EncoderDeepPipe Non-TransferRSFigure 14: Ablations on the ZAP meta-dataset", "figure_id": "tab_9", "figure_label": "", "figure_type": "table" }, { "figure_caption": "2 , we see that the weights are not independent, thus E 𝑝 (𝑤 ) ( ŵ𝑖 • ŵ𝑖 ) = 𝜇 2 𝑤 + 𝜎 2 𝑤 , andE 𝑝 (𝑤 ) ( ŵ𝑇 x -ŵ𝑇 𝒙 ′ ) 2 (30) = 𝐷 𝑤 ( x) + 𝐷 𝑤 (𝒙 ′ ) -2 • (𝜇 2 𝑤 + 𝜎 2 𝑤 ) • < 𝐷 𝑤 ( x) + 𝐷 𝑤 (𝒙 ′ ) -2 • 𝜇 2 𝑤 • < E 𝑝 (𝑤 ) ( ŵ𝑇 x -𝒘 ′𝑇 𝒙 ′ ) 2", "figure_data": "M𝑀 ′∑︁∑︁𝑥 ′ 𝑗 • x𝑖(31)𝑖=1𝑗=1M𝑀 ′∑︁∑︁𝑥 𝑗′ • x𝑖(32)𝑖=1𝑗=1", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Search Space for PMF Meta-Dataset", "figure_data": "Stage𝐿 𝑖𝑀 𝑖Algorithm|Λ 𝑖,𝑗 |HyperparametersPolynomial3include_bias, interaction_only, degreePreprocessor32PCA2keep_variance, whitenbootstrap, min_samples_leaf, n_estimators, max_features,ExtraTrees9min_weight_fraction_leaf, min_samples_split, max_depthbootstrap, min_samples_leaf, n_estimators, max_features,RandomForest10min_weight_fraction_leaf, min_samples_split, max_depth,criterion_entropy, criterion_giniEstimator138reg_alpha, col_sample_bytree, colsample_bylevel, scale_pos_weight,learning_rate,XgradientBoosting13max_delta_step, base_score, n_estimators, subsample,reg_lambda, min_child_weight, max_depth, gammakNN4p, n_neighbors, weights_distance, weights_uniformmax_leaf_nodes, learning_rate, min_samples_leaf,GradientBoosting10n_estimators, subsample, min_weight_fraction_leaf, max_features,min_samples_split, max_depth, loss_deviancemax_leaf_nodes, min_samples_leaf, max_features,DecisionTree9min_weight_fraction_leaf, min_samples_split, max_depth,splitter_best, criterion_entropy, criterion_ginishrinkage_factor, n_components, tol, shrinkage_-1,LDA6shrinkage_auto, shrinkage_manualQDA1reg_paramBernoulliNB2alpha, fit_priorMultinomialNB2alpha, fit_priorGaussianNB1apply_gaussian_nb", "figure_id": "tab_11", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Search Space for Tensor-OBOE Meta-Dataset", "figure_data": "Stage𝐿 𝑖𝑀 𝑖Algorithm|Λ 𝑖,𝑗 |HyperparametersStrategy_constant, 
Strategy_mean,Strategy_median,Imputer41SimpleImputer4Strategy_most_frequentEncoder11OneHotEncoder1Handle_unknown_ignoreScaler11StandardScaler1-PCA1N_componentsDim. Reducer13SelectKBest1KVarianceThreshold1-ExtraTrees3min_samples_split, criterion_entropy, criterion_ginilearning_rate, max_depth, max_features_None,Gradient Boosting4max_features_log2Logit5C, penalty_l1, penalty_l2, sovler_liblinear, solver_sagaEstimator510alpha, learning_rate_init, learning_rate_adaptive,MLP5solver_adam, solver_sgdRandom Forest3min_samples_split, criterion_entropy, criterion_ginilSVM1CkNN2n_neighbors, pDecision Trees1min_samples_splitAdaBoost2learning_rate, n_estimatorsGaussianNB1-Perceptron1-", "figure_id": "tab_12", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Search Space for ZAP Meta-Dataset", "figure_data": "Stage𝐿 𝑖𝑀 𝑖Algorithm|Λ 𝑖,𝑗 |HyperparametersResNet1IsActiveArchitecture14EfficientNet-B01IsActiveEfficientNet-B11IsActiveEfficientNet-B21IsActiveearly_epoch, first_simple_model,max_inner_loop_ratio,skip_valid_score_threshold, test_after_at_least_seconds,test_after_at_least_seconds_max,test_after_at_least_seconds_step,batch_size, cv_valid_ratio, max_size,max_valid_count, steps_per_epoch,train_info_sample,Common Hyperparameters311-31optimizer.amsgrad, optimizer.freeze_portion, optimizer.lr,optimizer.min_lr, optimizer.momentum, optimizer.nesterov,optimizer.warm_up_epoch,warmup_multiplier, optimizer.wd,simple_model_LR, simple_model_NuSVC, simple_model_RF,simple_model_SVC, optimizer.scheduler_cosine,optimizer.scheduler_plateau,optimizer.type_Adam,optimizer.type_AdamW", "figure_id": "tab_13", "figure_label": "7", "figure_type": "table" } ]
Sebastian Pineda
[ { "authors": "Yuji Akimoto; Chengrun Yang", "journal": "", "ref_id": "b0", "title": "", "year": "2020" }, { "authors": "Ahmed M Alaa; Mihaela Van Der Schaar", "journal": "", "ref_id": "b1", "title": "AutoPrognosis: Automated Clinical Prognostic Modeling via Bayesian Optimization with Structured Kernel Learning", "year": "2018-07-10" }, { "authors": "Sebastian Pineda Arango; S Hadi; Martin Jomaa; Josif Wistuba; Grabocka", "journal": "", "ref_id": "b2", "title": "HPO-B: A Large-Scale Reproducible Benchmark for Black-Box HPO based on OpenML", "year": "2021" }, { "authors": "Maximilian Balandat; Brian Karrer; Daniel R Jiang; Samuel Daulton; Benjamin Letham; Andrew Gordon Wilson; Eytan Bakshy", "journal": "", "ref_id": "b3", "title": "BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization", "year": "2020-12-06" }, { "authors": "James Bergstra; Rémi Bardenet; Yoshua Bengio; Balázs Kégl", "journal": "", "ref_id": "b4", "title": "Algorithms for Hyper-Parameter Optimization", "year": "2011-12-14" }, { "authors": "James Bergstra; Yoshua Bengio", "journal": "J. Mach. Learn. Res", "ref_id": "b5", "title": "Random Search for Hyper-Parameter Optimization", "year": "2012" }, { "authors": "Lasse Hansen; Bogdan Cebere", "journal": "", "ref_id": "b6", "title": "AutoPrognosis2", "year": "2022" }, { "authors": "Roberto Calandra; Jan Peters; Carl Edward Rasmussen; Marc Peter; Deisenroth ", "journal": "", "ref_id": "b7", "title": "Manifold Gaussian processes for regression", "year": "2016" }, { "authors": "Alex Guimarães; Cardoso De Sá; Walter José; G S Pinto; Luiz Otávio Vilas Boas; Gisele L Oliveira; Pappa", "journal": "", "ref_id": "b8", "title": "RECIPE: A Grammar-Based Framework for Automatically Evolving Classification Pipelines", "year": "2017-04-19" }, { "authors": "Iddo Drori; Yamuna Krishnamurthy; Rémi Rampin; Raoni De; Paula Lourenço; Jorge Piazentin Ono; Kyunghyun Cho; Cláudio T Silva; Juliana Freire", "journal": "", "ref_id": "b9", "title": "AlphaD3M: Machine Learning Pipeline Synthesis", "year": "2021" }, { "authors": "Fabio Ferreira; Ekrem Örztürk", "journal": "", "ref_id": "b10", "title": "Zero Shot AUtoML with Pretrained Models", "year": "2021" }, { "authors": "Nick Erickson; Jonas Mueller; Alexander Shirkov; Hang Zhang; Pedro Larroy; Mu Li; Alexander Smola", "journal": "", "ref_id": "b11", "title": "Autogluon-tabular: Robust and accurate automl for structured data", "year": "2020" }, { "authors": "Matthias Feurer; Katharina Eggensperger; Stefan Falkner; Marius Lindauer; Frank Hutter", "journal": "", "ref_id": "b12", "title": "Auto-sklearn 2.0: Hands-free automl via meta-learning", "year": "2020" }, { "authors": "Matthias Feurer; Aaron Klein; Jost Eggensperger; Katharina Springenberg; Manuel Blum; Frank Hutter", "journal": "", "ref_id": "b13", "title": "Efficient and Robust Automated Machine Learning", "year": "2015" }, { "authors": "Matthias Feurer; Benjamin Letham; Eytan Bakshy", "journal": "", "ref_id": "b14", "title": "Scalable metalearning for bayesian optimization using ranking-weighted gaussian process ensembles", "year": "2018" }, { "authors": "Nicoló Fusi; Rishit Sheth; Melih Elibol", "journal": "", "ref_id": "b15", "title": "Probabilistic Matrix Factorization for Automated Machine Learning", "year": "2018-12-03" }, { "authors": "Daniel Gabay; Bertrand Mercier", "journal": "Computers & mathematics with applications", "ref_id": "b16", "title": "A dual algorithm for the solution of nonlinear variational problems via finite element approximation", "year": "1976" }, { 
"authors": "Marta Garnelo; Jonathan Schwarz; Dan Rosenbaum; Fabio Viola; Danilo J Rezende; S M Eslami; Yee Whye Teh", "journal": "", "ref_id": "b17", "title": "Neural processes", "year": "2018" }, { "authors": "G Marc; Genton", "journal": "", "ref_id": "b18", "title": "Classes of Kernels for Machine Learning: A Statistics Perspective", "year": "2002-03" }, { "authors": "Pieter Gijsbers; Erin Ledell; Janek Thomas; Sébastien Poirier; Bernd Bischl; Joaquin Vanschoren", "journal": "", "ref_id": "b19", "title": "An open source AutoML benchmark", "year": "2019" }, { "authors": "Xin He; Kaiyong Zhao; Xiaowen Chu", "journal": "Knowledge-Based Systems", "ref_id": "b20", "title": "AutoML: A survey of the stateof-the-art", "year": "2021" }, { "authors": "Frank Hutter; H Holger; Kevin Hoos; Leyton-Brown", "journal": "", "ref_id": "b21", "title": "Sequential Model-Based Optimization for General Algorithm Configuration", "year": "2011-01-17" }, { "authors": "", "journal": "Springer", "ref_id": "b22", "title": "Automated Machine Learning -Methods, Systems, Challenges", "year": "2019" }, { "authors": "", "journal": "Springer", "ref_id": "b23", "title": "Automated Machine Learning -Methods, Systems, Challenges", "year": "2019" }, { "authors": "Fergus Imrie; Bogdan Cebere; Eoin F Mckinney; Mihaela Van Der Schaar", "journal": "", "ref_id": "b24", "title": "AutoPrognosis 2.0: Democratizing Diagnostic and Prognostic Modeling in Healthcare with Automated Machine Learning", "year": "2022" }, { "authors": "Abdus Salam Khazi; Sebastian Pineda Arango; Josif Grabocka", "journal": "", "ref_id": "b25", "title": "Deep Ranking Ensembles for Hyperparameter Optimization", "year": "2023" }, { "authors": "Akihiro Kishimoto; Djallel Bouneffouf; Radu Marinescu; Parikshit Ram; Ambrish Rawat; Martin Wistuba; Paulito Pedregosa Palmes; Adi Botea", "journal": "", "ref_id": "b26", "title": "Bandit Limited Discrepancy Search and Application to Machine Learning Pipeline Optimization", "year": "2021" }, { "authors": "Aaron Klein; Arber Zela", "journal": "", "ref_id": "b27", "title": "PyBNN", "year": "2020" }, { "authors": "Sijia Liu; Parikshit Ram; Deepak Vijaykeerthy; Djallel Bouneffouf; Gregory Bramble; Horst Samulowitz; Dakuo Wang; Andrew Conn; Alexander G Gray", "journal": "AAAI Press", "ref_id": "b28", "title": "An ADMM Based Framework for AutoML Pipeline Configuration", "year": "2020-02-07" }, { "authors": "Luke Metz; Niru Maheswaranathan; Ruoxi Sun; C Daniel Freeman; Ben Poole; Jascha Sohl-Dickstein", "journal": "", "ref_id": "b29", "title": "Using a thousand optimization tasks to learn hyperparameter search strategies", "year": "2020" }, { "authors": "Felix Mohr; Marcel Wever; Eyke Hüllermeier", "journal": "Mach. 
Learn", "ref_id": "b30", "title": "ML-Plan: Automated machine learning via hierarchical planning", "year": "2018" }, { "authors": "Randal S Olson; Jason H Moore", "journal": "", "ref_id": "b31", "title": "TPOT: A Tree-based Pipeline Optimization Tool for Automating Machine Learning", "year": "2016-06-24" }, { "authors": "Ekrem Ozturk; Fábio Ferreira; Samer Hadi; Lars Jomaa; Josif Schmidt-Thieme; Frank Grabocka; Hutter", "journal": "", "ref_id": "b32", "title": "Zero-Shot AutoML with Pretrained Models", "year": "2022" }, { "authors": "Massimiliano Patacchiola; Jack Turner; Elliot J Crowley; O' Michael; Amos J Boyle; Storkey", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b33", "title": "Bayesian meta-learning for the few-shot setting via deep kernels", "year": "2020" }, { "authors": "Rodolphe Valerio Perrone; Matthias W Jenatton; Cédric Seeger; Archambeau", "journal": "", "ref_id": "b34", "title": "Scalable Hyperparameter Transfer Learning", "year": "2018-12-03" }, { "authors": "Marc Herilalaina Rakotoarison; Michèle Schoenauer; Sebag", "journal": "", "ref_id": "b35", "title": "Automated Machine Learning with Monte-Carlo Tree Search", "year": "2019-08-10" }, { "authors": "Carl Edward Rasmussen; Christopher K I Williams", "journal": "MIT Press", "ref_id": "b36", "title": "Gaussian Processes for Machine Learning", "year": "2006" }, { "authors": "Nicolas Schilling; Martin Wistuba; Lucas Drumond; Lars Schmidt-Thieme", "journal": "IEEE Computer Society", "ref_id": "b37", "title": "Joint Model Choice and Hyperparameter Optimization with Factorized Multilayer Perceptrons", "year": "2015-11-09" }, { "authors": "Rishit Sheth", "journal": "", "ref_id": "b38", "title": "pmf-automl", "year": "2018" }, { "authors": "Jasper Snoek; Hugo Larochelle; Ryan P Adams", "journal": "", "ref_id": "b39", "title": "Practical Bayesian Optimization of Machine Learning Algorithms", "year": "2012-12-03" }, { "authors": "Jasper Snoek; Oren Rippel; Kevin Swersky; Ryan Kiros; Nadathur Satish; Narayanan Sundaram; Md Mostofa Ali Patwary; Ryan P Prabhat; Adams", "journal": "", "ref_id": "b40", "title": "Scalable Bayesian Optimization Using Deep Neural Networks", "year": "2015-06-11" }, { "authors": "Jost Tobias Springenberg; Aaron Klein; Stefan Falkner; Frank Hutter", "journal": "", "ref_id": "b41", "title": "Bayesian Optimization with Robust Bayesian Neural Networks", "year": "2016-12-05" }, { "authors": "Thomas Swearingen; Will Drevo; Bennett Cyphers; Alfredo Cuesta-Infante; Arun Ross; Kalyan Veeramachaneni", "journal": "IEEE Computer Society", "ref_id": "b42", "title": "ATM: A distributed, collaborative, scalable system for automated machine learning", "year": "2017-12-11" }, { "authors": "Chris Thornton; Frank Hutter; H Holger; Kevin Hoos; Leyton-Brown", "journal": "", "ref_id": "b43", "title": "Auto-WEKA: Automated Selection and Hyper-Parameter Optimization of Classification Algorithms", "year": "2012" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of machine learning research", "ref_id": "b44", "title": "Visualizing data using t-SNE", "year": "2008" }, { "authors": "Ying Wei; Peilin Zhao; Junzhou Huang", "journal": "PMLR", "ref_id": "b45", "title": "Meta-learning Hyperparameter Performance Prediction with Neural Processes", "year": "2021" }, { "authors": "Ying Wei; Peilin Zhao; Junzhou Huang", "journal": "PMLR", "ref_id": "b46", "title": "Meta-learning Hyperparameter Performance Prediction with Neural Processes", "year": "2021" }, { "authors": "Andrew Gordon 
Wilson; Zhiting Hu; Ruslan Salakhutdinov; Eric P Xing", "journal": "PMLR", "ref_id": "b47", "title": "Deep Kernel Learning", "year": "2016" }, { "authors": "Martin Wistuba; Josif Grabocka", "journal": "", "ref_id": "b48", "title": "Few-Shot Bayesian Optimization with Deep Kernel Surrogates", "year": "2021-05-03" }, { "authors": "Martin Wistuba; Arlind Kadra; Josif Grabocka", "journal": "", "ref_id": "b49", "title": "Supervising the Multi-Fidelity Race of Hyperparameter Configurations", "year": "2022" }, { "authors": "Chengrun Yang; Yuji Akimoto; Dae Won Kim; Madeleine Udell", "journal": "ACM", "ref_id": "b50", "title": "OBOE: Collaborative Filtering for AutoML Model Selection", "year": "2019-08-04" }, { "authors": "Chengrun Yang; Jicong Fan; Ziyang Wu; Madeleine Udell", "journal": "ACM", "ref_id": "b51", "title": "AutoML Pipeline Selection: Efficiently Navigating the Combinatorial Space", "year": "2020-08-23" }, { "authors": "J Meta-Dataset Splits", "journal": "PMF Meta-Dataset Meta", "ref_id": "b52", "title": "We specify the IDs of the task used per split. The ID of the tasks are taken from the original meta-dataset creators", "year": "1005" } ]
[ { "formula_coordinates": [ 3, 79.68, 101.23, 214.91, 28.51 ], "formula_id": "formula_0", "formula_text": "𝑝 * , 𝜆 𝑝 * = arg min 𝑝 ∈ {1...𝑀 1 } ו••× {1...𝑀 𝑁 }, 𝜆 (𝑝 ) ∈Λ 1,𝑝 1 ו••×Λ 𝑁 ,𝑝 𝑁 L val 𝑝, 𝜆(𝑝), D(1)" }, { "formula_coordinates": [ 3, 106.68, 534.57, 187.9, 9.57 ], "formula_id": "formula_2", "formula_text": "EI(𝑝 𝜆 |H ) = E max 𝑦 min-𝜇 (𝑝 𝜆 ) , 0(3)" }, { "formula_coordinates": [ 3, 68.56, 662.74, 28.8, 13.79 ], "formula_id": "formula_3", "formula_text": "𝜆 , 𝑝(ℓ )" }, { "formula_coordinates": [ 3, 317.6, 331.83, 116.08, 14.06 ], "formula_id": "formula_4", "formula_text": "𝑘 𝑞,ℓ = 𝑘 𝜙 (𝑝 (𝑞) 𝜆 ; 𝜃 ), 𝜙 (𝑝 (ℓ ) 𝜆 ; 𝜃 ) ." }, { "formula_coordinates": [ 3, 370.93, 499.9, 187.81, 29.47 ], "formula_id": "formula_5", "formula_text": "𝜉 (𝑖,𝑗 ) 𝜆 𝑖,𝑗 ; 𝜃 enc 𝑖,𝑗 = MLP 𝜆 𝑖,𝑗 ; 𝜃 enc 𝑖,𝑗 , 𝜉 (𝑖,𝑗 ) : Λ 𝑖,𝑗 → R 𝐿 𝑖(4)" }, { "formula_coordinates": [ 3, 325.24, 640.67, 233.5, 29.42 ], "formula_id": "formula_6", "formula_text": "𝜙 (𝑝 𝜆 ) := 𝜓 𝜉 (1,𝑝 1 ) (𝜆 1,𝑝 1 ) ⊕ • • • ⊕ 𝜉 (𝑁 ,𝑝 𝑁 ) (𝜆 𝑁 ,𝑝 𝑁 ) | 𝜃 aggr 𝜓 : R 𝑖 𝐿 𝑖 → R 𝑍 (5)" }, { "formula_coordinates": [ 4, 54.02, 247.57, 240.02, 19.9 ], "formula_id": "formula_7", "formula_text": "H 𝑡 = {(𝑝 𝜆 (𝑡," }, { "formula_coordinates": [ 4, 79.27, 345.7, 215.32, 25.96 ], "formula_id": "formula_8", "formula_text": "arg min 𝛾,𝜃 𝑇 ∑︁ 𝑡 =1 𝑦 (𝑡 ) T 𝐾 (𝑡 ) (𝜃, 𝛾) -1 𝑦 (𝑡 ) + log 𝐾 (𝑡 ) (𝜃, 𝛾)(6)" }, { "formula_coordinates": [ 7, 318.31, 433.93, 239.89, 23.84 ], "formula_id": "formula_9", "formula_text": "(𝑙 ) 𝑖 = 𝑝 (𝑚) 𝑖 , 𝑝 (𝑙 ) 𝑖 ≠ 𝑝 (𝑛) 𝑖 then ||𝜙 (𝑝" }, { "formula_coordinates": [ 7, 317.96, 581.96, 240.09, 11.29 ], "formula_id": "formula_10", "formula_text": "E 𝑝 (𝑙 ) ,𝑝 (𝑚) ,𝑝 (𝑛) (I(||𝜙 (𝑝 (𝑙 ) ) -𝜙 (𝑝 (𝑚) )|| < ||𝜙 (𝑝 (𝑚) ) -𝜙 (𝑝 (𝑛) )||))." }, { "formula_coordinates": [ 13, 92.05, 481.68, 202.53, 26.89 ], "formula_id": "formula_11", "formula_text": "∑︁ 𝑖,𝑗 |Λ 𝑖,𝑗 | • 𝐹 • ∑︁ 𝑖 𝐿 𝑖 + (ℓ 𝑎 -1) 𝐹 • ∑︁ 𝑖 𝐿 𝑖 2(7)" }, { "formula_coordinates": [ 13, 367.17, 423.11, 191.57, 71.8 ], "formula_id": "formula_12", "formula_text": "# Input size (PMF) = ∑︁ 𝑖,𝑗 |Λ 𝑖,𝑗 | = 72 # Input (TensorOboe) = ∑︁ 𝑖,𝑗 |Λ 𝑖,𝑗 | = 37 # Input (ZAP) = ∑︁ 𝑖,𝑗 |Λ 𝑖,𝑗 | = 35(9)" }, { "formula_coordinates": [ 13, 333.27, 521.42, 225.47, 39.99 ], "formula_id": "formula_13", "formula_text": "# Weights (PMF) = 720 • 𝐹 + 256 • (ℓ 𝑎 -1) • 𝐹 2 # Weights (TensorOboe) = 444 • 𝐹 + 144 • (ℓ 𝑎 -1) • 𝐹 2 # Weights (ZAP) = 1085 • 𝐹 + 961 • (ℓ 𝑎 -1) • 𝐹 2(10)" }, { "formula_coordinates": [ 13, 318.7, 596.73, 240.04, 49.53 ], "formula_id": "formula_14", "formula_text": "# Weights (PMF) = 886 • 𝐹 + (1376 • (ℓ 𝑒 -1) + 256 • ℓ 𝑎 ) • 𝐹 2 # Weights (TensorOboe) = 161 • 𝐹 + (271 • (ℓ 𝑒 -1) + 144 • ℓ 𝑎 ) • 𝐹 2 # Weights (ZAP) = 35 • 𝐹 + (965 • (ℓ 𝑒 -1) + 961 • ℓ 𝑎 ) • 𝐹 2(11)" }, { "formula_coordinates": [ 14, 114.41, 581.83, 180.17, 56.6 ], "formula_id": "formula_15", "formula_text": "E 𝑝 (𝑤 ) ||𝒘 || 2 = E 𝑝 (𝑤 ) 𝑀 ∑︁ 𝑖=1 𝑤 2 𝑖 (12) = 𝑀 ∑︁ 𝑖=1 E 𝑝 (𝑤 ) (𝑤 2 𝑖 )(13)" }, { "formula_coordinates": [ 14, 168.88, 645.53, 125.7, 39.82 ], "formula_id": "formula_16", "formula_text": "= 𝑀 ∑︁ 𝑖=1 𝜇 2 𝑤 + 𝜎 2 𝑤 (14) = 𝑀 • (𝜇 2 𝑤 + 𝜎 2 𝑤 )(15)" }, { "formula_coordinates": [ 14, 317.96, 142.82, 226.03, 11.75 ], "formula_id": "formula_17", "formula_text": "E 𝑝 (𝑤 ) ||𝒘 𝑇 𝒙 || 2 = (𝜇 2 𝑤 + 𝜎 2 𝑤 ) • ||𝒙 || 2 + 𝜇 2 𝑤 • 𝑀 𝑖=1 𝑖 -1 𝑗=1 𝑥 𝑖 • 𝑥 𝑗 ." 
}, { "formula_coordinates": [ 14, 329.29, 204.39, 229.45, 106.94 ], "formula_id": "formula_18", "formula_text": "= E 𝑝 (𝑤 ) 𝑀 ∑︁ 𝑖=1 𝑤 𝑖 • 𝑥 𝑖 2 (17) = E 𝑝 (𝑤 ) 𝑀 ∑︁ 𝑖=1 (𝑤 𝑖 • 𝑥 𝑖 ) 2 + 𝑀 ∑︁ 𝑖=1 𝑖 -1 ∑︁ 𝑗=1 𝑤 𝑖 • 𝑤 𝑗 • 𝑥 𝑖 • 𝑥 𝑗 (18) = 𝑀 ∑︁ 𝑖=1 E 𝑝 (𝑤 ) (𝑤 2 𝑖 ) • 𝑥 2 𝑖 + 2 • 𝑀 ∑︁ 𝑖=1 𝑖 -1 ∑︁ 𝑗=1 E 𝑝 (𝑤 ) (𝑤 𝑖 • 𝑤 𝑗 ) • 𝑥 𝑖 • 𝑥 𝑗 (19)(20)" }, { "formula_coordinates": [ 14, 317.96, 323.21, 240.15, 21.84 ], "formula_id": "formula_19", "formula_text": "E 𝑝 (𝑤 ) (𝑤 𝑖 • 𝑤 𝑗 ) = E 𝑝 (𝑤 ) (𝑤 𝑖 ) • E 𝑝 (𝑤 ) (𝑤 𝑗 ) = 𝜇 2 𝑤 ." }, { "formula_coordinates": [ 14, 325.14, 370.88, 233.6, 20.15 ], "formula_id": "formula_20", "formula_text": "E 𝑝 (𝑤 ) (𝒘 𝑇 𝒙) 2 = (𝜇 2 𝑤 + 𝜎 2 𝑤 ) • ||𝒙 || 2 + 2 • 𝜇 2 𝑤 • 𝒙 ⊗ 𝒙 = 𝐷 𝑤 (𝒙)(21)" }, { "formula_coordinates": [ 14, 364.83, 472.2, 192.6, 11.48 ], "formula_id": "formula_22", "formula_text": "E 𝑝 (𝑤 ) ( ŵ𝑇 x -𝒘 ′𝑇 𝒙 ′ ) 2 > E 𝑝 (𝑤 ) ( ŵ𝑇 x -ŵ𝑇 𝒙 ′ ) 2 ." }, { "formula_coordinates": [ 14, 317.96, 525.97, 251.73, 184.47 ], "formula_id": "formula_23", "formula_text": "E 𝑝 (𝑤 ) (( ŵ𝑇 x -𝒘 ′𝑇 𝒙 ′ ) 2 ) (23) = E 𝑝 (𝑤 ) ( ŵ𝑇 x) 2 + (𝒘 ′𝑇 𝒙 ′ ) 2 -2 • ŵ𝑇 x • 𝒘 ′𝑇 𝒙 ′ (24) = 𝐷 𝑤 ( x) + 𝐷 𝑤 (𝒙 ′ ) -2 • E 𝑝 (𝑤 ) ( ŵ𝑇 x • 𝒘 ′𝑇 𝒙 ′ ) (25) = 𝐷 𝑤 ( x) + 𝐷 𝑤 (𝒙 ′ ) -2 • E 𝑝 (𝑤 ) ( M ∑︁ 𝑖=1 ŵ𝑖 • x𝑖 𝑀 ′ ∑︁ 𝑗=1 𝑤 𝑗 ′ • 𝑥 𝑗 ′ ) (26) = 𝐷 𝑤 ( x) + 𝐷 𝑤 (𝒙 ′ ) -2 • E 𝑝 (𝑤 ) ( M ∑︁ 𝑖=1 𝑀 ′ ∑︁ 𝑗=1 𝑤 𝑗 ′ • 𝑥 𝑗 ′ • ŵ𝑖 • x𝑖 ) (27) = 𝐷 𝑤 ( x) + 𝐷 𝑤 (𝒙 ′ ) -2 • M ∑︁ 𝑖=1 𝑀 ′ ∑︁ 𝑗=1 E 𝑝 (𝑤 ) (𝑤 𝑗 ′ • ŵ𝑖 ) • 𝑥 𝑗 ′ • x𝑖 (28) Since ŵ and𝒘 ′ are independent, then E 𝑝 (𝑤 ) (𝑤 𝑗 ′ • ŵ𝑖 ) = E 𝑝 (𝑤 ) (𝑤 𝑗 ′ )• E 𝑝 (𝑤 ) ( ŵ𝑖 ) = 𝜇 2 𝑤 . Thus, E 𝑝 (𝑤 ) ( ŵ𝑇 x -𝒘 ′𝑇 𝒙 ′ ) 2 = 𝐷 𝑤 ( x) + 𝐷 𝑤 (𝒙 ′ ) -2 • 𝜇 2 𝑤 • M ∑︁ 𝑖=1 𝑀 ′ ∑︁ 𝑗=1 𝑥 𝑗 ′ • x𝑖(29)" } ]
2023-05-23
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b10", "b15", "b20", "b39", "b28", "b26", "b37", "b21", "b13" ], "table_ref": [], "text": "Counterfactual reasoning captures human tendency to create possible alternatives to past events and imagine the consequences of something that is contrary to what actually happened or is factually true (Hoch, 1985). It has long been considered a necessary part of a complete system for AI. However, few NLP resources have been developed for evaluating models' counterfactual reasoning abilities, especially in open-domain question answering (QA). Instead, existing formulations of opendomain QA tasks mainly focus on questions whose answer can be deduced directly from global, factual knowledge (e.g., What was the occupation of Lovely Rita according to the song by the Beatles?) available on the Internet (Joshi et al., 2017;Kwiatkowski et al., 2019;Yang et al., 2018). Counterfactual presupposition in open-domain QA can be viewed as a causal intervention. Such intervention entails altering the outcome of events based on the given presuppositions, while obeying the human readers' shared background knowledge of how the world works. To answer such questions, models must go beyond retrieving direct factual knowledge from the Web. They must identify the right information to retrieve and reason about an imagined situation that may even go against the facts built into their parameters.\nAlthough some recent work has attempted to answer questions based on counterfactual evidence in the reading comprehension setting (Neeman et al., 2022), or identified and corrected a false presupposition in a given question (Min et al., 2022), none of existing works have been developed for evaluating and improving counterfactual reasoning capabilities in open-domain QA scenarios. To fill this gap, we present a new benchmark dataset, named IfQA, where each of over 3,800 questions is based on a counterfactual presupposition defined via an \"if\" clause. Two examples are given in Figure 1. IfQA combines causal inference questions with factual text sources that are comprehensible to a layman without an understanding of formal causation. It also allows us to evaluate the capabilities and limitations of recent advances in question answering methods in the context of counterfactual reasoning.\nWe observe that IfQA introduces new challenges for answering open-domain questions in both retrieval and reading. For example, to answer the 2nd example question in Figure 1, \"If the movement of the earth's crust caused the height of Mount Everest to drop by 300 meters, which mountain would be the highest mountain in the world?\", the To establish an initial performance level on IfQA, we evaluate both state-of-the-art close-book and open-book models. Close-book models, such as chain-of-thought (CoT) reasoning with GPT-3 (Wei et al., 2022), generate answers and optionally intermediate reasoning steps, without access to external evidence. On the contrary, open-book models, such as RAG (Lewis et al., 2020) and FiD (Izacard and Grave, 2021), first leverage a retriever over a large evidence corpus (e.g. Wikipedia) to fetch a set of relevant documents, then use a reader to peruse the retrieved documents and predict an answer.\nOur experiments demonstrate that IfQA is a challenging dataset for both retrieval, reading and reasoning. Specifically, we make the following observations. 
First, in retrieval, traditional dense retrieval methods based on semantic matching cannot well capture the discrepancy between counterfactual presuppositions and factual evidence, resulting failing to retrieve the gold passages in nearly 35% of the examples. Second, state-of-the-art reader models, such as FiD, achieve an F1 score of only 50% even when the gold passage is contained in the set of retrieved passages. Third, close-book CoT reasoning can effectively improve the end-QA performance, but still heavily lags behind open-book models. Lastly, combining passage retrieval and large model reasoner achieves the best results (51% F1), but still leaves a vast room for improvement.\nWe hope the new challenges posed by IfQA will help push open-domain QA research towards more effective retrieval and reasoning methods." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Open-domain Question Answering", "publication_ref": [ "b27", "b2", "b15", "b20", "b1", "b39", "b36", "b25", "b33", "b22", "b7", "b18", "b5", "b16", "b4", "b38", "b17", "b31", "b0", "b24", "b41", "b47", "b17", "b11", "b9", "b21", "b13", "b14", "b43" ], "table_ref": [], "text": "The task of answering questions using a large collection of documents (e.g., Wikipedia) of diversified topics, has been a longstanding problem in NLP, information retrieval (IR), and related fields (Moldovan et al., 2000;Brill et al., 2002;Yu et al., 2022c). A large number of QA benchmarks have been released in this space, spanning the different types of challenges represented behind them, including single-hop questions (Joshi et al., 2017;Kwiatkowski et al., 2019;Berant et al., 2013), multi-hop questions (Yang et al., 2018;Trivedi et al., 2022), ambiguous questions (Min et al., 2020), multi-answer questions (Rubin et al., 2022;Li et al., 2022), multi-modal questions (Chen et al., 2020;Zhu et al., 2021a), real time questions (Chen et al., 2021;Kasai et al., 2022), etc.\nTo the best of our knowledge, all existing formulations assume that each question is based on factual presuppositions of global knowledge. In contrast, the questions in our IfQA dataset are given counterfactual presuppositions for each question, so the model needs to reason and produce answers based on the given presuppositions combined with the retrieved factual knowledge.\nMainstream open-domain QA methods employ a retriever-reader architecture, and recent followup work has mainly focused on improving the retriever or the reader (Chen and Yih, 2020;Zhu et al., 2021b;Ju et al., 2022). For the retriever traditional methods such as TF-IDF and BM25 explore sparse retrieval strategies by matching the overlapping contents between questions and passages (Chen et al., 2017;Yang et al., 2019). DPR (Karpukhin et al., 2020) revolutionized the field by utilizing dense contextualized vectors for passage indexing. Furthermore, other research improved the performance by better training strategies (Qu et al., 2021;Asai et al., 2022), passage re-ranking (Mao et al., 2021;Yu et al., 2022a) and etc. Recent work has found that large language models have strong factual memory capabilities, and can directly generate supporting evidence in some scenarios, thereby replacing retrievers (Yu et al., 2022b;Ziems et al., 2023). Whereas for the reader, extractive readers aimed to locate a span of words in the retrieved passages as answer (Karpukhin et al., 2020;Iyer et al., 2021;Guu et al., 2020). 
On the other hand, FiD and RAG, current state-of-the-art readers, leveraged encoder-decoder models such as T5 to generate answers (Lewis et al., 2020;Izacard and Grave, 2021;Izacard et al., 2022;Zhang et al., 2022)." }, { "figure_ref": [], "heading": "Counterfactual Thinking and Causality", "publication_ref": [ "b29", "b8", "b19", "b46", "b23", "b30", "b35", "b30", "b35" ], "table_ref": [], "text": "Causal inference involves a question about a counterfactual world created by taking an intervention, which have recently attracted interest in various fields of machine learning (Niu et al., 2021), including natural language processing (Feder et al., 2022). Recent work shows that incorporating counterfactual samples into model training improves the generalization ability (Kaushik et al., 2019), inspiring a line of research to explore incorporating counterfactual samples into different learning paradigms such as adversarial training (Zhu et al., 2020) and contrastive learning (Liang et al., 2020). These work lie in the orthogonal direction of incorporating counterfactual presuppositions into a model's decision-making process.\nIn the field of NLP, existing counterfactual inferences are ubiquitous in many common inference scenarios, such as counterfactual story generation (Qin et al., 2019), procedural text generation (Tandon et al., 2019). For example, in TIME-TRAVEL, given an original story and an intervening counterfactual event, the task is to minimally revise the story to make it compatible with the given counterfactual event (Qin et al., 2019). In WIQA, given a procedural text and some perturbations to steps mentioned in the procedural, the task is to predict whether the effects of perturbations to the process can be predicted (Tandon et al., 2019). However, to the best of our knowledge, none of existing benchmark datasets was built for the open-domain QA.\n3 IfQA: Task and Dataset" }, { "figure_ref": [], "heading": "Dataset Collection", "publication_ref": [], "table_ref": [], "text": "All questions and answers in our IfQA dataset were collected on the Amazon Mechanical Turk (AMT)1 , a crowdsourcing marketplace for individuals to outsource their jobs to a distributed workforce who can perform these tasks. We offered all AMT workers $15 to $20 per hour. To maintain the diversity of labeled questions, we set a limit of 30 questions per worker. In the end, the dataset was annotated by a total of 188 different crowdworkers.\nOur annotation protocol consists of three phases. First, we automatically extract passages from Wikipedia which are expected to be amenable to counterfactual questions. Second, we crowdsource question-answer pairs on these passages, eliciting questions which require counterfactual reasoning. Finally, we validate the correctness and quality of annotated questions by one or two additional workers. These phases are described below in detail." }, { "figure_ref": [], "heading": "Question and Answer Annotation", "publication_ref": [], "table_ref": [], "text": "(1) Passage Selection. Creating a counterfactual presupposition based on a given Wikipedia page is a non-trivial task, requiring both the rationality of the counterfactual presupposition and the predictability of alternative outcomes. Since the entire Wikipedia has more than 6 million entries, we first perform a preliminary screening to filter out passages that are not related to describing causal events. 
Specifically, we exploit keywords to search Wikipedia for passages on causality (e.g., lead to, cause, because, due to, originally, initially) Basal sauropodomorph on events, particularly with a high proportion of past tense, as our initial pilots indicated that these passages were the easiest to provide a counterfactual presupposition about past events. Compared with randomly passage selection, this substantially reduces the difficulty of question annotation.\n(2) Question Annotation. To allow some flexibility in this question annotation process, in each human intelligence task (HIT), the worker received a random sample of 20 Wikipedia passages and was asked to select at least 10 passages from them to annotate relevant questions.\nDuring the early-stage annotation, we found that the quality of annotation was significantly low when no examples annotated questions provided. Therefore, we provided workers with five questions at the beginning of each HIT to better prompt them to annotate questions and answers. However, we noticed that fixed examples might bring some bias to annotation workers. For example, when we provided the following example: If German football club RB Leipzig doubled their donation to the city of Leipzig in August 2015 to help asylum seekers, how many euros would they donate in total? The workers would be more inclined to mimic the sentence pattern to annotate questions, such as: If Wells Fargo doubled its number of ATMs worldwide by 2022, how many ATMs would it have? In order to increase the diversity of annotated questions, we later chose to sample combinations of different examples from the example question pool, in which each combination includes five examples.\nAdditionally, we allow workers to write their own questions if they want to do so or if they find it difficult to ask questions based on a given Wikipedia passage. Such annotation process can prevent the workers from reluctantly asking a question for a given passage. At the same time, workers can be encouraged to ask interesting questions and increase the diversity of data. We require that this self-proposed question must also be based on Wikipedia, and the worker is required to provide the URL of Wikipedia page and copy the corresponding paragraph. Ultimately, 20.6% of the questions were annotated in this free-form annotation.\n(3) Answer Annotation. Workers then are required to give answers to the annotated questions. We provided additional answer boxes where they could add other possible valid answers, when appropriate." }, { "figure_ref": [], "heading": "Question and Answer Verification", "publication_ref": [ "b25" ], "table_ref": [], "text": "The verification step mainly evaluates three dimensions of the labelled questions in the first step. Q1: Is this a readable, passage-related question?\nThe first question is used to filter mislabeled questions, such as unreadable questions and questions irrelevant to the passage. For example, we noticed that very few workers randomly write down questions, in order to get paid for the task. Q2: Is the question not well-defined without the Wikipedia passage? I.e., can the question not be properly understood without the passage as the context? If not, could you modify the question to make it context-free? This ensures that the questions are still answerable without the given passage, to avoid ambiguity (Min et al., 2020). Q3: Is the given answer correct? 
If not, could you provide the correct answer to the question?\nThe third question is to ensure the correctness of the answer. If the answer annotated in the first step is incorrect, it can be revised in time from the second step. If the workers submit a different answer, we further add one more worker, so that a total of three workers answered the question, thereby selecting the final answer by voting." }, { "figure_ref": [], "heading": "Answer Post-processing", "publication_ref": [], "table_ref": [], "text": "Since the answers are in free forms, different surface forms of the same word or phrase can make syntactic matching based end-QA evaluation unreliable. Therefore, we further normalize the different types of answers as follows and include them in addition to the original article span.\nEntity. Entities often have other aliases. For example, the aliases of \"United States\" include \"United States of America\", \"USA\", \"U.S.A\", \"America\", \"US\" and etc. The same entity often exists with different aliases in different Wikipedia pages. Therefore, in addition to the entity aliases currently shown in the given passage, we add the canonical form of the entity -the title of the Wikipedia page to which the entity corresponds. Number. A number could be written in numeric and textual forms, such as \"5\" and \"five\", \"30\" and \"thirty\". When the number has a unit, such as \"5 billion\", it is difficult for us to traverse all possible forms, such as \"5,000 million\" and \"5,000,000 thousand\", so we annotate the answer based on the unit that appears in the given Wikipedia passage, for example, if the word \"billion\" appears in the given passage, we take \"5\" as the numeric part, so only \"5 billion\" is provided as an additional answer.\nDate. In addition of keeping the original format mentioned in the given passage, we use the ISO 86012 standard to add an additional answer, namely \"Month Day, Year\", such as \"May 18, 2022\"." }, { "figure_ref": [], "heading": "Dataset Analysis", "publication_ref": [], "table_ref": [ "tab_0", "tab_0" ], "text": "Answer Type and Length. Figure 2: Retrieval and end-QA performance using the retrieve-then-read models on the IfQA-S split. For retrieval, BM25 demonstrates superior performance than DPR. For end-QA, FiD-l demonstrates the best performance.\nnotation is based on the given Wikipedia passage, most answers (75.1%) in the dataset are text spans extracted from the provided passage. Non-span answers usually require some mathematical reasoning (e.g., the 2nd example in Table 1) or combining multiple text spans in the passage (e.g., the 3rd example in Table 1) as the final answer. Number of Answers. The case of multiple valid answers also exists in our dataset, representing multiple possibilities for possible alternative outcomes. However, the proportion of questions with multiple valid answers is only 11.2%, and the remaining 88.8% of questions have only one valid answer." }, { "figure_ref": [], "heading": "Dataset Splits", "publication_ref": [], "table_ref": [], "text": "We provide two official splits of our dataset. The first one is a regular split for supervised learning (IfQA-S). " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Retrieval Corpus", "publication_ref": [ "b17", "b21" ], "table_ref": [], "text": "We use Wikipedia as the retrieval corpus. The Wikipedia dump we used is dated 2022-05-013 and has 6,394,490 pages in total. 
We followed prior work (Karpukhin et al., 2020;Lewis et al., 2020) to preprocess Wikipedia pages, splitting each page into disjoint 100-word passages, resulting in 27,572,699 million passages in total." }, { "figure_ref": [], "heading": "Comparison Systems", "publication_ref": [ "b3", "b37", "b41", "b17", "b32", "b21", "b13" ], "table_ref": [], "text": "Closed-book models are pre-trained models that store knowledge in their own parameters. When answering a question, close-book models, such as GPT-3 (Brown et al., 2020), only encode the given question and predict an answer without access to any external non-parametric knowledge.\nWe compared with two recent GPT-3 variants, code-davinci-002 and text-davinci-003. Instead of directly generating the answer, chain-of-thought (CoT) leverages GPT-3 to generate a series of intermediate reasoning steps before presenting the final answer (Wei et al., 2022). Similarly, GENREAD prompts GPT-3 to first generate relevant contextual documents, and then read the generated document to produce the final answer (Yu et al., 2022b).\nOpen-Book models first leverage a retriever over a large evidence corpus (e.g. Wikipedia) to fetch a set of relevant documents that may contain the answer, then a reader to peruse the retrieved documents and predict an answer. The retriever could be sparse retrievers, such as BM25, and also dense retrievers, such as DPR (Karpukhin et al., 2020), which a dual-encoder based model. Whereas for the reader, FiD and RAG, current state-of-the-art readers, leveraged encoder-decoder models, such as T5 (Raffel et al., 2020), to generate answers (Lewis et al., 2020;Izacard and Grave, 2021).\nTable 3: End-QA performance on both IfQA-S and IfQA-F splits. We can observe that combining passage retrieval and large model reasoner can achieve the best performance, as the entire pipeline can enjoy both the factual evidence provided by the retriever and the powerful deductive reasoning ability of the large language model." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "IfQA-S: Supervised Setting IfQA-F: Few-shot Setting code-davinci-002 text-davinci-003 code-davinci-002 text-davinci-003\nEM | F1 EM | F1 EM | F1 EM | F1\n*without retriever, and not using external documents GPT-3 (QA prompt) " }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b17", "b12", "b34" ], "table_ref": [], "text": "Retrieval Performance. We employ Recall@K (short as R@K) as an intermediate evaluation metric, measured as the percentage of top-K retrieved passage that contain the ground truth passage.\nEnd-QA Performance. We use two commonly used metrics to evaluate the end-QA performance: exact match (EM) and F1 score (Karpukhin et al., 2020;Izacard and Grave, 2020;Sachan et al., 2022). EM measures the percentage of predictions having an exact match in the acceptable answer list. F1 score measures the token overlap between the prediction and ground truth answer. We take the maximum F1 over all of the ground truth answers for a given question, and then average over all questions." }, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [ "b13", "b37" ], "table_ref": [ "tab_1" ], "text": "(1) Retrieval in IfQA is challenging. 
As shown in Figure 2, when retrieving 20 Wikipedia passages, both sparse and dense searchers could only achieve Recall@20 scores of about 60%, so the reader model cannot answer the remaining 40% of questions based on accurate supportive evidence.\nAlthough recall goes higher when more number of passages retrieved, it would significantly increase the memory cost of the reader model, making it hard to further add complex reasoning modules. This phenomenon of rapid increase in memory cost is also observed in FiD (Izacard and Grave, 2021), i.e., when reading 100 passages, 64 V100 GPUs are required to train the model. Besides, when using large language models for in-context learning, more input passages lead to an increase in the number of input tokens, limiting the number of in-context demonstrations. For example, the latest variants of GPT-3, such as code-davinci and text-davinci, have an input limit of 4096 tokens.\nFurthermore, the IfQA benchmark has some unique features in terms of retrieval compared to existing open-domain QA benchmarks. On one hand, questions in IfQA datasets are usually longer than many existing QA datasets (e.g. NQ and Triv-iaQA), because each question in IfQA contains a clause mentioning counterfactual presuppositions. The average question length of questions in IfQA (as shown in Table 2) is 22.2 words, which is much higher than the question length in NQ (9.1 words), TriviaQA (13.9 words), HotpotQA (15.7 words) and etc. Longer questions make current retrieval methods based on keyword matching (e.g., BM25) easier because more keywords are included in the question, but make latent semantic matching (e.g., DPR) methods harder because a single embedding vector cannot well represent enough Information. On the other hand, in many cases, the retriever suffers from fetching relevant documents by simple semantic matching because of the discrepancies between counterfactual presuppositions and factual evidence. For example, in the question \"If the sea level continues to rise at an accelerated rate, which country is likely to be submerged first?\", the targeted passage for retrieval might not directly mention \"sea level\", \"rise\", and \"submergerd\", where the question is essentially to ask \"which country is the lowest-lying one in the world\".\n(2) Reading and reasoning in IfQA are challenging. Deriving answers from retrieved passages requiring reader models to reason over counterfactual presuppositions in questions and retrieved factual Wikipedia passages.\nEven the state-of-the-art reader model FiD cannot achieve satisfactory performance. We first se- lect a subset of examples where the golden passages were contained in the retrieved passage set, and then evaluate the end-QA performance in the subset. Under the supervised data splitting, there are 540 examples where the golden passages were contained in the retrieved passage set, but only 225 (41.7%) of the answers are correct. Therefore, we can see that without any reasoning module, although FiD can achieve state-of-the-art performance on many open-domain QA benchmarks, it cannot achieve great performance on IfQA. We also find that the FiD model performs worse (31.5%) on questions that require some complex reasoning, such as numerical reasoning examples.\n(3) Chain-of-thought improve counterfactual reasoning performance in IfQA for LLMs. 
LLMs have been widely proven to perform well on QA tasks in existing literature, especially equipped with chain-of-thought (Wei et al., 2022) to generate a series of intermediate reasoning steps before presenting the final answer. Since IfQA requires models to reason over counterfactual presuppositions, we hypothesize that such a reasoning process would also be effective in helping to answer counterfactual questions. As shown in Table 3, we found that chain-of-thought generation, which was mainly evaluated in complex multi-step reasoning questions before, can effectively improve the performance of LLMs on IfQA. However, since LLMs are closed-book models, they still lack nonparametric knowledge. Therefore, their overall performance still lags behind state-of-the-art retrievethen-read methods, such as FiD.\n(4) Passage retriever + Large model reasoner performs the best on IfQA. We saw that passage retrieval is a necessary step for IfQA. In the absence of grounding evidence, it is difficult for even LLMs to accurately find relevant knowledge from parameterized memory, and accurately predict an-swer. From the results, the performance of closebook models on IfQA data is also far behind the retrieve-then-read models. However, an inherent disadvantage of relying on small readers is that they do not enjoy the world knowledge or deductive power of LLMs, making reasoning based on retrieved passages perform poorly. Therefore, we provided in-context demonstrations to GPT-3, and prompt it to read the retrieved passages, so that the entire pipeline can enjoy both the factual evidence provided by the retriever and the powerful reasoning ability of the large language reader. As shown in Table 3, we found that the combination of BM25 (as retriever) and GPT-3 (as reader) can achieve the best model performance on the IfQA dataset." }, { "figure_ref": [], "heading": "Case Study", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "We demonstrate the prediction results of different baseline models on a case question in Table 4. First, GPT-3 (both QA prompt and chain-ofthought) hallucinated factual events (the game released in North America on October 31, 2000, andin Europe on March 9, 2001), which leads to wrong answer predictions. Second, even though BM25 + FiD incorporated retrieved passages during answer prediction, due to insufficient counterfactual reasoning ability, it still believes that 2001 is the correct answer. Third, combining retrieval and LLM produces the correct answer, by combining both factual evidence and stronger reasoning ability." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We introduce IfQA, a new dataset with over 3,800 questions, each of which is based on a counterfactual presupposition and has an \"if\" clause. Our empirical analysis reveals that IfQA is highly challenging for existing open-domain QA methods in both retrieval and reasoning process, which would push open-domain QA research on both retrieval and counterfactual reasoning fronts.\nThe main limitation of IfQA dataset is that it only covers event-based questions, due to the nature of creating counterfactual presuppositions. Therefore, our dataset is not intended for training general opendomain QA models or evaluate their capabilities.\nFor data collection, we relied heavily on human annotators, both for question annotation and verification. 
Despite our efforts to mitigate annotator bias by providing explicit instructions and examples and by sampling annotators from diverse populations, it is not possible to completely remove this bias. In addition, we use heuristic rules to select only a small portion of Wikipedia passages and then present them to human annotators (as mentioned in Section 3.1.1), which might lead to pattern-oriented bias in the annotated data.
As for the evaluated models, large language models may preserve biases learned from web text during pre-training and may therefore make biased judgments on our dataset." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "Like any work relying on crowdsourced data, it is possible that the IfQA dataset reflects social, ethical, and regional biases of the workers who created and validated questions." } ]
Although counterfactual reasoning is a fundamental aspect of intelligence, the lack of large-scale counterfactual open-domain question answering (QA) benchmarks makes it difficult to evaluate and improve models on this ability. To address this void, we introduce the first such dataset, named IfQA, where each question is based on a counterfactual presupposition via an "if" clause. For example, if Los Angeles was on the east coast of the U.S., what would be the time difference between Los Angeles and Paris? Such questions require models to go beyond retrieving direct factual knowledge from the Web: they must identify the right information to retrieve and reason about an imagined situation that may even go against the facts built into their parameters. The IfQA dataset contains over 3,800 questions that were annotated by crowdworkers on relevant Wikipedia passages. Empirical analysis reveals that the IfQA dataset is highly challenging for existing open-domain QA methods, including supervised retrieve-then-read pipeline methods (EM score 36.2), as well as recent few-shot approaches such as chain-of-thought prompting with GPT-3 (EM score 27.4). The unique challenges posed by the IfQA benchmark will push open-domain QA research on both retrieval and counterfactual reasoning fronts.
IfQA: A Dataset for Open-domain Question Answering under Counterfactual Presuppositions
[ { "figure_caption": "\"Figure 1 :1Figure 1: In the IfQA dataset, each question is based on a counterfactual presupposition via an \"if\" clause. To answer the question, one needs to retrieve relevant facts from Wikipedia and perform counterfactual reasoning.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "performance, measured by EM and F1.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Example questions from the IfQA dataset, with the proportions with different types of answers.", "figure_data": "Answer TypePassage (some parts shortened)QuestionAnswerEntity (49.7%)LeBron James: ... On June 29, 2018, James optedIf LeBron James had not been(Cleveland)out of his contract with the Cavaliers and becametraded to the Los Angeles Lak-Cavaliersan unrestricted free agent. On July 1, his manage-ers, which team would he havement company, Klutch Sports, announced that heplayed for in 2018-2019 season?would sign with the Los Angeles Lakers.Number (15.9%) 7-Eleven: ... Japan Co., Ltd. in 2005, and is nowIf 7-Eleven expanded its reach22 (countries)held by Chiyoda, Tokyo-based Seven & i Hold-to five more countries in 2020,ings. 7-Eleven operates, franchises, and licenseshow many countries would have71,100 stores in 17 countries as of July 2020.7-Eleven by the end of the year?Date (14.5%)2020 Summer Olympics: ... originally scheduledIf Covid-19 hadn't spread rapidlyJuly 24, 2020to take place from 24 July to 9 August 2020, theacross the globe, when would theevent was postponed to 2021 in March 2020 as aTokyo Olympics in Japan start?result of the COVID-19 pandemic, ...Others (19.9%)1991 Belgian Grand Prix: Patrese's misfortuneIf Gerhard Berger and Nelson Pi-promoted Prost to second, with Nigel Mansellquet had switched starting posi-third, Gerhard Berger fourth, Alesi fifth, and Nel-tion at the 1991 Belgian Grandson Piquet sixth while the sensation of qualifying,Prix, what would have been Nel-Schumacher, was an amazing seventh ...son Piquet's starting position?", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Data statistics of IfQA, for both supervised and few-shot settings.", "figure_data": "IfQA-S: Supervised SettingIfQA-F: Few-shot SettingTrainDev.TestTrainDev.TestNumber of examples240170170120013021301Question length (words)22.0522.4222.1221.6521.8222.34Answer length (words)1.811.801.811.871.831.80Vocabulary size11,16445,244,5801,6657,19910,911", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Case Study. Due to the lack of retrieved evidence, GPT-3 and Chain-of-thought can hallucinate factual events, resulting in wrong answers. DPR-FiD leverages retrieved passages, so generated answers are more closely related to the factual evidence, but due to the insufficient reasoning ability, it also leads to output wrong answers.Question: If the Game Boy Color video game Elmo's ABCs was released in North America at the same time as it was released in Europe, in what year was it released in Europe? Wikipedia passage: Elmo's ABCs is a 1999 educational Game Boy Color video game. It was released in March 1999 in North America. Like \"Elmo's 123s\", it was also released on April 6, 2001 in Europe. ... GPT-3 (QA prompt): 2000; BM25 + FiD: 2001; BM25 + GPT-3: 1999 Chain-of-thought (CoT): The Game Boy Color video game Elmo's ABCs was released in North America on October 31, 2000, and in Europe on March 9, 2001. 
So, if the game was released in Europe at the same time as North America, it would be available in Europe in 2000. The answer is 2000;", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" } ]
Wenhao Yu; Meng Jiang; Peter Clark; Ashish Sabharwal
[ { "authors": "Akari Asai; Timo Schick; Patrick Lewis; Xilun Chen; Gautier Izacard; Sebastian Riedel; Hannaneh Hajishirzi; Wen-Tau Yih", "journal": "", "ref_id": "b0", "title": "Task-aware retrieval with instructions", "year": "2022" }, { "authors": "Jonathan Berant; Andrew Chou; Roy Frostig; Percy Liang", "journal": "", "ref_id": "b1", "title": "Semantic parsing on freebase from question-answer pairs", "year": "2013" }, { "authors": "Eric Brill; Susan Dumais; Michele Banko", "journal": "", "ref_id": "b2", "title": "An analysis of the askmsr question-answering system", "year": "2002" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Danqi Chen; Adam Fisch; Jason Weston; Antoine Bordes", "journal": "", "ref_id": "b4", "title": "Reading wikipedia to answer opendomain questions", "year": "2017" }, { "authors": "Danqi Chen; Wen-Tau Yih", "journal": "", "ref_id": "b5", "title": "Open-domain question answering", "year": "2020" }, { "authors": "Wenhu Chen; Ming-Wei Chang; Eva Schlinger; William Yang; Wang ; William W Cohen", "journal": "", "ref_id": "b6", "title": "Open question answering over tables and text", "year": "2020" }, { "authors": "Wenhu Chen; Xinyi Wang; William Yang; Wang ", "journal": "", "ref_id": "b7", "title": "A dataset for answering time-sensitive questions", "year": "2021" }, { "authors": "Amir Feder; Katherine A Keith; Emaad Manzoor; Reid Pryzant; Dhanya Sridhar; Zach Wood-Doughty; Jacob Eisenstein; Justin Grimmer; Roi Reichart; Margaret E Roberts", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b8", "title": "Causal inference in natural language processing: Estimation, prediction, interpretation and beyond", "year": "2022" }, { "authors": "Kelvin Guu; Kenton Lee; Zora Tung; Panupong Pasupat; Ming-Wei Chang", "journal": "", "ref_id": "b9", "title": "Realm: Retrievalaugmented language model pre-training", "year": "2020" }, { "authors": "J Stephen; Hoch", "journal": "Journal of Experimental Psychology: Learning, Memory, and Cognition", "ref_id": "b10", "title": "Counterfactual reasoning and accuracy in predicting personal events", "year": "1985" }, { "authors": "Srinivasan Iyer; Sewon Min; Yashar Mehdad; Wentau Yih", "journal": "", "ref_id": "b11", "title": "Reconsider: Improved re-ranking using span-focused cross-attention for open domain question answering", "year": "2021" }, { "authors": "Gautier Izacard; Edouard Grave", "journal": "", "ref_id": "b12", "title": "Distilling knowledge from reader to retriever for question answering", "year": "2020" }, { "authors": "Gautier Izacard; Edouard Grave", "journal": "", "ref_id": "b13", "title": "Leveraging passage retrieval with generative models for open domain question answering", "year": "2021" }, { "authors": "Gautier Izacard; Patrick Lewis; Maria Lomeli; Lucas Hosseini; Fabio Petroni; Timo Schick; Jane Dwivedi-Yu; Armand Joulin; Sebastian Riedel; Edouard Grave", "journal": "", "ref_id": "b14", "title": "Few-shot learning with retrieval augmented language models", "year": "2022" }, { "authors": "Mandar Joshi; Eunsol Choi; Daniel S Weld; Luke Zettlemoyer", "journal": "", "ref_id": "b15", "title": "Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension", "year": "2017" }, { 
"authors": "Mingxuan Ju; Wenhao Yu; Tong Zhao; Chuxu Zhang; Yanfang Ye", "journal": "", "ref_id": "b16", "title": "Grape: Knowledge graph enhanced passage reader for open-domain question answering", "year": "2022" }, { "authors": "Vladimir Karpukhin; Barlas Oguz; Sewon Min; Patrick Lewis; Ledell Wu; Sergey Edunov; Danqi Chen; Wen-Tau Yih", "journal": "", "ref_id": "b17", "title": "Dense passage retrieval for opendomain question answering", "year": "2020" }, { "authors": "Jungo Kasai; Keisuke Sakaguchi; Yoichi Takahashi; Ronan Le Bras; Akari Asai; Xinyan Yu; Dragomir Radev; Noah A Smith; Yejin Choi; Kentaro Inui", "journal": "", "ref_id": "b18", "title": "Realtime qa: What's the answer right now?", "year": "2022" }, { "authors": "Divyansh Kaushik; Eduard Hovy; Zachary Lipton", "journal": "", "ref_id": "b19", "title": "Learning the difference that makes a difference with counterfactually-augmented data", "year": "2019" }, { "authors": "Tom Kwiatkowski; Jennimaria Palomaki; Olivia Redfield; Michael Collins; Ankur Parikh; Chris Alberti; Danielle Epstein; Illia Polosukhin; Jacob Devlin; Kenton Lee", "journal": "TACL", "ref_id": "b20", "title": "Natural questions: A benchmark for question answering research", "year": "2019" }, { "authors": "Patrick Lewis; Ethan Perez; Aleksandra Piktus; Fabio Petroni; Vladimir Karpukhin; Naman Goyal; Heinrich Küttler; Mike Lewis; Wen-Tau Yih; Tim Rocktäschel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Retrieval-augmented generation for knowledge-intensive nlp tasks", "year": "2020" }, { "authors": "Haonan Li; Martin Tomko; Maria Vasardani; Timothy Baldwin", "journal": "", "ref_id": "b22", "title": "Multispanqa: A dataset for multi-span question answering", "year": "2022" }, { "authors": "Zujie Liang; Weitao Jiang; Haifeng Hu; Jiaying Zhu", "journal": "", "ref_id": "b23", "title": "Learning to contrast the counterfactual samples for robust visual question answering", "year": "2020" }, { "authors": "Yuning Mao; Pengcheng He; Xiaodong Liu; Yelong Shen; Jianfeng Gao; Jiawei Han; Weizhu Chen", "journal": "", "ref_id": "b24", "title": "Reader-guided passage reranking for opendomain question answering", "year": "2021" }, { "authors": "Sewon Min; Julian Michael; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "", "ref_id": "b25", "title": "Ambigqa: Answering ambiguous open-domain questions", "year": "2020" }, { "authors": "Sewon Min; Luke Zettlemoyer; Hannaneh Hajishirzi", "journal": "", "ref_id": "b26", "title": "Crepe: Open-domain question answering with false presuppositions", "year": "2022" }, { "authors": "Dan Moldovan; Sanda Harabagiu; Marius Pasca; Rada Mihalcea; Roxana Girju; Richard Goodrum; Vasile Rus", "journal": "", "ref_id": "b27", "title": "The structure and performance of an open-domain question answering system", "year": "2000" }, { "authors": "Ella Neeman; Roee Aharoni; Or Honovich; Leshem Choshen; Idan Szpektor; Omri Abend", "journal": "", "ref_id": "b28", "title": "Disentqa: Disentangling parametric and contextual knowledge with counterfactual question answering", "year": "2022" }, { "authors": "Yulei Niu; Kaihua Tang; Hanwang Zhang; Zhiwu Lu; Xian-Sheng Hua; Ji-Rong Wen", "journal": "", "ref_id": "b29", "title": "Counterfactual vqa: A cause-effect look at language bias", "year": "2021" }, { "authors": "Lianhui Qin; Antoine Bosselut; Ari Holtzman; Chandra Bhagavatula; Elizabeth Clark; Yejin Choi", "journal": "", "ref_id": "b30", "title": "Counterfactual story reasoning and generation", "year": "2019" 
}, { "authors": "Yingqi Qu; Yuchen Ding; Jing Liu; Kai Liu; Ruiyang Ren; Wayne Xin Zhao; Daxiang Dong; Hua Wu; Haifeng Wang", "journal": "", "ref_id": "b31", "title": "Rocketqa: An optimized training approach to dense passage retrieval for opendomain question answering", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. Res", "ref_id": "b32", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Samuel Joseph; Amouyal Ohad Rubin; Ori Yoran; Tomer Wolfson; Jonathan Herzig; Jonathan Berant", "journal": "", "ref_id": "b33", "title": "Qampari:: An open-domain question answering benchmark for questions with many answers from multiple paragraphs", "year": "2022" }, { "authors": "Devendra Singh Sachan; Mike Lewis; Dani Yogatama; Luke Zettlemoyer; Joelle Pineau; Manzil Zaheer", "journal": "", "ref_id": "b34", "title": "Questions are all you need to train a dense passage retriever", "year": "2022" }, { "authors": "Niket Tandon; Bhavana Dalvi; Keisuke Sakaguchi; Peter Clark; Antoine Bosselut", "journal": "", "ref_id": "b35", "title": "reasoning over procedural text", "year": "2019" }, { "authors": "Harsh Trivedi; Niranjan Balasubramanian; Tushar Khot; Ashish Sabharwal", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b36", "title": "Musique: Multihop questions via single-hop question composition", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b37", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Wei Yang; Yuqing Xie; Aileen Lin; Xingyu Li; Luchen Tan; Kun Xiong; Ming Li; Jimmy Lin", "journal": "", "ref_id": "b38", "title": "End-to-end open-domain question answering with bertserini", "year": "2019" }, { "authors": "Zhilin Yang; Peng Qi; Saizheng Zhang; Yoshua Bengio; William Cohen; Ruslan Salakhutdinov; Christopher D Manning", "journal": "", "ref_id": "b39", "title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering", "year": "2018" }, { "authors": "Donghan Yu; Chenguang Zhu; Yuwei Fang; Wenhao Yu; Shuohang Wang; Yichong Xu; Xiang Ren; Yiming Yang; Michael Zeng", "journal": "", "ref_id": "b40", "title": "a. 
Kg-fid: Infusing knowledge graph in fusion-in-decoder for opendomain question answering", "year": "2022" }, { "authors": "Wenhao Yu; Dan Iter; Shuohang Wang; Yichong Xu; Mingxuan Ju; Soumya Sanyal; Chenguang Zhu; Michael Zeng; Meng Jiang", "journal": "", "ref_id": "b41", "title": "Generate rather than retrieve: Large language models are strong context generators", "year": "2022" }, { "authors": "Wenhao Yu; Chenguang Zhu; Zaitang Li; Zhiting Hu; Qingyun Wang; Ji Heng; Meng Jiang", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b42", "title": "A survey of knowledge-enhanced text generation", "year": "2022" }, { "authors": "Zhihan Zhang; Wenhao Yu; Chenguang Zhu; Meng Jiang", "journal": "", "ref_id": "b43", "title": "A unified encoder-decoder framework with entity memory", "year": "2022" }, { "authors": "Fengbin Zhu; Wenqiang Lei; Youcheng Huang; Chao Wang; Shuo Zhang; Jiancheng Lv; Fuli Feng; Tat-Seng Chua; ; ", "journal": "", "ref_id": "b44", "title": "Tat-qa: A question answering benchmark on a hybrid of tabular and textual content in finance", "year": "2021" }, { "authors": "Fengbin Zhu; Wenqiang Lei; Chao Wang; Jianming Zheng; Soujanya Poria; Tat-Seng Chua", "journal": "", "ref_id": "b45", "title": "Retrieving and reading: A comprehensive survey on open-domain question answering", "year": "2021" }, { "authors": "Qingfu Zhu; Weinan Zhang; Ting Liu; William Yang; Wang ", "journal": "", "ref_id": "b46", "title": "Counterfactual off-policy training for neural dialogue generation", "year": "2020" }, { "authors": "Noah Ziems; Wenhao Yu; Zhihan Zhang; Meng Jiang", "journal": "", "ref_id": "b47", "title": "Large language models are built-in autoregressive search engines", "year": "2023" } ]
[ { "formula_coordinates": [ 7, 208.3, 141.8, 289.31, 8.7 ], "formula_id": "formula_0", "formula_text": "EM | F1 EM | F1 EM | F1 EM | F1" } ]
10.18653/v1/D16-1250
2024-03-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b40", "b13", "b19", "b32", "b2", "b3", "b10", "b34", "b15", "b11" ], "table_ref": [], "text": "Bilingual lexicons are a basic resource with varied uses, both in themselves, for dictionary building and language learning, as well as seeds for solving other problems in natural language processing (NLP), such as parsing (Zhao et al., 2009;Durrett et al., 2012) and word-to-word and unsupervised machine translation (Irvine and Callison-Burch, 2013;Thompson et al., 2019). While there is growing interest in unsupervised or minimally supervised bilingual lexicon induction (BLI), existing methods often depend on aligning monolingual word embedding spaces, assumed to be of good quality for both languages, and/or bilingual supervision (Artetxe et al., 2016(Artetxe et al., , 2017;;Conneau et al., 2018;Artetxe et al., 2018aArtetxe et al., ,b, 2019)). However, extremely low-resource languages (LRLs) and dialects often lack good quality embeddings due to limited monolingual data, leading to very low or near-zero performance of alignment-based methods for these languages (Wada et al., 2019;Eder et al., 2021). This is the case for the under-researched Indic language continuum, which is the focus of this article (see Section 2 for a description of the linguistic setup in India that motivates our work). We work with five extremely low-resourced Indic languages, Bhojpuri (bho), Magahi mag), Awadhi (awa), Maithili (mai), and Braj (bra), which are closely related to higher-resourced Hindi, and which have extremely limited resources, in terms of training data (<5M tokens of monolingual data) and embeddings, and even evaluation data. We demonstrate that state-of-the-art, alignment-based methods perform poorly in these settings, and introduce a new method for unsupervised BLI that performs much better. We aim to design methods that work well in characteristic data-scarce conditions, as well as generate resources for further work in these languages. Our main contribution is a novel unsupervised BLI method to address the typical scenario of the LRLs of the Indic continuum, i.e. for extremely LRLs that share significant overlap with a closely related HRL. We suppose that a masked language model (MLM) such as monolingual BERT (Devlin et al., 2019) is available for the HRL and that we have some monolingual LRL sentences that contain unknown words. The method consists of building a lexicon iteratively by using the HRL MLM over LRL sentences to extract translation equivalents, and replacing learnt words in LRL sentences with HRL equivalents to make them more tractable for the HRL MLM for future unknown words (see Section 4). Given the lack of existing gold lexicons for our target languages (a frequent scenario for extremely LRLs), we create silver lexicons for Bhojpuri and Magahi created from parallel data, unfortunately unavailable for Awadhi, Maithili, and Braj. We also perform control experiments on Marathi and Nepali, two medium-resource languages more distantly related to Hindi with available gold lexicons, and discuss the performance of canonical methods and our proposed method on these languages, shedding light on what strategies are appropriate for differentlyresourced language pairs. Our experiments indicate that current state-of-the-art methods are not suitable for low-resourced dialects, and methods that account for the data imbalance in the language pair, such as ours, may be more successful. 
We release our code, our generated lexicons for all languages (to our knowledge the first to be publicly released for all languages except Bhojpuri),1 and our created silver evaluation lexicons for Bhojpuri and Magahi. 2 See details of our released lexicons in Section 7. Our motivation and method, while relevant to the 40+ resource-scarce languages of the Indic language family and other Indian languages, are also relevant to other linguistic systems with similar circumstances, i.e. with a single high-or-medium resource language (usually a standard dialect), and several closely related dialects with lexical, morphological, and syntactic variation, written in the same script with or without orthographic standardization. This setup describes, for example, the Arabic continuum, the Turkic language continuum, and the German dialect system." }, { "figure_ref": [], "heading": "Linguistic Setup in India", "publication_ref": [ "b20", "b45", "b33" ], "table_ref": [ "tab_1" ], "text": "India has around 15-22 languages that are mediumto-high-resource, such as Hindi, Marathi, and Tamil, but dozens of other languages and dialects that are extremely low-resourced, with very little monolingual data (<5M tokens), and no other resources, such as Marwadi, Tulu, Dogri, and Santhali. These languages are often closely related to at least one high-resource language (HRL), meaning that they share morphosyntactic properties as well as a high number of cognates (Jha, 2019;Mundotiya et al., 2021) (see Table 1 for examples). They often have no official status in the regions where they are spoken, and therefore do not have concerted funding efforts for data collection or research. Even when such efforts do exist, 3 the collected corpora are rarely of the magnitude at which static or contextual embeddings can be well-estimated. While the actual number of distinct dialects and languages spoken in India is contested, people self-reported about 576 such \"mother tongue\" dialects in the latthub.com/niyatibafna/BLI-for-Indic-langu ages.\n3 See https://data.ldcil.org/text.\nest census, 4 which were then grouped into around 121 languages. Only 22 of these languages have official status (i.e. they are either the official language of some state/union territory, or have national cultural significance), and are therefore accorded funds for the development of resources. Therefore, although some studies in the literature question the real use case for entirely unsupervised BLI (Vulić et al., 2019), since it is \"easy\" to collect a small bilingual lexicon, we argue that situations such as these, where there is a large number of languages to build support for, and where efforts in data collection and annotation for individual languages are restricted by the availability of funds, do constitute genuine application scenarios for unsupervised BLI. Furthermore, we focus on a scenario where the two languages in question are closely related. This is because for most of the low-resource languages in the Indian context cited above, we can usually find a linguistic neighbour that is relatively well off, usually one of India's 22 scheduled languages. 5 In general, when building resources for a given lowresource dialect or language, it is likely that the standard variant of that dialect, or the HRL closest to it, will have large enough corpora available to build a good quality MLM. We target our efforts to these situations." 
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b18", "b2", "b3", "b10", "b10", "b2", "b3", "b26", "b8", "b37", "b2", "b29", "b11", "b30", "b39", "b28", "b36", "b1", "b23", "b9", "b15", "b35", "b7", "b34" ], "table_ref": [ "tab_1" ], "text": "Recent years have seen interest in unsupervised BLI (Haghighi et al., 2008;Artetxe et al., 2016Artetxe et al., , 2017;;Conneau et al., 2018;Artetxe et al., 2018aArtetxe et al., ,b, 2019)), allowing the possibility of BLI for LRLs. Most unsupervised approaches, notably MUSE (Conneau et al., 2018) and VecMap (Artetxe et al., 2016(Artetxe et al., , 2017(Artetxe et al., , 2018a,b) ,b) are based on training static embeddings from large monolingual corpora (Mikolov et al., 2013;Bojanowski et al., 2017), and aligning the embeddings using linear or non-linear mappings, using an initial seed (Xing et al., 2015;Artetxe et al., 2016).\nRecent works have also looked at using contextual embeddings or BERT-based models (Peters et al., 2018;Devlin et al., 2019;Ruder et al., 2019) Hindi l@ãkA: b@h@n a:pki: b@t \":a:ja:/ k@:h lija: dýa: r@he: ho: Awadi l@ãkA: b@hin a:p@n b@t \":a:v@t \" dýa:t \" @ha:i Bhojpuri l@ika: b@hin a:p@n k@h@l dýa:t \" ba: Magahi l@i:ka: b@hin @p@n k@h@lie: dýa: h@i Maithili l@ãkA: b@hin @ha:nk k@h@lhu n dýa: r@h@l @Ù h i Table 1: Examples of cognates. Since the Devanagari script is phonetically transparent, phonetic similarity is visible both in IPA and in Devanagari (not shown).\nHindi Awadi Bhojpuri Magahi Maithili Meaning dýa: r@he: ho: dýa:t \" @ha:i dýa:t \" ba: dýa: h@i dýa: r@h@l @Ù h i (you) are going l@ãkA: l@ãkA: l@ika: l@i:ka: l@ãkA: boy (nom.) b@t \":a:ja:/ k@:h lija: b@t \":a:v@t \" k@h@l k@h@lie: k@h@lhu n told (completive) a:pki: a:p@n a:p@n @p@n @ha:nk your (hon., fem. sing. obj) b@h@n b@hin b@hin b@hin b@hin sister 2020) present a human-in-theloop system for BLI in four low-resource languages, updating contextual embeddings with the help of annotations provided by a native speaker. Zhang et al. (2021) present CSCBLI, a method that uses a \"spring network\" to align non-isomorphic contextual embeddings, and interpolates them with static embeddings to estimate word similarities, showing superior results to other methods using contextual embeddings, notably BLISS (Patra et al., 2019). These approaches rely on parallel data or large monolingual corpora for good quality contextual em-beddings. However, for low-resource languages, contextual embeddings from both monolingual and multilingual models are known to be unreliable (Wu and Dredze, 2020).\nLater works show the failings of the above approaches in low-resource settings (Adams et al., 2017;Kuriyozov et al., 2020;Chimalamarri et al., 2020;Eder et al., 2021) and propose alternative training strategies such as joint training of static embeddings (Woller et al., 2021;Bafna et al., 2022), and multilingual embeddings from LSTM-based models (Wada et al., 2019). However, these works either address a higher resource range (>15M tokens), use bilingual lexicons as seeds, or show low scores (≈30 precision@5) for unsupervised BLI.\nIn general, there is a paucity of attention given to setups where there is a severe resource imbalance between the two languages of the BLI pair, despite this being a very typical real-world scenario." 
}, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Our method is intended for a closely related HRL (source) and LRL (target) pair, written in the same script, and given that we can train or already have a good quality monolingual MLM for the HRL. The main idea is that if we mask an unknown word in the LRL sentence, feed the masked LRL sentence to the HRL MLM, and ask the HRL MLM to propose candidates for the masked LRL word, the HRL MLM should have access to enough contextual cues due to shared vocabulary and syntax to propose meaningful HRL candidates for the masked word. This potentially gives us translation equivalence between the original LRL word and the best scoring proposed HRL candidate. We proceed in an iterative manner, growing the lexicon from equivalents gained from each processed sentence, and using learned equivalents in the lexicon to replace known LRL words with HRL equivalents to process future sentences.\nStarting with an empty HRL-LRL bilingual lexicon, we perform the following steps to update our lexicon iteratively, explained in further detail below, and shown in Algorithm 1: (i) we choose an input, consisting of an LRL sentence, and a source LRL word occurring in it, (ii) we replace known words in the input sentence by HRL equivalents using the current state of the lexicon, in order to make the sentence more HRL-like, (iii) the resulting sentence is passed to the HRL MLM to obtain HRL candidate suggestions for the masked LRL word, (iv) we use a reranking heuristic to choose the best HRL candidate, if any, and (v) we update the lexicon if we have found a new equivalent pair." }, { "figure_ref": [], "heading": "Choosing (sentence, word) pairs to process", "publication_ref": [], "table_ref": [], "text": "Intuitively, the chance of the HRL MLM giving accurate translation equivalents for the (LRL) word is higher if the LRL sentence is more easily \"comprehensible\" to the HRL MLM, or if the LRL sentence already has several HRL words in it. Therefore, we aim to first process words in sentences that have a higher concentration of known words, where known words are either shared vocabulary or words that are already in the current state of our lexicon. These words are replaced by their HRL equivalents before the sentence is passed to the HRL MLM. 6 We maintain a priority list of (sentence, word) pairs based on the percentage of known words in the sentence and update the list after every batch of sentences based on new learned translations. 7" }, { "figure_ref": [], "heading": "Reranking", "publication_ref": [ "b7" ], "table_ref": [], "text": "The HRL MLM may propose valid candidates for the masked token that are not translation equivalents to the LRL source word; typically, there may be a wide range of reasonable possibilities for any masked word. Therefore, we rerank the returned HRL candidates based on orthographic closeness to the masked LRL word. Our use of orthographic closeness as the basis of our rerankers is motivated by the high percentage of orthographically similar cognates, borrowings, and spelling variants in the vocabulary of these languages with respect to each other (shown by Jha (2019) for Maithili and Hindi). Note that minimum normalized edit distance as a stand-alone approach, i.e. positing the orthographically closest HRL word as a translation equivalent for any LRL word, performs badly for various reasons (Bafna et al., 2022). We compare two rerankers, Basic and Rulebook." 
}, { "figure_ref": [], "heading": "Basic", "publication_ref": [], "table_ref": [], "text": "In the Basic approach, we simply use normalized orthographic similarity (computed using Levenshtein distance) between the candidate and the original masked word. This reranker considers all character substitutions equally costly." }, { "figure_ref": [], "heading": "Rulebook", "publication_ref": [], "table_ref": [], "text": "We may see from discovered cognate pairs that certain character transformations are very common (corresponding to regular sound change, or systematic differences in orthographic conventions), and so should be less costly than others. Similarly, different language pairs may have different preferences for cheap or costly character substitutions.\nIn the Rulebook variant, we use Bafna et al.'s (2022) iterative expectation-maximization (EM) method to learn a custom edit-distance matrix for 6 We mask whole words and accept single token responses (as the default) from the MLM. In practice, this does not pose a big problem, since the HRL MLM tokenizer has a large vocabulary size (52000): 86% and 81% in the Hindi side of the Bhojpuri and Magahi silver lexicons respectively are preserved as single tokens. We leave it to future work to handle multi-word terms.\n7 Specifically, the priority list is created from the (sentence, unk_word) pairs by first sorting them by the number of times each instance has previously been processed, and then by the percentage of other unknown words in the sentence, both in ascending order.\nthe source and target character sets. This custom edit-distance matrix is used as an orthographic reranker for our approach (lines 6-9 in Algorithm 1). The idea of this reranker is to iteratively optimize character substitution probabilities from the source to target character set in \"known\", or hypothesized, cognate pairs, while simultaneously learning new cognate pairs by reranking candidates suggested by the HRL MLM, using the current state of the substitution probabilities. Setup Let χ s and χ t represent the sets of characters on the source (LRL) and target (HRL) sides, respectively. We define a scoring function, S(c i , c j ) that provides a score for replacing a character c i ∈ χ s with c j ∈ χ t . Insertions and deletions are considered special cases of replacement, where a null character is introduced or replaced. For a given source set character, S is modelled as a transformation probability distribution over χ t . Initially, the probabilities in S are assigned to favor self-transformations (typically set to 0.5), and the remaining probability mass is evenly distributed among other characters. At any given iteration, we can calculate the score for a source-target character substitution, viewed as a conditional probability:\nS(c i , c j ) = C(c i , c j ) T (c i )(1)\nHere, C(a, b) is the number of times we have seen a → b, and T (a) is the total number of times we have seen a on the source side.\nEM Steps for Rulebook. 1) Expectation step. Given a list of top k candidates for a given source word s: for each candidate pair (s, t), we find Ops(s, t), which is the minimal list of the operations we need to perform to get from s to t. Each member in Ops is of the type (c i , c j ). Note that we also want to estimate S(a, a) ∀ a, and so we also use a \"retain\" operation, for characters that remain the same. The score for the pair (s, t) is computed as:\nζ(s, t) = - (a,b)∈Ops log(S(a, b)),(2)\nwhere the lower the ζ the more probable it is that a pair is equivalent. 
For a given s, we can then always find the word that is the most probable equivalent as t best = argmin ti̸ =s (ζ(s, t i )) (line 6 in Algorithm 1).\nWe then add (s, t best ) to our learned lexicon (line 8).\n2) Maximization step. We update the model parameters based on the newly identified equivalents in the previous step (line 9 in Algorithm 1). This is done by increasing the counts of all observed edit distance operations:\nC(a, b) := C(a, b) + 1 ∀(a, b) ∈ Ops(s, t) T (a) := T (a) + 1 ∀(a, b) ∈ Ops(s, t)\nWe disallow updates for s = t (i.e. identical words) in the training phase, to mitigate exploding selftransform probabilities." }, { "figure_ref": [], "heading": "Multiple passes over the input", "publication_ref": [], "table_ref": [], "text": "Once all (sentence, word) pairs have been processed once (or n times), we reprocess them (for an (n + 1) th pass) in the hope of gaining more accurate translations, as previously unknown neighbour words may have been learned in the meantime." }, { "figure_ref": [], "heading": "Hyperparameters", "publication_ref": [], "table_ref": [], "text": "We use a minimum normalized orthographic similarity threshold of 0.5 (see line 7 of Algorithm 1). This threshold was heuristically chosen. We set the maximum number of passes to 3, meaning that the algorithm terminates if all unknown words have been processed 3 times. We found in our initial experiments that the algorithm yields very few or no new words in further passes. This also serves as a terminating condition (line 1 in Algorithm 1)." }, { "figure_ref": [], "heading": "Examples", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We give examples of inputs and outputs of our method in Table 2, illustrating the outputs for Basic. As we see, the Hindi BERT is fairly good at giving reasonable Hindi candidates for the masked Bhojpuri, although, naturally, these candidates may not be equivalents of the masked word, as shown for the top candidates in rows 2 and 3. Applying reranking based on orthographic similarity solves this problem to a large extent, serving to identify translation equivalents from among given candidates. We also see an example (row 3) where replacing a Bhojpuri word with its Hindi equivalent in the input sentence helps the Hindi MLM to produce more reasonable Hindi candidates for the masked word.\nAlgorithm 1: Basic and Rulebook " }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b47", "b48", "b46", "b45", "b42", "b43" ], "table_ref": [ "tab_7" ], "text": "Monolingual Data We use monolingual data from the LoResMT shared task (Ojha et al., 2020) for Bhojpuri and Magahi, and the VarDial 2018 shared task data (Zampieri et al., 2018) for Bhojpuri, Awadhi and Braj. For Bhojpuri, we additionally use the BHLTR project (Ojha, 2019). We use the BMM corpus (Mundotiya et al., 2021) and the Wordschatz Leipzig corpus (Goldhahn et al., 2012) for Maithili. For Marathi and Nepali, we use large-scale monolingual corpora made available by IndicCorp (Kakwani et al., 2020) and (Lamsal, 2020) respectively. See Table 4 for monolingual data sizes." 
}, { "figure_ref": [], "heading": "Model", "publication_ref": [ "b22", "b21", "b8", "b39", "b14", "b46", "b43" ], "table_ref": [], "text": "We use the MuRIL model and tokenizer (Khanuja et al., 2021) as our HRL MLM for Bhojpuri, Magahi, Awadhi, Maithili and Braj; we use the Hindi BERT and associated tokenizer given by Joshi (2023) for Marathi and Nepali.8 \nBaselines We compare our approaches against semi-supervised VecMap approach with CSLS (Artetxe et al., 2018b,a), using identical words as seeds, with 300-dimensional fastText embed-dings (Bojanowski et al., 2017). 9 We also choose CSCBLI (Zhang et al., 2021) as a representative of methods using contextual representations, hypothesizing that the ensemble of static and contextual embeddings may perform better than VecMap. Finally, we report results for a trivial baseline ID, the identity function, representing vocabulary overlap.\nEvaluation Data Given the lack of gold lexicons between Hindi and our LRLs, we create silver lexicons instead from parallel data. We use FastAlign with GDFA (Dyer et al., 2013) to extract word alignments from existing gold Bhojpuri-Hindi and Magahi-Hindi parallel data (≈500 sentences per language) (Ojha, 2019). 10 We use the two best candidates per source word in the resulting silver lexicons as valid translations. 11 This yields 2,469 and 3,359 entries for Bhojpuri and Magahi respectively. We report the manually evaluated quality of the silver lexicons in the following paragraph. For Marathi and Nepali, we use existing gold parallel lexicons against Hindi, taken from IndoWordNet (Kakwani et al., 2020), manually aligned to the Hindi Word-Net. We obtain lexicons with 35,000 and 22,000 entries for Marathi and Nepali respectively." }, { "figure_ref": [], "heading": "Manual Evaluation of Silver Lexicons", "publication_ref": [], "table_ref": [ "tab_6", "tab_6" ], "text": "We perform a manual evaluation of our silver lexicons, in order to judge the credibility of the reported results for our methods for Bhojpuri and Magahi. We manually examine 150 entries in the automatically created Bhojpuri lexicon, and find that 90% of entries are satisfactory, i.e. they list accurate Hindi equivalents of Bhojpuri words. We observe a few general problems with the lexicon, and list representative examples in Table 3:\n• Missing common synonyms, e.g. in row 1 of Table 3. This kind of error results in underestimation of precision scores for all approaches.\n• Problems with correctly equating inflections, missing feminine inflections, e.g. row 2. A natural problem arising from differences in morphological systems of the source and target language is that inflected verbs can be difficult to match cross-lingually. This results in missing equivalents of a given inflected form. For example, while genderless verbs in Bhojpuri should ideally be listed with the corresponding masculine and feminine verbs in Hindi, we observe that they are often missing one gender inflection, usually the feminine one. Similarly, not all possible target inflectional variants of a source inflection are listed for each verb entry.\n• Multi-word equivalences lead to errors. For example, in row 3, the single-word Bhojpuri source verb has a noun-light verb complex equivalent in Hindi (consisting of two words, literally meaning \"sharing do\"), and the silver lexicon lists the light verb (\"do\") as the target translation. 
This is also observed in the case of other verb equivalences, where one of the languages using multiple tokens to express an inflection, leading to incorrect matches in the silver lexicon.\n• Miscellaneous errors. The lexicon contains some entirely incorrect equivalents (8.76%), due to word alignment errors, e.g. row 4.\nNote that we only mark entries as wrong if the listed equivalents are inaccurate, and so faults such as missing synonyms and inflections, which affected 7.33% of the sample we examined, are not represented in the error percentage reported." }, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [], "table_ref": [ "tab_10" ], "text": "We report precision@2 and accuracy on nonidentical predictions (NIA) in Table 5. 12 NIA is calculated by taking all non-identical predictions in the top 2 predictions per word, and reporting the percentage of those predictions that were marked correct by the evaluation lexicons. We report this metric because precision@2 may be inflated by \"easy\" identical word predictions.\nBaselines Table 6 provides examples of the performance of these approaches. VecMap performs well for Marathi: we provide examples where it predicts correct equivalents for rare words (row 8), noncognates (row 9), as well as frequent words (row 7). However, for Nepali, Bhojpuri, and Magahi, both VecMap and CSCBLI make seemingly random wrong predictions on almost all words (rows 1, 2, and 6), with near-zero performance, probably due to the low quality of static and contextual embeddings for the LRLs. CSCBLI also fails for Marathi, indicating that the Marathi contextual embeddings may still be of poor quality or that the approach may not generalize well to untested language pairs. While the failure of these baselines for Nepali is surprising, it can perhaps be explained by the fact that Nepali has about five times less data than Marathi, and less lexical overlap with Hindi.\nOur methods Our Basic and Rulebook approaches outperform ID by more than 20 accuracy points for all languages. Rulebook gains very little, if at all, over Basic, but Rulebook has an edge when it comes to predicting cognates with common sound correspondences (see row 5). We observe that these approaches are reasonably successful for Bhojpuri and Magahi on cognate verbs and common nouns, but fail on syntactic words and postpositions (row 3 for Basic), and may be confused by unrelated words with chance orthographic similarity even for common words (row 5 for Basic). Furthermore, these approaches often predict incorrect inflections of the correct verbal/noun stem (we count these predictions as wrong), as in rows 1 and 4.\nAlthough Basic and Rulebook perform with high accuracy for Marathi and Nepali, their NIA is extremely low, indicating that they serve mainly to identify or \"sieve\" out vocabulary overlap. We see that the candidates proposed by the Hindi MLM are often in fact Marathi/Nepali words, indicating that it has seen some Marathi/Nepali data (due to corpus contamination and/or code-mixing) and is capable of performing mask-filling for Marathi/Nepali." }, { "figure_ref": [], "heading": "Manual Evaluation of Generated Lexicons", "publication_ref": [], "table_ref": [], "text": "We manually examine errors in the non-identical predictions of Basic, looking at 60 randomly chosen non-identical Bhojpuri predictions. 13 We find that 31.7% of predictions are correctly inflected equivalents, as opposed to 18.1% given by the NIA quantitative evaluation. 
The underestimation is caused by missing synonyms in the silver lexicon. Furthermore, 25% are incorrectly inflected cognates of the source word, and the rest are unrelated words." }, { "figure_ref": [], "heading": "How useful is reranking by orthographic distance?", "publication_ref": [], "table_ref": [], "text": "We also ran the Basic approach without reranking with orthographic distance, i.e. we simply pick the top candidate suggested by the HRL mask-filling model as an equivalent. This approach is clearly worse than the standard Basic approach (with reranking), performing at only 3.03% NIA for Bhojpuri and 4.04% NIA for Magahi (approximately -15 and -14 percentage points compared to Basic for Bhojpuri and Magahi respectively, as shown in Table 5). However, this approach can still identify and capture identical vocabulary.
Variants. We experimented with minor variants of the Rulebook update mechanisms to see if they result in boosts to performance. We tried disallowing updates for the null character, since we found that a large probability mass iteratively accumulates in the null character (or for deletion). We also incorporated a change in the original algorithm, whereby we made updates to the custom edit distance matrix based on the optimal list of substitutions as per the current state of the edit distance matrix, rather than choosing a minimal-length path at random (with each substitution counted as length 1) from the source to the target word when several exist. However, these variants result in very minor improvements or even slight degradations to performance, and we do not report these results." }, { "figure_ref": [], "heading": "Details of released lexicons", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "We make our bilingual lexicons publicly available under a CC BY-NC 4.0 license for Bhojpuri, Magahi, Awadhi, Braj, and Maithili, and also release our created silver evaluation lexicons for Bhojpuri and Magahi under the same license. These are the first publicly available bilingual lexicons for all these languages except Bhojpuri, to the best of our knowledge. The sizes of the released lexicons for each target language are provided in Table 4. Note that while we also release our generated lexicons for Marathi and Nepali, large high-quality gold bilingual lexicons already exist for these languages (see Section 5) and should be used instead of ours; we are mainly interested in creating resources for the low-resource languages." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We introduce a novel method for unsupervised BLI between a related LRL and HRL, which only requires a good quality MLM for the HRL. This addresses an important gap in the existing literature, which often relies on good quality embeddings for both languages. Our method shows superior performance on two low-resource languages from the Indic continuum, against near-zero performances of existing state-of-the-art methods. 
We perform control experiments for two more distantly related Indic languages, and release resulting bilingual lexicons for five truly low-resource Indic languages." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b12" ], "table_ref": [ "tab_10", "tab_2" ], "text": "The applicability of our method is restricted to low-resource languages that are related to a high-resource language. As the Basic and Rulebook methods are directly dependent on orthographic distance between translation pairs, they are only useful for identifying cognate equivalents, borrowings, or alternate spellings in the source and target language. We also clarify that our method is not intended for mid-to-high-resourced language pairs (such as Marathi-Hindi), where canonical state-of-the-art methods such as VecMap work more robustly, specifically on non-identical word equivalents. Our method therefore has a specific (although important) target scenario, i.e. it is a simple method to build bilingual lexicons for severely under-resourced languages leveraging the resources of a closely related high-resource language, given that state-of-the-art methods fail in these settings. Note that we also only deal in the entirely unsupervised scenario in keeping with typical conditions for our target languages (see Section 2), and leave it to future work to improve these methods with a little supervision from bilingual lexicons, possibly obtained from parallel data. Another limitation of our work is that we were not able to provide true native speaker evaluation for the resulting target language lexicons, instead providing evaluation by the first author (a Hindi native speaker) relying on knowledge of shared cognates, the morphology of the target language, and inflection tables. We provide examples in Table 6 and Table 2, and release the automatically created as well as silver lexicons. Finally, our method is only capable of providing single-token (HRL) matches to the masked (LRL) whole word. As discussed in Section 4, this problem does not affect the large majority of cases. We leave it to future work to extend our idea to handle multi-token words and multi-word expressions using, for example, span-filling language models (Donahue et al., 2020)." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was partly funded by the last two authors' chairs in the PRAIRIE institute funded by the French national agency ANR as part of the \"Investissements d'avenir\" programme under the reference ANR-19-P3IA-0001. The first and second authors are supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -Project-ID 232722074-SFB 1102. The second and third authors are supported by the EU project LT-Bridge (GA952194)." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [ "b25" ], "table_ref": [], "text": "Our work is driven by the aim to boost NLP for severely under-resourced languages of the Indic language belt, as well as contribute a method that may be relevant to other language families with a similar linguistic and resource setup. Our method relies on the predictions of language models for the high-resource language and is therefore fallible to general ethical issues with such models, including caste, religion, and gender biases shown to be exhibited by such models (Malik et al., 2022).
Table 7: P@{1,3,5} for bho and mag. Table 8: NIA@{1,3,5} for bho and mag." }, { "figure_ref": [], "heading": "A. Additional Results", "publication_ref": [], "table_ref": [], "text": "We report P@1,3,5 in Table 7 and NIA@1,3,5 in Table 8. We see that both the Basic and Rulebook approaches do not benefit from considering more than 3 best answers. In general, we see the same relative trend as in Table 5." } ]
Most existing approaches for unsupervised bilingual lexicon induction (BLI) depend on good quality static or contextual embeddings requiring large monolingual corpora for both languages. However, unsupervised BLI is most likely to be useful for low-resource languages (LRLs), where large datasets are not available. Often we are interested in building bilingual resources for LRLs against related high-resource languages (HRLs), resulting in severely imbalanced data settings for BLI. We first show that state-of-the-art BLI methods in the literature exhibit near-zero performance for severely data-imbalanced language pairs, indicating that these settings require more robust techniques. We then present a new method for unsupervised BLI between a related LRL and HRL that only requires inference on a masked language model of the HRL, and demonstrate its effectiveness on truly low-resource languages Bhojpuri and Magahi (with <5M monolingual tokens each), against Hindi. We further present experiments on (mid-resource) Marathi and Nepali to compare approach performances by resource range, and release our resulting lexicons for five low-resource Indic languages: Bhojpuri, Magahi, Awadhi, Braj, and Maithili, against Hindi.
When your Cousin has the Right Connections: Unsupervised Bilingual Lexicon Induction for Related Data-Imbalanced Languages
[ { "figure_caption": "", "figure_data": "forBLI. Gonen et al. (2020) induce word-level transla-tions by directly prompting mBERT (Devlin et al.,", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Examples of cognates. Since the Devanagari script is phonetically transparent, phonetic similarity is visible both in IPA and in Devanagari (not shown).", "figure_data": "Input and Output Examples for Bhojpuri1 Input Mask Correct Predsउ ास joy 'May your pilgrimage be filled with joy and spirituality.' और and अ ाि का से spirituality-with [MASK] [MASK] आपके your तीथर् यात्रा pilgrimage आनं दमय enjoyable भरल 'filled' भरी पिरपू णर् , replete, भरी, filled, यु , containing, भरपू र, filled-up, स prosperousहो। may-be2 Input Mask Correct Predsप्रधानमं त्री Prime Minister 'The Prime Minister praised the discussion and inputs made in the conference.' स े लन conference में in भईल occurred िवचार-िवमशर् discussion अउर and इनपु ट input बतवला के telling-of तारीफ praise [MASK] [MASK] कइलन 'did' की, करी करे , do-hypothetical, करी, did-fem, की, did-fem, िकया, did-masc, *करे ल -। .3 Input Mask Correct Preds New input हमनी के उन [MASK] पर बहुते गवर् बा । हमनी के I/We उ those [MASK] [MASK] पर on बहुते lots of गवर् pride 'I/We was/were very proud of those people.' बा was । . लोगन 'people' लोग, लोगों बात, thing, काम, work, लड़की, िदन, day, औरत woman girl, Preds सब, all of (them), लोग, people, लोगों, people, िदन, day, सभी all of (them)", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Examples for our method. Input: Target language text with an unknown masked word, with an English gloss.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Types and examples of faults in the silver lexicon.", "figure_data": "Target #Tokens Lexicon Silver lexiconlang.sizesizeawa0.17M10462-bho3.09M219832469bra0.33M10760-mag3.16M307843359mai0.16M12069-mar*551.00M36929-nep*110.00M22037-", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Monolingual data sizes in tokens, and sizes of our released lexicons (created using our method), and released silver lexicons (from parallel data) for Bhojpuri and Magahi. 
*High-quality gold bilingual lexicons already exist for these languages.", "figure_data": "", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Performance of the methods as measured by Precision@2 (P@2) and the accuracy of non-identical predictions (NIA).", "figure_data": "bhomagmarnepMethodP@2 bhoNIA P@2 magNIA P@2 marNIA P@2 NIA nepBaselines ID Method37.3 [email protected] NIA P@2 39.9 NIA P@2 0.027.5 NIA P@2 NIA 0.0 21.20VecMap+CSLS CSCBLI Basic Baselines ID Ours VecMap+CSLS CSCBLI Rulebook Ours Basic Rulebook0.0 0.0 61.0 18.1 0.0 0.0 37.3 0.0 0.0 0.0 0.0 0.0 61.5 15.1 61.0 18.1 61.5 15.1 65.4 65.2 18.8 1.2 0.6 2.0 0.5 39.9 0.0 27.5 1.2 0.6 42.4 26.7 42.4 26.7 0.0 0.0 80.9 2.8 0.0 21.2 0.0 2.0 0.5 0.0 0.0 0.0 65.4 17.4 80.6 1.72 65.2 18.8 80.9 2.8 87.6 17.4 80.6 1.72 87.60.0 0.0 87.6 0.0 0 0.0 87.6 8.2 6.00.0 0.0 8.2 6.0Table 5: Performance of the methods, given by Precision@2 (P@2) and accuracy of non-identicalpredictions (NIA).# Lang WordCorrect BasicRulebookVecMapCSCBLI1 2 3 4 5 6 7 8 9bho mag marदे खत (sees) िमलत (meets) इहा ँ (here) डालऽ (puts) सबाल (question) चोरा (steal) थं डी (cold) िकमान (at least) अनादर (disrespect) अपमान दे खता िमलते यहा ँ डालती सवाल चु रा ठं ड ू नतमदे ख † िमलते इितहास (history) यहा ँ दे ख † िमल † डाले † डाल † बोल (speak) सवाल चोरी †(theft) चोर †(thief) िदहाड़े (day) अटपटे (weird) मं त्रमु (spellbound) गा (sing) गा (sing) लहरी (wavy) नजारा (view) तु ने * बहुतों (many) िवधाियका* िवधाियका* िदहाड़ी (day) थं डी थं डी ठं ड ोित (light) िकमान िकमान ू नतम swift अनादर अनादर अपमान चामु ं डे री (place name)", "figure_id": "tab_8", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Predictions made by different approaches. Meanings are provided for the first occurrence of the word. * indicates a non-word, †indicates a prediction in the wrong inflectional/derivational form of the target.", "figure_data": "335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 351 352 353 354 355 356 357 350Limitations The applicability of our method is restricted to low-resource languages that are related to a high-resource language and show non-trivial lexical overlap with that language. As the Ba-sic and Rulebook method are directly depen-dant on orthographic distance between trans-lation pairs, they are also only useful for identi-fying cognate equivalents or alternate spellings in the source and target language. Further, we make it clear that our method is not intended for mid-to-high resourced language pairs (such as Marathi-Hindi), where canonical state-of-the-art methods such as VecMap indeed work more robustly, specifically on non-identical vised scenario, and leave it to future work to that we also only deal in the entirely unsuper-the-art methods fail in these settings. Note high-resource language, given that state-of-leveraging the resources of a closely related icons for severely under-resourced languages i.e. it is a simple method to build bilingual lex-specific (although important) target scenario, word equivalents. Our method therefore has afrom parallel data. Another limitation of our work is that we were not able to provide true native speaker evalua-tion for the resulting target language lexicons, instead providing evaluation by a Hindi native speaker relying on knowledge of shared cog-nates, the morphology of the target language, and inflection tables. We provide examples in Table 2 and Table 4, and release the automati-cally created as well as silver lexicons. 
Finally, our method is only capable of providing single-token matches to the (masked) whole word. This is not a big problem, since the HRL MLM tokenizer has a large vocabulary size (52000) and is therefore likely to preserve most HRL words as single tokens; however, we leave it to future work to handle multi-word terms.", "figure_id": "tab_9", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Predictions made by different approaches. Meanings are provided for the first occurrence of the word. * indicates a non-word and † a prediction in the wrong inflectional/derivational form of the target.", "figure_data": "", "figure_id": "tab_10", "figure_label": "6", "figure_type": "table" } ]
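To make the masked-prediction idea in the input/output examples above concrete, the following is a minimal sketch (not the authors' implementation) of a Basic-style lookup: the unknown LRL word is replaced by a mask token in its sentence, a Hindi masked language model proposes single-token fillers, and the candidates can then be re-ranked by orthographic distance to the masked word. The model checkpoint and the ranking rule are illustrative assumptions.

```python
# Sketch of masked-LM-based translation induction with orthographic re-ranking.
# The checkpoint name is an assumed choice of HRL MLM; the input sentence must
# contain the model's own mask token (e.g. "[MASK]").
from transformers import pipeline

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

fill = pipeline("fill-mask", model="google/muril-base-cased")  # assumed HRL MLM

def translate_in_context(sentence_with_mask: str, lrl_word: str, top_k: int = 10):
    """Return HRL candidates for the masked LRL word, preferring orthographically
    close candidates and breaking ties by MLM probability."""
    candidates = fill(sentence_with_mask, top_k=top_k)
    scored = [(c["token_str"].strip(), c["score"],
               edit_distance(lrl_word, c["token_str"].strip())) for c in candidates]
    return sorted(scored, key=lambda t: (t[2], -t[1]))
```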
Niyati Bafna; Cristina España-Bonet; Josef Van Genabith; Benoît Sagot; Rachel Bawden
[ { "authors": "", "journal": "Bibliographical References", "ref_id": "b0", "title": "", "year": "" }, { "authors": "Oliver Adams; Adam Makarucha; Graham Neubig; Steven Bird; Trevor Cohn", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Cross-Lingual Word Embeddings for Low-Resource Language Modeling", "year": "2017" }, { "authors": "Mikel Artetxe; Gorka Labaka; Eneko Agirre", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Learning Principled Bilingual Mappings of Word Embeddings While Preserving Monolingual Invariance", "year": "2016" }, { "authors": "Mikel Artetxe; Gorka Labaka; Eneko Agirre", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Learning Bilingual Word Embeddings with (Almost) No Bilingual Data", "year": "2017" }, { "authors": "Mikel Artetxe; Gorka Labaka; Eneko Agirre", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "A Robust Self-Learning Method for Fully Unsupervised Cross-Lingual Mappings of Word Embeddings", "year": "2018" }, { "authors": "Mikel Artetxe; Gorka Labaka; Eneko Agirre", "journal": "", "ref_id": "b5", "title": "Generalizing and Improving Bilingual Word Embedding Mappings with a Multi-Step Framework of Linear Transformations", "year": "2018" }, { "authors": "Mikel Artetxe; Gorka Labaka; Eneko Agirre", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Bilingual Lexicon Induction through Unsupervised Machine Translation", "year": "2019" }, { "authors": "Niyati Bafna; Josef Van Genabith; Cristina España-Bonet; Zdeněk Žabokrtský", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Combining Noisy Semantic Signals with Orthographic Cues: Cognate Induction for the Indic Dialect Continuum", "year": "2022" }, { "authors": "Piotr Bojanowski; Edouard Grave; Armand Joulin; Tomas Mikolov", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b8", "title": "Enriching Word Vectors with Subword Information", "year": "2017" }, { "authors": "Santwana Chimalamarri; Dinkar Sitaram; Ashritha Jain", "journal": "ACM Transactions on Asian and Low-Resource Language Information Processing", "ref_id": "b9", "title": "Morphological Segmentation to Improve Crosslingual Word Embeddings for Low Resource Languages", "year": "2020" }, { "authors": "Alexis Conneau; Guillaume Lample; Marc'aurelio Ranzato; Ludovic Denoyer; Hervé Jégou", "journal": "", "ref_id": "b10", "title": "Word Translation Without Parallel Data", "year": "2018" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "Chris Donahue; Mina Lee; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Enabling language models to fill in the blanks", "year": "2020" }, { "authors": "Greg Durrett; Adam Pauls; Dan Klein", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Syntactic Transfer Using a Bilingual Lexicon", "year": "2012" }, { "authors": "Chris Dyer; Victor Chahuneau; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "A Simple, Fast, and Effective Reparameterization of IBM Model 2", "year": "2013" }, { "authors": "Tobias Eder; Viktor Hangya; Alexander 
Fraser", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Anchor-based Bilingual Word Embeddings for Low-Resource Languages", "year": "2021" }, { "authors": "Goran Glavaš; Ivan Vulić", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Non-Linear Instance-Based Cross-Lingual Mapping for Non-Isomorphic Embedding Spaces", "year": "2020" }, { "authors": "Shauli Hila Gonen; Yanai Ravfogel; Yoav Elazar; Goldberg", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "It's not Greek to mBERT: Inducing Word-Level Translations from Multilingual BERT", "year": "2020" }, { "authors": "Aria Haghighi; Percy Liang; Taylor Berg-Kirkpatrick; Dan Klein", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Learning Bilingual Lexicons from Monolingual Corpora", "year": "2008" }, { "authors": "Ann Irvine; Chris Callison-Burch", "journal": "", "ref_id": "b19", "title": "Combining Bilingual and Comparable Corpora for Low Resource Machine Translation", "year": "2013" }, { "authors": "Sanjay Kumar; Jha ", "journal": "International Journal of Innovations in TESOL and Applied Linguistics", "ref_id": "b20", "title": "Exploring the Degree of Similarities between Hindi and Maithili Words from Glottochronological Perspective", "year": "2019" }, { "authors": "Raviraj Joshi", "journal": "", "ref_id": "b21", "title": "L3Cube-HindBERT and De-vBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages", "year": "2023" }, { "authors": "Simran Khanuja; Diksha Bansal; Sarvesh Mehtani; Savya Khosla; Atreyee Dey; Balaji Gopalan; Dilip Kumar Margam; Pooja Aggarwal; Rajiv Teja Nagipogu; Shachi Dave; Shruti Gupta; Subhash Chandra Bose; Vish Gali; Partha Subramanian; Talukdar", "journal": "", "ref_id": "b22", "title": "MuRIL: Multilingual Representations for Indian Languages", "year": "2021" }, { "authors": "Elmurod Kuriyozov; Yerai Doval; Carlos Gómez-Rodríguez", "journal": "European Language Resources Association", "ref_id": "b23", "title": "Cross-Lingual Word Embeddings for Turkic Languages", "year": "2020" }, { "authors": "Rabindra Lamsal", "journal": "", "ref_id": "b24", "title": "A Large Scale Nepali Text Corpus", "year": "2020" }, { "authors": "Vijit Malik; Sunipa Dev; Akihiro Nishi; Nanyun Peng; Kai-Wei Chang", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Socially Aware Bias Measurements for Hindi Language Representations", "year": "2022" }, { "authors": "Tomas Mikolov; Kai Chen; Greg Corrado; Jeffrey Dean", "journal": "", "ref_id": "b26", "title": "Efficient Estimation of Word Representations in Vector Space", "year": "2013" }, { "authors": "Rajesh Kumar Mundotiya; Manish Kumar Singh; Rahul Kapur; Swasti Mishra; Anil Kumar Singh", "journal": "ACM Transactions on Asian and Low-Resource Language Information Processing", "ref_id": "b27", "title": "Linguistic Resources for Bhojpuri, Magahi, and Maithili: Statistics about Them, Their Similarity Estimates, and Baselines for Three Applications", "year": "2021" }, { "authors": "Barun Patra; Joel Ruben; Antony Moniz; Sarthak Garg; Matthew R Gormley; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Bilingual Lexicon Induction with Semisupervision in Non-Isometric Embedding Spaces", "year": "2019" }, { "authors": "Matthew E Peters; Mark Neumann; Mohit Iyyer; Matt Gardner; Christopher Clark; Kenton Lee; Luke Zettlemoyer", "journal": 
"Association for Computational Linguistics", "ref_id": "b29", "title": "Deep Contextualized Word Representations", "year": "2018" }, { "authors": "Sebastian Ruder; Ivan Vulić; Anders Søgaard", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b30", "title": "A Survey of Cross-Lingual Word Embedding Models", "year": "2019" }, { "authors": "Anders Søgaard; Sebastian Ruder; Ivan Vulić", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "On the Limitations of Unsupervised Bilingual Dictionary Induction", "year": "2018" }, { "authors": "Brian Thompson; Rebecca Knowles; Xuan Zhang; Huda Khayrallah; Kevin Duh; Philipp Koehn", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "HABLex: Human Annotated Bilingual Lexicons for Experiments in Machine Translation", "year": "2019" }, { "authors": "Ivan Vulić; Goran Glavaš; Roi Reichart; Anna Korhonen", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Do We Really Need Fully Unsupervised Cross-Lingual Embeddings?", "year": "2019" }, { "authors": "Takashi Wada; Tomoharu Iwata; Yuji Matsumoto", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Unsupervised Multilingual Word Embedding with Limited Resources using Neural Language Models", "year": "2019" }, { "authors": "Lisa Woller; Viktor Hangya; Alexander Fraser", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Do Not Neglect Related Languages: The Case of Low-Resource Occitan Cross-Lingual Word Embeddings", "year": "2021" }, { "authors": "Shijie Wu; Mark Dredze", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "Are All Languages Created Equal in Multilingual BERT", "year": "2020" }, { "authors": "Chao Xing; Dong Wang; Chao Liu; Yiye Lin", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Normalized Word Embedding and Orthogonal Transform for Bilingual Word Translation", "year": "2015" }, { "authors": "Michelle Yuan; Mozhi Zhang; Benjamin Van Durme; Leah Findlater; Jordan Boyd-Graber", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "Interactive Refinement of Cross-Lingual Word Embeddings", "year": "2020" }, { "authors": "Jinpeng Zhang; Baijun Ji; Nini Xiao; Xiangyu Duan; Min Zhang; Yangbin Shi; Weihua Luo", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Combining Static Word Embeddings and Contextual Representations for Bilingual Lexicon Induction", "year": "2021" }, { "authors": "Hai Zhao; Yan Song; Chunyu Kit; Guodong Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Cross Language Dependency Parsing using a Bilingual Lexicon", "year": "2009" }, { "authors": "", "journal": "", "ref_id": "b41", "title": "Language Resource References", "year": "" }, { "authors": "Dirk Goldhahn; Thomas Eckart; Uwe Quasthoff", "journal": "", "ref_id": "b42", "title": "Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages", "year": "2012" }, { "authors": "Divyanshu Kakwani; Anoop Kunchukuttan; Satish Golla; Gokul; Avik Bhattacharyya; Mitesh M Khapra; Pratyush Kumar", "journal": "", "ref_id": "b43", "title": "inlpsuite: Monolingual corpora, evaluation benchmarks and pre-trained multilingual language models for Indian languages", "year": "2020" }, { "authors": "Rabindra Lamsal", "journal": "IEEEdataport", "ref_id": "b44", 
"title": "A Large Scale Nepali Text Corpus", "year": "2020" }, { "authors": "Rajesh Mundotiya; Kumar; Manish Singh; Kumar; Rahul Kapur; Swasti Mishra; Anil Singh; Kumar", "journal": "", "ref_id": "b45", "title": "Linguistic Resources for Bhojpuri, Magahi, and Maithili: Statistics about Them, Their Similarity Estimates, and Baselines for Three Applications", "year": "2021" }, { "authors": "Atul Ojha; Kr", "journal": "", "ref_id": "b46", "title": "English-Bhojpuri SMT System: Insights from the Karaka Model", "year": "2019" }, { "authors": "Atul Ojha; Kr; Valentin Malykh; Alina Karakanta; Chao-Hong Liu", "journal": "", "ref_id": "b47", "title": "Findings of the LoResMT 2020 Shared Task on Zero-Shot for Low-Resource languages", "year": "2020" }, { "authors": "Marcos Zampieri; Preslav Nakov; Nikola Ljubešić; Jörg Tiedemann; Shervin Malmasi; Ahmed Ali", "journal": "", "ref_id": "b48", "title": "Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects", "year": "2018" } ]
[ { "formula_coordinates": [ 5, 137.72, 393.97, 153.21, 23.89 ], "formula_id": "formula_0", "formula_text": "S(c i , c j ) = C(c i , c j ) T (c i )(1)" }, { "formula_coordinates": [ 5, 111.38, 600.4, 179.55, 21.44 ], "formula_id": "formula_1", "formula_text": "ζ(s, t) = - (a,b)∈Ops log(S(a, b)),(2)" }, { "formula_coordinates": [ 5, 90.72, 65.05, 417.84, 707.48 ], "formula_id": "formula_2", "formula_text": "C(a, b) := C(a, b) + 1 ∀(a, b) ∈ Ops(s, t) T (a) := T (a) + 1 ∀(a, b) ∈ Ops(s, t)" } ]
2023-10-17
[ { "figure_ref": [ "fig_0", "fig_1", "fig_4" ], "heading": "", "publication_ref": [ "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b0", "b19", "b20", "b21", "b22", "b16", "b23", "b24", "b9", "b12" ], "table_ref": [], "text": "CLIP can perceive and understand text in images, even for irregular text with noise, rotation, and occlusion. CLIP is potentially a powerful scene text recognition expert.\nunlike the popularity of pre-trained VLMs in other crossmodal tasks, STR methods still tend to rely on backbones pre-trained on single-modality data [10], [11], [12], [13]. In this work, we show that VLM pre-trained on image-text pairs possess strong scene text perception abilities, making them superior choices as STR backbones.\nSTR methods generally struggle with irregular text like rotated, curved, blurred, or occluded text [14], [15]. However, irregular text is prevalent in real-life scenarios [16], [17], making it necessary for STR models to effectively handle these challenging cases. Interestingly, we observe that the VLM (e.g., CLIP [1]) can robustly perceive irregular text in natural images. In Figure 1, we put different text stickers on a natural image, use CLIP to classify it 1 , and visualize the attention of CLIP via Grad-CAM [20]. It is evident that CLIP pays high attention to the text sticker and accurately understands the meaning of the word, regardless of text variations 2 . CLIP is trained on massive natural images collected from the web and its text perception ability may come from the natural images containing scene texts [21]. Will CLIP perceive the text in common STR images [22], [23], [17], which are cropped from a natural image? Figure 2 presents the visualization results of CLIP-ViT-B/32 for STR images. Although the text in these STR images is occluded, curved, blurred, and rotated, CLIP can still perceive them. From Figure 1&2, we can see CLIP possesses an exceptional capability to perceive and comprehend various text in images. This is exactly the desired quality for a robust STR backbone.\nIn this work, we aim to leverage the text perception capability of CLIP for STR and build a strong baseline for future STR research with VLMs. To this end, we introduce CLIP4STR, a simple yet effective STR framework built upon CLIP. CLIP4STR consists of two encoder-decoder branches: the visual branch and the cross-modal branch. The image and text encoders inherit from CLIP, while the decoders employ the transformer decoder [24]. To enable the decoder to delve deep into word structures (dependency relationship among characters in a word), we incorporate the permuted sequence modeling technique proposed by PARSeq [25]. This allows the decoder to perform sequence modeling of characters in arbitrary orders without relying on specific sequence order assumptions. During training, the visual branch provides an initial prediction based on the visual feature, which is then refined by the cross-modal branch to address possible discrepancies between the visual feature and text semantics of the prediction. The cross-modal branch functions as a semanticaware spell checker, similar to modern STR methods [10], [13]. For inference, we design a dual predict-and-refine decoding scheme to fully utilize the capabilities of both encoderdecoder branches for improved character recognition.\nCLIP4STR achieves state-of-the-art performance on 11 commonly used STR benchmarks, encompassing both regular and irregular text. Additionally, we present a comprehensive empirical study on adapting CLIP to STR. 
We believe CLIP4STR provides a simple but strong baseline for future STR research with VLMs." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "A. Vision-Language Models and Its Application", "publication_ref": [ "b0", "b1", "b25", "b5", "b26", "b27", "b4", "b7", "b28", "b29", "b30", "b31", "b32", "b33", "b34", "b35", "b36", "b37", "b38", "b0", "b20", "b39", "b20" ], "table_ref": [], "text": "Large-scale pre-trained vision-language models learning under language supervision such as CLIP [1], ALIGN [2], and Florence [26] demonstrate excellent generalization abilities. This encourages researchers to transfer the knowledge of these pre-trained VLMs to different downstream tasks in a fine-tuning or zero-shot fashion. For instance, [6], [27], [28] tune CLIP on videos and make CLIP specialized in textvideo retrieval, DiffusionCLIP [5] introduces CLIP to zeroshot image manipulation, CLIPScore [8] uses CLIP to evaluate the quality of generated image captions, and [29], [30] use CLIP as the reward model during test time or training. The wide application of VLMs also facilitates the research on different pre-training models, e.g., ERNIE-ViLG [31], CoCa [32], OFA [33], DeCLIP [34], FILIP [35], and ALBEF [36]. Researchers also try to explore the power of scaling up the data, e.g., COYO-700M [37] and LAION-5B [38]. Generally, more data brings more power for large VLMs [39].\nVLMs pre-trained on large-scale image-text pairs possess many fascinating attributes [1], [21], [40]. For instance, some neurons in CLIP can perceive the visual and text signals corresponding to the same concept. [21] finds particular neurons in CLIP-RN50×4 respond to both photos of Spiderman and the text \"spider\" in an image. This also leads to Typographic Attacks, namely, VLMs focus on the text rather than natural objects in an image as shown in Figure 1. In this work, we leverage the text perception ability of multi-modal neurons and make CLIP specialize in scene text recognition. " }, { "figure_ref": [], "heading": "B. Scene Text Recognition", "publication_ref": [ "b40", "b41", "b42", "b43", "b10", "b44", "b45", "b46", "b47", "b48", "b9", "b49", "b50", "b12", "b51", "b52", "b24", "b12", "b53", "b50", "b35", "b54", "b55" ], "table_ref": [], "text": "Scene text recognition methods can be broadly divided into two categories: context-free and context-aware. Context-free STR methods only utilize the visual features of images, such as CTC-based [41] methods [42], [43], [44], [11], segmentationbased methods [45], [46], [47], and attention-based methods with an encoder-decoder mechanism [48], [49]. Since contextfree STR methods lack the understanding of text semantics, they are less robust against occluded or incomplete text. Context-aware STR methods are the mainstream approach now, leveraging text semantics to enhance recognition performance. For example, ABINet [10], LevOCR [50], MA-TRN [51], and TrOCR [13] incorporate an external language model to capture text semantics. Other methods achieve similar goals with built-in modules, such as RNN [52], transformer [53], [25].\nThe success of VLMs also spreads to the STR area. 
For example, TrOCR [13] adopts separately pre-trained language and vision models plus post-pretraining on STR data in an auto-regressive manner [54], MATRN [51] adopts a multi-modal fusion scheme popular in VLMs such as ALBEF [36] and ViLT [55], and the recent CLIPTER [56] enhances character recognition performance via the CLIP features of the global image. In this work, we aim to directly transform CLIP into a strong scene text reader and provide a baseline for further STR research with VLMs." }, { "figure_ref": [], "heading": "III. METHOD", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Preliminary", "publication_ref": [ "b0", "b24", "b23", "b56", "b57", "b58", "b9", "b24", "b23", "b59", "b60" ], "table_ref": [ "tab_2" ], "text": "Before illustrating the framework of CLIP4STR, we first introduce CLIP [1] and the permuted sequence modeling (PSM) technique proposed by PARSeq [25]. CLIP serves as the backbone, and the PSM is used to extract character information from the CLIP features.\n1) CLIP: CLIP consists of a text encoder and an image encoder. CLIP is pre-trained on 400 million image-text pairs using contrastive learning. The text and image features from CLIP are aligned in a joint image-text embedding space. i) The text encoder of CLIP is a transformer encoder [24], [57]. The text tokenizer is a lower-cased byte pair encoding (BPE) [58] with a vocabulary size of 49,152. The beginning and end of the text sequence are padded with [SOS] and [EOS] tokens, respectively. Initially, the CLIP text encoder only returns the feature of the [EOS] token, but in this work, we return features of all tokens. These features are further normalized and linearly projected into the joint image-text embedding space. ii) The image encoder of CLIP is a vision transformer (ViT) [59]. Given an image, ViT introduces a visual tokenizer (convolution) to convert non-overlapped image patches into a discrete sequence. A [CLASS] token is then prepended to the beginning of the image sequence. Initially, the CLIP image encoder only returns the feature of the [CLASS] token, but in this work, we return features of all tokens. These features are also normalized and linearly projected into the joint image-text embedding space. Generally, we use a ViT-B/16 (patch size 16×16) as the image encoder.\n2) Permuted sequence modeling: Traditionally, STR methods use a left-to-right or right-to-left order to model character sequences [10]. However, the characters in a word do not strictly follow such directional dependencies. For instance, to predict the letter \"o\" in the word \"model\", it is sufficient to consider only the context \"m_de\" rather than relying solely on the left-to-right context \"m_\" or the right-to-left context \"led_\". The dependencies between characters in a word can take various forms. To encourage the STR method to explore these structural relationships within words, PARSeq [25] introduces a permuted sequence modeling (PSM) technique. This technique uses a random attention mask M for attention operations [24] to generate random dependency relationships between the input context and the output. Table I illustrates three examples of mask M: the sequences with [B] and [E] represent the input context and output sequence, respectively, and an entry M_{i,j} = -∞ (negative infinity) indicates that the dependency of output i on input context j is removed. (a) AR mask, rows (y1, y2, y3, [E]) over columns ([B], y1, y2, y3): [0, -∞, -∞, -∞; 0, 0, -∞, -∞; 0, 0, 0, -∞; 0, 0, 0, 0]. (b) Cloze mask: [0, -∞, 0, 0; 0, 0, -∞, 0; 0, 0, 0, -∞; 0, 0, 0, 0]. (c) Random mask: [0, -∞, 0, 0; 0, -∞, -∞, -∞; 0, -∞, 0, -∞; 0, 0, 0, 0]. We will delve further into this mechanism in Section III-C.
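As a concrete illustration of the masks in Table I, the following is a small sketch (an illustrative re-implementation, not the official PARSeq/CLIP4STR code) of how permutation-dependent attention masks of this kind could be generated: given a permutation of character positions, each output position may attend only to the context positions that precede it in that permutation, plus the [B] token. The row for [E], which attends to the full context, is omitted for brevity.

```python
# Sketch of permutation-based attention masks in the spirit of Table I.
# Row i corresponds to output character y_i, column 0 to [B], and column j (>0)
# to context token y_j; 0 keeps a dependency, -inf removes it.
import torch

def permutation_mask(perm: torch.Tensor) -> torch.Tensor:
    """Build an (N x N+1) context-attention mask from a permutation of 0..N-1."""
    n = perm.numel()
    mask = torch.full((n, n + 1), float("-inf"))
    mask[:, 0] = 0.0                      # every output may attend to [B]
    order = torch.empty(n, dtype=torch.long)
    order[perm] = torch.arange(n)         # step at which each position is generated
    for i in range(n):
        for j in range(n):
            if order[j] < order[i]:       # j is generated before i in this order
                mask[i, j + 1] = 0.0
    return mask

# A left-to-right permutation reproduces the AR mask of Table I(a);
# a shuffled permutation yields a random-style mask as in Table I(c).
print(permutation_mask(torch.arange(3)))
print(permutation_mask(torch.tensor([2, 0, 1])))
```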
" }, { "figure_ref": [ "fig_2" ], "heading": "B. Encoder", "publication_ref": [ "b61" ], "table_ref": [], "text": "The framework of CLIP4STR is illustrated in Figure 3. CLIP4STR employs a dual encoder-decoder design, consisting of a visual branch and a cross-modal branch. The text and image encoders utilize the architectures and pre-trained weights from CLIP. The visual branch generates an initial prediction based on the visual features extracted by the image encoder. Subsequently, the cross-modal branch refines the initial prediction by addressing the discrepancy between the visual features and the textual semantics of the prediction. Since the image and text features are aligned in a joint image-text embedding space during pre-training, it becomes easy to identify this discrepancy. The cross-modal branch acts as a semantic-aware spell checker.\nThe text encoder is partially frozen. This freezing operation retains the learned text understanding ability of the language model and reduces training costs. It is a common practice in transfer learning of large language models [62]. In contrast, the visual branch is fully trainable due to the domain gap between STR data (cropped word images) and CLIP training data (collected from the web, often natural images). Additionally, we block the gradient flow from the cross-modal decoder to the visual encoder to enable autonomous learning of the visual branch, resulting in improved refined cross-modal predictions.\nFor the text encoder g(•) and the image encoder h(•), given the input text t and image x, the text, image, and cross-modal features are computed as:\nF_t = g(t) ∈ R^{L_t × D}, (1)\nF_i = h(x) ∈ R^{L_i × D}, (2)\nF_c = [F_i^T F_t^T]^T ∈ R^{L_c × D}, (3)\nwhere L_t represents the text sequence length, L_i is the sequence length of image tokens, D denotes the dimension of the joint image-text embedding space, and the cross-modal sequence length L_c = L_i + L_t." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "C. Decoder", "publication_ref": [ "b23", "b9", "b24", "b24", "b9", "b24" ], "table_ref": [ "tab_2" ], "text": "The decoder aims to extract the character information from the visual feature F_i or the cross-modal feature F_c. The decoder framework is shown in Figure 4. It adopts the design of the transformer decoder [24] plus the PSM technique mentioned in Section III-A2, enabling a predicted character to have arbitrary dependencies on the input context during training.\nThe visual and cross-modal decoders have the same architecture but differ in the input. They receive the following inputs: a learnable position query p ∈ R^{N × D}, an input context c ∈ R^{N × D}, and a randomly generated attention mask M ∈ R^{N × N}, where N represents the length of the character sequence. The decoder outputs the prediction y ∈ R^{N × C}, where C is the number of character classes. The decoding stage can be denoted as\ny = DEC(p, c, M, F). (4)\nThe first Multi-Head Attention (MHA) in Figure 4 performs context-position attention:\nm_1 = softmax(p c^T / √D + M) c + p. (5)\nThe second MHA focuses on feature-position attention:\nm_2 = softmax(m_1 F^T / √D) F + m_1. (6)\nFor simplicity, we ignore the input and output linear transformations in the attention operations of Eq. (5) and Eq. (6). Then m_2 ∈ R^{N × D} is used for the final prediction y:\ny = Linear(MLP(m_2) + m_2). (7)\nDuring training, the output of the decoder depends on the input context in an arbitrary manner; a minimal sketch of one such decoder block is given below.
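As referenced above, the following is a minimal PyTorch-style sketch of such a decoder block following Eqs. (5)-(7): a context-position attention with the additive mask M, a feature-position attention over the encoder features, and an MLP plus linear head. Layer normalization, dropout, multi-head splitting, and the per-attention projections are simplified away, as in the equations; the module and variable names are illustrative rather than the released implementation.

```python
# Illustrative decoder block for Eqs. (5)-(7); not the official code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class STRDecoderBlock(nn.Module):
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))
        self.head = nn.Linear(dim, num_classes)
        self.scale = dim ** 0.5

    def forward(self, p, c, mask, feat):
        # Eq. (5): m1 = softmax(p c^T / sqrt(D) + M) c + p
        attn1 = F.softmax(p @ c.transpose(-1, -2) / self.scale + mask, dim=-1)
        m1 = attn1 @ c + p
        # Eq. (6): m2 = softmax(m1 F^T / sqrt(D)) F + m1
        attn2 = F.softmax(m1 @ feat.transpose(-1, -2) / self.scale, dim=-1)
        m2 = attn2 @ feat + m1
        # Eq. (7): y = Linear(MLP(m2) + m2)
        return self.head(self.mlp(m2) + m2)

# Toy shapes: N=26 positions, D=512, C=94 character classes, 197 feature tokens.
dec = STRDecoderBlock(dim=512, num_classes=94)
y = dec(torch.randn(26, 512), torch.randn(26, 512), torch.zeros(26, 26), torch.randn(197, 512))
print(y.shape)  # torch.Size([26, 94])
```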
This encourages the decoder to analyze the word structure beyond the traditional left-to-right or right-to-left sequence modeling assumptions [10]. The inclusion of a random attention mask M in Eq.( 5) enables this capability [25]. Table I presents examples of generated attention masks, including a left-to-right autoregressive (AR) mask, a cloze mask, and a random mask. Following PARSeq [25], we employ K = 6 masks per input context during training. The first two masks are left-to-right and right-to-left masks, and others are randomly generated.\nCLIP4STR is optimized to minimize the sum of crossentropy losses (CE(•)) of the visual branch and the cross-modal branch:\nL = CE(y i , ŷ) + CE(y, ŷ),(8)\nwhere ŷ represents the ground truth, y i is the prediction of the visual branch, and y is the prediction of the cross-modal branch. \nT i Output: prediction y // c 1,• denote the 1st row 1 c 1,• ← CTK([B]); 2 F i ← h(x);\n// autoregressive visual decode\n3 y i ← 0; 4 for k ← 1 to N -1 do 5 y i k,• ← Dec i (p i k,• , c 1:k,• , M a 1:k,1:k , F i ); 6 c k+1,• ← CTK(y i k,• ); 7 end // autoregressive cross-modal decode 8 F c ← [F T i g(TTK(y i )) T ] T ; 9 y ← 0; 10 for k ← 1 to N -1 do 11 y k,• ← Dec c (p c k,• , c 1:k,• , M a 1:k,1:k , F c ); 12 c k+1,• ← CTK(y k,• ); 13 end\n// refinement with cloze mask\n14 for k ← 1 to T i do 15 c ← [CTK([B]) T CTK(y i 1:N -1,• ) T ] T ; 16 y i ← Dec i (p i , c, M c , F i ); 17 F c ← [F T i g(TTK(y i )) T ] T ; 18 c ← [CTK([B]) T CTK(y 1:N -1,• ) T ] T ; 19 y ← Dec c (p c , c, M c , F c ); 20 end\n1) Decoding scheme: CLIP4STR consists of two branches: a visual branch and a cross-modal branch. To fully exploit the capacity of both branches, we design a dual predict-andrefine decoding scheme for inference, inspired by previous STR methods [10], [25]. Algorithm 1 illustrates the decoding process. The visual branch first performs autoregressive decoding, where the future output depends on previous predictions. Subsequently, the cross-modal branch addresses possible discrepancies between the visual feature and the text semantics of the visual prediction, aiming to improve recognition accuracy. This process is also autoregressive. Finally, the previous predictions are utilized as the input context for refining the output in a cloze-filling manner. The refinement process can be iterative. After iterative refinement, the output of the crossmodal branch serves as the final prediction." }, { "figure_ref": [], "heading": "IV. EXPERIMENT", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Experimental Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model variants", "publication_ref": [ "b65", "b24", "b70", "b71", "b72", "b73", "b16", "b15", "b74", "b75", "b76", "b77", "b78", "b79" ], "table_ref": [], "text": "We instantiate two STR models with CLIP: CLIP4STR-B and CLIP4STR-L. The two models inherit the encoders of CLIP-ViT-B/16 and CLIP-ViT-L/14, respectively. CLIP-ViT-B/16 and CLIP-ViT-L/14 have roughly 149M and 427M parameters in total, separately. We provide the inference speed in Section V-E. Training dataset Previous studies [66], [25] demonstrate that real training data leads to better performance compared to commonly used synthetic data such as MJSynth (MJ, 9M samples) [71] and SynthText (ST, 6.9M samples) [72]. Thus we primarily utilize real data for training. 
Specifically, we use COCO-Text (COCO) [73], RCTW17 [74], Uber-Text (Uber) [17], ArT [16], LSVT [75], MLT19 [76], ReCTS [77], TextOCR [78], Open Images [79] annotations from the OpenVINO toolkit [80]. These real datasets have 3.3M images in total." }, { "figure_ref": [], "heading": "Test benchmarks", "publication_ref": [ "b80", "b81", "b82", "b83", "b21", "b22", "b66", "b72", "b15", "b16", "b84", "b85", "b86", "b87" ], "table_ref": [], "text": "The evaluation benchmarks include IIIT5K [81], CUTE80 [82], Street View Text (SVT) [83], SVT-Perspective (SVTP) [84], ICDAR 2013 (IC13) [22], ICDAR 2015 (IC15) [23], and two occluded datasets -HOST and WOST [67]. Additionally, we utilize 3 recent large benchmarks: COCO-Text (9.8K samples; low-resolution, occluded text) [73], ArT (35.1K samples; curved and rotated text) [16], and Uber-Text (80.6K samples; vertical and rotated text) [17].\nLearning strategies We apply a warm up and cosine learning rate decay policy. The learning rate for CLIP encoders is 8.4e-5 × batch size 512 [85] . For models trained from scratch (decoders), the learning rate is multiplied by 19.0. We use a batch size 1024 for CLIP4STR-B and 960 for CLIP4STR-L. For real data, the training epochs of CLIP4STR-B and CLIP4STR-L are 16 and 10, respectively. For synthetic data, we train CLIP4STR-B for 6 epochs and CLIP4STR-L for 5 epochs. AdamW [86] optimizer is adopted with a weight decay value 0.2. All experiments are performed with mixed precision [87].\nCLIP4STR-B is trained on 8 NVIDIA Tesla V100 GPUs with a batch size 128 on a single GPU. This costs about 11.4 hours when training on 3.3M real STR images for 16 epochs. CLIP4STR-L is trained on 4 NVIDIA A100 GPUs with a batch size 48 on a single GPU and gradient accumulation steps 5. This costs about 36 hours for training on real data. CLIP4STR-B and CLIP4STR-L contain 114M and 366M trainable parameters, respectively. Data and label processing RandAugment [88] excludes sharpness and invert is used with layer depth " }, { "figure_ref": [ "fig_4" ], "heading": "B. Comparison to State-of-the-art", "publication_ref": [], "table_ref": [ "tab_4", "tab_5" ], "text": "We compare CLIP4STR with previous SOTA methods on 8 common STR benchmarks in Table II 1&2 and supports our motivation for adapting CLIP as a scene text reader, as CLIP demonstrates robust identification of regular and irregular text. CLIP4STR exhibits excellent reading ability on occluded datasets, surpassing the previous SOTA by 7.8% and 3.8% in the best case on HOST and WOST, respectively. This ability can be attributed to the pre-trained text encoder and cross-modal decoder, which can infer missing characters using text semantics or visual features.\nIn addition to the small-scale common benchmarks, we also evaluate CLIP4STR on three larger and more challenging benchmarks. These benchmarks primarily consist of irregular texts with various shapes, low-resolution images, rotation, etc. The results, shown in Table III, further demonstrate the strong generalization ability of CLIP4STR. It outperforms the previous SOTA methods substantially, specifically, 2.7% improvement in accuracy compared to previous SOTA on Uber which contains 80K samples. Once again, these results support our motivation that CLIP possesses robust scene text perception ability and serves as an effective scene text reader." }, { "figure_ref": [ "fig_2" ], "heading": "V. 
EMPIRICAL STUDY", "publication_ref": [ "b24", "b89" ], "table_ref": [], "text": "This section presents our empirical study on adapting CLIP to STR. The models are all trained on real data. IC15 dataset here contains 2,077 samples. with the visual branch in Figure 3 as the baseline. The encoder is a ViT-S without pre-training. Then we apply the permuted sequence modeling (PSM) technique [25] to the visual decoder and follow the training recipe of PARSeq: 4×8 patch size, the same learning rate for the encoder and decoder, and 20 training epochs. This brings a 0.7% improvement in accuracy. Next, we replace the encoder with the image encoder of CLIP-ViT-B/16. However, no significant gain is observed without adaptations. To unleash the potential of CLIP, we adjust the training recipe: using 16×16 patch size, a small learning rate for CLIP encoders, a relatively large learning rate for decoders, and fewer training epochs -16 (Section IV-A). The learning rate is searched automatically by Ray [90], and the best number of training epochs is decided by manual test. CLIP makes the model converge easier and faster, so the training recipe should change accordingly. At this point, we already surpass the previous SOTA. Moreover, we add the cross-modal branch to the system. Although the performance is already very high, the cross-modal branch improves the average accuracy on 9 benchmarks by 0.4%, demonstrating its effectiveness. The use of a large model -CLIP-ViT-L/14 further increases the accuracy by 0.7%. " }, { "figure_ref": [], "heading": "A. Ablation Study of CLIP4STR", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B. Parameter Freezing Options", "publication_ref": [ "b61" ], "table_ref": [ "tab_8" ], "text": "In CLIP4STR, we freeze half of the layers in the CLIP text encoder, which is a common practice when transferring a large language model to new tasks [62]. Table V illustrates the influence of different parameter freezing options. The results indicate that freezing the language model has a lesser impact compared to freezing the image model. Despite using the fixed pre-trained token embeddings of the CLIP text encoder, the system can still achieve satisfactory performance. This demonstrates that semantic understanding in STR is relatively easier compared to general language understanding. In STR, text mainly consists of words and phrases, which simplifies the task compared to the general language case. On the other hand, freezing the image models has a significant impact on performance. The substantial domain gap between the data in STR and the pre-trained data of the CLIP image encoder possibly contributes to this discrepancy. CLIP is pretrained on web images, which are primarily natural images. In contrast, the scene text recognition data comprises cropped word images. Such a disparity may necessitate a fully trainable image encoder in CLIP4STR to bridge the domain gap." }, { "figure_ref": [ "fig_2" ], "heading": "C. Comparison to Single-modality Pre-trained Model", "publication_ref": [ "b90", "b91", "b92", "b24", "b12", "b91", "b93", "b94" ], "table_ref": [ "tab_10", "tab_10" ], "text": "In previous empirical studies, we see the effectiveness of CLIP as a STR backbone. Is VLM better than models pre-trained on single-modality data? 
To further clarify this question, Table VI presents the results of replacing the visual encoder in Figure 3 with a random initialized ViT, an ImageNet-1K [91] pre-trained ViT via DeiT [92] 3 , and an ImageNet-21K pre-trained ViT provided by Ridnik et al. [93] 4 . The training schedules including the learning rate and training epochs are kept the same as CLIP4STR. In Table VI, the ImageNet pre-trained models even perform worse than the model trained from scratch. Previous works also support this finding. PARSeq [25] trains its vision transformer from scratch rather than using a pre-trained model. TrOCR [13] uses pre-trained transformers from DeiT [92], BEiT [94], and RoBERTa [95], but it still post-pretrains them on 684M textlines from publicly available PDF files on the Internet. " }, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "D. Parameter-efficient Adaptations", "publication_ref": [ "b95", "b96", "b96", "b97", "b98", "b1", "b3", "b5", "b7", "b9", "b11" ], "table_ref": [ "tab_12" ], "text": "CLIP4STR fine-tunes the whole pre-trained CLIP model to transfer the knowledge of CLIP to the STR task. Besides such a fully fine-tuning manner, the parameter-efficient finetuning (PEFT) methods for large pre-trained models are also popular. For example, CoOp [96] only trains several learnable prefix prompts for efficiency, and CLIP-Adapter [97] incorporates tunable linear layers on top of frozen VLMs. These PEFT methods achieve pretty good performance on a few tasks, so we wonder if such PEFT methods work for STR.\nWe test CLIP with two PEFT methods in this work, i.e., CLIP-Adapter [97] and Ladder Side-Tuning (LST) adapter [98]. Figure 5 shows the design of the two adapters. CLIP-Adapter adds two linear layers on the top of the frozen pre-trained VLM. We use the same architecture as the original implementation 5 and a residual addition ratio λ = 0.2, which means that the original CLIP feature is multiplied by 0.8. Ladder Side-Tuning (LST) uses a ladder side network as shown in Figure 5. We follow the original implementation 6and use the structure-pruned [99] CLIP model as the ladder side network. The CLIP features are downsampled by a factor of 1/r before entering the ladder side network to reduce the computation cost, and then upsampled by a factor of r before output to match the original feature dimension. We also use the layer-dropping strategy in LST, which connects only the layers [2,4,6,8,10,12] to the ladder side network, namely, the depth of LST is 6. This reduces the training cost. The results of using the two adapters with CLIP in STR are presented in Table VII. CLIP-Adapter outperforms the frozen model but falls short of the performance achieved by the fully fine-tuned model. The addition of a few learnable parameters on top of the CLIP model alone is insufficient to bridge the domain gap between scene text data and the pretraining data of CLIP. On the other hand, LST achieves notably improved performance but still lags behind the fine-tuned model. However, when the parameters of LST are increased, it approaches the performance of the fine-tuned model. Overall, LST can serve as an alternative option when computational resources are limited for training." }, { "figure_ref": [], "heading": "E. Inference Time", "publication_ref": [], "table_ref": [ "tab_10" ], "text": "Despite the good performance, adapting the pre-trained CLIP model introduces extra training and inference costs due to its large size. Table VIII presents the inference time of CLIP4STR. 
The large transformer models slow down the inference speed of CLIP4STR. However, using a large ViT does not always improve accuracy, as Table VI shows, because of different pre-training strategies. The cross-modal branch also increases the inference time, but slightly (0.49ms), since the input sequence length of the text encoder is small (16, as explained in Sec. IV-A). Moreover, we can reduce the inference time of the cross-modal branch by replacing line 10˜13 in Algorithm 1 with y ← Dec c (p c , c, M a , F c ).\nEquation ( 9) uses the prediction of the visual branch as the input context instead of the previous prediction of the crossmodal branch, avoiding repeated runs of the cross-modal decoder. However, this slightly decreases the performance. The ViT-L backbone also increases the inference time. Clearly, for CLIP4STR, there is a trade-off between recognition accuracy will not bring further improvement in accuracy, so we just set T i = 1 in practice." }, { "figure_ref": [ "fig_7" ], "heading": "F. Qualitative results", "publication_ref": [], "table_ref": [], "text": "Figure 6 shows a few qualitative results of CLIP4STR on IC15 (incidental scene text), SVTP (perspective scene text), CUTE (curved text line images), and HOST (heavily occluded scene text). CLIP4STR can robustly read scene text that is curved, occluded, blurred, or rotated. This matches its stateof-the-art performance in Table II and verifies our motivation for adapting CLIP to STR in Section I." }, { "figure_ref": [], "heading": "G. Results on Cleaned Benchmarks", "publication_ref": [ "b88" ], "table_ref": [ "tab_14" ], "text": "Recently, Yang et al. [89] correct the ground truth of mislabeled samples and present cleaned versions of IIIT5K, SVT, IC13, IC15, SVTP, and CUTE 7 . Table IX shows the results of CLIP4STR on these cleaned benchmarks. CLIP4STR still achieves SOTA performance on these cleaned benchmarks." }, { "figure_ref": [], "heading": "VI. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "We present CLIP4STR, a method that leverages CLIP for STR. It has a dual encoder-decoder architecture: a visual branch for initial prediction and a cross-modal branch for refinement. CLIP4STR achieves state-of-the-art results on 11 STR benchmarks, showing that CLIP is a powerful scene text reader and that vision-language pre-training is beneficial for STR. We also conduct a comprehensive empirical study to explain how CLIP adapts to STR. We hope that CLIP4STR can serve as a simple but strong baseline for future STR research with VLMs." }, { "figure_ref": [], "heading": "APPENDIX REPRODUCEBILITY OF CLIP4STR", "publication_ref": [], "table_ref": [], "text": "A thirty-party open-sourced code at https://github. com/VamosC/CLIP4STR reproduces most of the performance of CLIP4STR. This verifies the reproducibility of CLIP4STR." } ]
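To complement the dual predict-and-refine inference described in Section III-C1 (Algorithm 1), the following is a simplified sketch of the overall decoding loop. It assumes callable interfaces for the two decoder branches (an autoregressive step function and a cloze-style refinement function for each), omits batching and padding details, and is an illustration rather than the released code.

```python
# Simplified sketch of dual predict-and-refine decoding (cf. Algorithm 1).
# `vis_step(ctx)` / `cross_step(ctx, vis_text)` return the next character for the
# visual / cross-modal branch; `vis_cloze` / `cross_cloze` re-decode in one pass
# with a cloze mask. All four callables are assumed interfaces.

def greedy_autoregressive(decode_step, bos, max_len):
    """Generic greedy AR loop: feed back the most likely character at every step."""
    context, chars = [bos], []
    for _ in range(max_len):
        next_char = decode_step(context)
        if next_char == "[E]":
            break
        chars.append(next_char)
        context.append(next_char)
    return "".join(chars)

def predict_and_refine(vis_step, cross_step, vis_cloze, cross_cloze, max_len=25):
    # 1) visual branch, autoregressive
    vis_pred = greedy_autoregressive(vis_step, "[B]", max_len)
    # 2) cross-modal branch, autoregressive, conditioned on the visual prediction
    cross_pred = greedy_autoregressive(lambda ctx: cross_step(ctx, vis_pred), "[B]", max_len)
    # 3) one cloze-style refinement pass (T_i = 1): previous predictions as context
    vis_pred = vis_cloze(vis_pred)
    cross_pred = cross_cloze(cross_pred, vis_pred)
    return cross_pred
```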
Pre-trained vision-language models (VLMs) are the de-facto foundation models for various downstream tasks. However, scene text recognition methods still prefer backbones pretrained on a single modality, namely, the visual modality, despite the potential of VLMs to serve as powerful scene text readers. For example, CLIP can robustly identify regular (horizontal) and irregular (rotated, curved, blurred, or occluded) text in images. With such merits, we transform CLIP into a scene text reader and introduce CLIP4STR, a simple yet effective STR method built upon image and text encoders of CLIP. It has two encoderdecoder branches: a visual branch and a cross-modal branch. The visual branch provides an initial prediction based on the visual feature, and the cross-modal branch refines this prediction by addressing the discrepancy between the visual feature and text semantics. To fully leverage the capabilities of both branches, we design a dual predict-and-refine decoding scheme for inference. CLIP4STR achieves new state-of-the-art performance on 11 STR benchmarks. Additionally, a comprehensive empirical study is provided to enhance the understanding of the adaptation of CLIP to STR. We believe our method establishes a simple but strong baseline for future STR research with VLMs.
CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model
[ { "figure_caption": "Fig. 1 :1Fig. 1: Zero-shot classification results of CLIP-ViT-B/32. CLIP can perceive and understand text in images, even for irregular text with noise, rotation, and occlusion. CLIP is potentially a powerful scene text recognition expert.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: Attention of CLIP-ViT-B/32 for STR images.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig. 3: The framework of CLIP4STR. It has a visual branch and a cross-modal branch. The cross-modal branch refines the prediction of the visual branch for the final output. The text encoder is partially frozen.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: The decoder of CLIP4STR.[B], [E], and [P] are the beginning, end, and padding tokens, respectively. Layer normalization[60] and dropout[61] are ignored.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 :1inference decoding scheme Input: image x, image encoder h(•) and decoder Dec i (•), text encoder g(•), cross-modal decoder Dec c (•), AR mask M a , cloze mask M c , image and cross-modal position query p i and p c , context c = 0 ∈ R N ×D , char and text tokenizer CTK(•) and TTK(•), iterative refinement times", "figure_data": "", "figure_id": "fig_4", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "3 and magnitude 5 .5The image size is 224×224. The sequence length of the text encoder is 16. The maximum length of the character sequence is 25. Considering an extra [B] or [E] token, we set N = 26. During training, the number of character classes C = 94, i.e., mixed-case alphanumeric characters and punctuation marks are recognized. During inference, we only use a lowercase alphanumeric charset, i.e., C = 36. The iterative refinement times T i = 1. The evaluation metric is word accuracy.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: CLIP-Adapter (left) and LST (right).", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig. 6: Qualitative results of CLIP4STR-B.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "with vocabulary size 49 152. The beginning and end of the text sequence are padded with [SOS] and [EOS] tokens, respectively. Initially, CLIP text encoder only returns", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Examples of attention mask M. The sequences with [B] and [E", "figure_data": "", "figure_id": "tab_2", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Word accuracy on 8 common benchmarks. The best and second-best results are highlighted. Benchmark datasets (B) -SVT, IIIT5K, IC13, and IC15. † TrOCR uses pre-trained models and post-pretrained on 648M textlines from publicly available PDF files on the Internet. 
♯ Reproduced by PARSeq[25].", "figure_data": "MethodVenueTrain dataIIIT5K SVT IC13 IC15 IC15 SVTP CUTE HOST WOST 3,000 647 1,015 1,811 2,077 645 288 2,416 2,416ASTER [49]PAMI'19MJ+ST93.489.5-76.1-78.579.5--SRN [63]CVPR'20MJ+ST94.891.5-82.7-85.187.8--TextScanner [46]AAAI'20MJ+ST95.792.7 94.9-83.584.891.6--SE-ASTER [64]CVPR'20MJ+ST93.889.6 92.8 80.0-81.483.6--RCEED [65]ICDAR'21MJ+ST+B94.991.8--82.283.691.7--TRBA [66]CVPR'21MJ+ST92.188.9-86.0-89.389.2--VisionLAN [67]ICCV'21MJ+ST95.891.7-83.7-86.088.550.370.3ABINet [10]CVPR'21MJ+ST96.293.5-86.0-89.389.2--ViTSTR-B [11]ICDAR'21MJ+ST88.487.7 92.4 78.572.681.881.3--LevOCR [50]ECCV'22MJ+ST96.692.9-86.4-88.191.7--MATRN [51]ECCV'22MJ+ST96.695.0 95.8 86.682.890.693.5--PETR [68]TIP'22MJ+ST95.892.4 97.0 83.3-86.289.9--DiG-ViT-B [12]MM'22MJ+ST96.794.6 96.9 87.1-91.091.374.982.3PARSeq A [25]ECCV'22MJ+ST97.093.6 96.2 86.582.988.992.2--TrOCR Large [13] †AAAI'23MJ+ST+B94.196.1 97.3 88.184.193.095.1--SIGA T [69]CVPR'23MJ+ST96.695.1 96.8 86.683.090.593.1--PARSeq+CLIPTER [56]ICCV'23N/A-96.6--85.9----DiG-ViT-B [12]MM'22Real(2.8M)97.696.5 97.6 88.9-92.996.562.879.7ViTSTR-S [11] ♯ICDAR'21Real(3.3M)97.996.0 97.8 89.087.591.596.264.577.9ABINet [10] ♯CVPR'21Real(3.3M)98.698.2 98.0 90.588.794.197.272.285.0PARSeq A [25]ECCV'22Real(3.3M)99.197.9 98.4 90.7 89.695.798.374.485.4MAERec-B [70]ICCV'23 Union14M-L [70]98.597.8 98.1-89.594.498.6--CLIP4STR-BMJ+ST97.795.2 96.1 87.684.291.395.579.887.0CLIP4STR-LMJ+ST98.095.2 96.9 87.784.593.395.182.788.8CLIP4STR-BReal(3.3M)99.298.3 98.3 91.490.697.299.377.587.5CLIP4STR-LReal(3.3M)99.598.5 98.5 91.390.897.499.079.889.2", "figure_id": "tab_4", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "Word accuracy on 3 large benchmarks. ♯ Reproduced by PARSeq[25].", "figure_data": "MethodTrain dataCOCO 9,825ArT 35,149 80,551 UberViTSTR-S [11] ♯MJ+ST56.466.137.6TRBA [66] ♯MJ+ST61.468.238.0ABINet [10] ♯MJ+ST57.165.434.9PARSeq A [25]MJ+ST64.070.742.0MPSTR A [89]MJ+ST64.569.942.8CLIP4STR-BMJ+ST66.372.843.4CLIP4STR-LMJ+ST67.073.744.5DiG-ViT-B [12]Real(2.8M)75.8--ViTSTR-S [11] ♯Real(3.3M)73.681.078.2TRBA [66] ♯Real(3.3M)77.582.581.2ABINet [10] ♯Real(3.3M)76.581.271.2PARSeq A [25]Real(3.3M)79.884.584.1MPSTR A [89]Real(3.3M)80.384.484.9CLIP4STR-BReal(3.3M)81.185.886.8CLIP4STR-LReal(3.3M)81.985.987.6", "figure_id": "tab_5", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "Table II&III show that CLIP4STR achieves SOTA performance on 11 STR benchmarks without bells-and-whistles. What are the sources of this high performance? We conduct ablation studies of different components in Table IV, starting", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Ablation study of different components. Average accuracy on 9 benchmarks (14,315 samples) in Table II are reported.", "figure_data": "Reference MethodAvg.ABINet [10]89.1PARSeq A [25](previous SOTA) 89.9Baseline PSM CLIP-B Recipe Cross CLIP-LAvg.✓89.2✓✓89.9✓✓✓90.0✓✓✓✓90.8✓✓✓✓✓91.2✓✓✓✓✓✓91.9", "figure_id": "tab_7", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "Freezing options in CLIP4STR-B. #Params means the number of learnable parameters of encoders in CLIP4STR-B. One decoder in CLIP4STR-B has 4.3M parameters. 
token means we only use pre-trained token embeddings of CLIP text encoder as text features.", "figure_data": "Frozen Layers #Params IC15 WOST HOST COCO Uber Image Text00149 M90.887.576.480.887.003114 M90.488.176.981.286.806104 M90.687.577.581.186.80995 M90.386.874.980.986.301286 M90.386.174.980.986.40token86 M90.787.377.080.986.70695 M90.687.577.581.186.83684 M90.488.576.581.386.46662 M89.586.772.880.383.89641 M87.880.064.075.372.812619 M61.255.840.449.520.6", "figure_id": "tab_8", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "Different pre-training strategies. #Params means the learnable parameters in the visual encoder. For a fair comparison, only the results of the visual branch in CLIP4STR-B are shown.", "figure_data": "Pre-train#Params IC15 WOST HOST COCO UberScratch86 M90.1 84.974.880.7 86.6ImageNet-1K86 M89.7 82.768.780.0 84.0ImageNet-21K86 M89.3 83.169.179.6 82.9Image-text pairs 86 M90.3 87.476.380.9 86.6", "figure_id": "tab_10", "figure_label": "VI", "figure_type": "table" }, { "figure_caption": "Table VI demonstrates the advantage of using a VLM learning under text supervision in scene text recognition.", "figure_data": "", "figure_id": "tab_11", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Parameter-efficient adaptations. #Params means the learnable parameters in the visual encoder. r is the feature reduction ratio in LST. Here we only show the results of the visual branch in CLIP4STR-B, and the crossmodal branch is ignored.", "figure_data": "Method#Params IC15 WOST HOST COCO UberFrozen0 60.9 54.839.948.9 20.1CLIP-Adapter262 K 63.6 57.241.150.9 22.7LST (r = 4)4.1M 88.2 82.866.177.1 78.7LST (r = 2)13.1M 89.6 86.070.879.6 80.6Fine-tune86 M 90.3 87.476.380.9 86.6", "figure_id": "tab_12", "figure_label": "VII", "figure_type": "table" }, { "figure_caption": "Inference time of CLIP4STR. Average accuracy on 9 benchmarks (14,315 samples) in Table II are reported. AR stands for autoregressive decoding, and cloze stands for cloze-filling decoding manner (see Alogorithm 1). Iter. is the number of refinement iterations during decoding. Time is the average inference time per sample. Test on a single NVIDIA A100 40GB GPU.", "figure_data": "MethodBackboneDecodeIter. Avg. Time (ms)ABINet [10]ResNet-45Cloze1 89.11.30PARSeq [25]ViT-SAR1 89.91.32PARSeq [25]ViT-BAR1 90.02.81CLIP4STR-B (Visual)ViT-BAR1 90.83.03CLIP4STR-B (Cross)ViT-BAR1 91.23.52CLIP4STR-B (Cross)ViT-BAR + Eq. (9) 1 91.13.41CLIP4STR-B (Cross)ViT-BAR2 91.23.72CLIP4STR-L (Cross)ViT-LAR1 91.96.52", "figure_id": "tab_13", "figure_label": "VIII", "figure_type": "table" }, { "figure_caption": "Word accuracy on cleaned benchmarks. Mislabeled samples in blue benchmarks are cleaned by Yang et al.[89]. All methods are trained on 3.3M real samples. The best results are highlighted. Besides, Table VIII also shows that more iterative refinement times (a large T i at line 14 in Algorithm 1)", "figure_data": "MethodIIIT5K SVT 3,000 647IC13 1,015 1,811 2,077 IC15 IC15SVTP CUTE 645 288ABINet [10] ♯98.697.898.093.291.494.797.2PARSeq A [25]98.997.598.593.892.695.798.6MPSTR A [89]99.298.598.393.992.796.199.0CLIP4STR-B99.297.898.494.193.397.499.3CLIP4STR-L99.497.898.694.093.597.499.0and inference speed.", "figure_id": "tab_14", "figure_label": "IX", "figure_type": "table" } ]
Shuai Zhao; Xiaohan Wang; Linchao Zhu; Ruijie Quan; Yi Yang
[ { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark; G Krueger; I Sutskever", "journal": "", "ref_id": "b0", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "C Jia; Y Yang; Y Xia; Y Chen; Z Parekh; H Pham; Q V Le; Y Sung; Z Li; T Duerig", "journal": "", "ref_id": "b1", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "H Song; L Dong; W Zhang; T Liu; F Wei", "journal": "", "ref_id": "b2", "title": "CLIP models are few-shot learners: Empirical studies on VQA and visual entailment", "year": "2022" }, { "authors": "O Patashnik; Z Wu; E Shechtman; D Cohen-Or; D Lischinski", "journal": "", "ref_id": "b3", "title": "Styleclip: Text-driven manipulation of stylegan imagery", "year": "2021" }, { "authors": "G Kim; T Kwon; J C Ye", "journal": "", "ref_id": "b4", "title": "Diffusionclip: Text-guided diffusion models for robust image manipulation", "year": "2022" }, { "authors": "H Luo; L Ji; M Zhong; Y Chen; W Lei; N Duan; T Li", "journal": "Neurocomputing", "ref_id": "b5", "title": "Clip4clip: An empirical study of CLIP for end to end video clip retrieval and captioning", "year": "2022" }, { "authors": "S Subramanian; W Merrill; T Darrell; M Gardner; S Singh; A Rohrbach", "journal": "", "ref_id": "b6", "title": "Reclip: A strong zero-shot baseline for referring expression comprehension", "year": "2022" }, { "authors": "J Hessel; A Holtzman; M Forbes; R L Bras; Y Choi", "journal": "", "ref_id": "b7", "title": "Clipscore: A reference-free evaluation metric for image captioning", "year": "2021" }, { "authors": "N Fei; Z Lu; Y Gao; G Yang; Y Huo; J Wen; H Lu; R Song; X Gao; T Xiang", "journal": "Nature Communications", "ref_id": "b8", "title": "Towards artificial general intelligence via a multimodal foundation model", "year": "2022" }, { "authors": "S Fang; H Xie; Y Wang; Z Mao; Y Zhang", "journal": "", "ref_id": "b9", "title": "Read like humans: Autonomous, bidirectional and iterative language modeling for scene text recognition", "year": "2021" }, { "authors": "R Atienza", "journal": "", "ref_id": "b10", "title": "Vision transformer for fast and efficient scene text recognition", "year": "2021" }, { "authors": "M Yang; M Liao; P Lu; J Wang; S Zhu; H Luo; Q Tian; X Bai", "journal": "", "ref_id": "b11", "title": "Reading and writing: Discriminative and generative modeling for selfsupervised text recognition", "year": "2022" }, { "authors": "M Li; T Lv; L Cui; Y Lu; D Florencio; C Zhang; Z Li; F Wei", "journal": "AAAI", "ref_id": "b12", "title": "Trocr: Transformer-based optical character recognition with pre-trained models", "year": "2023" }, { "authors": "S Long; X He; C Yao", "journal": "Int. J. Comput. Vis", "ref_id": "b13", "title": "Scene text detection and recognition: The deep learning era", "year": "2021" }, { "authors": "X Chen; L Jin; Y Zhu; C Luo; T Wang", "journal": "ACM Comput. 
Surv", "ref_id": "b14", "title": "Text recognition in the wild: A survey", "year": "2022" }, { "authors": "C K Chng; E Ding; J Liu; D Karatzas; C S Chan; L Jin; Y Liu; Y Sun; C C Ng; C Luo; Z Ni; C Fang; S Zhang; J Han", "journal": "", "ref_id": "b15", "title": "ICDAR2019 robust reading challenge on arbitrary-shaped text -rrc-art", "year": "2019" }, { "authors": "Y Zhang; L Gueguen; I Zharkov; P Zhang; K Seifert; B Kadlec", "journal": "", "ref_id": "b16", "title": "Uber-text: A large-scale dataset for optical character recognition from street-level imagery", "year": "2017" }, { "authors": "A Krizhevsky; G Hinton", "journal": "", "ref_id": "b17", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "S Fort", "journal": "", "ref_id": "b18", "title": "Pixels still beat text: Attacking the openai clip model with text patches and adversarial pixel perturbations", "year": "2021-03" }, { "authors": "R R Selvaraju; M Cogswell; A Das; R Vedantam; D Parikh; D Batra", "journal": "Int. J. Comput. Vis", "ref_id": "b19", "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "year": "2020" }, { "authors": "G Goh; N C † ; C V † ; S Carter; M Petrov; L Schubert; A Radford; C Olah", "journal": "Distill", "ref_id": "b20", "title": "Multimodal neurons in artificial neural networks", "year": "2021" }, { "authors": "D Karatzas; F Shafait; S Uchida; M Iwamura; L G Bigorda; S R Mestre; J Mas; D F Mota; J Almazán; L De; Heras", "journal": "", "ref_id": "b21", "title": "ICDAR 2013 robust reading competition", "year": "2013" }, { "authors": "D Karatzas; L Gomez-Bigorda; A Nicolaou; S K Ghosh; A D Bagdanov; M Iwamura; J Matas; L Neumann; V R Chandrasekhar; S Lu; F Shafait; S Uchida; E Valveny", "journal": "", "ref_id": "b22", "title": "ICDAR 2015 competition on robust reading", "year": "2015" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; L Kaiser; I Polosukhin", "journal": "NeurIPS", "ref_id": "b23", "title": "Attention is all you need", "year": "2017" }, { "authors": "D Bautista; R Atienza", "journal": "", "ref_id": "b24", "title": "Scene text recognition with permuted autoregressive sequence models", "year": "2022" }, { "authors": "L Yuan; D Chen; Y Chen; N Codella; X Dai; J Gao; H Hu; X Huang; B Li; C Li; C Liu; M Liu; Z Liu; Y Lu; Y Shi; L Wang; J Wang; B Xiao; Z Xiao; J Yang; M Zeng; L Zhou; P Zhang", "journal": "CoRR", "ref_id": "b25", "title": "Florence: A new foundation model for computer vision", "year": "2021" }, { "authors": "S Zhao; L Zhu; X Wang; Y Yang", "journal": "", "ref_id": "b26", "title": "Centerclip: Token clustering for efficient text-video retrieval", "year": "2022" }, { "authors": "X Wang; L Zhu; Z Zheng; M Xu; Y Yang", "journal": "IEEE Transactions on Multimedia", "ref_id": "b27", "title": "Align and tell: Boosting text-video retrieval with local alignment and fine-grained supervision", "year": "2022" }, { "authors": "Z Shuai; W Xiaohan; Z Linchao; Y Yi", "journal": "", "ref_id": "b28", "title": "Test-time adaptation with clip reward for zero-shot generalization in vision-language models", "year": "2023" }, { "authors": "J Cho; S Yoon; A Kale; F Dernoncourt; T Bui; M Bansal", "journal": "", "ref_id": "b29", "title": "Finegrained image captioning with clip reward", "year": "2022" }, { "authors": "H Zhang; W Yin; Y Fang; L Li; B Duan; Z Wu; Y Sun; H Tian; H Wu; H Wang", "journal": "CoRR", "ref_id": "b30", "title": "Ernie-vilg: Unified generative pre-training for bidirectional 
vision-language generation", "year": "2021" }, { "authors": "J Yu; Z Wang; V Vasudevan; L Yeung; M Seyedhosseini; Y Wu", "journal": "CoRR", "ref_id": "b31", "title": "Coca: Contrastive captioners are image-text foundation models", "year": "2022" }, { "authors": "P Wang; A Yang; R Men; J Lin; S Bai; Z Li; J Ma; C Zhou; J Zhou; H Yang", "journal": "", "ref_id": "b32", "title": "OFA: unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework", "year": "2022" }, { "authors": "Y Li; F Liang; L Zhao; Y Cui; W Ouyang; J Shao; F Yu; J Yan", "journal": "", "ref_id": "b33", "title": "Supervision exists everywhere: A data efficient contrastive languageimage pre-training paradigm", "year": "2021" }, { "authors": "L Yao; R Huang; L Hou; G Lu; M Niu; H Xu; X Liang; Z Li; X Jiang; C Xu", "journal": "", "ref_id": "b34", "title": "FILIP: fine-grained interactive language-image pre-training", "year": "2022" }, { "authors": "J Li; R R Selvaraju; A Gotmare; S R Joty; C Xiong; S C Hoi", "journal": "", "ref_id": "b35", "title": "Align before fuse: Vision and language representation learning with momentum distillation", "year": "2021" }, { "authors": "M Byeon; B Park; H Kim; S Lee; W Baek; S Kim", "journal": "", "ref_id": "b36", "title": "Coyo-700m: Image-text pair dataset", "year": "2022" }, { "authors": "C Schuhmann; R Beaumont; R Vencu; C Gordon; R Wightman; M Cherti; T Coombes; A Katta; C Mullis; M Wortsman; P Schramowski; S Kundurthy; K Crowson; L Schmidt; R Kaczmarczyk; J Jitsev", "journal": "CoRR", "ref_id": "b37", "title": "LAION-5B: an open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "G Ilharco; M Wortsman; R Wightman; C Gordon; N Carlini; R Taori; A Dave; V Shankar; H Namkoong; J Miller; H Hajishirzi; A Farhadi; L Schmidt", "journal": "", "ref_id": "b38", "title": "Openclip", "year": "2021-07" }, { "authors": "S Shen; L H Li; H Tan; M Bansal; A Rohrbach; K Chang; Z Yao; K Keutzer", "journal": "", "ref_id": "b39", "title": "How much can CLIP benefit vision-and-language tasks?", "year": "2022" }, { "authors": "A Graves; S Fernández; F J Gomez; J Schmidhuber", "journal": "", "ref_id": "b40", "title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", "year": "2006" }, { "authors": "P He; W Huang; Y Qiao; C C Loy; X Tang", "journal": "", "ref_id": "b41", "title": "Reading scene text in deep convolutional sequences", "year": "2016" }, { "authors": "B Shi; X Bai; C Yao", "journal": "IEEE Trans. Pattern Anal. Mach. 
Intell", "ref_id": "b42", "title": "An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition", "year": "2017" }, { "authors": "F Borisyuk; A Gordo; V Sivakumar", "journal": "", "ref_id": "b43", "title": "Rosetta: Large scale system for text detection and recognition in images", "year": "2018" }, { "authors": "M Liao; J Zhang; Z Wan; F Xie; J Liang; P Lyu; C Yao; X Bai", "journal": "", "ref_id": "b44", "title": "Scene text recognition from two-dimensional perspective", "year": "2019" }, { "authors": "Z Wan; M He; H Chen; X Bai; C Yao", "journal": "", "ref_id": "b45", "title": "Textscanner: Reading characters in order for robust scene text recognition", "year": "2020" }, { "authors": "L Zhao; Z Wu; X Wu; G Wilsbacher; S Wang", "journal": "", "ref_id": "b46", "title": "Backgroundinsensitive scene text recognition with text semantic segmentation", "year": "2022" }, { "authors": "Z Cheng; F Bai; Y Xu; G Zheng; S Pu; S Zhou", "journal": "", "ref_id": "b47", "title": "Focusing attention: Towards accurate text recognition in natural images", "year": "2017" }, { "authors": "B Shi; M Yang; X Wang; P Lyu; C Yao; X Bai", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b48", "title": "ASTER: an attentional scene text recognizer with flexible rectification", "year": "2019" }, { "authors": "C Da; P Wang; C Yao", "journal": "", "ref_id": "b49", "title": "Levenshtein OCR", "year": "2022" }, { "authors": "B Na; Y Kim; S Park", "journal": "", "ref_id": "b50", "title": "Multi-modal text recognition networks: Interactive enhancements between visual and semantic features", "year": "2022" }, { "authors": "C Lee; S Osindero", "journal": "", "ref_id": "b51", "title": "Recursive recurrent nets with attention modeling for OCR in the wild", "year": "2016" }, { "authors": "F Sheng; Z Chen; B Xu", "journal": "", "ref_id": "b52", "title": "NRTR: A no-recurrence sequenceto-sequence model for scene text recognition", "year": "2019" }, { "authors": "A Radford; J Wu; R Child; D Luan; D Amodei; I Sutskever", "journal": "", "ref_id": "b53", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "W Kim; B Son; I Kim", "journal": "", "ref_id": "b54", "title": "Vilt: Vision-and-language transformer without convolution or region supervision", "year": "2021" }, { "authors": "A Aberdam; D Bensaïd; A Golts; R Ganz; O Nuriel; R Tichauer; S Mazor; R Litman", "journal": "", "ref_id": "b55", "title": "CLIPTER: looking at the bigger picture in scene text recognition", "year": "2023" }, { "authors": "J Devlin; M Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b56", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "R Sennrich; B Haddow; A Birch", "journal": "", "ref_id": "b57", "title": "Neural machine translation of rare words with subword units", "year": "2016" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly; J Uszkoreit; N Houlsby", "journal": "", "ref_id": "b58", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "L J Ba; J R Kiros; G E Hinton", "journal": "CoRR", "ref_id": "b59", "title": "Layer normalization", "year": "2016" }, { "authors": "N Srivastava; G E Hinton; A Krizhevsky; I Sutskever; R Salakhutdinov", "journal": "J. Mach. Learn. 
Res", "ref_id": "b60", "title": "Dropout: a simple way to prevent neural networks from overfitting", "year": "2014" }, { "authors": "J Alayrac; J Donahue; P Luc; A Miech; I Barr; Y Hasson; K Lenc; A Mensch; K Millican; M Reynolds; R Ring; E Rutherford; S Cabi; T Han; Z Gong; S Samangooei; M Monteiro; J Menick; S Borgeaud; A Brock; A Nematzadeh; S Sharifzadeh; M Binkowski; R Barreira; O Vinyals; A Zisserman; K Simonyan", "journal": "CoRR", "ref_id": "b61", "title": "Flamingo: a visual language model for few-shot learning", "year": "2022" }, { "authors": "D Yu; X Li; C Zhang; T Liu; J Han; J Liu; E Ding", "journal": "", "ref_id": "b62", "title": "Towards accurate scene text recognition with semantic reasoning networks", "year": "2020" }, { "authors": "Z Qiao; Y Zhou; D Yang; Y Zhou; W Wang", "journal": "", "ref_id": "b63", "title": "SEED: semantics enhanced encoder-decoder framework for scene text recognition", "year": "2020" }, { "authors": "M Cui; W Wang; J Zhang; L Wang", "journal": "", "ref_id": "b64", "title": "Representation and correlation enhanced encoder-decoder framework for scene text recognition", "year": "2021" }, { "authors": "J Baek; Y Matsui; K Aizawa", "journal": "", "ref_id": "b65", "title": "What if we only use real datasets for scene text recognition? toward scene text recognition with fewer labels", "year": "2021" }, { "authors": "Y Wang; H Xie; S Fang; J Wang; S Zhu; Y Zhang", "journal": "", "ref_id": "b66", "title": "From two to one: A new scene text recognizer with visual language modeling network", "year": "2021" }, { "authors": "Y Wang; H Xie; S Fang; M Xing; J Wang; S Zhu; Y Zhang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b67", "title": "Petr: Rethinking the capability of transformer-based language model in scene text recognition", "year": "2022" }, { "authors": "T Guan; C Gu; J Tu; X Yang; Q Feng; Y Zhao; W Shen", "journal": "", "ref_id": "b68", "title": "Selfsupervised implicit glyph attention for text recognition", "year": "2023" }, { "authors": "Q Jiang; J Wang; D Peng; C Liu; L Jin", "journal": "", "ref_id": "b69", "title": "Revisiting scene text recognition: A data perspective", "year": "2023" }, { "authors": "M Jaderberg; K Simonyan; A Vedaldi; A Zisserman", "journal": "CoRR", "ref_id": "b70", "title": "Synthetic data and artificial neural networks for natural scene text recognition", "year": "2014" }, { "authors": "A Gupta; A Vedaldi; A Zisserman", "journal": "", "ref_id": "b71", "title": "Synthetic data for text localisation in natural images", "year": "2016" }, { "authors": "A Veit; T Matera; L Neumann; J Matas; S J Belongie", "journal": "CoRR", "ref_id": "b72", "title": "Cocotext: Dataset and benchmark for text detection and recognition in natural images", "year": "2016" }, { "authors": "B Shi; C Yao; M Liao; M Yang; P Xu; L Cui; S J Belongie; S Lu; X Bai", "journal": "", "ref_id": "b73", "title": "ICDAR2017 competition on reading chinese text in the wild (RCTW-17)", "year": "2017" }, { "authors": "Y Sun; D Karatzas; C S Chan; L Jin; Z Ni; C K Chng; Y Liu; C Luo; C C Ng; J Han; E Ding; J Liu", "journal": "", "ref_id": "b74", "title": "ICDAR 2019 competition on large-scale street view text with partial labeling -RRC-LSVT", "year": "2019" }, { "authors": "N Nayef; C Liu; J Ogier; Y Patel; M Busta; P N Chowdhury; D Karatzas; W Khlif; J Matas; U Pal; J Burie", "journal": "", "ref_id": "b75", "title": "ICDAR2019 robust reading challenge on multi-lingual scene text detection and recognition -RRC-MLT-2019", "year": "2019" }, { "authors": 
"R Zhang; M Yang; X Bai; B Shi; D Karatzas; S Lu; C V Jawahar; Y Zhou; Q Jiang; Q Song; N Li; K Zhou; L Wang; D Wang; M Liao", "journal": "", "ref_id": "b76", "title": "ICDAR 2019 robust reading challenge on reading chinese text on signboard", "year": "2019" }, { "authors": "A Singh; G Pang; M Toh; J Huang; W Galuba; T Hassner", "journal": "", "ref_id": "b77", "title": "Textocr: Towards large-scale end-to-end reasoning for arbitrary-shaped scene text", "year": "2021" }, { "authors": "I Krasin; T Duerig; N Alldrin; V Ferrari; S Abu-El-Haija; A Kuznetsova; H Rom; J Uijlings; S Popov; A Veit; S Belongie; V Gomes; A Gupta; C Sun; G Chechik; D Cai; Z Feng; D Narayanan; K Murphy", "journal": "", "ref_id": "b78", "title": "Openimages: A public dataset for large-scale multilabel and multi-class image classification", "year": "2017" }, { "authors": "I Krylov; S Nosov; V Sovrasov", "journal": "", "ref_id": "b79", "title": "Open images V5 text annotation and yet another mask text spotter", "year": "2021" }, { "authors": "A Mishra; K Alahari; C V Jawahar", "journal": "", "ref_id": "b80", "title": "Scene text recognition using higher order language priors", "year": "2012" }, { "authors": "A Risnumawan; P Shivakumara; C S Chan; C L Tan", "journal": "Expert Syst. Appl", "ref_id": "b81", "title": "A robust arbitrary text detection system for natural scene images", "year": "2014" }, { "authors": "K Wang; B Babenko; S J Belongie", "journal": "", "ref_id": "b82", "title": "End-to-end scene text recognition", "year": "2011" }, { "authors": "T Q Phan; P Shivakumara; S Tian; C L Tan", "journal": "", "ref_id": "b83", "title": "Recognizing text with perspective distortion in natural scenes", "year": "2013" }, { "authors": "P Goyal; P Dollár; R B Girshick; P Noordhuis; L Wesolowski; A Kyrola; A Tulloch; Y Jia; K He", "journal": "CoRR", "ref_id": "b84", "title": "Accurate, large minibatch SGD: training imagenet in 1 hour", "year": "2017" }, { "authors": "I Loshchilov; F Hutter", "journal": "ICLR", "ref_id": "b85", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "P Micikevicius; S Narang; J Alben; G F Diamos; E Elsen; D García; B Ginsburg; M Houston; O Kuchaiev; G Venkatesh; H Wu", "journal": "ICLR", "ref_id": "b86", "title": "Mixed precision training", "year": "2018" }, { "authors": "E D Cubuk; B Zoph; J Shlens; Q Le", "journal": "", "ref_id": "b87", "title": "Randaugment: Practical automated data augmentation with a reduced search space", "year": "2020" }, { "authors": "X Yang; Z Qiao; J Wei; Y Zhou; Y Yuan; Z Ji; D Yang; W Wang", "journal": "", "ref_id": "b88", "title": "Masked and permuted implicit context learning for scene text recognition", "year": "2023" }, { "authors": "P Moritz; R Nishihara; S Wang; A Tumanov; R Liaw; E Liang; M Elibol; Z Yang; W Paul; M I Jordan", "journal": "", "ref_id": "b89", "title": "Ray: A distributed framework for emerging {AI} applications", "year": "2018" }, { "authors": "O Russakovsky; J Deng; H Su; J Krause; S Satheesh; S Ma; Z Huang; A Karpathy; A Khosla; M S Bernstein; A C Berg; L Fei-Fei", "journal": "Int. J. Comput. 
Vis", "ref_id": "b90", "title": "Imagenet large scale visual recognition challenge", "year": "2015" }, { "authors": "H Touvron; M Cord; M Douze; F Massa; A Sablayrolles; H Jégou", "journal": "", "ref_id": "b91", "title": "Training data-efficient image transformers & distillation through attention", "year": "2021" }, { "authors": "T Ridnik; E Ben-Baruch; A Noy; L Zelnik-Manor", "journal": "", "ref_id": "b92", "title": "Imagenet-21k pretraining for the masses", "year": "2021" }, { "authors": "H Bao; L Dong; S Piao; F Wei", "journal": "", "ref_id": "b93", "title": "Beit: BERT pre-training of image transformers", "year": "2022" }, { "authors": "Y Liu; M Ott; N Goyal; J Du; M Joshi; D Chen; O Levy; M Lewis; L Zettlemoyer; V Stoyanov", "journal": "CoRR", "ref_id": "b94", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "K Zhou; J Yang; C C Loy; Z Liu", "journal": "CoRR", "ref_id": "b95", "title": "Learning to prompt for visionlanguage models", "year": "2021" }, { "authors": "P Gao; S Geng; R Zhang; T Ma; R Fang; Y Zhang; H Li; Y Qiao", "journal": "CoRR", "ref_id": "b96", "title": "Clip-adapter: Better vision-language models with feature adapters", "year": "2021" }, { "authors": "Y Sung; J Cho; M Bansal", "journal": "NeurIPS", "ref_id": "b97", "title": "LST: ladder side-tuning for parameter and memory efficient transfer learning", "year": "2022" }, { "authors": "H Li; A Kadav; I Durdanovic; H Samet; H P Graf", "journal": "", "ref_id": "b98", "title": "Pruning filters for efficient convnets", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 315.56, 286.55, 246.74, 58.11 ], "formula_id": "formula_0", "formula_text": "y 1 0 -∞ -∞ -∞ y 2 0 0 -∞ -∞ y 3 0 0 0 -∞ [E] 0 0 0 0 (a) AR mask [B] y 1 y 2 y 3 y 1 0 -∞ 0 0 y 2 0 0 -∞ 0 y 3 0 0 0 -∞ [E] 0 0 0 0 (b) cloze mask [B] y 1 y 2 y 3 y 1 0 -∞ 0 0 y 2 0 -∞ -∞ -∞ y 3 0 -∞ 0 -∞ [E] 0 0 0 0 (c) random mask" }, { "formula_coordinates": [ 4, 118.17, 545.71, 181.86, 11.72 ], "formula_id": "formula_1", "formula_text": "F t = g(t) ∈ R Lt×D ,(1)" }, { "formula_coordinates": [ 4, 118.36, 561.16, 181.67, 11.72 ], "formula_id": "formula_2", "formula_text": "F i = h(x) ∈ R Li×D ,(2)" }, { "formula_coordinates": [ 4, 117.62, 576.62, 182.41, 12.69 ], "formula_id": "formula_3", "formula_text": "F c = [F T i F T t ] T ∈ R Lc×D ,(3)" }, { "formula_coordinates": [ 4, 117.36, 634.14, 59.17, 9.65 ], "formula_id": "formula_4", "formula_text": "L c = L i + L t ." }, { "formula_coordinates": [ 4, 391.11, 317.51, 171.92, 8.99 ], "formula_id": "formula_5", "formula_text": "y = DEC(p, c, M, F ).(4)" }, { "formula_coordinates": [ 4, 363.71, 367.96, 199.32, 25.19 ], "formula_id": "formula_6", "formula_text": "m 1 = softmax( pc T √ D + M)c + p.(5)" }, { "formula_coordinates": [ 4, 363.68, 418.85, 199.36, 25.19 ], "formula_id": "formula_7", "formula_text": "m 2 = softmax( m 1 F T √ D )F + m 1 .(6)" }, { "formula_coordinates": [ 4, 370.87, 496.61, 192.17, 9.68 ], "formula_id": "formula_8", "formula_text": "y = Linear(MLP(m 2 ) + m 2 ).(7)" }, { "formula_coordinates": [ 4, 381.67, 692.37, 181.37, 11.03 ], "formula_id": "formula_9", "formula_text": "L = CE(y i , ŷ) + CE(y, ŷ),(8)" }, { "formula_coordinates": [ 5, 50.46, 144.46, 153.29, 57.47 ], "formula_id": "formula_10", "formula_text": "T i Output: prediction y // c 1,• denote the 1st row 1 c 1,• ← CTK([B]); 2 F i ← h(x);" }, { "formula_coordinates": [ 5, 46.97, 214.62, 227.15, 143.75 ], "formula_id": "formula_11", "formula_text": "3 y i ← 0; 4 for k ← 1 to N -1 do 5 y i k,• ← Dec i (p i k,• , c 1:k,• , M a 1:k,1:k , F i ); 6 c k+1,• ← CTK(y i k,• ); 7 end // autoregressive cross-modal decode 8 F c ← [F T i g(TTK(y i )) T ] T ; 9 y ← 0; 10 for k ← 1 to N -1 do 11 y k,• ← Dec c (p c k,• , c 1:k,• , M a 1:k,1:k , F c ); 12 c k+1,• ← CTK(y k,• ); 13 end" }, { "formula_coordinates": [ 5, 46.97, 373.33, 182.42, 81.24 ], "formula_id": "formula_12", "formula_text": "14 for k ← 1 to T i do 15 c ← [CTK([B]) T CTK(y i 1:N -1,• ) T ] T ; 16 y i ← Dec i (p i , c, M c , F i ); 17 F c ← [F T i g(TTK(y i )) T ] T ; 18 c ← [CTK([B]) T CTK(y 1:N -1,• ) T ] T ; 19 y ← Dec c (p c , c, M c , F c ); 20 end" } ]
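The decoder equations listed above (Eqs. (5)-(7)) amount to two rounds of scaled dot-product attention, first over the character context c under an additive mask M and then over the encoder features F, followed by an MLP and a linear classification head. Below is a minimal single-head PyTorch sketch of that computation; the class name, tensor sizes, and the toy usage are illustrative assumptions rather than the released CLIP4STR code, and layer normalization and dropout are omitted as in Fig. 4.

```python
import torch
import torch.nn as nn


class MiniDecoderBlock(nn.Module):
    """Single-head sketch of Eqs. (5)-(7): attend to the context c under an
    additive mask M, attend to the encoder features, then an MLP + linear head."""

    def __init__(self, d_model: int, num_classes: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, p, c, M, feats):
        d = p.size(-1)
        # Eq. (5): m1 = softmax(p c^T / sqrt(D) + M) c + p
        m1 = torch.softmax(p @ c.transpose(-2, -1) / d ** 0.5 + M, dim=-1) @ c + p
        # Eq. (6): m2 = softmax(m1 F^T / sqrt(D)) F + m1
        m2 = torch.softmax(m1 @ feats.transpose(-2, -1) / d ** 0.5, dim=-1) @ feats + m1
        # Eq. (7): y = Linear(MLP(m2) + m2)
        return self.head(self.mlp(m2) + m2)


# Toy usage with assumed sizes: batch 2, N = 26 position queries, D = 512, C = 94 classes.
B, N, L, D, C = 2, 26, 16, 512, 94
dec = MiniDecoderBlock(D, C)
p = torch.randn(B, N, D)       # position queries
c = torch.randn(B, N, D)       # character context embeddings
M = torch.zeros(N, N)          # additive attention mask (0 = visible, -inf = hidden)
feats = torch.randn(B, L, D)   # image or cross-modal encoder features
logits = dec(p, c, M, feats)   # (B, N, C)
```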
10.18653/v1/N19-1388
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b32" ], "table_ref": [], "text": "In machine translation research, gender bias has emerged as a significant problem, with recent neural machine translation (NMT) systems exhibiting this bias (Prates et al., 2020). This bias can manifest in various ways, such as the misgendering of individuals based on stereotypes or defaulting to masculine gender translations. As a result, there is a growing need to address and mitigate gender bias in machine translation systems to ensure fair and unbiased translations that accurately reflect the intended meaning without perpetuating gender-based assumptions.\nLe réceptionniste a salué l'avocat...." }, { "figure_ref": [], "heading": "[FR]", "publication_ref": [ "b6", "b9", "b10" ], "table_ref": [], "text": "[EN] Efforts have been made in recent studies to mitigate the gender bias issue in machine translation (Choubey et al., 2021, Saunders and Byrne, 2020, Costa-jussà and de Jorge, 2020). However, most of the works focus on mitigating bilingual NMT models and evaluate on a single language direction. Recently, Costa-jussà et al. (2022) demonstrated that the shared encoder-decoder architecture in multilingual NMT systems leads to worse gender accuracy compared to language-specific modules. Nonetheless, it remains unclear whether the existing debiasing methods would yield similar effectiveness in multilingual NMT models.\nIn this work, we investigate in detail the gender bias issue of multilingual NMT models. We focus on translating unambiguous cases where there is only one correct translation with respect to gender. We consider multiple target languages simultaneously with various gender-based metrics and find that even the state-of-the-art multilingual NMT systems still exhibit a tendency to prefer gender stereotypes in translation.\nTherefore, we propose a new debiasing method for multilingual MT based on a new perspective of the problem. We hypothesize that the gender bias in unambiguous settings is due to the lack of gender information encoded into the nonexplicit gender words and devise a scheme to inject correct gender information into their latent embeddings. Specifically, we develop Gender-Aware Contrastive Learning, GACL, which assigns gender pseudo-labels to text and encodes gender-specific information into encoder text representations. Our method is agnostic to the target translation language as the learning is applied on the encoder side of the model and can be applied to debias pre-trained NMT models through finetuning. We also evaluate whether existing debiasing techniques for bilingual NMT are equally effective for multilingual systems and compare their effectiveness on different target languages.\nExperimental results show that our method is highly effective at improving gender bias metrics for all 12 evaluated languages, with negligible impact on the actual translation performance. We find our approach applicable to various model architectures and very efficient in that it demonstrates significant gender accuracy improvement with just a few thousand steps of fine-tuning. We also discover that the debiasing effects extend to target language directions that are not trained on previous models. 
Through further analysis, we demonstrate that our method effectively incorporates contextual gender information into the model encoder representations.\nIn summary, the contributions of our work are as follows:\n• We find that recent multilingual NMT models still suffer from gender bias and propose GACL, a novel gender debiasing technique for multilingual NMT models based on contrastive learning.\n• To the best of our knowledge, we are the first to show that the gender debiasing effect transfers across other languages on multilingual NMT models that were not fine-tuned.\n• Through extensive evaluation and analysis, we show that our method is effective across multiple architectures while having a negligible impact on translation performance." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose GACL, a gender-aware contrastive learning method for mitigating the unambiguous gender bias issue in multilingual machine translation. Our approach is applicable to multilingual models that have encoder-decoder architectures and support English-to-many translation directions. We filter gender-related data and fine-tune the pre-trained NMT model with the filtered data. The overview of the method is shown in Figure 2." }, { "figure_ref": [], "heading": "Data Filtering and Preprocessing", "publication_ref": [ "b44", "b6" ], "table_ref": [], "text": "We first filter the parallel train data for sentence pairs that contain gendered words in the English source sentence using the gender-related word list by Zhao et al. (2018). We exclude sentences that contain either no gendered words or gendered words of both male and female genders. After filtering the train data, we undersample the larger of the sentence pairs containing male and female gender words so that the number of samples for each gender is the same, in similar fashion to Choubey et al. (2021)." }, { "figure_ref": [], "heading": "Gender-aware Contrastive Loss", "publication_ref": [ "b21", "b17", "b38" ], "table_ref": [], "text": "We devise a contrastive loss that incorporates gender information into the encoder embeddings. Although the optimal approach would be to apply the contrastive scheme exclusively to words that exhibit gender-based translation variations, this varies depending on the translated language and is challenging to know in advance. Hence, we use mean-pooled sentence-level embeddings for our contrastive learning scheme instead. Given h i as the encoder embedding of the source sentence, we define positive samples to be the set of sentence representations that have the same gender as h i and negative samples as the set of representations that have the opposite gender. We correspondingly formulate contrastive loss as follows:\nL (i) GC = - h + ∈H + i log e sim(h i ,h + )/τ h * ∈H + i ∪H - i e sim(h i ,h * )/τ ,\nwhere H + i is the set of positive samples, H - i is the set of negative samples, sim(•, •) is the cosine similarity function, and τ is the temperature hyperparameter. Our formulation is equivalent to the supervised contrastive loss by Khosla et al. (2020), where we use the gender information as pseudo-labels to define positive and negative pairs. In practice, we use positive samples\nH + i = {h ′ i } ∪ {h j |g j = g i }\nwhere h ′ i is the representation based on different dropout seed, as in Gao et al. (2021), and h j are in-batch samples with the same gender marking g i . 
For negative samples, we use\nH - i = {h k |g k ̸ = g i }\nwhere h k are the in-batch samples with different gender markings.\nIn addition to the gender-aware contrastive loss, we train our model with the original machine translation loss to prevent forgetting. We also add knowledge distillation loss with the frozen machine translation model as the teacher model to preserve the translation performance (Shao and Feng, 2022). In sum, our training objective is as follows:\nL train = (1 -α) • L M T + α • L KD + λ • L GC ,\nwhere the machine tranlation loss L M T and the knowledge distillation loss L KD is added with weights based on hyperparameter α, and our proposed loss L GC is added with multiplied hyperparameter λ." }, { "figure_ref": [], "heading": "Experimental Framework", "publication_ref": [], "table_ref": [], "text": "In this section, we describe the details of the experiments, including the data, metrics, baseline methods, training architecture, and parameters." }, { "figure_ref": [], "heading": "Dataset and Metrics", "publication_ref": [ "b40", "b44", "b6", "b11" ], "table_ref": [], "text": "In order to measure the unambiguous gender bias in machine translation systems, we employ two evaluation benchmarks: WinoMT and MT-GenEval.\nWinoMT (Stanovsky et al., 2019) is a widely used gender bias evaluation benchmark consisting of 3,888 English sentences, where each sentence contains an occupation and a gendered coreferential pronoun. WinoMT supports ten target languages: German, French, Italian, Ukrainian, Polish, Hebrew, Russian, Arabic, Spanish, and Czech.\nFour metrics are used to measure the gender bias with the WinoMT dataset.\nAccuracy measures whether the occupation is translated with the correct gender inflection based on the pronoun. The occupation word is determined using source-target alignment algorithm, and the inflected gender is detected using target language-specific morphological analysis.\n∆G = Acc male -Acc f emale measures the difference in accuracy between male sentences and female sentences.\n∆S = Acc pro -Acc anti measures the difference in accuracy between sentences with pro-stereotypical and anti-stereotypical genderoccupation pairings as defined by Zhao et al. (2018).\n∆R = Recall male -Recall f emale , suggested by Choubey et al. (2021), measures the difference in the recall rate of male and female sentences.\nMT-GenEval (Currey et al., 2022) is a recently released gender accuracy evaluation benchmark that provides realistic, gender-balanced sentences in gender-unambiguous settings. We use the counterfactual subset, where for each sentence, there exists a counterfactual version in the set with only the gender changed. MT-GenEval supports eight target languages: Arabic, German, Spanish, French, Hindi, Italian, Portuguese, and Russian.\nFour metrics are used to measure the gender bias with the MT-GenEval dataset.\nAccuracy is measured based on whether the unambiguously gendered occupation has the correct gender inflection. Unlike WinoMT, however, accuracy is measured differently; using the counterfactual variants, words that are unique to a single gender are extracted for each sentence, and a translation is marked correct if the words unique to a different gender are not included in the translation.\nHowever, this definition of accuracy has a problem in that even if the translation is incorrect, it could still be marked correct if the words unique to the different gender are not be contained. 
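To make the looseness of this check concrete, consider the toy sketch below; the counterfactual pair, the gender-unique word sets, and the whitespace tokenization are illustrative assumptions rather than the benchmark's actual implementation.

```python
def unique_words(sent: str, counterfactual: str) -> set:
    """Words that appear in one gender variant but not in its counterfactual."""
    return set(sent.lower().split()) - set(counterfactual.lower().split())

# Hypothetical German counterfactual pair for a female referent.
ref_f = "Die Ärztin sagte , dass sie helfen würde ."
ref_m = "Der Arzt sagte , dass er helfen würde ."
unique_f = unique_words(ref_f, ref_m)   # {"die", "ärztin", "sie"}
unique_m = unique_words(ref_m, ref_f)   # {"der", "arzt", "er"}

def original_accuracy(hyp: str, other_gender_unique: set) -> bool:
    # Marked correct as long as no word unique to the *other* gender appears.
    return not (set(hyp.lower().split()) & other_gender_unique)

correct = "Die Ärztin sagte , dass sie helfen würde ."
degendered = "Die Person sagte , dass Hilfe kommen würde ."   # no gender marking at all
print(original_accuracy(correct, unique_m))      # True
print(original_accuracy(degendered, unique_m))   # True, although the translation is wrong
```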
To avoid this problem, we define an alternative measure of accuracy, denoted Explicit Accuracy. In measuring explicit accuracy, a translation is marked correct if the words unique to different gender are not included in the translation, and the words unique to the same gender are explicitly included in the translation. This definition makes explicit accuracy a stricter version of the original accuracy.\n∆G = Acc male -Acc f emale and E-∆G = ExplicitAcc male -ExplicitAcc f emale measures the difference of male and female sentences in terms of accuracy and explicit accuracy respectively.\nWe use the FLORES-200 (Costa-jussà et al., 2022), a standard multilingual NMT benchmark, to measure the translation performance of our models. FLORES-200 consists of 3,001 sentences sampled from English Wikimedia projects and professionally translated into 200+ languages. We use SentencePiece BLEU (spBLEU) and ChrF++ as evaluation metrics." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b12" ], "table_ref": [], "text": "We compare three baseline methods that have been previously proposed to mitigate gender bias in machine translation.\nBalanced: Costa-jussà and de Jorge (2020) proposed to filter existing parallel corpora for sentences with gender mentions and subsample the data to create a balanced version of the dataset. We fine-tune the machine translation system on the balanced dataset. We use the WMT18 en-de dataset as processed by Edunov et al. (2018).\nGFST: Choubey et al. ( 2021) proposed a method to create a gender-balanced parallel dataset with source-and target-language filtering of pseudoparallel data. In our work, instead of re-training the model with GFST from scratch, we fine-tune the initially trained model with the target-filtered data. We use the same news2018 corpus used in the original work for the monolingual corpus.\nHandcrafted: Saunders and Byrne (2020) proposed to use a small, high-quality handcrafted parallel dataset containing a balanced combination of gendered pronouns and occupations to fine-tune the machine translation system. We use the handcrafted en2de dataset provided by the authors in our work, which consists of 388 samples." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b13" ], "table_ref": [], "text": "We use three pre-trained multilingual model architectures as our backbone for our experiments: M2M-100 (Fan et al., 2020), SMaLL-100 (Mohammadshahi et al., 2022a), which is a knowledgedistilled model from M2M-100, and NLLB-200 (Costa-jussà et al., 2022). Due to resource limitations, we use a 1.2 billion parameter variant for M2M-100 and a 1.3 billion parameter distilled variant for NLLB-200. We train with a batch size of 8 and learning rate of 4e-6 with 200 warmup steps and inverse square root learning rate decay schedule. The hyperparameters for the contrastive learning objective was set to λ = 1.0 and α = 0.4 based on hyperparameter search (refer to Appendix B). We evaluate every 100 training steps and early stop based on the Explicit Accuracy score of the MT-GenEval development set for the fine-tuned language direction. For evaluation, we use beam search of beam size 5 during decoding." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "In this section, we explore the gender bias issue of recent multilingual NMT models and report experimental results of the gender bias mitigation techniques of multilingual machine translation models." 
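Before turning to the individual benchmarks, the training objective described in the Method section can be made concrete with a short sketch. The code below is an illustrative reimplementation rather than the released training code: the batch-level vectorization, the averaging over positives, and the temperature value are assumptions, and the additional dropout-based positive view of each sentence is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def gacl_loss(sent_emb: torch.Tensor, gender: torch.Tensor, tau: float = 0.05):
    """Supervised contrastive loss over mean-pooled encoder sentence embeddings,
    using binary gender pseudo-labels: same-gender in-batch samples are positives,
    different-gender samples are negatives."""
    z = F.normalize(sent_emb, dim=-1)                      # cosine similarity via dot product
    sim = z @ z.t() / tau
    batch = z.size(0)
    self_mask = torch.eye(batch, dtype=torch.bool, device=z.device)
    pos_mask = (gender.unsqueeze(0) == gender.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))        # exclude the anchor itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)        # avoid -inf * 0 on the diagonal
    pos_cnt = pos_mask.sum(1).clamp(min=1)
    return -(log_prob * pos_mask).sum(1).div(pos_cnt).mean()

def train_loss(l_mt, l_kd, l_gc, alpha: float = 0.4, lam: float = 1.0):
    # L_train = (1 - alpha) * L_MT + alpha * L_KD + lambda * L_GC
    return (1 - alpha) * l_mt + alpha * l_kd + lam * l_gc

# Toy check: a balanced batch of 8 sentence embeddings, 4 male- and 4 female-marked.
emb = torch.randn(8, 1024, requires_grad=True)
gender = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(gacl_loss(emb, gender))
```

Because mini-batches are sampled so that both genders are equally represented (Appendix A.3), every anchor has in-batch positives, which keeps the contrastive term well defined.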
}, { "figure_ref": [ "fig_1" ], "heading": "Gender Bias Evaluation of Recent Multilingual NMT Models", "publication_ref": [ "b23" ], "table_ref": [ "tab_1" ], "text": "We first evaluate existing multilingual NMT models on gender bias and analyze their relationship with their translation performance. We test seven model variants of different sizes based on three model architectures: M2M-100, SMaLL-100, and NLLB-200. For gender bias, we evaluate on the WinoMT dataset for all 10 languages and average the results. Similarly for translation performance, we evaluate on the FLORES-200 devtest set on the same 10 language directions and average the results. For the correlation measure between translation performance and gender bias metrics, we use Pearson's correlation coefficient ρ. As shown in Figure 3, we find a strong positive correlation (ρ = 0.98) between the translation performance and the gender accuracy. As shown by a negative correlation of ∆G (ρ = -0.97) and ∆R (ρ = -0.96), the accuracy and recall gap between genders are also reduced as translation performance improves. However, the correlation between translation performance and ∆S is positive (ρ = 0.91), implying that better-performing models rely more on occupation stereotypes rather than the original context for gender disambiguation.\nOverall, we conclude that recent multilingual models continue to show similar tendencies as MT systems previously reported by Kocmi et al. (2020), with positive correlation with gender accuracy, negative correlation with ∆G, and positive correlation with ∆S. This suggests that the development of NMT systems with a unidimensional focus on performance is insufficient to address the gender bias issue, and active consideration is required.\nRecently, ChatGPT2 has shown remarkable performance in various zero-shot NLP tasks, including machine translation. We evaluate the gender bias of ChatGPT (gpt-3.5-turbo) in performing zero-shot machine translation. We use the prompt \"Translate the following sentence into <lang>. <sent>\", where <lang> is the name of target language and <sent> is the source sentence. As shown in Table 1, ChatGPT falls short regarding the spBLEU score but achieves relatively high gender accuracy, surpassing M2M-100 with 54.25. However, we also find that the ∆S is the highest for ChatGPT, indicating that its translation output often relies on gender stereotypes." }, { "figure_ref": [], "heading": "Main Experimental Results", "publication_ref": [ "b12" ], "table_ref": [ "tab_2", "tab_2" ], "text": "We report the results of fine-tuning multilingual NMT model with our GACL method along with other baseline methods. Specifically, we fine-tune the model on a single language direction and observe its effect on the corresponding language direction (denoted in-domain, ID) as well as other language directions (denoted out-of-domain, OOD). We use the WMT18 en-de dataset (Edunov et al., 2018) for fine-tuning on English to German language direction. For evaluation, all target languages supported by the dataset were evaluated. This involves 10 target languages for WinoMT and 8 for MT-GenEval. 
Finally, we use the union of the languages covered by the two datasets for evaluating translation performance on the FLORES-200 dataset, which amounts to 12 target languages.\nAs shown in Table 2, the results on the SMaLL-100 model show that our method, GACL, achieves the greatest improvement in in-domain gender accuracy for both WinoMT and MT-GenEval with 27.3 and 13.0 absolute improvement from the baseline respectively. On the other hand, other baseline methods that were originally proposed for bilingual MT systems proved to be less effective compared to our method. GFST was proved to be the least effective out of the baselines, with less than 3% improvement in gender accuracy, and fine-tuning on the Handcrafted set second was most effective, with 20.6% improvement. Based on these results, we suggest that for multilingual MT models, it is more effective to use a smaller, focused dataset on gender and occupation where the bias of the model is exhibited. metrics in Table 2, gender bias mitigation strategies also have a positive effect on the unseen target languages during fine-tuning, regardless of the method used. This implies that gender-related information is agnostic to the target language, and the debiasing effects are transferred to other languages. However, while other baseline methods have a much lower improvement in OOD than ID, our approach is almost as effective in OOD. Fine-tuning on the Handcrafted set improves WinoMT accuracy by 20.6% in ID and 7.7% in OOD, while GACL approach improves by 25.7% and 20.5% respectively." }, { "figure_ref": [], "heading": "As shown by OOD gender accuracy and |∆G|", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "We report the full results on all evaluated metrics and model architectures in Table 3. We observe that applying GACL improves upon all gender accuracy and bias metrics for all evaluated model architectures. Especially, we find that |∆S| metric of NLLB-200, which scored highest out of all methods before fine-tuning, is reduced to 6.1, the lowest out of all methods. On the other hand, we find that the spBLEU and ChrF++ metrics for M2M-100 and NLLB-200 drop by an average of 0.5 points. We suggest that catastrophic forgetting did not occur during fine-tuning due to the model's fast convergence. Still, the fine-tuning was long enough to significantly improve gender-related translations." }, { "figure_ref": [ "fig_2" ], "heading": "Results for Individual Target Languages", "publication_ref": [ "b34" ], "table_ref": [], "text": "We report the individual results for each target language on the WinoMT dataset in Figure 4. To observe the effect of the target language used during GACL fine-tuning, we evaluate and report on two language directions: English to German (en2de) and English to Turkish (en2tr). In contrast with the German language, which has rich gender morphology, the Turkish language is a gender-neutral language that lacks grammatical gender. We use the same TED2020 corpus (Reimers and Gurevych, 2020) for getting gender-balanced training data to rule out the effect of data domain from our experiments.\nResults show that the en2de-finetuned model has higher gender accuracy than the en2tr-finetuned model by an average of 3.6%. However, using en2tr is quite effective on improving gender accuracy and reducing ∆G on all evaluated target languages. 
Since the target language of Turkish does not contain gender-related words, results suggest that the gender-related knowledge is accessible from the source encoder representations, and our approach is able to mitigate the bias that lies within it." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "We compare the effects of using different contrastive samples in Table 4. First, we observe that using single dropout representation as the positive sample and in-batch sentence representations with different gender as negative samples achieves substantial performance improvement on both gender accuracy and ∆G. We can also see that using just in-batch samples with the same gender as positive samples can similarly improve the gender accuracy. However, we find that incorporating all available samples within the batch for positive samples achieves the best results.\nWe also perform an ablation study on training with just machine translation loss L M T , which is equivalent to the Baseline method, and just the gender-aware contrastive loss L GC . The results shown in Table 3 show that training with L M T on a gender-balanced dataset improves over the baseline by a relatively small amount for all metrics. On the other hand, training with just L GC loss achieves surprisingly high performance on both gender evaluation benchmarks. We point out that training on L GC only updates the encoder parameters of an encoder-decoder model, and thus having genderaware contextual embedding in the encoder representations can be effective. We also observed during the fine-tuning process with only L GC that the performance converges quickly within 200 steps, and upon further fine-tuning, both translation performance and gender accuracy deteriorate very quickly. We thus conclude that losses like L M T and L KD are required to prevent catastrophic forgetting during fine-tuning and enable stable training to get optimal convergence on gender-aware contrastive learning. " }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "In this section, we investigate how our GACL finetuning method affects the model representations regarding gender. Specifically, we use 40 stereotypical occupation words from the WinoMT dataset, where 20 are stereotypically assumed male while the remaining 20 are assumed female. We label the \"stereotypical gender\" of an occupation word as defined by this gender assumption. We then construct a sentence with the template \"He is <occupation>.\" and \"She is <occupation>.\" as encoder input. Here, the pronouns decide the \"contextual gender\" of the occupation word. Finally, a single contextual representation vector is computed by taking the average of encoder output representations of the tokens that make up the occupation word. We use this same representation extraction process for both the baseline SMaLL-100 model and the SMaLL-100 model fine-tuned with GACL.\nTo examine the representations, we employ the t-SNE dimension reduction technique (van der Maaten and Hinton, 2008) to visualize the occupation representations in 2 dimensions, as shown in Figure 5. We observe that the representations for each occupation are clustered closely together regardless of the sentence context and model. This shows that the contextual gender has a relatively little contribution to the representation compared to the semantic meaning of the occupation. 
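A rough sketch of how the occupation representations behind Figure 5 can be extracted and projected is given below; the Hugging Face checkpoint (a smaller M2M-100 variant standing in for the evaluated encoders), the four-occupation subset, and the sub-word span matching are simplifying assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from sklearn.manifold import TSNE

MODEL = "facebook/m2m100_418M"   # stand-in checkpoint; the study uses SMaLL-100 / M2M-100 1.2B
tok = AutoTokenizer.from_pretrained(MODEL, src_lang="en")
encoder = AutoModelForSeq2SeqLM.from_pretrained(MODEL).get_encoder().eval()

occupations = ["nurse", "developer", "secretary", "mechanic"]   # illustrative subset of the 40
vectors, labels = [], []
with torch.no_grad():
    for occ in occupations:
        for pron in ("He", "She"):
            enc_in = tok(f"{pron} is {occ}.", return_tensors="pt")
            hidden = encoder(**enc_in).last_hidden_state[0]          # (seq_len, dim)
            # approximate the occupation span by matching its sub-word pieces
            occ_ids = tok(occ, add_special_tokens=False)["input_ids"]
            seq = enc_in["input_ids"][0].tolist()
            start = next((i for i in range(len(seq))
                          if seq[i:i + len(occ_ids)] == occ_ids), 1)
            vectors.append(hidden[start:start + len(occ_ids)].mean(0))
            labels.append((occ, pron))                               # kept for plotting

points_2d = TSNE(n_components=2, perplexity=3).fit_transform(torch.stack(vectors).numpy())
```

The same pooled vectors can be reused for the k-means clustering analysis that follows.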
Also, our fine-tuning method induces a relatively small change, preserving the semantic distinction between the occupations. Finally, we note that the average distance between representations of different contexts is farther apart for the GACL representations (0.59) than the baseline representations (0.19), suggesting that the contrastive objective has guided the model to differentiate the occupation based on the gender context.\nTo investigate how much gender information is encoded within the embeddings in-depth, we perform k-means clustering with k = 2 on the occupation embeddings, and evaluate the cluster quality based on stereotypical and contextual gender as label assignments. Based on the Purity and Normalized Mutual Information (NMI) metrics, we see that clusters for both models have negligible alignment with the stereotypical gender assignment (Figure 6). On the other hand, we find that clustering based on GACL embeddings is very well aligned with the contextual gender, while the baseline model continues to be misaligned. This shows that GACL representations capture contextual gender information significantly better than the baseline representations." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b18", "b30", "b33", "b25", "b24", "b43", "b39", "b26", "b1", "b4", "b31", "b19", "b0", "b41", "b5", "b3", "b38", "b37", "b14" ], "table_ref": [], "text": "Gender Bias in NLP Chung et al. ( 2022) point out that the existing English-centric Large Language Models (LLMs) suffer from gender-stereotypical words. However, fixing biases that are deeply ingrained in hidden representations is a challenging task (Gonen and Goldberg, 2019;Orgad and Belinkov, 2022). Previous researchers such as Ravfogel et al. (2020); Kumar et al. (2020); Gaci et al. (2022b) debias the hidden representation using learning-based methods. However, Kumar et al. (2022) point out the limitations of such studies and recommend the use of data augmentation techniques (Webster et al., 2020;Sharma et al., 2021;Lauscher et al., 2021). In addition to aforementioned research, Attanasio et al. (2022) and Gaci et al. (2022a) focus on biased attention weights, and Cheng et al. (2021) and He et (2022) use a contrastive learning scheme to reduce bias in sentence embeddings and language models respectively.\nMultilingual Machine Translation Studies such as mBERT (Pires et al., 2019) and XLM-R (Goyal et al., 2021) have shown that it is possible to train language models on multiple languages simultaneously, a method referred to as multilingual training. Recent research has proven that multilingual training contributes to a positive impact on NMT (Aharoni et al., 2019;Tran et al., 2021;Chiang et al., 2022). According to Carrión-Ponz and Casacuberta (2022), by training multilingual NMT models further with a few-shot regularization, a decrease in the performance can be prevented. Knowledge distillation also helps NMT models preserve their original performance (Shao and Feng, 2022). 2022) employ data augmentation-based approach to reduce gender bias of MT models. In addition, Saunders and Byrne (2020) propose utilizing a transfer-learning method, and Savoldi et al. (2021) develop a unified framework to tackle the biases. However, these works mostly do not consider multilingual NMT models that support multiple language directions. 
Alternatively, Fleisig and Fellbaum (2022) propose an adversarial learning framework to mitigate gender bias in machine translation models by removing gender information when the input has masked gender context. Our approach, on the other hand, injects the correct contextual gender information from encoder output contrastively when given inputs have gender contexts." }, { "figure_ref": [], "heading": "Gender Bias in Machine Translation", "publication_ref": [ "b2" ], "table_ref": [], "text": "In the case of multilingual machine translation, Costa-jussà et al. ( 2022) have shown that the shared encoder-decoder architecture of multilingual NMT models has a negative effect on the gender bias. Also, Mohammadshahi et al. (2022b) investigate how the multilingual model compression affects gender bias. Contemporary work by Cabrera and Niehues (2023) examines gender preservation in zero-shot multilingual machine translation. However, no existing work that the authors are aware of specifically considers mitigating the unambiguous gender bias of multilingual NMT models for multiple language directions simultaneously.\nIn this work, we conducted an investigation into the gender bias of multilingual NMT models, specifically focusing on unambiguous cases and evaluating multiple target languages. Our findings indicated that even state-of-the-art multilingual NMT systems tend to exhibit a preference for gender stereotypes in translations. We then proposed a novel debiasing method, Gender-Aware Contrastive Learning (GACL), which injects contextually consistent gender information into latent embeddings. Our experiments demonstrated that GACL effectively improves gender accuracy and reduces gender performance gaps in multilingual NMT models, with positive effects extending to target languages not included in fine-tuning. These findings highlight the importance of addressing gender bias in machine translation and provide a promising approach to mitigate it in multilingual NMT systems." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The gender debiasing process in this study relies on a curated list of gender word pairs to identify and filter gendered terms in the dataset. However, this approach may not cover the whole of the gendered terms present in the world. The limited coverage of gendered terms could potentially introduce biases and inaccuracies in the evaluation results, as certain gendered terms may be missed or not appropriately accounted for.\nFurthermore, our method only deals with binary gender, and do not consider the possible representations of non-binary genders and their bias in translation. As languages vary in their use of grammatical gender and they often lack clearly defined rules or established linguistic structures for non-binary genders, it is especially challenging to evaluate and mitigate bias in this context. This limitation highlights the need for further research and consideration of more diverse gender representations in bias mitigation.\nWhile we extend gender bias evaluation to multilingual settings, this study is still limited to the provided target languages, which predominantly include medium-to-high resource languages. Due to a lack of evaluation data, the evaluated source language is also limited to English. Consequently, the findings and conclusions may not be representative of the gender biases present in languages that are not included in the evaluation. 
The limitations in language coverage may restrict the generalizability of the study's results to a broader linguistic context.\nThe focus of this study is primarily on evaluating gender bias in unambiguous settings where the intended gendered terms are clear. However, the investigation of gender bias in ambiguous settings, where the gendered term can have multiple interpretations, is not addressed in this study. Consequently, the study does not provide insights into potential biases in ambiguous gendered language use, which can also contribute to societal biases and stereotypes.\nOur work focuses on recent NMT models based on the encoder-decoder architecture, and hence the effect of our method on decoder-only NMT models remains unverified. Nevertheless, our approach is applicable to other model architectures as long as we can aggregate a representation of the source input sentence. Specifically for decoder-only architectures, one feasible strategy would be to pool the decoder model outputs of tokens up to given source input sentence. As decoder-only large language models such as ChatGPT are increasingly being considered for the MT task, we believe this is an interesting direction for future work." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "Our work attempts to reduce bias and misrepresentations in translation of masculine and feminine gendered referents. Our methodology and evaluation has been limited to considering binary genders, which overlooks non-binary genders and correspondingly doesn't consider bias in genderinclusive and gender-neutral translations. Possible mitigations include extending the translation data to incorporate sentences with gender-neutral inflections and defining a separate gender pseudo-label for applying proposed contrastive loss. The lack of flexibility in gender could also be mitigated by extending our work to controlled generation where the preferred gender inflection is given as input.\nFurthermore, in our work, the evaluated source language was limited to English, and evaluated target languages are mostly of high-resource. Correspondingly, our work may under-represent bias found in unevaluated low-resource languages. However, the findings in our work show potential gender debiasing effects transferring to non fine-tuned languages, and extending gender bias evaluation resources to include low-resource languages may help analyze and mitigate gender bias for those languages." }, { "figure_ref": [], "heading": "A Experimental Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 ChatGPT Generation", "publication_ref": [], "table_ref": [], "text": "For our experiments using ChatGPT, we use the API provided by OpenAI for generating text, using the model gpt-3.5-turbo. For a small number of cases, ChatGPT generated multiple lines of text delimited by newline characters, leading to errors in source-target alignment during WinoMT evalution. For these cases, we split the sentence based on the newline character and take the first sentence as the translation." }, { "figure_ref": [], "heading": "A.2 Data Preprocessing Details", "publication_ref": [ "b12" ], "table_ref": [], "text": "In our work, we use the cleaned WMT18 en-de dataset, as pre-processed by Edunov et al. (2018). By filtering for sentences with gender-related words, we find 415,401 masculine and 86,431 feminine sentence sets. 
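As an illustration, the keyword-based filtering step can be sketched as follows; the word lists, tokenization, and the handling of sentences that match both lists are simplifying assumptions rather than the exact pipeline:

```python
def split_by_gender(sentences, masc_words, fem_words):
    """Bucket sentences by the gendered terms they contain (illustrative sketch).

    masc_words / fem_words: curated gender word pairs, e.g. {"he", "him", ...}
    and {"she", "her", ...}. Sentences matching both lists are dropped here
    for simplicity; the actual pipeline may treat them differently.
    """
    masc_words, fem_words = set(masc_words), set(fem_words)
    masc, fem = [], []
    for sent in sentences:
        tokens = set(sent.lower().split())
        has_m, has_f = bool(tokens & masc_words), bool(tokens & fem_words)
        if has_m and not has_f:
            masc.append(sent)
        elif has_f and not has_m:
            fem.append(sent)
    return masc, fem
```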
Next, masculine sentence sets are undersampled with random sampling while all samples from feminine set are used to create a final balanced set of total 2*86,431=172,862 samples. We do not perform additional processing to handle other domain differences except gender." }, { "figure_ref": [], "heading": "A.3 Fine-tuning Details", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "For our experiments, hyperparameter search was done manually with learning rate from {2e-6, 4e-6, 8e-6}, batch size from {4, 8, 16, 32}, and learning rate warmup steps from {100, 200, 400} based on fine-tuning GACL on the SMaLL-100 architecture and metric based on explicit accuracy on the MT-GenEval development set. Our final selection of hyperparameters is then used for all experiments in our paper. One exception for fine-tuning GACL with NLLB-200 architecture is we set the learning rate to 8e-6 due to its slower convergence during training in comparison to other architectures. For fine-tuning with our GACL method, we use balanced random sampling so that the number of sentences for each gender are equal within one mini-batch.\nThe fine-tuned dataset size and number of finetuning steps before early stopping is shown in Table 5. We notice that Balanced and GFST data augmentation based methods trained for more than one thousands steps before early stopping. On the other hand, our GACL method and Handcrafted method stopped fine-tuning within one thousand steps.\nWe fine-tuned the SMaLL-100 model on 1 NVIDIA A6000 GPU, and fine-tuned M2M-100 1.2B and NLLB-200 1.3B distilled models on 1 NVIDIA A100 80GB GPU. We use the pre-trained model checkpoints downloaded from the Huggingface website. 3" }, { "figure_ref": [], "heading": "B Additional Experiments B.1 Many2many Translation Performance", "publication_ref": [], "table_ref": [ "tab_7", "tab_8" ], "text": "The 12 target languages covered by the WinoMT and MT-GenEval are mostly categorized as high-resource languages (Mohammadshahi et al., 2022a). Thus, we extend our FLORES-200 evaluation to languages of low and medium resources for both source and target languages to more accurately analyze the impact of our approach on multilingual translation performance.\nFor this experiment, we use the four resource levels defined by Mohammadshahi et al. (2022a) 6 and7." }, { "figure_ref": [], "heading": "B.2 Ablation Results on Hyperparameter α", "publication_ref": [], "table_ref": [], "text": "We report the effects of changing the hyperparmeter α used to determine the relative weight between L M T and L KD in the joint training loss we employed during fine-tuning. As shown in Figure 7, we find that for translation performance, setting α to 0.4 performs the best, while using a single loss of either L M T (i.e. α = 0) and L KD (i.e. α = 1) performs slightly worse. For gender accuracy, we find that trends are not very clear, with α set to 1.0 being the most effective and α of 0.4 second most effective. Based on these findings, we choose to use α value of 0.4 for the rest of the experiments in this paper." }, { "figure_ref": [ "fig_6" ], "heading": "B.3 Relationship between translation performance and gender bias metrics for each language", "publication_ref": [], "table_ref": [], "text": "In Figure 8, we report results on the relationship between translation performance and gender bias metrics for each language. We observe similar correlations between translation performance and gender bias metrics of multilingual MT systems across each independent target languages. 
However, the slope of the correlation differs by the target language." }, { "figure_ref": [], "heading": "B.4 Gender bias evaluation results for each target language", "publication_ref": [], "table_ref": [ "tab_9", "tab_10" ], "text": "We report the evaluation results on WinoMT and MT-GenEval datasets for each of the supported target languages individually in Tables 8 and9. For all evaluations, the source language is fixed to English, as it is the only provided source language for the datasets." }, { "figure_ref": [], "heading": "B.5 Statistical significance tests", "publication_ref": [], "table_ref": [ "tab_3", "tab_3", "tab_12" ], "text": "We share the results on statistical significance testing between our GACL model (Table 3; row 2) and our ablation fine-tuned with L GC only (Table 3; row 4) which scored closely in accuracy scores. We conduct a paired randomized permutation test with the number of resamples N set to 100,000.\nThe p-values from the test are shown in Table 10. We found that the GACL method shows higher accuracy than the ablation on 10 out of 10 evaluated languages on WinoMT dataset and 4 out of 8 evaluated languages on MT-GenEval dataset directions with statistical significance of p-value less than 0.05." }, { "figure_ref": [], "heading": "B.6 Gender evaluation on the secondary entity of the WinoMT dataset", "publication_ref": [], "table_ref": [ "tab_13" ], "text": "The WinoMT dataset is comprised of sentences that mention two entities; one entity has an unambiguous gender indicated by a coreferential pronoun, while the gender of the other entity is ambiguous.\nIn this subsection, we evaluate the effect of gender debiasing methods on this secondary, genderunspecified entity in the WinoMT dataset. We use the variation dataset proposed by Saunders et al. (2020) and show the results in Table 11. We found that all previous methods as well as ours lead to an consistent increase in gender accuracy of secondary entity by about 5% compared to gender accuracy of primary entity, which is consistent with findings by Saunders et al. (2020). Note that none of the evaluated methods, including ours, explicitly account for entities with ambiguous gender and this issue is left for future research." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements This research was supported by the MSIT(Ministry of Science, ICT), Korea, under the High-Potential Individuals Global Training Program)(2022-00155958) supervised by the IITP(Institute for Information & Communications Technology Planning & Evaluation) (Contribution: 50%), and Institute of Information & communications Technology Planning & Evaluation(IITP) grant funded by the Korea government(MSIT) [No. 2022-0-00184, Development and Study of AI Technologies to Inexpensively Conform to Evolving Policy on Ethics], and Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) [NO.2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University)], and Microsoft Research Asia. K. Jung is with Automation and Systems Research Institute (ASRI), Seoul National University." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "* Work done during internship at MSRA." } ]
Gender bias is a significant issue in machine translation, leading to ongoing research efforts in developing bias mitigation techniques. However, most works focus on debiasing bilingual models without much consideration for multilingual systems. In this paper, we specifically target the gender bias issue of multilingual machine translation models for unambiguous cases where there is a single correct translation, and propose a bias mitigation method based on a novel approach. Specifically, we propose Gender-Aware Contrastive Learning, GACL, which encodes contextual gender information into the representations of non-explicit gender words. Our method is target language-agnostic and is applicable to pre-trained multilingual machine translation models via fine-tuning. Through multilingual evaluation, we show that our approach improves gender accuracy by a wide margin without hampering translation performance. We also observe that incorporated gender information transfers and benefits other target languages regarding gender accuracy. Finally, we demonstrate that our method is applicable and beneficial to models of various sizes.
Target-Agnostic Gender-Aware Contrastive Learning for Mitigating Bias in Multilingual Machine Translation
[ { "figure_caption": "Figure 1 :1Figure 1: Example sentence from the WinoMT benchmark and the corresponding translation outputs of multilingual NMT systems. Existing systems often fail to translate with correct gender inflections.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Relationships between translation performance and gender bias metrics of multilingual NMT models. Each point represents the average score of an NMT system on the ten target languages of the WinoMT dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Gender accuracy and ∆G of individual target languages supported by WinoMT. Results of the SMaLL-100 baseline model, model fine-tuned on en2de dataset, and model fine-tuned on en2tr dataset are reported.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 55Figure 5: t-SNE visualization of encoder output representation of occupation words. Circle and cross markers denote embeddings of the original SMaLL-100 model and GACL-finetuned variant, respectively. The colors of the marker denote the context of the occupation word.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure6: Clustering results on occupation word embeddings based on stereotypical and contextual gender label assignments. Lower scores are better for stereotypical gender labels, and higher scores are better for contextual gender labels.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Bilingual NMT models have been shown to be easily exposed to gender biases (Prates et al., 2020). Correspondingly, Zhao et al. (2018); Choubey et al. (2021); Currey et al. (", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Relationships between translation performance and gender bias metrics of multilingual NMT model for various evaluated target languages.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Acc. ↑ ∆G |↓| ∆S |↓| Average spBLEU score and WinoMT metrics of NMT systems and ChatGPT (gpt-3.5-turbo) on the ten target languages of the WinoMT dataset. † ChatGPT spBLEU score is obtained from Lu et al. (2023).", "figure_data": "FLORES-200WinoMTMethodspBLEU ↑SMaLL-10032.0246.2531.439.85M2M-10034.6049.6626.8518.14NLLB-20040.9262.449.3724.94ChatGPT27.02 †54.2521.0924.95", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Main experimental results on the WinoMT, MT-GenEval, and FLORES-200 datasets for the SMaLL-100 model. The in-domain (ID) setting signifies the en-de language direction in which the model is fine-tuned, while the out-of-domain (OOD) setting encompasses the remaining language directions supported by the dataset.", "figure_data": "WinoMTMT-GenEvalFLORES-200IDOODIDOODIDOODMethodAcc. ↑∆G |↓|Acc. ↑|∆G| ↓E-Acc. ↑E-∆G |↓|E-Acc. ↑|E-∆G| ↓spBLEU ↑spBLEU ↑Baseline57.424.2 45.032.254.710.042.327.136.032.7Balanced72.93.8 49.722.456.78.344.123.935.432.4GFST59.920.9 46.231.157.310.341.927.235.532.4Handcrafted78.0-1.5 52.718.458.74.045.122.435.832.7GACL (Ours)84.7-3.7 65.58.167.7-2.056.212.936.032.7WinoMTMT-GenEvalFLORES-200MethodAcc. ↑|∆G| ↓|∆S| ↓|∆R| ↓E-Acc. 
↑|E-∆G| ↓Acc. ↑|∆G| ↓spBLEU ↑ChrF++ ↑SMaLL-100Baseline46.231.49.957.543.825.0 57.925.633.052.3GACL (Ours)67.47.76.418.457.611.5 73.211.133.052.2-with LMT Only52.020.69.344.245.722.0 61.321.632.651.8-with LGC Only63.59.07.521.755.313.0 71.612.732.551.7M2M-100Baseline49.726.818.253.944.923.5 58.023.935.553.6GACL (Ours)71.46.47.315.559.38.8 73.39.234.953.1NLLB-200Baseline61.79.024.529.357.116.7 66.416.840.557.2GACL (Ours)78.23.96.16.269.95.9 80.45.540.056.9", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Dataset size and number of training steps before early stopping for the fine-tuning experiments in our work.", "figure_data": "MethodDataset size # StepsSMaLL-100Balanced172,8623,700GFST1,802,8321,900Handcrafted388200GACL172,862700-with LGC only-200M2M-100GACL172,862400NLLB-200GACL172,86280033.1533.1067.5 68.0spBLEU32.95 33.00 33.0565.5 66.0 66.5 67.0 Gender Accuracy32.900.00.20.40.60.81.065.0Figure 7: Experimental results on tuning hyperparame-ter α. the spBLEU scores are the average score on 10language directions on FLORES200 development setand gender accuracy is the average explicit accuracy onthe MT-GenEval development set.", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Many2many translation performance evaluation by resource level on FLORES-200 using the spBLEU metric. X2X represents the total average score.", "figure_data": "FLORES-200 ChrF++", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Many2many translation performance evaluation by resource level on FLORES-200 using the ChrF++ metric. X2X represents the total average score.", "figure_data": "(VL), and evaluate many2many translation perfor-mance using spBLEU. Due to resource limitations,we sample 5 languages for each resource groupand evaluate across all sampled language direc-tions. The languages used are as follows: High re-source languages include French, German, Italian,Russian, and Spanish (Latin America). Mediumresource languages are Arabic, Bulgarian, Chi-nese (Simplified), Korean, and Turkish. Low re-source languages are Afrikaans, Amharic, Bosnian,Cebuano, and Kazakh. Very Low resource lan-guages are Belarusian, Croatian, Filipino (Taga-log), Nepali, and Occitan. The averaged results byresource group are reported in Tables", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "∆G Acc. ∆G Acc. ∆G Acc. ∆G Acc. ∆G Acc. ∆G Acc. ∆G Acc. ∆G Acc. ∆G Acc. ∆G Acc. ∆G Baseline 57.4 24.2 35.8 27.0 50.3 24.3 40.7 38.2 53.3 26.4 38.8 35.5 47.7 21.0 45.6 39.5 44.2 42.3 48.9 35.5 46.2 31.4 Balanced 72.9 3.8 38.8 22.6 59.6 7.9 43.4 29.1 64.7 8.0 40.1 32.5 49.2 17.0 50.3 28.1 46.8 34.0 54.3 22.8 52.0 20.6 GFST 59.9 20.9 36.3 27.8 54.2 17.0 41.4 38.3 55.9 21.8 39.6 36.2 47.9 22.0 46.5 40.9 43.8 41.8 49.8 34.5 47.5 30.1 Handcrafted 78.0 -1.5 38.1 22.7 63.9 3.2 49.4 19.3 69.8 2.6 42.3 28.9 51.5 13.3 51.6 25.6 50.6 29.8 57.1 20.1 55.2 16.4 GACL (Ours) 84.8 -3.7 46.8 10.8 78.4 -7.8 67.4 -2.5 85.5 -5.1 52.5 15.0 64.3 Accuracy and ∆G scores on the 10 target languages of the WinoMT dataset.", "figure_data": "Target lang.derufritesukhearplcsAVGAcc. 
2.8 66.14.7 61.1 14.4 67.49.7 67.43.9Target lang.derufritesptarhiAVGAcc.∆GAcc.∆GAcc.∆GAcc.∆GAcc.∆GAcc.∆GAcc.∆GAcc.∆GAcc.∆G", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Accuracy and ∆G scores on the 8 target languages of the MT-GenEval test set.", "figure_data": "", "figure_id": "tab_10", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "P-values from randomized pairwise permutation test between accuracy scores of GACL and ablation. * denotes evaluations that are not stastistically significant with respect to threshold of 0.05.", "figure_data": "MethodAccprim. Accsec.∆Baseline46.348.62.3GFST47.552.24.7Handcrafted55.259.03.8GACL (Ours)67.472.75.3", "figure_id": "tab_12", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Gender \"Accuracy\" of the primary and secondary entities in the WinoMT dataset.", "figure_data": "", "figure_id": "tab_13", "figure_label": "11", "figure_type": "table" } ]
Minwoo Lee; Hyukhun Koh; Kang-Il Lee; Dongdong Zhang; Minsung Kim; Kyomin Jung
[ { "authors": "Roee Aharoni; Melvin Johnson; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Massively multilingual neural machine translation", "year": "2019" }, { "authors": "Giuseppe Attanasio; Debora Nozza; Dirk Hovy; Elena Baralis", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Entropy-based attention regularization frees unintended bias mitigation from lists", "year": "2022" }, { "authors": "Lena Cabrera; Jan Niehues", "journal": "European Association for Machine Translation", "ref_id": "b2", "title": "Gender lost in translation: How bridging the gap between languages affects gender bias in zero-shot multilingual translation", "year": "2023" }, { "authors": "Salvador Carrión; -Ponz ; Francisco Casacuberta", "journal": "Association for Machine Translation in the Americas", "ref_id": "b3", "title": "Few-shot regularization to tackle catastrophic forgetting in multilingual machine translation", "year": "2022" }, { "authors": "Pengyu Cheng; Weituo Hao; Siyang Yuan; Shijing Si; Lawrence Carin", "journal": "", "ref_id": "b4", "title": "Fairfil: Contrastive neural debiasing method for pretrained text encoders", "year": "2021" }, { "authors": "Ting-Rui Chiang; Yi-Pei Chen; Yi-Ting Yeh; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Breaking down multilingual machine translation", "year": "2022" }, { "authors": "Prafulla Kumar Choubey; Anna Currey; Prashant Mathur; Georgiana Dinu", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "GFST: Genderfiltered self-training for more accurate gender in translation", "year": "2021" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Yunxuan Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Alex Chowdhery; Marie Castro-Ros; Kevin Pellat; Dasha Robinson; Sharan Valter; Gaurav Narang; Adams Mishra; Vincent Yu; Yanping Zhao; Andrew Huang; Hongkun Dai; Slav Yu; Ed H Petrov; Jeff Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b7", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "James Marta R Costa-Jussà; Onur Cross; Maha Çelebi; Kenneth Elbayad; Kevin Heafield; Elahe Heffernan; Janice Kalbassi; Daniel Lam; Jean Licht; Maillard", "journal": "", "ref_id": "b8", "title": "No language left behind: Scaling human-centered machine translation", "year": "2022" }, { "authors": "Marta R Costa-Jussà; Adrià De; Jorge ", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Finetuning neural machine translation on gender-balanced datasets", "year": "2020" }, { "authors": "Marta R Costa-Jussà; Carlos Escolano; Christine Basta; Javier Ferrando; Roser Batlle; Ksenia Kharitonova", "journal": "", "ref_id": "b10", "title": "Interpreting gender bias in neural machine translation: Multilingual architecture matters", "year": "2022" }, { "authors": "Anna Currey; Maria Nadejde; Raghavendra Reddy Pappagari; Mia Mayer; Stanislas Lauly; Xing Niu; Benjamin Hsu; Georgiana Dinu", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "MT-GenEval: A counterfactual and contextual dataset for evaluating gender accuracy in machine translation", "year": "2022" }, { "authors": "Sergey Edunov; Myle Ott; Michael Auli; David Grangier", "journal": 
"Association for Computational Linguistics", "ref_id": "b12", "title": "Understanding back-translation at scale", "year": "2018" }, { "authors": "Angela Fan; Shruti Bhosale; Holger Schwenk; Zhiyi Ma; Ahmed El-Kishky; Siddharth Goyal; Mandeep Baines; Onur Celebi; Guillaume Wenzek; Vishrav Chaudhary; Naman Goyal; Tom Birch; Vitaliy Liptchinsky; Sergey Edunov; Edouard Grave; Michael Auli; Armand Joulin", "journal": "", "ref_id": "b13", "title": "Beyond english-centric multilingual machine translation", "year": "2020" }, { "authors": "Eve Fleisig; Christiane D Fellbaum", "journal": "", "ref_id": "b14", "title": "Mitigating gender bias in machine translation through adversarial learning", "year": "2022" }, { "authors": "Yacine Gaci; Boualem Benatallah; Fabio Casati; Khalid Benabdeslem", "journal": "", "ref_id": "b15", "title": "Debiasing pretrained text encoders by paying attention to paying attention", "year": "2022" }, { "authors": "Yacine Gaci; Boualem Benatallah; Fabio Casati; Khalid Benabdeslem", "journal": "", "ref_id": "b16", "title": "Iterative adversarial removal of gender bias in pretrained word embeddings", "year": "2022" }, { "authors": "Tianyu Gao; Xingcheng Yao; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "SimCSE: Simple contrastive learning of sentence embeddings", "year": "2021" }, { "authors": "Hila Gonen; Yoav Goldberg", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them", "year": "2019" }, { "authors": "Naman Goyal; Jingfei Du; Myle Ott; Giri Anantharaman; Alexis Conneau", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Larger-scale transformers for multilingual masked language modeling", "year": "2021" }, { "authors": "Jacqueline He; Mengzhou Xia; Christiane Fellbaum; Danqi Chen", "journal": "", "ref_id": "b20", "title": "MABEL: Attenuating gender bias using textual entailment data", "year": "2022" }, { "authors": "Prannay Khosla; Piotr Teterwak; Chen Wang; Aaron Sarna; Yonglong Tian; Phillip Isola; Aaron Maschinot; Ce Liu; Dilip Krishnan", "journal": "", "ref_id": "b21", "title": "Supervised contrastive learning", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b22", "title": "", "year": "" }, { "authors": "Tom Kocmi; Tomasz Limisiewicz; Gabriel Stanovsky", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Gender coreference and bias evaluation at WMT 2020", "year": "2020" }, { "authors": "Abhinav Kumar; Chenhao Tan; Amit Sharma", "journal": "", "ref_id": "b24", "title": "Probing classifiers are unreliable for concept removal and detection", "year": "2022" }, { "authors": "Vaibhav Kumar; Tenzin Singhay Bhotia; Vaibhav Kumar; Tanmoy Chakraborty", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b25", "title": "Nurse is closer to woman than surgeon? 
mitigating gender-biased proximities in word embeddings", "year": "2020" }, { "authors": "Anne Lauscher; Tobias Lueken; Goran Glavaš", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Sustainable modular debiasing of language models", "year": "2021" }, { "authors": "Hongyuan Lu; Haoyang Huang; Dongdong Zhang; Haoran Yang; Wai Lam; Furu Wei", "journal": "", "ref_id": "b27", "title": "Chainof-dictionary prompting elicits translation in large language models", "year": "2023" }, { "authors": "Alireza Mohammadshahi; Vassilina Nikoulina; Alexandre Berard; Caroline Brun; James Henderson; Laurent Besacier", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "a. SMaLL-100: Introducing shallow multilingual machine translation model for low-resource languages", "year": "2022" }, { "authors": "Alireza Mohammadshahi; Vassilina Nikoulina; Alexandre Berard; Caroline Brun; James Henderson; Laurent Besacier", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "What do compressed multilingual machine translation models forget?", "year": "2022" }, { "authors": "Hadas Orgad; Yonatan Belinkov", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Choose your lenses: Flaws in gender bias evaluation", "year": "2022" }, { "authors": "Telmo Pires; Eva Schlinger; Dan Garrette", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "How multilingual is multilingual BERT?", "year": "2019" }, { "authors": "Pedro H Marcelo Or Prates; Luís C Avelar; Lamb", "journal": "Neural Computing and Applications", "ref_id": "b32", "title": "Assessing gender bias in machine translation: a case study with google translate", "year": "2020" }, { "authors": "Shauli Ravfogel; Yanai Elazar; Hila Gonen; Michael Twiton; Yoav Goldberg", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Null it out: Guarding protected attributes by iterative nullspace projection", "year": "2020" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Making monolingual sentence embeddings multilingual using knowledge distillation", "year": "2020" }, { "authors": "Danielle Saunders; Bill Byrne", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Reducing gender bias in neural machine translation as a domain adaptation problem", "year": "2020" }, { "authors": "Danielle Saunders; Rosie Sallis; Bill Byrne", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "Neural machine translation doesn't translate gender coreference right unless you make it", "year": "2020" }, { "authors": "Beatrice Savoldi; Marco Gaido; Luisa Bentivogli; Matteo Negri; Marco Turchi", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b37", "title": "Gender bias in machine translation", "year": "2021" }, { "authors": "Chenze Shao; Yang Feng", "journal": "", "ref_id": "b38", "title": "Overcoming catastrophic forgetting beyond continual learning: Balanced training for neural machine translation", "year": "2022" }, { "authors": "Shanya Sharma; Manan Dey; Koustuv Sinha", "journal": "", "ref_id": "b39", "title": "Evaluating gender bias in natural language inference", "year": "2021" }, { "authors": "Gabriel Stanovsky; Noah A Smith; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Evaluating 
gender bias in machine translation", "year": "2019" }, { "authors": "Chau Tran; Shruti Bhosale; James Cross; Philipp Koehn; Sergey Edunov; Angela Fan", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Facebook AI's WMT21 news translation task submission", "year": "2021" }, { "authors": "Laurens Van Der Maaten; Geoffrey E Hinton", "journal": "Journal of Machine Learning Research", "ref_id": "b42", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "Kellie Webster; Xuezhi Wang; Ian Tenney; Alex Beutel; Emily Pitler; Ellie Pavlick; Jilin Chen; Ed H Chi; Slav Petrov", "journal": "", "ref_id": "b43", "title": "Measuring and reducing gendered correlations in pre-trained models", "year": "2020" }, { "authors": "Jieyu Zhao; Tianlu Wang; Mark Yatskar; Vicente Ordonez; Kai-Wei Chang", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Gender bias in coreference resolution: Evaluation and debiasing methods", "year": "2018" } ]
[ { "formula_coordinates": [ 3, 71.88, 94.45, 216.25, 37.26 ], "formula_id": "formula_0", "formula_text": "L (i) GC = - h + ∈H + i log e sim(h i ,h + )/τ h * ∈H + i ∪H - i e sim(h i ,h * )/τ ," }, { "formula_coordinates": [ 3, 70.87, 252.6, 121.87, 14.83 ], "formula_id": "formula_1", "formula_text": "H + i = {h ′ i } ∪ {h j |g j = g i }" }, { "formula_coordinates": [ 3, 87.61, 306.8, 90.38, 14.83 ], "formula_id": "formula_2", "formula_text": "H - i = {h k |g k ̸ = g i }" }, { "formula_coordinates": [ 3, 74.78, 446.19, 210.45, 10.68 ], "formula_id": "formula_3", "formula_text": "L train = (1 -α) • L M T + α • L KD + λ • L GC ," } ]
2023-05-24
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b2", "b10", "b21", "b3", "b11", "b7", "b15", "b16", "b24", "b1", "b14", "b24", "b0", "b21", "b3" ], "table_ref": [ "tab_0" ], "text": "The perception module plays a critical role in autonomous driving systems. Achieving efficient and effective fusion of multi-sensor and temporal features is a key research direction to enhance perception performance. While bird's-eye-view (BEV) based algorithms [3,11,22,4,12,8] have garnered significant attention and demonstrated high perceptual performance, we argue that they may not represent the optimal solution for the following reasons:\n• The transformation from image features to the BEV vector space involves reorganizing and rearranging dense features without introducing additional insights. However, this transformation does increase the complexity of the model.\n• Striking a balance between perception range, accuracy, and computational complexity is crucial for achieving optimal results. Different driving scenarios (e.g., highways, urban or rural areas) require specific parameter settings to ensure an effective trade-off between perceptual capabilities and computational efficiency.\n• In the context of end-to-end autonomous driving, instance features produced by sparse-based algorithms hold greater significance as they can be more easily integrated with graph-based models like transformers.\nUnlike BEV-based algorithms, the PETR series [16,17,25] algorithms utilize a query-based architecture and global cross-attention to achieve multi-view feature fusion. PETR excludes the dense view-transformation module, but similar to the vanilla DETR [2], it uses global attention, which results in a high theoretical computational cost. Therefore, it cannot be considered as a purely sparse algorithm. Based on the aforementioned reasons, we remain dedicated to the development of sparse-based algorithms to improve perception performance and prepare for end-to-end autonomous driving. We chose Sparse4D as our baseline algorithm for further enhancements.\nThe temporal module of Sparse4D [15] exhibits a notable limitation where it requires sampling multiple frames of historical data before performing feature fusion, as depicted in Figure 1(a). This leads to a linear increase in computational complexity with the number of historical frames, resulting in reduced inference and training speed, increased GPU memory usage (as shown in Table 1), and challenges in effectively incorporating long-term temporal features. To overcome this drawback, we propose an alternative solution by replacing the multi-frame sampling approach with a recurrent manner that leverages instance features, similar to query-based trackers and SteamPETR [25], as illustrated in Figure 1(b). Specifically, for the first frame, we perform detection using single-frame Sparse4D, which outputs a set of 3D bounding boxes along with their corresponding instance features. For subsequent frames, we transform the output from the previous frame to the current frame. The instance features remain unchanged, while the instance states, such as 3D bounding box, are projected onto the current frame as anchors, leveraging the ego motion data. The position embedding of the anchors is also explicitly re-encoded through an anchor encoder. To fully leverage the temporal instance features, we introduce a temporal cross-attention module in each layer of the decoder. 
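Concretely, the warp applied when carrying instances across frames can be sketched as follows; this is a minimal PyTorch-style illustration of the projection described above (argument and variable names are ours, not those of the released implementation):

```python
import torch

def propagate_instances(anchors, features, R, T, dt, anchor_encoder):
    """Warp previous-frame instances into the current ego frame (sketch).

    anchors:  (M, 11) boxes [x, y, z, w, l, h, sin_yaw, cos_yaw, vx, vy, vz]
    features: (M, C) instance features, kept unchanged across frames
    R, T:     ego rotation (3, 3) and translation (3,) from frame t-1 to t
    dt:       time gap between the two frames
    anchor_encoder: callable mapping anchors to position embeddings
    """
    center, size = anchors[:, 0:3], anchors[:, 3:6]
    sin_yaw, cos_yaw = anchors[:, 6:7], anchors[:, 7:8]
    vel = anchors[:, 8:11]

    center_t = (center + dt * vel) @ R.T + T                  # constant-velocity warp
    yaw_vec = torch.cat([cos_yaw, sin_yaw, torch.zeros_like(sin_yaw)], dim=-1)
    yaw_t = yaw_vec @ R.T                                      # rotate heading
    vel_t = vel @ R.T                                          # rotate velocity

    anchors_t = torch.cat(
        [center_t, size, yaw_t[:, 1:2], yaw_t[:, 0:1], vel_t], dim=-1)
    return anchors_t, features, anchor_encoder(anchors_t)     # re-encode embedding
```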
The instance initialized through temporal projection primarily handle the tracklets, which are objects that have been previously detected and tracked over multiple frames. For newly emerging objects, we initialize them with a single-frame single-layer decoder, selecting the subset of instances with the highest scores to be propagated to the subsequent decoders. This design allows our temporal model to avoid increasing the number of anchors, resulting in comparable inference speed to the non-temporal model.\nIn addition to the temporal module, Sparse4Dv2 introduces following improvements: We conducted extensive experiments on the nuScenes 3D detection dataset [1], and the results indicate that Sparse4Dv2 exhibits a high level of competitiveness. It outperforms existing BEV-based algorithms such as SOLOFusion [22] and VideoBEV [4] in terms of perception performance, and also demonstrates an advantage in terms of inference speed.\n2 Related Works" }, { "figure_ref": [], "heading": "Camera-only 3D Detection", "publication_ref": [ "b18", "b25", "b20", "b28", "b30", "b31", "b12", "b5" ], "table_ref": [], "text": "The key aspect of camera-only 3D detection tasks is to estimate the depth or 3D coordinates of objects from 2D images. This field primarily encompasses three research directions: monocular 3D, stereo 3D, and multi-view 3D. Monocular 3D is a ill-posed problem that relies on the powerful fitting capability of neural networks to regress the depth in the camera coordinate system by extracting various information from the image [19,26,21,29]. Stereo 3D involves input from two or more cameras with a significant overlap angle. It utilizes feature point matching and equation-based joint optimization to calculate the depth of target points. Alternatively, depth can be predicted by constructing a cost volume [31,32]. Multi-view 3D lies between monocular 3D and stereo 3D in terms of the percentage of overlap between multiple views. It is a research direction of great interest in the fields of autonomous driving and robotics [13,6]. It primarily focuses on leveraging the fusion of multi-view and temporal features to enhance depth estimation accuracy." }, { "figure_ref": [], "heading": "Multi-view Feature Fusion", "publication_ref": [ "b22", "b11", "b7", "b12", "b15", "b27" ], "table_ref": [], "text": "Multi-view feature fusion can significantly improve the perception of objects across different viewpoints. In theory, it has the potential to improve the accuracy of depth estimation in regions where multiple viewpoints overlap. The LSS algorithm [23] utilizes depth estimation results to project image features into the 3D space and performs dense multi-view feature fusion on the BEV plane.\nWhen the accuracy of depth estimation is high, LSS-based algorithms [12,8] can achieve improved perception accuracy. The computational complexity of LSS is dependent on the resolution of the input feature maps and the size of the output BEV features. The calculations of LSS is dependent on the resolution of the input feature maps and the size of the output BEV features. Indeed, BEV-Former [13] also performs feature fusion in the BEV feature space. However, it differs in its approach by employing 3D-to-2D back-projection and utilizing deformable attention for feature fusion. Since BEVFormer needs to output dense BEV features, the query count in the attention can be large (e.g., 200 × 200), which limits the training and inference efficiency. 
The PETR [16] series abandons the concept of BEV and instead utilizes sparse queries to perform perception tasks. The correspondence between 2D and 3D information is established through 3D position encoding. PETR employs global attention, and the computational efficiency is significantly affected by the resolution of the image features, making it challenging to handle very high-resolution images (e.g., resolutions above 4K). Sparse-based algorithms like DETR3D [28], utilize sparse queries and sparse feature sampling. The computational complexity of the head part is independent of the image resolution, resulting in theoretically high computational efficiency. However, there is a certain gap in performance compared to dense algorithms." }, { "figure_ref": [], "heading": "Temporal Feature Fusion", "publication_ref": [ "b11", "b5", "b16", "b14", "b21", "b3", "b24" ], "table_ref": [], "text": "Temporal feature fusion can greatly improve the location and velocity estimation performance of single-frame models, leading to increased stability in perception results. Initially, temporal fusion was performed using only two frames. BEVDepth [12] and BEVDet4D [6] cache the BEV feature of the previous frame, warp it to the current time step, and concatenate with the feature of the current frame. In PETRv2 [17], both the features of the previous and current frames are used as keys for cross attention. Later, it was discovered that long-term temporal feature fusion could further enhance perception performance. Sparse4D [15] utilized 4D key points for feature sampling across multiple frames and then fused them using a fully connected network. SOLOFusion [22] caches multiple frame features in a silde window, with a window length of up to 16 frames, achieving a substantial improvement in perception performance. VideoBEV [4] transformed the parallel fusion in SOLOFusion into a recurrent form, reducing the computational complexity of fusion.\nStreamPETR [25] also adopted a recurrent form, using sparse queries to propagate features across the temporal dimension and achieved state-of-the-art performance.\nFigure 2: Overall Framework of Sparse4Dv2, which conforms to an encoder-decoder structure. The inputs consists of three components: multi-view images, camera parameters, and instance information from previous frames. The output is the refined instances (anchors and corresponding features), serve as the perception results for the current frame. Additionally, a subset of these instances is selected and used as input for the next frame.\n3 Methodology" }, { "figure_ref": [], "heading": "Overall Framework", "publication_ref": [ "b23" ], "table_ref": [], "text": "As shown in Figure 2, in Sparse4Dv2, multi-view images are first encoded to extract multi-view multi-scale feature maps I = I s ∈ R N ×C×Hs×Ws |1 ≤ s ≤ S , where S is the number of scales and N is the number of views. These feature maps are then fed into the decoder, which consists of one single-frame layer and five multi-frame layers. The single-frame layer includes three submodules: deformable aggregation, feedforward network (FFN), and the output layer for refinement and classification. The multi-frame layers, in addition to the aforementioned sub-modules, also incorporate two multi-head attention layers [24]. The cross attention is used to enhance the fusion of temporal features, while the self-attention facilitates feature interaction between instances. First, we initialize a set of instances, which includes their anchor boxes and feature vectors. 
We refine and score them using a single-frame layer, selecting the highest foreground confidence instances as input to the multi-frame layer. The input instances to the multi-frame layer come not only from the single-frame layer, but also from the outputs of previous frames, such as historical frames. The number of anchors in each layer is consistent, whether it is in the multi-frame or single-frame layer.\nThe output of the multi-frame layer serves as the detection result for the current frame, and a portion of the instances with high confidence scores are selected as input to the next frame." }, { "figure_ref": [], "heading": "Instance Temporal Propagation", "publication_ref": [], "table_ref": [], "text": "In Sparse4D, an instance is represented by three parts, which are anchor, instance feature and anchor embedding. Anchor is structured information, which indicates the state of the instance, and has actual physical meaning. The instance feature is a high-order semantic feature extracted from the image, mainly from the image encoder. And the anchor embedding is the feature encoding of the anchor, and a small anchor encoder Ψ is used to map the structured information of anchor to the high-dimensional space. This design completely decouples the image features and structured state of the instance, so we can add prior knowledge more conveniently. For instance temporal propagation, we only need to project its anchor, and use the anchor encoder to encode the projected anchor, and the instance feature can remain unchanged, the formula is as Equation 1.\nA t = Project t-1→t (A t-1 ), E t = Ψ (A t ) , F t = F t-1(1)\nWhere A denotes anchor, E denotes anchor embedding, F is the instance feature and Ψ indicates the anchor encoder. Sparse4D can be used to handle various perception tasks, just need to design different anchor and projection functions for different perception tasks.In particular, for 3D detection, the anchor is defined as a 3D bounding box, and the projection function is as Equation 2.\nA t-1 = {x, y, z, w, l, h, sin yaw, cos yaw, v x , v y , v z } t-1 [x, y, z] t = R t-1→t ([x, y, z] + d t [v x , v y , v z ]) t-1 + T t-1→t [w, l, h] t = [w, l, h] t-1 (2) [cos yaw, sin yaw, 0] t = R t-1→t [cos yaw, sin yaw, 0] t-1 [v x , v y , v z ] t = R t-1→t [v x , v y , v z ] t-1\nwhere d t is the time interval between frame t and t -1, R t-1→t and T t-1→t represent the rotation matrix and translation of the ego vehicle from time step t -1 to t, respectively. If 3D lane line detection is desired, the anchors can be defined as polylines, and the projection function would be the projection of each 3D point on the polyline." }, { "figure_ref": [], "heading": "Efficient Deformable Aggregation", "publication_ref": [], "table_ref": [], "text": "The Deformable Aggregation is designed to achieve feature fusion across multiple scales and views. It involves feature sampling from multiple feature maps and weighted summation. One of the most fundamental implementations is shown in Algorithm 1.\nAlgorithm 1: Basic Deformable Aggregation input : 1) feature maps requires storing numerous intermediate variables for backpropagation during training, especially when the size of f is large. As a result, it consumes a significant amount of GPU memory. Additionally, the frequent HBM access also slows down the inference speed. 
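For reference, a minimal PyTorch-style sketch of this basic, unfused implementation is given below; shapes follow the description above, the batch and group dimensions are omitted for brevity, and the naming is ours:

```python
import torch
import torch.nn.functional as F

def basic_deformable_aggregation(feature_maps, points_2d, weights):
    """Naive multi-scale, multi-view feature aggregation (sketch).

    feature_maps: list of S tensors, each (N, C, H_s, W_s) for N camera views
    points_2d:    (K, N, 2) projected key points, normalized to [-1, 1]
    weights:      (K, N, S) fusion weights (group dimension omitted)
    returns:      (K, C) aggregated key-point features
    """
    K, N, _ = points_2d.shape
    C = feature_maps[0].shape[1]
    out = feature_maps[0].new_zeros(K, C)
    # grid_sample expects a sampling grid of shape (N, H_out, W_out, 2)
    grid = points_2d.permute(1, 0, 2).unsqueeze(2)                 # (N, K, 1, 2)
    for s, feat in enumerate(feature_maps):                        # loop over scales
        sampled = F.grid_sample(feat, grid, align_corners=False)   # (N, C, K, 1)
        sampled = sampled.squeeze(-1).permute(2, 0, 1)             # (K, N, C)
        # every per-scale sampled tensor is kept for backprop -> large memory footprint
        out = out + (sampled * weights[:, :, s].unsqueeze(-1)).sum(dim=1)
    return out
```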
To address this problem, we encapsulate the feature sampling and scale/view dimension weighting in the Basic Deformable Aggregation as a CUDA operation, directly outputting multi-point features in a single step. We refer to this optimization as Efficient Deformable Aggregation (EDA), as shown in Figure 3. EDA exhibits excellent parallel performance, allowing for complete parallelization in the K and C dimensions. The computational complexity for a single thread is only N × S. Additionally, for multi-view scenarios, a point is projected to at most two views, resulting in a single-thread computational complexity of only 2S. This design leverages the parallel computing capabilities of GPUs and AI chips, significantly improving efficiency and reducing both memory consumption and inference time. EDA can serve as a versatile op, suitable for various applications requiring multi-image and multi-scale fusion.\nI = I s ∈ R N ×C×Hs×Ws |1 ≤ s ≤ S , 2) projected 2D points P ∈ R K×N ×2 , 3) weights W ∈ R K×N ×S×G . C is the" }, { "figure_ref": [], "heading": "Camera Parameter Encoding", "publication_ref": [], "table_ref": [], "text": "In Sparse4Dv1, the weights in Deformable Aggregation are computed through a fully connected layer. The information of camera parameters is gradually embedded into the parameters of this fully connected layer during the training process, which can be seen as an implicit neural representation or as overfitting to the training set. This approach can lead to poor generalization to camera parameters. Specifically, when swapping the input order of two images, the order of weights does not change accordingly, which can impact perception performance. Additionally, if we apply large-scale data augmentation on camera parameters during training, the convergence speed of this parameterized implicit representation may be significantly affected. To address this, we directly input camera parameters into the network and map the transformation matrix from output space to image coordinate space into a high-dimensional feature vector. We then add this feature vector to the instance feature and use the combined feature to calculate the weights for the corresponding view." }, { "figure_ref": [], "heading": "Dense Depth Supervision", "publication_ref": [], "table_ref": [], "text": "During our experiments, we observed that sparse-based methods lacked sufficient convergence capability and speed in the early training stage. To alleviate this problem, we introduced multi-scale dense depth estimation with point clouds as supervision. During inference, this sub-network will not be activated. For each scale of the feature map, we employ a 1 × 1 convolution to output depth values at a pre-defined equivalent focal length. These depth values are then multiplied by the ratio of the camera focal length to the equivalent focal length. The loss function for dense depth supervision uses the vanilla L1 loss. After incorporating dense depth supervision, we will remove the depth-reweight module from Sparse4Dv1." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Benckmark", "publication_ref": [ "b0" ], "table_ref": [], "text": "In order to validate the effectiveness of Sparse4Dv2, we utilized the nuScenes benchmark. The dataset consists of 1000 scenes, with 700 scenes for training, 150 for validation, and 150 for testing. Each scene comprises a 20-second video clip captured at 2 frames per second (fps), with 6 viewpoint images. 
In addition to the 3D bounding box labels for 10 different object classes, the dataset also provides information on the vehicle motion states and camera parameters. The evaluation metrics include mean Average Precision (mAP), mean Average Error of Translation (mATE), Scale (mASE), Orientation (mAOE), Velocity (mAVE), Attribute (mAAE) and nuScenes Detection Score (NDS), where NDS is a weighted average of other metrics. Please refer [1] for details.\nDue to our goal of improving the performance of sparse-based algorithms, our direct baseline for comparison is Sparse4Dv1. Additionally, to showcase the competitiveness of sparse-based algorithms, we will also compare our method against other state-of-the-art BEV-based algorithms." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b4", "b9", "b13", "b8", "b20", "b19", "b32", "b21", "b14" ], "table_ref": [], "text": "We utilized ResNet50, ResNet101 [5], and VoVNet-99 [10] as backbones, along with FPN [14] as the neck architecture. The pretrained weights for ResNet models were obtained from ImageNet-1K [9] and nuImages datasets, while the pretrained weights for VoVNet were provided by DD3D [21]. All experiments were trained for 100 epochs using the AdamW optimizer [20] without CBGS [33].\nSimilar to most open-source algorithms, we employed image data augmentation and lidar rotation augmentation techniques. To improve training efficiency for temporal models, we adopted the sequential iteration approach inspired by SOLOFusion [22]. We utilized a total of 900 instance anchors, with 600 of them being temporal instances from the historical output, and the remaining 300 coming from the single-frame layer. The remaining hyperparameters for network are kept consistent with Sparse4Dv1 [15]." }, { "figure_ref": [], "heading": "Ablation Studies and Analysis", "publication_ref": [], "table_ref": [ "tab_1", "tab_3", "tab_3", "tab_3", "tab_3" ], "text": "In our ablation experiments, we consistently used ResNet50 with ImageNet-1K pretraining. The input image size was set to 256 × 704.\nEfficient Deformable Aggregation significantly impacts GPU memory usage and inference speed, as shown in Table 2.During the training phase, when the batch size is set to 1, the GPU memory consumption decreases from 6328 MB to 3100 MB, resulting in a 51% reduction. The maximum available batch size increases from 3 to 8. Additionally, the total training time for a complete experiment is reduced from 23.5 hours to 14.5 hours. These improvements greatly reduce the training barrier and enhance the efficiency of Sparse4D. In the inference phase, EDA improves the model's FPS from 13.7 to 20.3, resulting in a significant improvement of approximately 42%. Furthermore, EDA reduces GPU memory usage during inference by 53%, which lowers the cost and deployment complexity of Sparse4D. Camera Parameter Encoding explicitly incorporates camera parameters into the network. From the metrics, it can be observed that removing this module results in a decrease of 2.0 mAP and 4.8 mAOE, as shown in Table 3 (Exp.3 and Exp.5),. In theory, estimating the orientation of an object not only relies on its image features but also requires knowledge of the object's position in the coordinate system and camera intrinsic and extrinsic parameters. Therefore, the explicit encoding of camera parameters can enhance the accuracy of orientation estimation.\nDense Depth Supervision is mainly to make Sparse4D easier to train. 
Compared to Sparse4Dv1, we added more data augmentation and changed the pretraining parameters from FCOS3d to ImageNet, making the model training more challenging. Comparing Exp.4 and Exp.5 in Table 3, it can be observed that removing Dense Depth Supervision leads to a significant decrease in performance metrics (mAP and NDS decrease by 8.5 and 10.4, respectively). This decrease is attributed to the occurrence of gradient collapse during the training process.\nThe temporal decoder consists of a Single-Frame Layer and five Multi-Frame Layers. The singleframe layer is primarily designed to handle the selection of newly generated objects. If all layers in the decoder are changed to multi-frame layers, the number of anchors dedicated to handling new objects will decrease while maintaining the predetermined total anchor quantity. This reduction in the number of anchors for new objects leads to a decrease in detection results, as shown in Exp.2 and Exp.3 in Table 3 (with a decrease of 3.5 in mAP and 2.0 in NDS, respectively). If the multi-frame layers are removed, and all layers in the decoder receive input instances solely from the current frame, it will result in a non-temporal fusion model. Comparing the non-temporal model to the temporal model (Exp.1 and Exp.5 in Table 3) directly showcases the significant impact of temporal fusion. It brings about a significant improvement of 9.8 mAP and 12.5 NDS. " }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "To better control variables, we first compare our method with other approaches on the nuScenes validation dataset, and the results are shown in StreamPETR by 0.7 mAP and 0.2 NDS. Under the setting of ResNet101 and high-resolution input, Sparse4Dv2 also achieves the highest performance metrics. Compared to the baseline Sparse4Dv1, the revised version improves NDS by 3.0 and inference speed by 2.9 times. Compared to StreamPETR, Sparse4Dv2 also demonstrates advantages with a 0.2 NDS improvement and 2 FPS higher inference speed. When we introduce an additional future frame, Sparse4Dv2 still maintains faster inference speed than StreamPETR, and achieves a 1.6 higher NDS score.\nIt is important to note that when low-resolution images are used as input, Sparse4Dv2 exhibits significantly lower inference speed compared to StreamPETR (20.3 vs 26.7 FPS). However, when the resolution is increased to 512 × 1408, Sparse4Dv2 surpasses StreamPETR in terms of speed (8.4 vs 6.4 FPS). The computational workload of the Sparse4Dv2 head is independent of the input resolution, making it more suitable for high-resolution applications such as long-range detection. This further demonstrates the advantages of the sparse decoder.\nTable 5 presents the performance metrics on the nuScenes test dataset. Comparing to Sparse4Dv1, the enhanced version in this paper has improved all metrics and achieved a significant advancement of 4.3 NDS. In terms of the metric NDS, we have achieved state-of-the-art (SOTA) performance, surpassing SOLOFusion, BEVFormerv2, VideoBEV, and StreamPETR." }, { "figure_ref": [], "heading": "Conclusion and Outlook", "publication_ref": [], "table_ref": [], "text": "In this paper, we focuses on enhancing the performance of sparse-based algorithms for multiview temporal perception. Based on Sparse4D, a series of improvements were made, including: 1) Structure: The temporal module was transformed from a multi-frame sampling approach to a recurrent manner, and camera-parameter encoding was incorporated. 
2) Training optimization: Dense depth supervision was introduced as an auxiliary supervision signal to improve training performance.\n3) Efficiency: Efficient Deformable Aggregation was proposed to reduce training memory usage and improve training and inference speed. Experiments were conducted on the nuScenes 3D detection dataset. The results demonstrate that our improved model, Sparse4Dv2, not only achieved significant improvements in inference speed but also made substantial advancements in detection performance, reaching SOTA levels.\nFurther exploration is needed to validate Sparse4Dv2, including its generalization, scene robustness, and long-range detection performance. Additionally, there is still significant research potential for sparse-based methods in various areas such as HD map construction, topology, trajectory prediction, and end-to-end planning. In future studies, we hope that Sparse4Dv2 can serve as a new baseline and be further investigated in these directions." } ]
Figure 1: Comparison of two different temporal fusion approaches. (a) Multi-frame Sampling and Fusion: Sparse4D requires projecting the anchors of the current frame onto each historical frame, followed by multi-frame feature sampling and fusion. (b) Recurrent Temporal Fusion: Sparse4Dv2 achieves fusion through the propagation of instance features.
Sparse4D v2: Recurrent Temporal Fusion with Sparse Model
[ { "figure_caption": "( 1 )1A reconstruction of the deformable aggregation operation by combining bilinear grid sampling and weighted sum into a single CUDA operator, significantly reducing the memory footprint during training and to some extent improving training speed. (2) Incorporating camera parameter encoding into deformable aggregation, along with image and output coordinate augmentation during training, to achieve higher perceptual metrics and robustness. (3) Introducing dense depth supervision based on LiDAR point clouds to facilitate training optimization and enhance the detection accuracy.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "4 WriteFigure 3 :43Figure 3: Efficient Deformable Aggregation.", "figure_data": "", "figure_id": "fig_1", "figure_label": "43", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Comparison of inference speed and GPU memory usage between sparse4D v1 and v2. Test results based on NVIDIA RTX 3090. Input image size is 704x256, and the backbone is ResNet50.", "figure_data": "Sparse4Dv1Sparse4Dv2Frames T123579-FPS21.5 15.3 12.6 9.07.16.119.4GPU Mem (M)424515614 792 971 1149432", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation of Efficient Deformable Aggregation. This experiment was conducted using RTX 3090 GPU with 24 GB memory. For measurements of Training GPU Memory, Infer GPU Memory, and Infer FPS, we used a batch size of 1. The Training Time represents the time taken to train the model for 100 epochs using the maximum batch size on 8 RTX 3090 GPUs.", "figure_data": "TrainingInferenceEDA GPU Memory (M) Max Batch Size Time (h) GPU Memory (M) FPS6328323.592513.7✓3100814.543220.3", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Ablation Experiments. MF (Multi-Frame), SFL (Single-Frame Layer), CPE (CameraParameter Encoding), and DDS (Dense Depth Supervision). 
* of Exp.4 indicates that the experimentfailed during the training process due to training instability.", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Under the setting of ResNet50 and lowresolution input, Sparse4Dv2 achieves the best mAP and NDS, surpassing the SOTA BEV-based algorithm VideoBEV by 1.7 mAP and 0.4 NDS, and outperforming the query-based SOTA algorithm", "figure_data": "MethodBackboneImage sizemAP↑ mATE↓ mASE↓ mAOE↓ mAVE↓ mAAE↓ NDS↑ FPS↑BEVPoolv2 [7]ResNet50256 × 7040.4060.5720.2750.4630.2750.1880.52616.6BEVFormerV2 [30]ResNet50-0.4230.6180.2730.4130.3330.1880.529-SOLOFusion [22]ResNet50256 × 7040.4270.5670.2740.5110.2520.1810.53411.4VideoBEV [4]ResNet50256 × 7040.4220.5640.2760.4400.2860.1980.535-StreamPETR [25]ResNet50256 × 7040.4320.6090.2700.4450.2790.1890.53726.7Sparse4Dv2ResNet50256 × 7040.4390.5980.2700.4750.2820.1790.53920.3BEVDepth [12]ResNet101512 × 1408 0.4120.5650.2660.3580.3310.1900.535-Sparse4D [15]Res101-DCN 640 × 1600 0.4440.6030.2760.3600.3090.1780.5502.9SOLOFusionResNet101512 × 1408 0.4830.5030.2640.3810.2460.2070.582-StreamPETR †ResNet101512 × 1408 0.5040.5690.2620.3150.2570.1990.5926.4Sparse4Dv2ResNet101512 × 1408 0.4850.5550.2720.3670.2560.1820.5808.4Sparse4Dv2 †ResNet101512 × 1408 0.5050.5480.2680.3480.2390.1840.5948.4Sparse4Dv2 † *ResNet101512 × 1408 0.5210.5190.2650.3640.1990.1800.6087.1", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results of 3D detection on nuScenes validation dataset. † indicates to use pre-trained weights from the nuImage dataset, and * means to use a future frame.", "figure_data": "MethodBackbonemAP↑ mATE↓ mASE↓ mAOE↓ mAVE↓ mAAE↓ NDS↑Sparse4DVovNet-990.5110.5330.2630.3690.3170.1240.595HoP-BEVFormer [34]VovNet-990.5170.5010.2450.3460.3620.1050.603SOLOFusionConvNeXt-B [18]0.5400.4530.2570.3760.2760.1480.619BEVFormerv2InternImage-B [27] 0.5400.4880.2510.3350.3020.1220.620VideoBEVConvNeXt-B0.5540.4570.2490.3810.2660.1320.629BEVFormerv2InternImage-XL0.5560.4560.2480.3170.2930.1230.634StreamPETRVovNet-990.5500.4790.2390.3170.2410.1190.636Sparse4Dv2VovNet-990.5570.4620.2380.3280.2640.1150.638", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results of 3D detection on nuScenes test dataset.", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" } ]
Xuewu Lin; Tianwei Lin; Zixiang Pei; Lichao Huang; Zhizhong Su
[ { "authors": "Holger Caesar; Varun Bankiti; Alex H Lang; Sourabh Vora; Venice Erin Liong; Qiang Xu; Anush Krishnan; Yu Pan; Giancarlo Baldan; Oscar Beijbom", "journal": "", "ref_id": "b0", "title": "nuscenes: A multimodal dataset for autonomous driving", "year": "2020" }, { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "Springer", "ref_id": "b1", "title": "End-to-end object detection with transformers", "year": "2020" }, { "authors": "Zehui Chen; Zhenyu Li; Shiquan Zhang; Liangji Fang; Qinhong Jiang; Feng Zhao", "journal": "", "ref_id": "b2", "title": "Bevdistill: Cross-modal bev distillation for multi-view 3d object detection", "year": "2022" }, { "authors": "Chunrui Han; Jianjian Sun; Zheng Ge; Jinrong Yang; Runpei Dong; Hongyu Zhou; Weixin Mao; Yuang Peng; Xiangyu Zhang", "journal": "", "ref_id": "b3", "title": "Exploring recurrent long-term temporal fusion for multi-view 3d perception", "year": "2023" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b4", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Junjie Huang; Guan Huang", "journal": "", "ref_id": "b5", "title": "Bevdet4d: Exploit temporal cues in multi-camera 3d object detection", "year": "2022" }, { "authors": "Junjie Huang; Guan Huang", "journal": "", "ref_id": "b6", "title": "Bevpoolv2: A cutting-edge implementation of bevdet toward deployment", "year": "2022" }, { "authors": "Junjie Huang; Guan Huang; Zheng Zhu; Dalong Du", "journal": "", "ref_id": "b7", "title": "Bevdet: High-performance multicamera 3d object detection in bird-eye-view", "year": "2021" }, { "authors": "Alex Krizhevsky; Ilya Sutskever; Geoffrey E Hinton", "journal": "Communications of the ACM", "ref_id": "b8", "title": "Imagenet classification with deep convolutional neural networks", "year": "2017" }, { "authors": "Youngwan Lee; Joong-Won Hwang; Sangrok Lee; Yuseok Bae; Jongyoul Park", "journal": "", "ref_id": "b9", "title": "An energy and gpu-computation efficient backbone network for real-time object detection", "year": "2019" }, { "authors": "Yinhao Li; Han Bao; Zheng Ge; Jinrong Yang; Jianjian Sun; Zeming Li", "journal": "", "ref_id": "b10", "title": "Bevstereo: Enhancing depth estimation in multi-view 3d object detection with dynamic temporal stereo", "year": "2022" }, { "authors": "Yinhao Li; Zheng Ge; Guanyi Yu; Jinrong Yang; Zengran Wang; Yukang Shi; Jianjian Sun; Zeming Li", "journal": "", "ref_id": "b11", "title": "Bevdepth: Acquisition of reliable depth for multi-view 3d object detection", "year": "2022" }, { "authors": "Zhiqi Li; Wenhai Wang; Hongyang Li; Enze Xie; Chonghao Sima; Tong Lu; Qiao Yu; Jifeng Dai", "journal": "", "ref_id": "b12", "title": "Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers", "year": "2022" }, { "authors": "Tsung-Yi Lin; Piotr Dollár; Ross Girshick; Kaiming He; Bharath Hariharan; Serge Belongie", "journal": "", "ref_id": "b13", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "Xuewu Lin; Tianwei Lin; Zixiang Pei; Lichao Huang; Zhizhong Su", "journal": "", "ref_id": "b14", "title": "Sparse4d: Multi-view 3d object detection with sparse spatial-temporal fusion", "year": "2022" }, { "authors": "Yingfei Liu; Tiancai Wang; Xiangyu Zhang; Jian Sun", "journal": "", "ref_id": "b15", "title": "Petr: Position embedding transformation for multi-view 3d object 
detection", "year": "2022" }, { "authors": "Yingfei Liu; Junjie Yan; Fan Jia; Shuailin Li; Qi Gao; Tiancai Wang; Xiangyu Zhang; Jian Sun", "journal": "", "ref_id": "b16", "title": "Petrv2: A unified framework for 3d perception from multi-camera images", "year": "2022" }, { "authors": "Zhuang Liu; Hanzi Mao; Chao-Yuan Wu; Christoph Feichtenhofer; Trevor Darrell; Saining Xie", "journal": "", "ref_id": "b17", "title": "A convnet for the 2020s", "year": "2022" }, { "authors": "Zechen Liu; Zizhang Wu; Roland Tóth", "journal": "", "ref_id": "b18", "title": "Smoke: Single-stage monocular 3d object detection via keypoint estimation", "year": "2020" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b19", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Dennis Park; Rares Ambrus; Vitor Guizilini; Jie Li; Adrien Gaidon", "journal": "", "ref_id": "b20", "title": "Is pseudo-lidar needed for monocular 3d object detection", "year": "2021" }, { "authors": "Jinhyung Park; Chenfeng Xu; Shijia Yang; Kurt Keutzer; Kris Kitani; Masayoshi Tomizuka; Wei Zhan", "journal": "", "ref_id": "b21", "title": "Time will tell: New outlooks and a baseline for temporal multi-view 3d object detection", "year": "2022" }, { "authors": "Jonah Philion; Sanja Fidler", "journal": "Springer", "ref_id": "b22", "title": "Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d", "year": "2020" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b23", "title": "Attention is all you need", "year": "2017" }, { "authors": "Shihao Wang; Yingfei Liu; Tiancai Wang; Ying Li; Xiangyu Zhang", "journal": "", "ref_id": "b24", "title": "Exploring object-centric temporal modeling for efficient multi-view 3d object detection", "year": "2023" }, { "authors": "Tai Wang; Xinge Zhu; Jiangmiao Pang; Dahua Lin", "journal": "", "ref_id": "b25", "title": "Fcos3d: Fully convolutional one-stage monocular 3d object detection", "year": "2021" }, { "authors": "Wenhai Wang; Jifeng Dai; Zhe Chen; Zhenhang Huang; Zhiqi Li; Xizhou Zhu; Xiaowei Hu; Tong Lu; Lewei Lu; Hongsheng Li", "journal": "", "ref_id": "b26", "title": "Internimage: Exploring large-scale vision foundation models with deformable convolutions", "year": "2022" }, { "authors": "Yue Wang; Campagnolo Vitor; Tianyuan Guizilini; Yilun Zhang; Hang Wang; Justin Zhao; Solomon", "journal": "PMLR", "ref_id": "b27", "title": "Detr3d: 3d object detection from multi-view images via 3d-to-2d queries", "year": "2022" }, { "authors": "Xinshuo Weng; Kris Kitani", "journal": "", "ref_id": "b28", "title": "Monocular 3d object detection with pseudo-lidar point cloud", "year": "2019" }, { "authors": "Chenyu Yang; Yuntao Chen; Chenxin Hao Tian; Xizhou Tao; Zhaoxiang Zhu; Gao Zhang; Hongyang Huang; Yu Li; Lewei Qiao; Lu", "journal": "", "ref_id": "b29", "title": "Bevformer v2: Adapting modern image backbones to bird's-eye-view recognition via perspective supervision", "year": "2022" }, { "authors": "Yao Yao; Zixin Luo; Shiwei Li; Tian Fang; Long Quan", "journal": "", "ref_id": "b30", "title": "Mvsnet: Depth inference for unstructured multi-view stereo", "year": "2018" }, { "authors": "Zehao Yu; Shenghua Gao", "journal": "", "ref_id": "b31", "title": "Fast-mvsnet: Sparse-to-dense multi-view stereo with learned propagation and gauss-newton refinement", "year": "2020" }, 
{ "authors": "Benjin Zhu; Zhengkai Jiang; Xiangxin Zhou; Zeming Li; Gang Yu", "journal": "", "ref_id": "b32", "title": "Class-balanced grouping and sampling for point cloud 3d object detection", "year": "2019" }, { "authors": "Zhuofan Zong; Dongzhi Jiang; Guanglu Song; Zeyue Xue; Jingyong Su; Hongsheng Li; Yu Liu", "journal": "", "ref_id": "b33", "title": "Temporal enhanced training of multi-view 3d object detector via historical object prediction", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 188.66, 713.13, 316.01, 10.75 ], "formula_id": "formula_0", "formula_text": "A t = Project t-1→t (A t-1 ), E t = Ψ (A t ) , F t = F t-1(1)" }, { "formula_coordinates": [ 5, 185.91, 125.2, 318.75, 70.54 ], "formula_id": "formula_1", "formula_text": "A t-1 = {x, y, z, w, l, h, sin yaw, cos yaw, v x , v y , v z } t-1 [x, y, z] t = R t-1→t ([x, y, z] + d t [v x , v y , v z ]) t-1 + T t-1→t [w, l, h] t = [w, l, h] t-1 (2) [cos yaw, sin yaw, 0] t = R t-1→t [cos yaw, sin yaw, 0] t-1 [v x , v y , v z ] t = R t-1→t [v x , v y , v z ] t-1" }, { "formula_coordinates": [ 5, 152.01, 336.46, 328.24, 23.73 ], "formula_id": "formula_2", "formula_text": "I = I s ∈ R N ×C×Hs×Ws |1 ≤ s ≤ S , 2) projected 2D points P ∈ R K×N ×2 , 3) weights W ∈ R K×N ×S×G . C is the" } ]
2023-05-23
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b2", "b3", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b14", "b16", "b12", "b13", "b11", "b12" ], "table_ref": [], "text": "D UE to environment or equipment limitations, images taken under low lighting conditions always result in poor pictures with severe noise, low contrast, and many other problems. Improving the perceptual quality of such low-light images has been a long-standing issue. Traditional solutions include histogram curve adjustment methods [1]- [3] and Retinex-based methods [4]- [6]. Although hand-crafted constraints or priors are helpful in improving the quality of the low-light image, the enhanced output always suffers from over-or under-enhancement in local regions. In recent years, with the surge of deep learning, various data-driven methods have been proposed to tackle this problem, including CNNbased method [7], [8], GAN-based method [9] and flow-based mothed [10].\nHowever, although the above mentioned learning-based methods could achieve promising results, most of them require a huge amount of computational resources and long inferring time, making them difficult to be considered for real-time systems or mobile applications. To address this problem, several lightweight models have recently been developed [11], [12]. They also follow the two typical approaches: curve adjustment-based approach and Retinex-based approach. The Comparison of performance and efficiency. We calculate the average Perceptual Index (PI)↓ [15]- [17] on five real-world datasets. Giga Floating-Point Operations Per Second (GFLOPS)↓ is measured on a 1080P (1080×1920) image. The parameter amounts are provided in brackets. Our SCLM (marked with ⋆) outperforms previous methods in terms of image quality, parameter amount, and GFLOPs.\nrepresentative curve adjustment based methods are the zero-DCE series [13], [14]. They model low-light image enhancement as a task of image-specific curve estimation. Their network structure only contains several convolution layers to estimate the adjusting curves. As one of the recently proposed Retinex-based methods, SCI [12] develops a Self-Calibrated Illumination learning framework for fast low-light image enhancement. This method introduces an auxiliary process to boost model performance in the training phase and discards the auxiliary structures during inference. SCI has only three convolution layers in the test phase. To further push the lightweight LLIE model to the extreme, a single convolution layer model (SCLM) is proposed that could achieve promising results using structural re-parameterization technique. In addition, due to varying ambient light in the real world, different areas on the same image often have various exposure levels. The enhanced image is thus prone to underexposure in some regions while overexposure in others. To address this issue, a local adaptation module is introduced that optimizes underexposed and overexposed areas by learning a set of curve adjustment parameters. In particular, it is worth noting that this additional module is also extremely lightweight. Instead of employing a complex learnable module to learn the mapping for each pixels directly, A set of shared parameters for the adjustment curve are learned to accomplish local exposure correction. Unlike zeroDCE [13] which performs iterative curve adjustment, curve adjustment is only performed once in SCLM. 
In short, by jointly optimizing the Retinex-based global enhancement module and the curve-based local adaptation module, SCLM exhibits advanced enhancement performance, and its ultra-lightweight structure design also makes it suitable for edge devices.
As shown in Fig. 1, the proposed SCLM outperforms previous methods regarding image quality, parameter amount, and GFLOPs. To the best of our knowledge, our work is the first to apply the structural re-parameterization technique to the field of LLIE, building an ultra-simple enhancement structure with one layer of convolution.
The remainder of the paper is organized as follows: Section II reviews the related work. Details of the proposed SCLM are given in Section III. Experiments and analyses are provided in Section IV. Finally, the paper is concluded in Section V." }, { "figure_ref": [], "heading": "II. RELATED WORK A. Conventional LLIE Methods.", "publication_ref": [ "b0", "b2", "b3", "b5", "b17", "b3", "b5", "b18", "b1", "b2", "b19" ], "table_ref": [], "text": "Traditional LLIE methods can be divided into two categories: histogram-based methods [1]-[3] and Retinex theory-based methods [4]-[6]. According to Retinex theory [18], a low-light image can be decoupled into a reflection component and an illumination component. Once the illumination component has been estimated, it can be used to flexibly adjust the exposure level of the image. Specifically, Rahman et al. [4] propose the Multi-Scale Retinex with Color Restoration (MSRCR) algorithm, which transforms low-light images into well-exposed renderings by combining color constancy with local contrast and lightness enhancement. Fu et al. [6] introduce a probabilistic approach to improve the estimation of illumination and reflectance. Guo et al. [19] propose a coarse-to-fine method that initially determines the brightness of each pixel by identifying the highest value among the three channels and then enhances the preliminary illumination map by applying a structural constraint to produce the ultimate illumination map. The histogram-based methods attempt to redistribute the luminous intensity on the histogram globally or locally. Coltuc et al. [2] regard the low-light image enhancement problem as a K-dimensional space optimization problem with a strict ordering among pixels. Haidi et al. [3] propose a brightness-preserving dynamic histogram equalization method, thus fulfilling the requirement of maintaining the mean brightness of the input and the enhanced image. Lee et al. [20] present a contrast enhancement algorithm that amplifies the gray-level differences between adjacent pixels in a tree-like layered structure." }, { "figure_ref": [], "heading": "B. Learning-based LLIE Methods.", "publication_ref": [ "b20", "b6", "b10", "b11", "b21", "b12", "b13", "b21", "b6", "b10", "b22", "b23", "b11", "b12", "b12", "b13", "b24", "b25" ], "table_ref": [], "text": "In recent years, data-driven methods for low-light image enhancement have received increasing attention. Unlike other restoration tasks such as super-resolution and denoising, some work [21] points out that combining traditional methods and deep learning could achieve better performance than direct end-to-end enhancement due to their physical interpretability. Similar to the traditional technical route, mainstream learning-based methods can also be divided into Retinex-based methods [7], [11], [12], [22] and curve adjustment-based methods [13], [14]. 
In practice, Retinex-based deep learning methods generally concatenate the three channels of color images and then use a neural network to predict the illuminance component. Shen et al. [22] establish a relationship between multi-scale Retinex and the feedforward convolutional neural network to enhance the low-light image. Wei et al. [7] build a LOw-Light dataset (LOL) containing low/normal-light image pairs and propose a deep Retinex-Net to learn consistent reflectance shared by paired low/normal-light images. RUAS [11] first develops models to represent the inherent underexposed structure of low-light images and then unfolds optimization processes to construct the comprehensive propagation structure for LLIE. Zhao et al. [23] propose a generative strategy for Retinex decomposition. They build a unified deep framework to estimate the latent illuminance component and to perform low-light image enhancement. Fan et al. [24] propose a low-light image enhancement model to address the problem of uneven exposure or partial overexposure with an illumination constraint. Most recently, SCI [12] constructs a self-calibrated module that employs an auxiliary block to realize the convergence between stages in the training phase and only uses the single basic block for inference. Some other representative methods are based on the curve adjustment scheme. ZeroDCE [13] formulates light enhancement as a task of image-specific curve adaptation with a deep network. Based on [13], ZeroDCE++ [14] further approximates pixel-wise and higher-order curves by iteratively applying itself and also discusses multiple options to balance the enhancement performance and the computational cost for high-resolution images. There are also some end-to-end approaches [25], [26] that directly enhance low-light images with a deep model. Although they achieve pleasing results, their high computational complexity and long inference times make them difficult to deploy in practice." }, { "figure_ref": [ "fig_1", "fig_2", "fig_3" ], "heading": "III. THE PROPOSED SCLM METHOD", "publication_ref": [ "b12", "b17", "b11", "b26", "b27", "b29", "b30", "b31" ], "table_ref": [], "text": "Inspired by the above pioneering work, our approach consists of two enhancement steps from coarse to fine, which can be viewed as combining the Retinex-based approach and the curve estimation approach. Before delving into the specific designs, we first briefly describe the pipeline of the proposed method. The overview of the proposed method is given in Fig. 2. SCLM consists of two parts, the global enhancement module and the local adaptation module. Specifically, the global enhancement module contains only one layer of convolution at inference time to estimate the illumination component. An enhanced image can be obtained by dividing the original low-light image by the estimated illumination component according to the Retinex theory. Since different areas of an image have diverse illuminations, a learnable quadratic curve is then jointly optimized to finely adjust different image regions. Instead of using complex iterative adjustments for each pixel [13], our curve adjustment process is performed only once, and all pixels share the same set of curve adjustment parameters.
A. 
Global low light enhancement 1) Plain Structure: The global low-light enhancement (GLLE) module is built upon the Retinex theory [18]:
y = z ⊗ x (1)
where x represents the illumination component, y represents the low-light observation, and z denotes the desired clear image. Generally speaking, the luminance component is considered the key part of the optimization. Therefore, many Retinex-based methods focus on estimating the luminance component more accurately and efficiently. Recently, many ultra-lightweight deep learning models have been proposed, continuously simplifying the structure of the network, among which SCI [12] comprises only three convolutional layers.
To further push the extreme of structural simplicity and reduce computation and complexity, a simple model with only one convolutional layer is first constructed, which is referred to as the plain model.
As shown in Fig. 3 (a), the plain model consists of only one convolutional layer followed by a batch normalization layer. A residual connection is employed to improve the stability of training, and a sigmoid activation function is used to ensure that the estimated luminance component falls between 0 and 1. The training process is carried out in a fully supervised manner. To be specific, the experiments are conducted on the Multi-Exposure dataset [27], which contains image pairs with different exposure levels. Here, we use the underexposed images as input and the corresponding well-exposed images as ground truth. However, the plain model produces unsatisfactory results: there exists apparent color distortion in the overall image, and there are significant color unsaturation and artifacts in Fig. 4 (c) (e.g., the face region) and (d) (e.g., the sky region). This experiment indicates that the representational capability of the plain structure is limited.
2) Structural Re-parameterization: Since the plain single-layer model cannot produce satisfactory results, a natural solution is to deepen the network with sequential layers or adopt a multi-branch structure. However, increasing the depth or width of the network will inevitably lead to increased computational complexity. To increase the representational capability of the model as much as possible while ensuring its simple design, the structural re-parameterization technique is introduced. Structural re-parameterization [28]-[30] refers to utilizing a multi-branch structure during training to enrich the feature space and then transforming the model into an equivalent single-branch structure during inference to enhance the representational capability of a single convolution. Recently, some low-level tasks, such as super-resolution [31], [32], have utilized structural re-parameterization techniques to improve the performance of edge-oriented methods.
Inspired by these previous methods, we design a multi-branch model incorporating various convolution and pooling operations to accomplish the estimation of the luminance component. As shown in Fig. 3 (b), our multi-branch GLLE consists of four parallel branches: a 3×3 convolution branch, a 1×1 convolution branch, a branch with a 1×1 convolution followed by a 3×3 convolution, and a branch with a 1×1 convolution followed by average pooling. We next introduce the transformations in our model, enabling us to collapse the multiple-layer and multiple-branch model into a single convolution. Merge BN into preceding Conv. A common optimization strategy in modern deep learning frameworks is to merge the batch normalization (BN) layer into its preceding convolution at inference time to accelerate inference. This process can be depicted as follows.
Let µ and σ denote the accumulated channel-wise mean and standard deviation, and let γ and β denote the learnable scaling factor and bias. Then the output feature O can be obtained by:
O = (I ⊛ W − µ) · (γ/σ) + β, (2)
where I represents the input feature, W represents the kernel weight, and ⊛ denotes the convolution operator. According to the homogeneity of the convolutional layer, BN can be fused into its preceding Conv for inference by building a new single Conv layer with kernel W′ and bias b′, whose values are assigned as:
W′ ← (γ/σ) · W, b′ ← β − µγ/σ. (3)
After the training of the network, the transformed parameters (kernel and bias) can be saved for model inference.
Merge sequential Convs. According to the associativity of convolution, the sequence of a 1×1 Conv followed by a 3×3 Conv can be merged into one single 3×3 Conv:
O = (I ⊛ W_1 + b_1) ⊛ W_3 + b_3 = I ⊛ (W_1 ⊛ W_3) + (b_1 ⊛ W_3 + b_3), (4)
where W_1 ⊛ W_3 is the merged weight and b_1 ⊛ W_3 + b_3 is the merged bias; W_1 and W_3 are the weights of the original 1×1 and 3×3 Convs, and b_1 and b_3 are their biases. For the 1×1 Conv-AVG Pooling branch, the pooling operation can be regarded as a convolution kernel with pre-specified parameters. Precisely, an average pooling with kernel size K and stride S can be replaced by a Conv with the same K and S whose kernel values are set to 1/K². By utilizing this re-parameterization technique, the sequential Convs are collapsed into a single convolutional layer. Merge multiple Conv branches. Since the convolution operation is a linear transformation with additivity, two or more parallel Conv layers with the same kernel size can be merged into a single Conv layer. Convolution kernels with different kernel sizes (e.g., a 3×3 convolution and a 1×1 convolution) can be unified to the same size by padding the smaller kernels with zeros. Then, the merge process of n parallel convolution kernels can be formulated as:
W′ ← W^(1) + W^(2) + ... + W^(n), b′ ← b^(1) + b^(2) + ... + b^(n), (5)
where W^(i) and b^(i) denote the weight and bias of the i-th kernel, and W′ and b′ denote the merged weight and bias. By employing these transformations, the diverse branches of GLLE can be combined into a single convolution while the representational capability is maintained. Through experiments, we verify that with the support of structural re-parameterization, significant improvements can be achieved in the restoration results, and the visual quality is comparable to that of many methods whose parameter counts are thousands of times higher. As can be seen in Fig. 4, the model with structural re-parameterization performs significantly better than the plain structure. These experiments verify that considerable results can be achieved with very few learnable parameters, indicating that even elementary structures have great potential for the LLIE task.
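To make the collapsing procedure of Eqs. (2)-(5) concrete, a minimal PyTorch sketch is given below. It is only an illustration under simplifying assumptions: the function and variable names are ours, border effects of padding in the 1×1-3×3 merge are ignored for brevity, and the snippet does not correspond to a released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d):
    """Fold a BatchNorm layer into its preceding convolution, Eqs. (2)-(3)."""
    std = torch.sqrt(bn.running_var + bn.eps)                 # sigma
    scale = bn.weight / std                                   # gamma / sigma
    w = conv.weight * scale.reshape(-1, 1, 1, 1)              # W' = (gamma/sigma) * W
    b = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    b = (b - bn.running_mean) * scale + bn.bias               # b' = beta - mu*gamma/sigma
    return w, b

def merge_1x1_then_3x3(w1, b1, w3, b3):
    """Collapse a 1x1 Conv followed by a 3x3 Conv into one 3x3 Conv, Eq. (4)."""
    w = F.conv2d(w3, w1.permute(1, 0, 2, 3))                  # merged 3x3 weight
    b = (w3 * b1.reshape(1, -1, 1, 1)).sum(dim=(1, 2, 3)) + b3  # merged bias
    return w, b

def avgpool_as_3x3(channels):
    """Express a 3x3 average pooling as an equivalent 3x3 convolution (values 1/K^2)."""
    w = torch.zeros(channels, channels, 3, 3)
    for c in range(channels):
        w[c, c] = 1.0 / 9.0
    return w, torch.zeros(channels)

def pad_1x1_to_3x3(w1):
    """Zero-pad a 1x1 kernel to 3x3 so that parallel branches share one kernel size."""
    return F.pad(w1, [1, 1, 1, 1])

def merge_parallel(weights, biases):
    """Sum the aligned kernels and biases of parallel branches, Eq. (5)."""
    return sum(weights), sum(biases)
```

At deployment, each training-time branch is first BN-folded, the sequential branches are merged, all kernels are padded to 3×3, and the parallel results are summed, leaving a single 3×3 convolution for inference.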
" }, { "figure_ref": [], "heading": "B. Local Adaptation Module", "publication_ref": [ "b6", "b10", "b11", "b12", "b32", "b12", "b13" ], "table_ref": [], "text": "Although a single convolutional layer can achieve acceptable results, its ability to enhance images under complex low-light conditions needs further improvement. As with many Retinex-based methods [7], [11], [12], it is extremely challenging to process images containing different exposure levels within a single image. Retinex-based methods easily encounter situations where some areas are overexposed while other backlit regions are insufficiently enhanced. This is because methods based on Retinex theory often restrict the illumination map to lie within 0 to 1. The enhancement process of dividing the low-light input image by the illumination map can thus only brighten each pixel. In contrast, methods based on curve adjustment are relatively more flexible. Inspired by lighting enhancement [13] and automatic exposure correction methods [33], a curve adjustment module is designed for SCLM to further refine the image.
This local adaptation module learns a set of curve adjustment parameters in a data-driven manner. The curve can be used to enhance dark areas while suppressing overexposure in bright areas. In addition, the curve should have the following characteristics: (1) it should be simple enough and learnable by backpropagation; (2) as different areas require different degrees of enhancement, the curve should satisfy a non-linear mapping relationship. Taking into account these factors and the experience of previous work [13], [14], a quadratic function is used to model this mapping relationship.
Each input RGB image is first converted to the YCbCr color space, and then the difference between the Y channel of each pixel and the median Y value of the entire image is calculated:
I_y = RGB2YCbCr(I_RGB), M̃ = I_y − median(I_y), (6)
where median refers to taking the median value of the image and M̃ denotes the condition map. The condition map is further normalized to the range (0, 1) to ensure training stability:
M = (M̃ − Min(M̃)) / (Max(M̃) − Min(M̃)), (7)
where M denotes the normalized condition map. In order to reduce the impact of local outlier pixels and introduce more non-linearity, a maximum pooling operation is applied to the condition map, which also ensures its local smoothness:
M = MaxPool(M). (8)
Afterward, the condition map M is fed into the quadratic function to compute the modulation coefficient for each pixel. Assuming the input image contains N pixels, the modulation process can be formulated as:
C_i = α · x_i² + β · x_i + γ, where i ∈ (1, N), x_i ∈ M. (9)
Here, α, β, and γ are three learnable parameters that are shared across all pixels. In other words, only three learnable parameters are needed for the entire image." }, { "figure_ref": [], "heading": "C. Output Fusion", "publication_ref": [], "table_ref": [], "text": "Finally, we multiply the coarsely enhanced image I_coarse with the modulation coefficient map C to obtain the enhanced result:
I_en = C ⊗ I_coarse, (10)
where ⊗ denotes element-wise multiplication. During multiplication, clamp operations are applied to prevent out-of-bounds pixel values. This completes the enhancement process of the one-layer model. It is worth noting that the global low-light enhancement module and the local adaptation module are jointly optimized in an end-to-end manner. The entire model has only 87 learnable parameters, including a 3×3 convolution (84 parameters) and the three coefficients of a quadratic function (3 parameters). The specific effectiveness of each component and the corresponding analysis are provided in the experimental section." }, { "figure_ref": [], "heading": "IV. EXPERIMENTAL RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. 
Implementation Details", "publication_ref": [ "b26", "b38", "b4", "b18", "b36", "b37", "b35", "b14", "b16", "b16", "b11", "b13", "b39", "b40" ], "table_ref": [ "tab_1" ], "text": "The proposed model is trained on the multi-exposure dataset [27]. This dataset is captured with the Adobe Camera Raw SDK [39] to emulate different digital exposure values (EVs) applied by a camera. They apply varying EVs to each raw-RGB image to simulate actual exposure errors. This involves using relative EVs of -1.5, -1, 0, +1, and +1.5 to produce images with underexposure errors, no gain from the original EV, and overexposure errors, respectively. As the ground truth images, they manually retouched 0 EVs images by an expert photographer. Since our aim is the low-light image enhancement, we only use data with -1.5EV, -1 EV, and 0 Ev as input and do not use overexposed images. The evaluations are conducted on five public real-world shot datasets to simulate real-world low-light scenarios better. Specifically, following the ZeroDCE series, we perform experiments on NPE [5], LIME [19], MEF [37], DICM [38], VV [36]. We also calculate the average illuminance (Y channel) of these five datasets in Table II, It can be seen that there is considerable variation in the average illuminance of the five datasets. Datasets with multiple lighting conditions allow for better testing of the generalization capabilities of the different methods. Since lowlight images in real scenes often do not have corresponding ground truth, we employ two no-reference indicators, PI [15]- [17] and NIQE [17] as evaluation metrics, which are also adopted by many recent LLIE methods [12]- [14].\nWe implement our models with PyTorch on an NVIDIA 1080 GPU. During training, we crop the images into 256×256 patches and set the batch size as 16. Data augmentation operations such as horizontal/vertical flipping and rotation are also performed. The training process is carried out in a fully supervised manner, i.e., the model takes in low-light images as input and is supervised by corresponding ground truth images. L 1 loss is adopted during the training process. For the curve ajustment parameters, we initialize α, β, and γ as 0.6, -1.3, 1.5 respectively. This means that the quadratic function is initialized to: C = 0.6 * x 2 -1.3 * x+1.5. Under this setting, we have: C(0) = 1.5, C(0.5) = 1,C(1) = 0.8. The motivation for setting the initial value in this way is that we hope the areas with relatively low lightness can be further enhanced and, meanwhile, suppress overexposure in relatively brighter areas. We utilize the Adam [40] with β 1 = 0.9 and β 2 = 0.999 as optimizer. The learning rate is initialized as 2 × 10 -4 and is decayed to 1 × 10 -6 with a cosine annealing scheduler [41]. Additionally, our model's lightweight design facilitates a highly efficient training process, resulting in significant time savings and reduce environmental impact with a low carbon footprint. Unlike those huge models that require dozens of hours for training on multiple GPUs, our network converges in less than an hour with just one NVIDIA 1080 card." }, { "figure_ref": [ "fig_7", "fig_1" ], "heading": "B. 
Benchmark Evaluations", "publication_ref": [ "b18", "b6", "b33", "b7", "b34", "b11", "b10", "b12", "b13", "b34", "b26" ], "table_ref": [ "tab_0", "tab_2", "tab_3" ], "text": "We conduct comparisons with state-of-the-art methods, including traditional method (LIME [19]), four supervised learning based methods (RetinexNet [7], MBLLEN [34], DRBN [8], IAT [35]) and four zero-shot learning method (SCI [12], RUAS [11], ZeroDCE [13], ZeroDCE++ [14]). We utilize publicly available source codes and recommended parameters to reproduce the results.\nThe quantitative comparisons are listed in Table I. As one can see, SCLM achieves the best results in terms of average PI and obtain the second-best performance in terms of average NIQE on five datasets. Meanwhile, SCLM outperforms all the other competitors on all indicators of NPE and DICM. These two datasets contain more diverse and complex lighting scenarios (i.e., containing both dark and light areas). The measurement results show that our method is quite competitive, especially when dealing with complex lighting conditions.\nFrom Fig. 5 to Fig. 9, we compare the enhancement results for five real-world datasets. We provide two sets of enhanced results for each dataset. Subjective results include a variety of situations, including indoor and outdoor scenes, as well as images with different exposure levels. We will first give a few specific analyses of typical scenes and then give a summary. We start by analyzing the enhancement results for images taken in poor daytime lighting conditions, where the shooting angle often causes parts of the image to be too dark. In VV (a), we display a set of scenes containing portraits taken under low-light conditions. The SCI series and RUAS exhibit uneven exposure. The Retinexnet and IAT appear to have color deviation. MBLLEN and DRBN tend to underenhance the low-light input. In comparison, our method and ZeroDCE perform relatively well. Similar to VV (a), in MEF (a), we show a landscape photo taken in backlit conditions. The SCI series and RUAS exhibit uneven exposures. IAT, RetinexNet, and the ZeroDCE series appear to have color deviation. Similar situations can also be observed in NPE (a), NPE (b), and DICM (a). Another typical situation is the dark scene containing locally bright areas, such as an image taken at night with point lights. We observe that some methods may produce significant overexposure of the bright regions. For example, in LIME (a), the enhancement results of RUAS and SCI medium produce overexposure in some bright light areas. Another example is given in MEF (b), where methods including SCI medium, DRBN, IAT, and RUAS all result in overexposure of the point light source, with the overexposed area covering the lampshade. In contrast, curve-based methods can suppress overexposure to some extent, but some of them also result in color deviation. (For example, the enhancement results of the ZeroDCE series in MEF (b).) Thanks to the local adaptation strategy, our method can cope better with these scenes. Although all methods, including our SCLM, cannot handle all scenarios perfectly, our approach is very competitive regarding overall results, especially considering its extremely low complexity. Some methods produce unpleasant artifacts when working with complex low-light scenes, including uneven exposure, color deviation, and color undersaturation. Specifically, earlier methods such as RetinexNet, and DRBN tend to produce unnatural artifacts. 
The SCI series and RUAS are less robust and tend to produce unevenly exposed results. MBLLEN, on the other hand, shows excessive smoothing and blurring in some natural scenes. Both our method and ZeroDCE produce relatively color-coordinated results, but ZeroDCE may sometimes produce color bias. Meanwhile, ZeroDCE requires calculating per-pixel adjustment parameters and requires iterative correction, which is particularly unfriendly for high-resolution images, while our method shares a set of adjustment parameters globally.
C. Ablation study
Local Adaptation module. To validate the effectiveness of the local adaptation module, we remove this part and train two models with the same training configuration. Results are compared on the five real-world datasets mentioned above. As shown in Table III, the results after local adjustment are better than those without local adjustment in most of the evaluation metrics. Subjective visual results also confirm this. We show some typical examples in Fig. 10.
For each set of images, we show from left to right the enhancement results without and with the local adaptation module and the corresponding adaptation map. It can be seen that the enhanced outputs with the inclusion of the local adaptation map (C in Eq. 10) have better subjective results, especially for some backlit areas, such as the backlit face of the church in (a) and the person's facial details in (c). We emphasize that it is significantly better to incorporate the local adjustment module for the coarsely enhanced image. This is because the receptive field of the Retinex-based CNN model is very limited, e.g., a 3×3 convolution can only cover a 3×3 neighborhood of adjacent pixels and lacks a global brightness prior. Some work [35] has introduced a transformer architecture to increase the receptive field, but the computational cost of this method is very high. In contrast, we introduce a relative brightness prior for local areas in a straightforward but effective way, enabling the model to obtain satisfactory enhancement results quickly and efficiently. Next, we explain the effect of performing maximum pooling on the adaptation map. This operator can alleviate excessive enhancement of dark-colored pixels, especially for some black text parts. A typical example is given in Fig. 12, where it can be seen that the overexposure of the text part is significantly relieved after pooling. It should be noted that this operation is actually only effective for some scenarios. How to prevent excessive enhancement of dark-colored objects (such as black hair) leading to color deviation is still an open problem in the low-light enhancement field. Adding semantic information may alleviate this phenomenon, but we focus on ultra-lightweight low-light enhancement networks here, so we do not consider it. To summarize, the local adaptation module brings considerable visual improvement with only three learnable parameters. This might suggest that researchers should consider more image brightness priors for low-light enhancement tasks rather than designing complicated structures. Structure topology.
To determine which structure is more suitable for estimating the illuminance component, we test several multi-branch structures. As shown in Fig. 11, we dub these three structures Diverse Branch (DB), Asymmetric Block (AB), and Triple Duplicate (TD), respectively. The part in the dashed box can be collapsed into a 3×3 convolution equivalently. We train and test these three structures on the multi-exposure dataset [27], and the test results are shown in Table IV. It can be seen that the Diverse Branch (DB) achieves the best performance, so we choose DB as the super-structure for training. In fact, we find that these multi-branch structures can achieve similar perceptual results, except for the plain structure, which has a significantly weaker modeling ability.
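To complement the ablations above, the following is a minimal PyTorch sketch of the local adaptation map and output fusion of Eqs. (6)-(10) at inference time. It is an illustration only: the function name, the pooling window, and the BT.601 luma coefficients used for the Y channel are our assumptions, and α, β, γ are shown at their initial values although they are learned jointly with the convolution in practice.

```python
import torch
import torch.nn.functional as F

def local_adaptation_fusion(coarse, inp, alpha=0.6, beta=-1.3, gamma=1.5, pool_k=5):
    """coarse, inp: (B, 3, H, W) RGB tensors in [0, 1]; returns the fused output."""
    r, g, b = inp[:, 0:1], inp[:, 1:2], inp[:, 2:3]
    y = 0.299 * r + 0.587 * g + 0.114 * b                        # luma (Y of YCbCr)
    med = y.flatten(1).median(dim=1).values.view(-1, 1, 1, 1)
    m = y - med                                                  # condition map, Eq. (6)
    m_min = m.amin(dim=(2, 3), keepdim=True)
    m_max = m.amax(dim=(2, 3), keepdim=True)
    m = (m - m_min) / (m_max - m_min + 1e-6)                     # normalise, Eq. (7)
    m = F.max_pool2d(m, pool_k, stride=1, padding=pool_k // 2)   # local smoothing, Eq. (8)
    c = alpha * m ** 2 + beta * m + gamma                        # shared quadratic curve, Eq. (9)
    return (coarse * c).clamp(0.0, 1.0)                          # output fusion, Eq. (10)
```

Only the three scalar curve parameters are learnable here, matching the 87-parameter budget reported for the full model (84 for the collapsed 3×3 convolution plus these 3 coefficients).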
" }, { "figure_ref": [], "heading": "D. Impact of training data", "publication_ref": [ "b41" ], "table_ref": [], "text": "To test the impact of training data, we retrain our model on SICE dataset part 1 [42] with different settings. SICE contains sets of images with different exposure levels, and we only adopt the underexposed images and their corresponding well-lit counterparts as training data. To be specific, we select three images with different brightness levels for each image series and thus obtain three sub-datasets, which we dub slight-low-light, medium-low-light, and extreme-low-light. Through experiments, we find that the luminance of the training data greatly affects the low-light enhancement results: training with the three sub-datasets results in three models with different enhancement capabilities." }, { "figure_ref": [], "heading": "E. Speed and efficiency", "publication_ref": [ "b10", "b11", "b18" ], "table_ref": [ "tab_5", "tab_5" ], "text": "Next, we compare the proposed method's inference speed and complexity with other approaches. Specifically, we compare the efficiency of different methods in processing 1080P images (1080×1920), and the results are shown in Table V.
Considering the different devices available under different conditions, we test both CPU and GPU inference times. The CPU used for the tests is an Intel(R) Xeon(R) Silver 4310 CPU at 2.10 GHz, and the GPU is an NVIDIA 1080 card. We first compare and analyze the model size. As shown in Table V, our model has the minimum number of parameters. It only contains one convolutional layer and three modulation coefficients, totaling 87 parameters. Some other methods (RUAS [11] and SCI [12]) are also very lightweight. This suggests that large models with hundreds of layers may not be necessary for many low-light scenes. At the same time, our method also has the lowest Giga Floating-Point Operations (GFLOPs) when processing images, indicating its great advantage in power consumption. However, it is worth noting that our method is not the fastest method on GPU: it is slightly slower than SCI and ZeroDCE++. SCI only contains three 3×3 Conv layers, which are highly optimized by modern deep learning frameworks, and ZeroDCE++ downsamples the input image when estimating the high-order curve. In contrast, our local adaptation module is relatively complex and has not been specifically optimized. However, both subjective and objective results verify that these two methods perform worse than our approach. The slight additional time in exchange for better visual quality is thus justifiable. When running on the CPU, the inference speed of all methods is significantly reduced, and our method achieves the fastest inference speed. These results show that our method is well adapted to different devices and achieves the best trade-off between speed and effectiveness. In fact, similar to ZeroDCE++, our approach still has room for optimization. For example, when processing high-resolution images, we can downsample the image first and calculate the corresponding illumination component and local adaptation map for the downscaled image to balance speed and quality. Even without any optimization, our approach can handle low-light input images in real time on an ordinary graphics card (e.g., NVIDIA 1080). Another interesting finding is that some traditional methods (such as LIME [19]) have decent performance but are very slow due to the lack of parallel optimization and their iterative calculations. This once again illustrates the huge potential of combining traditional methods and deep learning strategies for light adaptation tasks." }, { "figure_ref": [ "fig_9" ], "heading": "F. Limitations", "publication_ref": [], "table_ref": [], "text": "Although the proposed one-layer model has achieved promising results, some limitations still exist. The first limitation is that our model often fails to achieve satisfactory results when processing scenes with heavy noise. We display a typical example in Fig. 14. This is because a one-layer model cannot compete with deeper models that have hundreds of Conv layers and stronger denoising capabilities. Another limitation is that, although we have taken measures to avoid unnatural enhancement in local regions, there are still a few cases where a picture containing both very bright and very dark regions results in unnatural colors and color deviation in the enhanced output. This problem remains unsolved by most current methods. We leave these problems for our future work." }, { "figure_ref": [], "heading": "V. CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "This paper presents a lightweight structure for low-light image enhancement which contains only a single convolutional layer. We first explore the complexity extreme of learning-based LLIE methods. With the incorporation of structural re-parameterization techniques, we find that a single convolutional layer can achieve promising enhancement results. Then, inspired by curve adjustment schemes, a local adaptation module is further introduced to better adjust the brightness of local regions. The parameter-sharing strategy and the one-adjustment-only scheme also ensure its efficiency. Experimental results show that the proposed method is comparable with state-of-the-art methods in real-world low-light image enhancement tasks with fewer parameters and lower computational complexity." } ]
Low-light image enhancement (LLIE) aims to improve the illuminance of images captured with insufficient light exposure. Recently, various lightweight learning-based LLIE methods have been proposed to handle challenges such as unfavorable low contrast and low brightness. In this paper, we streamline the architecture of the network to the utmost degree. By utilizing the effective structural re-parameterization technique, a single convolutional layer model (SCLM) is proposed that provides global low-light enhancement as the coarsely enhanced result. In addition, we introduce a local adaptation module that learns a set of shared parameters to accomplish local illumination correction, addressing the issue of varied exposure levels in different image regions. Experimental results demonstrate that the proposed method performs favorably against state-of-the-art LLIE methods in both objective metrics and subjective visual effects. Additionally, our method has fewer parameters and lower inference complexity compared to other learning-based schemes.
Learning a Single Convolutional Layer Model for Low Light Image Enhancement
[ { "figure_caption": "Fig. 1.Comparison of performance and efficiency. We calculate the average Perceptual Index (PI)↓[15]-[17] on five real-world datasets. Giga Floating-Point Operations Per Second (GFLOPS)↓ is measured on a 1080P (1080×1920) image. The parameter amounts are provided in brackets. Our SCLM (marked with ⋆) outperforms previous methods in terms of image quality, parameter amount, and GFLOPs.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. The framework of the proposed SCLM. It consists of two parts, global enhancement, and local illuminance adaptation. The global enhancement module is a typical Retinex-based design, where a low-light image is first fed into a network with only one convolutional layer (during inference) to estimate the illumination map of the image. Then, the original image is divided by the illumination map to obtain the coarsely enhanced result. Meanwhile, the normalized illumination map is fed into a quadratic function curve which is driven by learnable parameters to obtain a local adaptation map. Finally, the coarsely enhanced result is element-wise multiplied with the local adjustment map, outputting the final enhanced image.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Illustrations of the one-layer model. (a) The plain structure, which only consists of a 3×3 convolution layer followed by a batch normalization. (b) A multi-branch structure that contains parallel convolutions and pooling operations in training phase, which can be collapsed into a plain 3×3 convolution during inference.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Enhancement results of different model topology. Plain structure appears severe color unsaturation artifacts, while model with structural reparameterization can produce much decent results. Zoom in for a better view.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .Fig. 6 .56Fig. 5. Visual comparison of different methods on LIME [19]. Parts of areas are zoomed in with red boxes.", "figure_data": "", "figure_id": "fig_4", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. Visual comparison of different methods on MEF [37]. Parts of areas are zoomed in with red boxes.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. Visual comparison of different methods on NPE [5]. Parts of areas are zoomed in with red boxes.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. Visual comparison of different methods on DICM [38]. Parts of areas are zoomed in with red boxes.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 .Fig. 11 .1011Fig. 10. Visual comparison of w/wo local adaptation module. We also display the heatmap of the local adaptation map. It can be seen that the details in some regions of the image have been further enhanced after local adjustment.", "figure_data": "", "figure_id": "fig_8", "figure_label": "1011", "figure_type": "figure" }, { "figure_caption": "Fig. 14 .14Fig. 14. Limitation of the proposed method. 
our model cannot cope with the images with serious noise well (e.g., The face region in the enhanced image appears obvious noise). Zoom in for a better visualization.", "figure_data": "", "figure_id": "fig_9", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "INDEX (PI) ↓ / NATURALNESS IMAGE QUALITY EVALUATOR (NIQE) ↓ ON FIVE REAL WORLD LOW-LIGHT DATASETS THE BEST RESULTSARE HIGHLIGHTED WITH RED COLOR, AND THE SECOND BEST RESULTS ARE HIGHLIGHTED WITH BLUE COLOR", "figure_data": "TypeMethodLIMEVVMEFNPEDICMAverageTraditional methodLIME [19]2.88/3.992.88/2.512.43/3.433.10/4.263.08/3.782.88/3.59Supervised learningRetinexNet [7] MBLLEN [34] DRBN [8]4.65/4.62 3.78/4.52 3.52/4.733.56/2.62 3.39/4.16 3.57/3.224.30/4.32 3.76/4.78 3.32/4.244.23/4.59 4.01/4.46 3.36/4.434.25/4.42 3.89/4.16 3.74/3.934.20/4.11 3.77/4.42 3.50/4.11IAT [35]4.31/4.914.40/4.063.68/4.294.02/4.514.22/4.394.13/4.43RUAS [11]3.13/4.423.92/4.482.84/3.914.04/9.224.20/4.853.63/5.37SCI easy [12]2.96/4.233.24/3.062.80/3.742.88/4.023.33/3.833.04/3.78Zero referenceSCI medium [12]3.06/4.293.19/3.022.66/3.723.45/4.493.63/4.243.20/3.95learningSCI difficult [12]2.87/4.122.96/2.842.56/3.702.97/4.143.17/4.172.91/3.79ZeroDCE [13]2.80/3.882.95/2.622.47/3.402.81/3.893.11/3.852.83/3.53ZeroDCE++ [14]2.94/4.042.99/2.612.54/3.472.91/3.973.34/3.932.94/3.60Supervised learningSCLM (Ours)2.85/4.062.86/2.652.58/3.572.75/3.772.83/3.692.77/3.55InputSCI easySCI mediumSCI difficultDRBNIATZeroDCEMBLLENRUASRetinexNetZeroDCE++SCLM (Ours)(a)InputSCI easySCI mediumSCI difficultDRBNIATZeroDCEMBLLENRUASRetinexNetZeroDCE++SCLM (Ours)(b)", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "ILLUMINANCE ON Y CHANNEL OF DIFFERENT DATASETS.", "figure_data": "DatasetLIMEVVMEFNPEDICMAverage illumination38.6166.29 41.06 87.0484.16", "figure_id": "tab_1", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "INDEX (PI) ↓ / NATURALNESS IMAGE QUALITY EVALUATOR (NIQE) ↓ OF W/WO LOCAL ADAPTATION MODULE(LAM).", "figure_data": "LIMEVVMEFNPEDICMAveragew/o LAM3.08/4.25 3.46/3.27 3.07/3.86 2.78/3.76 3.47/4.083.17/3.84w/ LAM2.85/4.06 2.86/2.65 2.58/3.57 2.75/3.77 2.83/3.692.77/3.55", "figure_id": "tab_2", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "INDEX (PI) ↓ / NATURALNESS IMAGE QUALITY EVALUATOR (NIQE) ↓ OF DIFFERENT STRUCTURAL RE-PARAMETERIZATION TOPOLOGY. AB,TD AND DB REFERS TO ASYMMETRIC BLOCK, TRIPLE DUPLICATE AND DIVERSE BRANCH RESPECTIVELY.", "figure_data": "LIMEVVMEFNPEDICMAverageAB2.95/4.333.10/2.86 2.73/3.86 2.97/4.182.95/3.89 2.94/3.83TD2.88/4.173.01/2.74 2.59/3.66 2.67/3.652.83/3.64 2.80/3.57DB2.85/4.062.86/2.65 2.58/3.57 2.75/3.772.83/3.69 2.77/3.55C. Ablation studyLocal Adaptation module. To validate the effectiveness ofthe local adaptive module, we remove this part and traintwo models with the same training configuration. Results arecompared on five real-world datasets as mentioned above. Asshown in Table", "figure_id": "tab_3", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "Comparison of different multi-branch topologies. We name these three structures Diverse Branch, Asymmetric Block, and Triple Duplicate. 
The parts in the dashed box can all be equivalently converted to a 3×3 convolution.", "figure_data": "Conv 1x1 Conv 1x1Conv 1x1Conv 3x3BN: Batch NormalizationBNBNBNBNElement-wise AdditionConv 3x3AVG PoolBNBNAVG Pool Average Pooling(a) Diverse BranchConv 1x1Conv 1x1 Conv 3x3BNBNBNConv 3x3 Conv 3x3 Conv 3x3Conv 3x3BNBNBNBN(b) Asymmetric Block(c) Triple DuplicateFig. 13.", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "EFFICIENCY OF DIFFERENT METHODS. THE INFERENCE TIME AND GFLOPS ARE MEASURED WHEN PROCESSING 1080P IMAGE.", "figure_data": "THE BEST RESULTS ARE HIGHLIGHTED WITH RED COLOR, AND THESECOND BEST RESULTS ARE HIGHLIGHTED WITH BLUE COLORMethodsParameter(M)GFLOPS GPU Time CPU TimeLIME [19]---255.35RetinexNet [7]0.555640.050.9010.29MBLLEN [34]0.450930.920.3233.73DRBN [8]0.557335.491.5710.98IAT [35]0.08742.290.6410.11RUAS [11]0.0036.770.112.50SCI [12]0.00031.110.010.13ZeroDCE [13]0.079164.230.122.48ZeroDCE++ [14]0.0110.210.010.28SCLM (Ours)0.0000870.170.020.12series and thus obtain three sub-datasets, which we dub slight-low-light, medium-low-light, and extreme-low-light. Throughexperiment, We find that the training data's luminance greatlyaffects the low light enhancement results. Training with thethree datasets resulted in three models with different low-lightenhancement capabilities. We compare the enhanced resultsof these three models with the results of the model trainedwith multiple exposure datasets [27]. As shown in Fig. 11, theenhancement results of model slight have relatively low lumi-nance, while model", "figure_id": "tab_5", "figure_label": "V", "figure_type": "table" } ]
Yuantong Zhang; Baoxin Teng; Daiqin Yang; Zhenzhong Chen; Haichuan Ma; Gang Li; Wenpeng Ding
[ { "authors": "J A Stark", "journal": "IEEE Trans. Image Process", "ref_id": "b0", "title": "Adaptive image contrast enhancement using generalizations of histogram equalization", "year": "2000" }, { "authors": "D Coltuc; P Bolon; J Chassery", "journal": "IEEE Trans. Image Process", "ref_id": "b1", "title": "Exact histogram specification", "year": "2006" }, { "authors": "H Ibrahim; N S P Kong", "journal": "IEEE Trans. Consumer Electron", "ref_id": "b2", "title": "Brightness preserving dynamic histogram equalization for image contrast enhancement", "year": "2007" }, { "authors": "D J Z.-U. Rahman; G A Jobson; Woodell", "journal": "Journal of Electronic imaging", "ref_id": "b3", "title": "Retinex processing for automatic image enhancement", "year": "2004" }, { "authors": "S Wang; J Zheng; H Hu; B Li", "journal": "IEEE Trans. Image Process", "ref_id": "b4", "title": "Naturalness preserved enhancement algorithm for non-uniform illumination images", "year": "2013" }, { "authors": "X Fu; Y Liao; D Zeng; Y Huang; X S Zhang; X Ding", "journal": "IEEE Trans. Image Process", "ref_id": "b5", "title": "A probabilistic method for image enhancement with simultaneous illumination and reflectance estimation", "year": "2015" }, { "authors": "C Wei; W Wang; W Yang; J Liu", "journal": "BMVC. BMVA Press", "ref_id": "b6", "title": "Deep retinex decomposition for low-light enhancement", "year": "2018" }, { "authors": "W Yang; S Wang; Y Fang; Y Wang; J Liu", "journal": "CVPR. Computer Vision Foundation / IEEE", "ref_id": "b7", "title": "From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement", "year": "2020" }, { "authors": "Y Jiang; X Gong; D Liu; Y Cheng; C Fang; X Shen; J Yang; P Zhou; Z Wang", "journal": "IEEE Trans. Image Process", "ref_id": "b8", "title": "EnlightenGAN: Deep light enhancement without paired supervision", "year": "2021" }, { "authors": "Y Wang; R Wan; W Yang; H Li; L Chau; A C Kot", "journal": "AAAI Press", "ref_id": "b9", "title": "Lowlight image enhancement with normalizing flow", "year": "2022" }, { "authors": "R Liu; L Ma; J Zhang; X Fan; Z Luo", "journal": "CVPR. Computer Vision Foundation / IEEE", "ref_id": "b10", "title": "Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement", "year": "2021" }, { "authors": "L Ma; T Ma; R Liu; X Fan; Z Luo", "journal": "CVPR. Computer Vision Foundation / IEEE", "ref_id": "b11", "title": "Toward fast, flexible, and robust low-light image enhancement", "year": "2022" }, { "authors": "C Guo; C Li; J Guo; C C Loy; J Hou; S Kwong; R Cong", "journal": "CVPR. Computer Vision Foundation / IEEE", "ref_id": "b12", "title": "Zeroreference deep curve estimation for low-light image enhancement", "year": "2020" }, { "authors": "C Li; C Guo; C C Loy", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b13", "title": "Learning to enhance low-light image via zero-reference deep curve estimation", "year": "2022" }, { "authors": "Y Blau; T Michaeli", "journal": "CVPR. Computer Vision Foundation / IEEE", "ref_id": "b14", "title": "The perception-distortion tradeoff", "year": "2018" }, { "authors": "C Ma; C Yang; X Yang; M Yang", "journal": "Comput. Vis. Image Underst", "ref_id": "b15", "title": "Learning a no-reference quality metric for single-image super-resolution", "year": "2017" }, { "authors": "A Mittal; R Soundararajan; A C Bovik", "journal": "IEEE Signal Process. 
Lett", "ref_id": "b16", "title": "Making a \"completely blind\" image quality analyzer", "year": "2013" }, { "authors": "E H Land", "journal": "Scientific american", "ref_id": "b17", "title": "The retinex theory of color vision", "year": "1977" }, { "authors": "X Guo; Y Li; H Ling", "journal": "IEEE Trans. Image Process", "ref_id": "b18", "title": "LIME: low-light image enhancement via illumination map estimation", "year": "2017" }, { "authors": "C Lee; C Lee; C Kim", "journal": "IEEE Trans. Image Process", "ref_id": "b19", "title": "Contrast enhancement based on layered difference representation of 2d histograms", "year": "2013" }, { "authors": "C Li; C Guo; L Han; J Jiang; M Cheng; J Gu; C C Loy", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b20", "title": "Lowlight image and video enhancement using deep learning: A survey", "year": "2022" }, { "authors": "L Shen; Z Yue; F Feng; Q Chen; S Liu; J Ma", "journal": "", "ref_id": "b21", "title": "Msr-net: Low-light image enhancement using deep convolutional network", "year": "2017" }, { "authors": "Z Zhao; B Xiong; L Wang; Q Ou; L Yu; F Kuang", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b22", "title": "RetinexDIP: A unified deep framework for low-light image enhancement", "year": "2022" }, { "authors": "G Fan; B Fan; M Gan; G Chen; C L P Chen", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b23", "title": "Multiscale lowlight image enhancement network with illumination constraint", "year": "2022" }, { "authors": "J Li; X Feng; Z Hua", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b24", "title": "Low-light image enhancement via progressive-recursive network", "year": "2021" }, { "authors": "C Liu; F Wu; X Wang", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b25", "title": "Efinet: Restoration for low-light images via enhancement-fusion iterative network", "year": "2022" }, { "authors": "M Afifi; K G Derpanis; B Ommer; M S Brown", "journal": "CVPR. Computer Vision Foundation / IEEE", "ref_id": "b26", "title": "Learning multiscale photo exposure correction", "year": "2021" }, { "authors": "X Ding; Y Guo; G Ding; J Han", "journal": "", "ref_id": "b27", "title": "ACNet: Strengthening the kernel skeletons for powerful CNN via asymmetric convolution blocks", "year": "2019" }, { "authors": "X Ding; X Zhang; N Ma; J Han; G Ding; J Sun", "journal": "CVPR. Computer Vision Foundation / IEEE", "ref_id": "b28", "title": "RepVGG: Making vgg-style convnets great again", "year": "2021" }, { "authors": "X Ding; X Zhang; J Han; G Ding", "journal": "CVPR. 
Computer Vision Foundation / IEEE", "ref_id": "b29", "title": "Diverse branch block: Building a convolution as an inception-like unit", "year": "2021" }, { "authors": "X Zhang; H Zeng; L Zhang", "journal": "ACM", "ref_id": "b30", "title": "Edge-oriented convolution block for real-time super resolution on mobile devices", "year": "2021" }, { "authors": "X Wang; C Dong; Y Shan", "journal": "ACM", "ref_id": "b31", "title": "RepSR: Training efficient vggstyle super-resolution networks with structural re-parameterization and batch normalization", "year": "2022" }, { "authors": "L Yuan; J Sun", "journal": "Springer", "ref_id": "b32", "title": "Automatic exposure correction of consumer photographs", "year": "2012" }, { "authors": "F Lv; F Lu; J Wu; C Lim", "journal": "BMVA Press", "ref_id": "b33", "title": "MBLLEN: low-light image/video enhancement using CNNs", "year": "2018" }, { "authors": "Z Cui; K Li; L Gu; S Su; P Gao; Z Jiang; Y Qiao; T Harada", "journal": "BMVA Press", "ref_id": "b34", "title": "You only need 90k parameters to adapt light: a light weight transformer for image enhancement and exposure correction", "year": "2022" }, { "authors": "V Vonikakis", "journal": "", "ref_id": "b35", "title": "Busting image enhancement and tonemapping algorithms", "year": "2021" }, { "authors": "K Ma; K Zeng; Z Wang", "journal": "IEEE Trans. Image Process", "ref_id": "b36", "title": "Perceptual quality assessment for multiexposure image fusion", "year": "2015" }, { "authors": "C Lee; C Lee; C Kim", "journal": "IEEE", "ref_id": "b37", "title": "Contrast enhancement based on layered difference representation", "year": "2012" }, { "authors": "", "journal": "", "ref_id": "b38", "title": "Color and camera raw", "year": "2020" }, { "authors": "D P Kingma; J Ba", "journal": "ICLR", "ref_id": "b39", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "I Loshchilov; F Hutter", "journal": "", "ref_id": "b40", "title": "SGDR: stochastic gradient descent with warm restarts", "year": "2017" }, { "authors": "J Cai; S Gu; L Zhang", "journal": "IEEE Trans. Image Process", "ref_id": "b41", "title": "Learning a deep single image contrast enhancer from multi-exposure images", "year": "2018" } ]
[ { "formula_coordinates": [ 3, 153.76, 409.01, 146.26, 9.06 ], "formula_id": "formula_0", "formula_text": "y = z ⊗ x(1)" }, { "formula_coordinates": [ 4, 115.75, 589.53, 184.28, 22.31 ], "formula_id": "formula_1", "formula_text": "O = (I ⊛ W -µ). γ σ + β,(2)" }, { "formula_coordinates": [ 4, 94.46, 688.58, 205.56, 22.31 ], "formula_id": "formula_2", "formula_text": "W ′ ← γ σ .W b ′ ← - µγ σ .W + β.(3)" }, { "formula_coordinates": [ 4, 357.15, 284.68, 205.89, 39.34 ], "formula_id": "formula_3", "formula_text": "O = (I ⊛ W 1 + b 1 ) ⊛ W 3 + b 3 = I ⊛ (W 1 ⊛ W 3 ) merged weight + (b 1 ⊛ W 3 + b 3 ) merged bias(4)" }, { "formula_coordinates": [ 4, 366.46, 544.94, 196.58, 26.73 ], "formula_id": "formula_4", "formula_text": "W ′ ← W (1) + W (2) + ... + W (n) , b ′ ← b (1) + b (2) + ... + b (n)(5)" }, { "formula_coordinates": [ 5, 121.96, 474.64, 178.06, 25.66 ], "formula_id": "formula_5", "formula_text": "I y = RGB2Y cbcr(I RGB ) M = I y -median(I y )(6)" }, { "formula_coordinates": [ 5, 113.94, 557.15, 186.09, 26.34 ], "formula_id": "formula_6", "formula_text": "M = M -M in( M) M ax( M) -M in( M)(7)" }, { "formula_coordinates": [ 5, 130.85, 657, 169.17, 8.84 ], "formula_id": "formula_7", "formula_text": "M = M axP ool(M)(8)" }, { "formula_coordinates": [ 5, 109.83, 724.45, 190.19, 26.67 ], "formula_id": "formula_8", "formula_text": "C i = α * x 2 i + β * x i + γ, where i ∈ (1, N ), x i ∈ M.(9)" }, { "formula_coordinates": [ 5, 398.61, 161.92, 164.42, 9.65 ], "formula_id": "formula_9", "formula_text": "I en = C ⊗ I coarse ,(10)" } ]
10.1007/978-3-030-73959-1_19
2023-05-23
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b1", "b1", "b3", "b8", "b5", "b14", "b11", "b10", "b6", "b0", "b8", "b9", "b15", "b4", "b13" ], "table_ref": [], "text": "Processes constitute a useful way of representing and structuring the activities in information systems. The Process Mining field offers techniques to discover, monitor and enhance real processes extracted from the event logs generated by processes execution, allowing to understand what is really happening in a process, which may be different from what designers thought [2]. Process models are usually represented with different notations that represent in a graphical manner the activities that take place in a process as well as the dependencies among them. Process models tend to be enhanced with properties such as temporal information, process execution-related statistics, trends of process key indicators, interactions between the resources involved in the process execution, etc. Information about these properties is shown to users through visual analytic techniques that help to understand what is happening in the process execution from different perspectives. However, in real scenarios process models are highly complex, with many relations between activities, which make them nearly impossible to be interpreted and understood by the users [2]. Furthermore, the amount of information that can be added to enhance the process description is also very high, and it is quite difficult for users to establish its relation with the process model. Additional analytics which summarize quantitative relevant information about a process, such as frequent or infrequent patterns, that make it easier to focus on finding usual or unexpected behaviors, are also very useful. [4] Nevertheless, their correct interpretation is usually difficult for users, since they need to have a deep knowledge about process modelling and process related visual analytics. In this regard, some approaches have been described to automatically generate natural language descriptions of a process aiming to provide users with a better understanding of it [10]. These descriptions aim to explain in a comprehensible way, adapted to the user information needs, the most relevant information of the process. In general, textual information is complementary to process model visualization, which is the usual way of conveying information to users.\nNatural Language Generation (NLG) [6,16] provides different methods for generating insights on data through natural language. Its aim is to provide users with textual descriptions that summarize the most relevant information of the data that is being described. Natural language is an effective way of conveying information to humans because i) it does not rely on human capability to identify or understand visual patterns or trends; and ii) it may include uncertain terms of expressions, which are very effective for communication [13]. Research suggests that in some specialized domains knowledge and expertise are required to understand graphical information [12] and proves that specialists can take better decisions based on textual summaries than on graphical displays [8]. 
With the integration of NLG and process model and analytics visualization, users with limited expertise on process modeling and analysis are provided with an Au-toAI tool that allows them to understand the relevant information about what is really happening within a process.\nIn the state-of-the-art, techniques for process textual description focus on two perspectives. On the one hand, the Control-Flow perspective aims to align a Business Process Model Notation (BPMN) representation of a process model with its corresponding textual description [1,10,11,17]. The aim of this proposal is to, first, facilitate users to understand the process model structure and the dependencies between activities and how resources are used and second, detect inconsistencies in the process model such as missing activities, activities in different order, or misalignments in order to maintain a stable process model through the different stakeholders of an organization. The main drawback of these approaches is that they focus on hand-made processes (not discovered from real data) with a well-defined structure, and draw all their attention in the validation step of the process design phase. Preventing its application to process models extracted from real-life data, usually unstructured, with many relations between activities, including frequent loops, parallels and choices. On the other hand, Case Description techniques focus on generating textual descriptions about the execution of single activities or activity sequences that have been registered in an event log [5]. The underlying process model is not discovered from the event log, therefore neither the process structure is considered nor the relations between activities. So, these last techniques do not provide a faithful description of what has happened during the process execution.\nIn this paper, we present the Process-To-Text (P2T) framework, a process mining-based framework (the real process model is discovered from the event Fig. 1. Framework for the linguistic description of processes data) for the automatic generation of natural language descriptions of processes. Descriptions include information from both the control-flow, case and specially time perspectives, the later being usually neglected in the literature. P2T is based on a Data-to-Text (D2T) architecture [15] using linguistic protoforms (as a way to handle imprecision) that will be generated into natural language texts following a hybrid template-based realization approach." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "P2T: a framework for textual description of processes", "publication_ref": [ "b13", "b1", "b2", "b17", "b7", "b16", "b1", "b3" ], "table_ref": [], "text": "Fig. 1 depicts the Process-To-Text (P2T) framework, for the automatic generation of textual descriptive explanations of processes. This framework is based on the most widely used architecture for D2T systems [15], which defines a pipeline composed of four stages: signal analysis, data interpretation, document planning, and microplanning and realization. P2T does not include the signal analysis stage since data input are not numerical, but event logs. Also, the document planning stage is subsumed in the microplanning and realization stage.\nProcess discovery. The input of the data interpretation stage is an event log, defined as a multiset of traces. 
A trace is a particular execution of a process (i.e., a case), and it is represented as an ordered sequence of events, being an event the execution of an activity, which contains context information about the execution said activity (e.g. timestamp, the case it belongs to, the resources involved in its execution, the process variables modified, etc.). However, the event log itself is not used as a direct input to generate the descriptions. Firstly, the process model has to be discovered from the event log by applying a process mining algorithm [2,3], which, traditionally have followed heuristic [19], inductive [9], or evolutionary computation [18] based approaches.\nProcess Analysis. Once the process is mined, the log is replayed (each trace is played over the discovered model) [2]. This gives us both temporal and frequencybased information about activities, arcs (relations between activities) and traces that can be as well used to extract frequent and infrequent behavioral patterns [4]. Then, this information is summarized into indicators (e.g. average duration and frequency of the relation between activities, average and mode duration of a path, changes of mean duration of an activity within a period, etc.) which are computed in the modules depicted in Figure 1 in the process analysis phase. This phase is part of the framework and indicators are computed for any case or domain." }, { "figure_ref": [], "heading": "Protoform generation.", "publication_ref": [ "b18" ], "table_ref": [], "text": "A protoform [20] is an abstracted summary which serves to identify the semantic structure of the object to which applies. In P2T, protoforms include fuzzy temporal references for providing information about the temporal dimension of activities, arcs, or traces. For example, the textual description Most executions of activity α 1 last ten minutes in average more than those of activity α 2 is generated by the protoform Q B activity lasts A. This is an activity-related protoform where Q is the quantifier Most B is a qualifier, in this case, activity α 1 , and A is the summarizer used to describe activity α 1 ten minutes in average more than those of activity α 2 . Note that behavioral patterns can be expressed through relations between activities, meaning that patternrelated protoforms are compositions of arcs-related protoforms. Protoforms have abstraction levels, as summarization does, this allows for a general abstracted summary to produce multiple different summaries depending on the knowledge used for its realization." }, { "figure_ref": [], "heading": "Document planning and Realization", "publication_ref": [ "b12" ], "table_ref": [], "text": "Its objective is to generate the natural language descriptions of the process, taking the protoforms, templates, and expert knowledge as inputs. In our model we follow a hybrid template-based realization, which uses some domain expert knowledge and is more rich and flexible than basic fill-in-the-gap approaches, but simpler and quicker than full fledged NLG system implementations. The SimpleNLG-ES [14] (Spanish version of the SimpleNLG realization engine) realization engine is used in this stage." }, { "figure_ref": [ "fig_0" ], "heading": "Case study", "publication_ref": [], "table_ref": [], "text": "We have applied our framework in a real case study in the health domain: the process related to the patients' management in the Valvulopathy Unit of the Cardiology Department of the University Hospital of Santiago de Compostela. 
In this Unit, consultations and medical examinations, such as radiography, echocardiogram or Computed Tomography (CT) scans, are performed to patients with aortic stenosis in order to decide the treatment (including surgery) they will undergo. Other information like unexpected events (e.g. emergencies, non-programmed admissions) and patient management activities (inclusion in the process, revisions, etc) are also recorded in the event log data.\nMedical professionals have a real interest in applying process mining techniques to this process, since it allows them to extract valuable knowledge about the Unit like, relations between age, sex, admittance (emergency or normal admission) and the number of successful surgeries or delay between activities, the delays between crucial activities (such as the admission of a patient and its surgery) due to tests like CT scans or echocardiograms, the different paths of the process which patients with different attributes follow, etc. The main goal to reach with all this information is to reduce the delays between process activities, prevent the repetition of activities (loops in the process), minimize patient management time, optimize resources and most important, increase the number of successful treatments.\nIn Fig. 2 the model that describes this process is shown, it has been discovered from an event log with data on 639 patients.\nSince medical, management activities and exceptions are recorded, and since patients' management depends on their pathological state, the frequency of each path (sequence of activities on a trace) in the process is very low (only the twenty most frequent paths from the six-hundred twenty-two total are shown in the figure) giving place to a highly complex model. This makes it difficult for medical professionals to understand what happens within the process even when it is graphically represented. To solve this, linguistic descriptions of the main process analytics are generated, as shown in Table 1, facilitating the understanding of temporal relations and delays between activities and traces, which is the main concern of domain experts in this regard." }, { "figure_ref": [], "heading": "Conclusions and Future Work", "publication_ref": [], "table_ref": [], "text": "Our P2T framework integrates the process mining, natural language generation, and fuzzy linguistic protoforms paradigms for the automatic generation of textual descriptive explanations of processes, which include quantitative information (i.e., frequent and infrequent behavior, temporal distances between events and frequency of the relationships between events). A real use-case is presented, showing the potential of P2T for providing natural language explanations addressed to cardiology specialists about consultations and interventions of the patients of the valvulopathy unit. As future work, extensive human validation of the generated descriptions will be conducted with domain experts. -6% of times, after the First Medical-Surgical Session, a Second Session is held around 5 weeks and 3 days later. On the contrary, 33% of times, patient undergoes Surgical Intervention around 7 weeks and 3 days later.\n-Patients who go through Assessment, Medical-Surgical Session and Surgical Intervention stay for 114 days in average at the cardiology service." 
}, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This research was funded by the Spanish Ministry for Science, Innovation and Universities, the Galician Ministry of Education, University and Professional Training and the ERDF/FEDER program (grants TIN2017-84796-C2-1-R, ED431C2018/29 and ED431G2019/04)." } ]
In this paper we present the Process-To-Text (P2T) framework for the automatic generation of textual descriptive explanations of processes. P2T integrates three AI paradigms: process mining for extracting temporal and structural information from a process, fuzzy linguistic protoforms for modelling uncertain terms, and natural language generation for building the explanations. A real use-case in the cardiology domain is presented, showing the potential of P2T for providing natural language explanations addressed to specialists.
Process-To-Text: a framework for the quantitative description of processes in natural language
[ { "figure_caption": "Fig. 2 .2Fig. 2. Model of the Valvulopathy process represented with the InVerbis Analytics visualization tools [7].", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Table 1 .1Textual descriptions generated for the valvulopathy process -During the first half of year 2018, 52% less Surgical Interventions were registered compared to the second half of that same year. period. -In the process, 78% less Surgical Interventions than Coronographies were registered. -Waiting time between Consultations is around 5 weeks and 6 days in average. -Waiting time between a Coronography and a CAT is around 6 weeks in average. -Around 7 weeks and 3 days after the Medical-Surgical Session a patient undergoes Surgical Intervention.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" } ]
Yago Fontenla-Seco; Manuel Lama; Alberto Bugarín
[ { "authors": "H Van Der Aa; H Leopold; H Reijers", "journal": "Springer", "ref_id": "b0", "title": "Detecting inconsistencies between process models and textual descriptions", "year": "2015" }, { "authors": "W M P Van Der Aalst", "journal": "Springer", "ref_id": "b1", "title": "Process Mining: Data Science in Action", "year": "2016" }, { "authors": "A Augusto; R Conforti; M Dumas; M L Rosa; F M Maggi; A Marrella; M Mecella; A Soo", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b2", "title": "Automated discovery of process models from event logs: Review and benchmark", "year": "2019" }, { "authors": "D Chapela-Campa; M Mucientes; M Lama", "journal": "Information Sciences", "ref_id": "b3", "title": "Mining frequent patterns in process models", "year": "2019" }, { "authors": "R M Dijkman; A Wilbik", "journal": "Information Systems", "ref_id": "b4", "title": "Linguistic summarization of event logs: A practical approach", "year": "2017" }, { "authors": "A Gatt; E Krahmer", "journal": "J. Artif. Int. Res", "ref_id": "b5", "title": "Survey of the state of the art in natural language generation: Core tasks, applications and evaluation", "year": "2018-01" }, { "authors": "A S Law; Y Freer; J Hunter; R H Logie; N Mcintosh; J Quinn", "journal": "Journal of Clinical Monitoring and Computing", "ref_id": "b6", "title": "A comparison of graphical and textual presentations of time series data to support medical decision making in the neonatal intensive care unit", "year": "2005" }, { "authors": "S J J Leemans; D Fahland; W M Van Der Aalst", "journal": "Springer", "ref_id": "b7", "title": "Discovering block-structured process models from event logs -a constructive approach", "year": "2013" }, { "authors": "H Leopold; J Mendling; A Polyvyanyy", "journal": "Lecture Notes in Computer Science", "ref_id": "b8", "title": "Generating natural language texts from business process models", "year": "2012" }, { "authors": "H Leopold; J Mendling; A Polyvyanyy", "journal": "IEEE Trans. Soft. Eng", "ref_id": "b9", "title": "Supporting process model validation through natural language generation", "year": "2014" }, { "authors": "M Petre", "journal": "Commun. 
ACM", "ref_id": "b10", "title": "Why looking isn't always seeing: Readership skills and graphical programming", "year": "1995-06" }, { "authors": "A Ramos-Soto; A Bugarín; S Barro", "journal": "Progress in Artificial Intelligence", "ref_id": "b11", "title": "Fuzzy sets across the natural language generation pipeline", "year": "2016" }, { "authors": "A Ramos-Soto; J Janeiro-Gallardo; A Bugarín", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Adapting SimpleNLG to spanish", "year": "2017" }, { "authors": "E Reiter", "journal": "", "ref_id": "b13", "title": "An architecture for Data-to-Text systems", "year": "2007" }, { "authors": "E Reiter; R Dale", "journal": "Cambridge University Press", "ref_id": "b14", "title": "Building Natural Language Generation Systems", "year": "2000" }, { "authors": "J Sànchez-Ferreres; J Carmona; L Padró", "journal": "Springer International Publishing", "ref_id": "b15", "title": "Aligning textual and graphical descriptions of processes through ilp techniques", "year": "2017" }, { "authors": "B Vázquez-Barreiros; M Mucientes; M Lama", "journal": "Information Sciences", "ref_id": "b16", "title": "Prodigen: Mining complete, precise and minimal structure process models with a genetic algorithm", "year": "2015" }, { "authors": "A Weijters; W Aalst", "journal": "Integrated Computer-Aided Engineering", "ref_id": "b17", "title": "Rediscovering workflow models from event-based data using little thumb", "year": "2003-07" }, { "authors": "L A Zadeh", "journal": "", "ref_id": "b18", "title": "A prototype-centered approach to adding deduction capability to search engines-the concept of protoform", "year": "2002" } ]
[]
10.18653/v1/2020.acl-main.747
2023-05-24
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b0", "b13", "b43", "b4", "b22", "b10", "b20", "b38", "b15", "b40", "b31", "b18", "b30", "b27" ], "table_ref": [], "text": "Scaling up model capacities has become a promising strategy for improving performance on various natural language processing benchmarks. However, increasing the number of parameters in Large Language Models (LLMs) (Chowdhery et al., 2022;Brown et al., 2020;OpenAI, 2023) and Sparse Mixture-of-Experts (SMoE) models (Fedus et al., 2022a;Lepikhin et al., 2020;Zuo et al., 2022;Dai et al., 2022) can result in extremely high computational costs and difficulties in fine-tuning them. As a result, researchers have been exploring low-capacity models (Park et al., 2021;Jiang et al., 2022;Xu and McAuley, 2022a) as an alternative.
Additionally, recent studies have shown that low-capacity models (Mirzadeh et al., 2020;Xu et al., 2023) can also collaboratively fine-tune high-capacity models and serve as valuable plug-ins for Large Language Models. Therefore, there is still a significant need for jointly training multi-capacity models in real-world applications.
The encoder-decoder framework with varying capacities has been extensively employed in numerous NLP generation tasks, particularly for multilingual machine translation (Vaswani et al., 2017b;Lewis et al., 2019;Raffel et al., 2020a;Xue et al., 2020). However, there is a lack of research on whether high-capacity and low-capacity models can promote each other in this task. The traditional approach involving multiple capacity models is the knowledge distillation (KD) technique. It has been developed to distill knowledge from high-capacity models and improve low-capacity models, resulting in notable success (Tang et al., 2019;Michel et al., 2019;Sun et al., 2020;Rao et al., 2022). Nevertheless, this method still has two main drawbacks. Firstly, the serial training pipeline requires high-capacity models to be prepared before low-capacity models, which increases the overall time cost. Secondly, the knowledge distillation process is unidirectional, where the low-capacity models receive useful information from the high-capacity models, but not vice versa.
In this work, we present a novel one-stop training framework of multiple capacity models to address the above challenges. The intuition behind our method is straightforward: to leverage the strengths of models with different capacities to facilitate each other, thereby enabling them to find optimal solutions collaboratively, rather than relying solely on individual learning. Specifically, we propose a novel joint training algorithm, called Two-Stage Joint-Training (TSJT), which is designed to jointly train high-capacity and low-capacity models, leading to more efficient convergence. To further evaluate the effectiveness of TSJT, we introduce two composite model variants, namely the shared and indep architectures. These two architectures take into account the variety of model capabilities and the extent of shared data. In addition, TSJT divides the training process into two stages. In the first stage, the submodels work collaboratively to integrate supervision from each other, ultimately reaching their optimal checkpoint. In the second stage, TSJT empowers submodels to optimize independently and seek their individual optimal solutions. We conduct extensive experiments on the multilingual machine translation benchmark WMT10 to evaluate the effectiveness of our one-stop training schema. 
The results show that our method exhibits superior performance on relatively low-capacity models, and achieves comparable or even better performance on high-capacity models. Furthermore, we delve into a detailed analysis of our approach by closely examining the optimization trajectory and loss visualizations. The results demonstrate that TSJT has a significant impact on the initial training process, leading models to converge faster and reducing the overall training time cost. Subsequently, TSJT empowers models to identify their best possible solutions, while ensuring stability and keeping the loss minimal." }, { "figure_ref": [], "heading": "One-stop Training Framework", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce the one-stop training framework in detail. Our method includes three critical components: the multiple capacity model architecture with shared or independent layers (Section 2.1), the two-stage joint training algorithm (Section 2.2), and the training objective (Section 2.3), which optimizes all capacity models simultaneously during training. The multi-capacity model architectures enable submodels with adaptable depth and width. Once the one-stop training is finished, models of various capacities can be extracted and utilized. The two-stage joint training algorithm makes the best of joint training supervision. It enables models with varying capacities to achieve faster and better convergence in the initial training process, and also allows them to explore their own optimal solutions. The training objective determines the specific optimization target in each training stage, with or without additional supervision from other models." }, { "figure_ref": [ "fig_0" ], "heading": "Multiple Capacity Models", "publication_ref": [], "table_ref": [], "text": "To verify our one-stop training framework on the multilingual machine translation task, we use the standard encoder-decoder framework for models with different capacities. Once the training is finished, all submodels can be separated from the composite model architecture and utilized. As shown in Figure 1, we propose two variations of model architecture, namely the shared and indep (independent) architectures, to cater to the requirements of different capacity models.
Shared architecture. The shared architecture provides two submodels with specific shared parameters. We take the MoE and dense models as examples in the shared architecture. The MoE model consists of MoE layers at even-numbered layers and standard transformer layers at odd-numbered layers. The dense model shares the standard transformer layers with the MoE model, but possesses unique parameters in its even-numbered layers. In the forward process, the hidden states of both the MoE and dense models go through layers with identical parameters at odd-numbered layers. Meanwhile, in the backward pass, the shared layers are jointly optimized by the two submodels. As a result, submodels within the shared architecture can benefit from the common parameters. However, this sharing necessitates that the two submodels maintain the same width, which constrains the capacity ratio between them.
Indep architecture. The independent architecture also includes two models with varying capacities. The primary distinction between the two architectures is the existence of shared parameters among the submodels within them. In the independent architecture, the two submodels are entirely separate from one another. 
Consequently, although there may be a loss in sharing information, the independent architecture can offer submodels with a wider range of capacities. Using the MoE and device models as examples in the independent architecture, the device model typically has half the hidden size and depth of the MoE model. Additionally, the device model inserts layers corresponding to the locations of the MoE layers in the sparse model. It should be noted that within this architectural framework, the high-capacity submodel isn't necessarily required to be an MoE model. It could just as well be any large model, such as a large-scale pre-trained language model.
Theoretically, these two model architectures allow various backbone submodels with flexible capacity. Due to the limit of space, we focus on three representative capacity models in this work, which are sparse, dense and device, listed in descending order of their size.
The sparse model is usually a deep and sparse model. The capacity of the sparse model has a direct influence on the following dense and device models. In this work, we employ a Mixture-of-Experts (MoE) model as the sparse model. Note that arbitrary sparser and deeper models can be adopted as the sparse capacity model here. The dense capacity model is a relatively compact and small one, similar to a vanilla encoder-decoder model comprising roughly 300 million parameters. On the other hand, the device capacity represents the smallest model in our schema, with less than 100 million parameters." }, { "figure_ref": [ "fig_1" ], "heading": "Two-stage Joint Training Algorithm", "publication_ref": [], "table_ref": [], "text": "We then propose a two-stage joint training algorithm to train each submodel exhaustively. An illustration of the Two-Stage Joint-Training (TSJT) algorithm is shown in Figure 2.
The idea of the TSJT algorithm is similar to the pre-training and fine-tuning of language models. The first stage emphasizes global optimization, enforcing consistency constraints between submodels of varying capacities to aid in their quicker and more efficient convergence. After reaching a near-optimal region, we transition to the second stage, during which the constraint is removed and the submodels are locally fine-tuned to find their own optimal solutions, respectively. It is not reasonable to maintain the strong constraint between models of different capacities throughout, as this could hinder their ability to discover optimal solutions. The timing of the stage transition is determined by the divergence between the two submodels.
Specifically, we employ the KL loss L_{KL} between the outputs of the two submodels as the quantified divergence, and set a separation threshold t_{sep}. Once
L_{KL} \leq t_{sep}, \quad (1)
the TSJT algorithm completes the first stage and proceeds to the second stage.
During the first stage, the shared and independent architectures must execute the forward process twice to calculate their respective cross-entropy loss and KL loss. In each backward step, we optimize the MoE submodel first, followed by the dense or device submodel. In the second stage, the process is nearly identical to the first stage, except that the calculation of the KL loss is omitted. Notably, during the second stage, the MoE and device submodels from the independent architecture can be trained asynchronously, which is not applicable in the shared architecture."
}, { "figure_ref": [], "heading": "Training Objective", "publication_ref": [], "table_ref": [], "text": "In this section, we demonstrate the derivation of our composite learning objective for each submodel within our novel model architectures during the joint training process. As previously stated, our model architecture comprises two submodels with varying capacities. Our joint training scheme aims to leverage the strengths of each submodel to complement the other, ultimately enabling them to find optimal solutions collaboratively, rather than relying solely on individual learning. Specifically, we add a consistency constraint to the original training objective of each submodel in the first stage of TSJT. Such a constraint employs the knowledge of the sparse model to facilitate the learning process of the dense and device models, and vice versa.
Using the submodels from the shared architecture as an example, the original training objective of the models is the cross-entropy loss. Given a source sequence x of length S and a target sequence y of length T, the training objective L is defined as:
L = -\frac{1}{T} \sum_{t=1}^{T} \log P(y_t | x). \quad (2)
In our schema, the two submodels should also remain consistent with each other during the training process. Considering the output y of the MoE model and the output y' of the dense model, the Kullback-Leibler (KL) divergence between them can be derived as:
L_{KL} = D_{KL}(y \| y') + D_{KL}(y' \| y). \quad (3)
Finally, the training objective of the dense model is defined as:
L' = L + \alpha \cdot L_{KL}, \quad (4)
where α is a scaling coefficient hyperparameter to control the effect of L_{KL}. Similarly, the training objectives of the submodels within the independent architecture can be derived in the same way. Note that for the MoE model, we also add the KL divergence into its training objective, but the α used in Eq. 4 for the MoE model is usually not the same as that for the lower-capacity models. In the second stage, we use the standard training objective in Eq. 2. We report more details in the experimental settings." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b35" ], "table_ref": [], "text": "We demonstrate the effectiveness of our methodology on multilingual machine translation tasks. We adopt a prevalent translation benchmark that includes 10 languages.
WMT10 (Wang et al., 2020) is a benchmark which includes bitext data between English and 10 other languages: French (Fr), Czech (Cs), German (De), Finnish (Fi), Latvian (Lv), Estonian (Et), Romanian (Ro), Hindi (Hi), Turkish (Tr) and Gujarati (Gu). The training set encompasses a total of 32.5 million sentence pairs. To evaluate the models, we merge all parallel corpora into one training set and assess their performance on individual language test sets. Finally, we present the case-sensitive, detokenized BLEU scores using the sacreBLEU metric." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [], "table_ref": [], "text": "Single Mixture of Experts Model. We employ the Mixture-of-Experts models as sparse models in our schema, and conduct single model training as the baseline. 
The MoE model is consist of an encoder of 12 layers and an decoder of 12 layers, incorporating a MoE layer in every alternate layer. Each MoE layer includes 8 experts. The embedding dim is set to 768. Single dense and device Model. We adopt the single dense model as our baselines which contains the same number of parameters with the dense and device models in the shared and indep architecture respectively. The dense model is composed of a 6-layer encoder and a 6-layer decoder, while the device model features a 3-layer encoder and a 3layer decoder. The width of the dense model aligns with that of the MoE model, whereas for the device model, it is set to 288, which is the smallest valid width. Both the single dense and device models are trained starting from scratch." }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b21" ], "table_ref": [], "text": "Our implementation of our schema is based on Fairseq library 2 (Ott et al., 2019). Following the Switch Transformers (Fedus et al., 2022b), we adopt top-1 gating in our MoE models. Additionally, we employ a balancing loss alongside the cross-entropy loss to balance the load of various experts in the MoE model. The balancing loss is multiplied by 0.01 and added to the total loss. For training, we use the Adam optimzer with 4000 warm-up steps, start learning rate of 5e -4, and 2 https://github.com/facebookresearch/fairseq inverse square root scheduler proposed in Raffel et al. (2020b). We accumulate gradients to make an effective batch size of 32, 768 tokens for all models. For all baselines and our methods, the maximum number of epochs is set to 8. For shared or independent structure, α in Eq. 4 is set to 5 for MoE, and 10 for either the dense or device models to maintain an equal magnitude with the balancing loss." }, { "figure_ref": [], "heading": "Results on WMT10", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We primarily report the results of different models on WMT10 benchmark, wherein the models are evaluated in both translation directions: 'X→En' and 'En→X'. We also report the key model size hyperparameters for comparison. The overall results are summarized in Table 1.\nWe observe that:\n(1) By developing a range of models with varying capacities, our method can harness the strengths of each model to deliver superior performance compared to standard individual training.\n(2) Compared to single MoE model, MoE models from both shared structure and independent structure achieve better performance. Our method significantly enhances performances in the X→En direction, and achieves competitive results in the En→X direction. In particular, MoE from the shared architecture outperforms the single MoE by 1.4% score in the Cs→En direction. While the independent one outperforms the single MoE by 4.1% score in the Hi→En direction.\n(3) For the dense model, the shared one exhibits better performance than the single model in the X→En direction, and also competitive result in the reverse direction. On X→En direction, the dense model from the shared architecture improves performance on every language except for Fr. For instance, both De and Et see a 0.5% score improvement. While in the reverse direction, the dense model is not so good on several low-resource languages.\n(4) The device model is the smallest-capacity model in our setting. We can observe that the device model within the independent architecture improves the performance in two directions. 
In the X→En direction, our method achieves an average 0.42% score improvement, and a 0.11% score improvement in the reverse direction." }, { "figure_ref": [ "fig_2", "fig_4" ], "heading": "Analysis", "publication_ref": [], "table_ref": [ "tab_2", "tab_2", "tab_2" ], "text": "In this section, we delve deeper into the inner workings of our method. Given the substantial cost of conducting experiments with 10 languages, we select a high-medium-low resource combination from the WMT10 benchmark as the basis for our analysis experiment. We adopt the Fr, Fi and Hi as representations of high, medium and low resource languages, respectively. On the new subset benchmark, we mainly conduct experiments to compare the following strategies or models:\n• Single trains the model without joint training, i.e. the vanilla single model.\n• ConstJT-shared/indep trains models within shared or independent structures with constant constraint along all the training process.\n• TSJT-shared/indep trains models within using TSJT algorithm.\nFor all the experiments, we mainly follow the setting from Section 3.3. We use dictionary of 3 languages, and set the maximum of epochs to 3. The results are summarized in Table 2.\nResults. We can observe from Table 2 that, TSJT algorithm outperforms the single training and constant training comprehensively. Compared to the baselines, TSJT-shared structure improves the result of the dense model up to 1.3% score on average in the X→En direction, and 0.66% score on average in the reverse direction. MoE models from both shared and independent strcuture get improvement about 0.5% score on average compared to the single MoE model. Regarding the constant joint training strategy, the results indicate that it does not consistently surpass the baseline; however, it does outperform in several language trans- lation tasks. However, the ConstJT strategy still somewhat restricts performance improvement in certain situations, such as the MoE model from the ConstJT-shared method. Overall, compared to the baseline, the ConstJT strategy demonstrates limited improvement and is not as effective as our TSJT algorithm. This underscores the necessity of the two-stage method, as such constraints may limit further progress when models move away from the initial point.\nWhy Joint Training? To explore how our TSJT algorithm benefits the training of models, we plot the cross-entropy loss of the three strategies mentioned above.\nThe visualizations are shown in Figure 3. We can observe that the TSJT approach exerts a significant impact on the optimization trajectory, particularly at the outset of the training process. In comparison to the single training approach, the loss of models trained using our two-stage joint-training approach decreases more rapidly and reaches a lower point in a shorter period of time. Furthermore, the TSJT approach maintains a stable lower loss status as the training progresses. Despite that the constant joint-training approach also produces some benefits initially, the KL constraint ultimately has a negative effect and impedes the models from discovering optimal solutions. This indicates that our two-stage joint-training approach can lead the models towards an efficient optimization direction by correcting each other through the KL constraint in the first stage, and subsequently, in the second stage, allows the models to individually find their optimal point while maintaining the advantage gained before.\nWhy Two Stage? 
As previously stated, it's impractical to impose KL constraints throughout the entire training process. Therefore, we delved deeper into the KL loss between the MoE and the dense or device model in ConstJT and TSJT frameworks to monitor its progression over time. Subsequently, we plot the curve of KL loss between the MoE and dense model in the above two frameworks during the entire training process. Results are shown in Figure 4. We can observe that, as the number of updates increases, the KL loss of the TSJT algorithm progressively decreases, surpassing that of the ConsJT approach. Our TSJT approach is validated by this result, as it reinforces the notion that constraints shouldn't be enforced throughout the entire training process. Additionally, models with varying capacities exhibit improved consistency and discover optimal solutions under TSJT, whereas models trained using Con-stJT experience counterproductive outcomes. The visualization further corroborates the findings in Table 2, where ConstJT initially showed encouraging results but failed to maintain its momentum as the training progressed." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b9", "b12", "b41", "b14", "b4", "b1", "b25", "b39", "b3", "b5", "b42", "b8", "b29", "b16", "b11", "b17" ], "table_ref": [], "text": "Mixture of Experts. Mixture-of-Experts (MoE) models which has been proposed about thirty years ago (Jacobs et al., 1991;Jordan and Jacobs, 1994) et al., 2021) and speech recognition (You et al., 2021). In natural language processing, recent studies focus on integrating MoE into Transformers model (Vaswani et al., 2017a). GShard (Lepikhin et al., 2021) and Switch Transformers (Fedus et al., 2022b) scale the original Transformers by replacing the feed-forward layers with experts layers. MoE models have achieved state-of-art performances on various natural language processing tasks, especially neural machine translations (Dai et al., 2022;Chi et al., 2022). However, the extremely high requirement for device and computation resources prevents MoE models from being widely applied to production. Several studies (Rajbhandari et al., 2022;Xue et al., 2022) explore reducing the time and computation cost of MoE models through tensor parallelism, knowledge integration and so on.\nModel Compression. As the scale of deep neural networks grows substantially, model compression has raised great attention in recent years. merous studies aim to address the major challenge of deploying large-scale models in practical scenarios. The most popular techniques of model compression include parameter sharing (Conneau et al., 2020), pruning Fan et al. (2020), quantization (Zafrir et al., 2019) and knowledge distillation (Hinton et al., 2015). Knowledge Distillation (KD) is one of the most common methods, which transfer knowledge of a large teacher model to a small student model. To ensure effective knowledge transfer, KD typically involves a loss function that minimizes the distance between the output of the teacher and student models. Depends on the optimization target, the knowledge distillation method can be roughly categorized into the logitbased KD and feature-based KD (Xu and McAuley, 2022b). Logit-based KD aims to align the logits of the teacher and student model. For example, DistilBERT (Sanh et al., 2020) distills BERT in the pre-training stage using a carefully designed loss function that comprises the initial MLM loss, cosine similarity loss, and KL divergence. 
MixKD (Liang et al., 2021) leverages mixup which encourages the student model to mimic the teacher's behavior on the linear interpolation of example pairs as well. Feature-based KD (Jiao et al., 2020;Liu et al., 2022) is similar with the logit-based KD, but it further capitalizes more knowledge from the intermedia features from the teacher models. And all of the existing methods are not designed in parallel, i.e., the student model can only be obtained after the teacher model is trained." }, { "figure_ref": [], "heading": "Conclusions and Future Work", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a novel one-stop training schema of multiple capacity models. Concretely, we design two composite model architectures to provide various-capacity models with flexible depth and width. To train different-capacity submodel exhaustively at the same time, we then propose a two-stage joint training algorithm called TSJT. It adjusts the consistency constraint at different stages. Experimental results indicate the effectiveness of our schema, and further analysis reveals the inner working of our TSJT." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Although our method demonstrates success on WMT10 benchmark, it is not without limitations. First, due to the limitation of computation resources, we only test our method on encoderdecoder based models and machine translation tasks. In the future, we plan to expand our framework to encompass additional model backbones, such as encoder-only and decoder-only architectures, as well as other tasks like understanding and language modeling. Moreover, the models we uesed are all trained from scratch, but our framework could also be applied to pre-trained models. More exploration on this direction will be better. Second, there are some vital hyper-parameters in our framework, e.g. the separate threshold t sep in TSJT algorithm and the scaling coefficient α in composite training objective Eq.2. We adopt grid search to select the best parameters, which requires considerable GPU resources. An automatic method would be more desirable." } ]
Training models with varying capacities can be advantageous for deploying them in different scenarios. While high-capacity models offer better performance, low-capacity models require fewer computing resources for training and inference. In this work, we propose a novel one-stop training framework to jointly train high-capacity and low-capacity models. This framework consists of two composite model architectures and a joint training algorithm called Two-Stage Joint-Training (TSJT). Unlike knowledge distillation, where models of different capacities are trained from scratch separately, our approach integrates supervision from the different-capacity models simultaneously, leading to faster and more efficient convergence. Extensive experiments on the multilingual machine translation benchmark WMT10 show that our method outperforms low-capacity baseline models and achieves comparable or better performance on high-capacity models. Notably, the analysis demonstrates that our method significantly influences the initial training process, leading to more efficient convergence and superior solutions.
One-stop Training of Multiple Capacity Models
[ { "figure_caption": "Figure 1 :1Figure 1: Two model architecture variants in our two-stage joint-training schema. (a) Shared architecture variant includes a MoE and a dense model, where they share specific layers. The shared layers are optimized by both of them. Thus dense model has limited width (same as MoE), but flexible depth. (b) Independent architecture variant includes a MoE and a device model, where they are completely independent from each other. Device model has flexible width and depth.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of Two-Stage Joint-Training (TSJT) algorithm for shared and independent model architecture.In the first stage two models are trained with additional KL constraint. In the second stage two models are trained seperately. Note that two models in shared architecture should be updated simultaneously due to the shared parameters.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Optimization trajectory of models training with different algorithms. \"Single\" denotes the vanilla training, \"ConstJT\" denotes the constant joint-training algorithm, and \"TSJT\" denotes the two-stage joint-training algorithm. Shared and indep denotes the model architecture.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Kullback-Leibler (KL) loss along training process of models trained with different algorithms.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Indep Arch: MoE and device are independent.", "figure_data": "NLL LossKL LossNLL LossNLL LossKL LossNLL Loss𝑦 $𝑦 #𝑦 \"𝑦 !Layer !\"#FFNFFN 1FFN 2FFN 3FFN 4FFN 1FFN 2FFN 3FFN 4FFNGateGateLayer !\"$FFNFFNLayer !FFNFFN 1FFN 2FFN 3FFN 4FFN 1FFN 2FFN 3FFN 4FFNGateGateDenseSparseSparseDeviceTokensTokens(a) Shared Arch: MoE and dense share layers.(b)", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Models performance on WMT10 benchmark on X→En and En→X seperately. Values are reported as percentage (%). For each model, we report a macro-average. # Para is the number of parameters in models. # Emb represents the size of embedding used in models. # Enc and # Dec are the number of layer in encoder and decoder respectively. 
# Exp is the number of experts in each MoE layer (if exist).", "figure_data": "Model# Para # Emb # Enc # Dec # ExpCsDeEtFiFrGuHiLvRoTrAvgX→EnSingle MoE Single dense845M 320M76812 612 68 131.70 38.50 24.40 25.90 33.40 20.60 15.80 26.90 33.60 19.80 27.06 31.10 36.60 22.80 24.80 32.50 18.60 16.20 25.80 35.70 19.20 26.33Single device91M28833125.20 29.60 15.70 18.80 26.90 11.00 10.70 18.50 27.20 12.60 19.62TSJT-shared MoE845M1212833.10 39.40 25.60 26.70 33.70 22.00 19.20 28.20 37.70 21.60 28.72TSJT-shared dense 320M76866131.70 37.10 23.30 24.90 32.40 18.80 16.60 25.90 35.90 20.10 26.67TSJT-indep MoE845M1212833.00 39.30 25.20 26.50 33.40 21.60 19.90 28.30 37.90 21.30 28.64TSJT-indep device91M28833125.70 29.70 16.10 19.60 27.40 10.90 10.90 19.50 28.10 12.50 20.04En→XSingle MoE Single dense845M 320M76812 612 68 125.40 33.70 19.10 21.20 31.90 12.00 11.30 24.00 28.40 17.00 22.40 25.10 31.60 16.30 19.40 30.10 8.40 11.10 21.40 26.00 13.60 20.30Single device91M28833119.60 24.10 11.20 13.20 25.90 3.306.90 14.80 19.40 7.80 14.62TSJT-shared MoE845M1212826.50 34.30 18.70 21.60 32.20 10.90 12.00 23.70 28.20 15.70 22.38TSJT-shared dense 320M76866125.40 31.70 16.10 19.40 30.40 8.20 11.10 21.60 25.30 13.00 20.22TSJT-indep MoE845M1212126.20 34.10 18.60 21.70 32.30 10.80 12.10 24.10 27.80 15.60 22.33TSJT-indep device91M28833119.50 24.70 11.20 13.70 25.60 3.706.70 14.90 19.50 7.80 14.73", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The BLEU scores (%) on 3 languages.", "figure_data": "MethodModelFrFiHiAvgX→EnMoE31.8 23.3 14.5 23.20SingleDense 30.1 20.8 12.6 21.17Device 23.9 14.4 8.5 15.60ConstJT-sharedMoE Dense 30.3 21.7 12.8 21.60 31.3 23.2 14.6 23.03ConstJT-indepMoE Device 25.6 15.7 8.1 16.47 31.1 22.9 13.4 22.47TSJT-sharedMoE Dense 30.8 22.3 14.1 22.40 32.1 23.9 15.6 23.87TSJT-indepMoE Device 26.4 16.6 9.5 17.50 32.0 23.9 15.2 23.70En→XMoE30.3 18.4 9.5 19.40SingleDense 28.7 16.1 8.5 17.77Device 23.2 10.0 4.5 12.57ConstJT-sharedMoE Dense 28.8 16.5 8.3 17.87 30.1 18.2 9.4 19.23ConstJT-indepMoE Device 24.1 11.2 4.5 13.27 29.9 17.6 7.7 18.40TSJT-sharedMoE Dense 29.4 17.3 8.6 18.43 30.6 19.0 10.3 19.97TSJT-indepMoE Device 24.6 11.5 5.0 13.70 30.8 19.1 9.4 19.77", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
Lan Jiang; Haoyang Huang; Dongdong Zhang; Rui Jiang; Furu Wei
[ { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Zewen Chi; Li Dong; Shaohan Huang; Damai Dai; Shuming Ma; Barun Patra; Saksham Singhal; Payal Bajaj; Xia Song; Xian-Ling Mao; Heyan Huang; Furu Wei", "journal": "", "ref_id": "b1", "title": "On the representation collapse of sparse mixture of experts", "year": "2022" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b2", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Damai Dai; Li Dong; Shuming Ma; Bo Zheng; Zhifang Sui; Baobao Chang; Furu Wei", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Stable-MoE: Stable routing strategy for mixture of experts", "year": "2022" }, { "authors": "Angela Fan; Edouard Grave; Armand Joulin", "journal": "", "ref_id": "b5", "title": "Reducing transformer depth on demand with structured dropout", "year": "2020" }, { "authors": "William Fedus; Barret Zoph; Noam Shazeer", "journal": "The Journal of Machine Learning Research", "ref_id": "b6", "title": "a. 
Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity", "year": "2022" }, { "authors": "William Fedus; Barret Zoph; Noam Shazeer", "journal": "Journal of Machine Learning Research", "ref_id": "b7", "title": "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity", "year": "2022" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b8", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Robert A Jacobs; Michael I Jordan; Steven J Nowlan; Geoffrey E Hinton", "journal": "Neural Computation", "ref_id": "b9", "title": "Adaptive mixtures of local experts", "year": "1991" }, { "authors": "Lan Jiang; Hao Zhou; Yankai Lin; Peng Li; Jie Zhou; Rui Jiang", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "ROSE: Robust selective finetuning for pre-trained language models", "year": "2022" }, { "authors": "Xiaoqi Jiao; Yichun Yin; Lifeng Shang; Xin Jiang; Xiao Chen; Linlin Li; Fang Wang; Qun Liu", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "TinyBERT: Distilling BERT for natural language understanding", "year": "2020" }, { "authors": "Michael I Jordan; Robert A Jacobs", "journal": "Neural Comput", "ref_id": "b12", "title": "Hierarchical mixtures of experts and the em algorithm", "year": "1994" }, { "authors": "Dmitry Lepikhin; Hyoukjoong Lee; Yuanzhong Xu; Dehao Chen; Orhan Firat; Yanping Huang; Maxim Krikun; Noam Shazeer; Zhifeng Chen", "journal": "", "ref_id": "b13", "title": "Gshard: Scaling giant models with conditional computation and automatic sharding", "year": "2020" }, { "authors": "Dmitry Lepikhin; Hyoukjoong Lee; Yuanzhong Xu; Dehao Chen; Orhan Firat; Yanping Huang; Maxim Krikun; Noam Shazeer; Zhifeng Chen", "journal": "", "ref_id": "b14", "title": "{GS}hard: Scaling giant models with conditional computation and automatic sharding", "year": "2021" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Ves Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b15", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2019" }, { "authors": "Kevin J Liang; Weituo Hao; Dinghan Shen; Yufan Zhou; Weizhu Chen; Changyou Chen; Lawrence Carin", "journal": "", "ref_id": "b16", "title": "Mixkd: Towards efficient distillation of largescale language models", "year": "2021" }, { "authors": "Chang Liu; Chongyang Tao; Jiazhan Feng; Dongyan Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Multi-granularity structural knowledge distillation for language model compression", "year": "2022" }, { "authors": "Paul Michel; Omer Levy; Graham Neubig", "journal": "", "ref_id": "b18", "title": "Are sixteen heads really better than one? 
In Advances in Neural Information Processing Systems", "year": "2019" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b19", "title": "", "year": "" }, { "authors": "Mehrdad Seyed Iman Mirzadeh; Ang Farajtabar; Nir Li; Akihiro Levine; Hassan Matsukawa; Ghasemzadeh", "journal": "OpenAI", "ref_id": "b20", "title": "Improved knowledge distillation via teacher assistant", "year": "2020" }, { "authors": "Myle Ott; Sergey Edunov; Alexei Baevski; Angela Fan; Sam Gross; Nathan Ng; David Grangier; Michael Auli", "journal": "", "ref_id": "b21", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "year": "2019" }, { "authors": "Young Dae; Moon-Hyun Park; Daesin Cha; Bohyung Kim; Han", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Learning student-friendly teacher networks for knowledge distillation", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu ; A", "journal": "The Journal of Machine Learning Research", "ref_id": "b23", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b24", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Samyam Rajbhandari; Conglong Li; Zhewei Yao; Minjia Zhang; Reza Yazdani Aminabadi; Ammar Ahmad Awan; Jeff Rasley; Yuxiong He", "journal": "", "ref_id": "b25", "title": "DeepSpeed-MoE: Advancing mixture-of-experts inference and training to power next-generation AI scale", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b26", "title": "", "year": "" }, { "authors": "Jun Rao; Xv Meng; Liang Ding; Shuhan Qi; Dacheng Tao", "journal": "", "ref_id": "b27", "title": "Parameter-efficient and studentfriendly knowledge distillation", "year": "2022" }, { "authors": "Carlos Riquelme Ruiz; Joan Puigcerver; Basil Mustafa; Maxim Neumann; Rodolphe Jenatton; André Susano Pinto; Daniel Keysers; Neil Houlsby", "journal": "", "ref_id": "b28", "title": "Scaling vision with sparse mixture of experts", "year": "2021" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b29", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "year": "2020" }, { "authors": "Haipeng Sun; Rui Wang; Kehai Chen; Masao Utiyama; Eiichiro Sumita; Tiejun Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Knowledge distillation for multilingual unsupervised neural machine translation", "year": "2020" }, { "authors": "Raphael Tang; Yao Lu; Linqing Liu; Lili Mou; Olga Vechtomova; Jimmy Lin", "journal": "", "ref_id": "b31", "title": "Distilling taskspecific knowledge from bert into simple neural networks", "year": "2019" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b32", "title": "a. 
Attention is all you need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b33", "title": "", "year": "" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b34", "title": "Attention is all you need", "year": "2017" }, { "authors": "Yiren Wang; Chengxiang Zhai; Hany Hassan", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Multi-task learning for multilingual neural machine translation", "year": "2020" }, { "authors": "Canwen Xu; Julian Mcauley", "journal": "", "ref_id": "b36", "title": "A Survey on Model Compression and Acceleration for Pretrained Language Models", "year": "2022" }, { "authors": "Canwen Xu; Julian Mcauley", "journal": "", "ref_id": "b37", "title": "A survey on model compression and acceleration for pretrained language models", "year": "2022" }, { "authors": "Canwen Xu; Yichong Xu; Shuohang Wang; Yang Liu; Chenguang Zhu; Julian Mcauley", "journal": "", "ref_id": "b38", "title": "Small models are valuable plug-ins for large language models", "year": "2023" }, { "authors": "Fuzhao Xue; Xiaoxin He; Xiaozhe Ren; Yuxuan Lou; Yang You", "journal": "", "ref_id": "b39", "title": "One student knows all experts know: From sparse to dense", "year": "2022" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "", "ref_id": "b40", "title": "mt5: A massively multilingual pre-trained text-to-text transformer", "year": "2020" }, { "authors": "Zhao You; Shulin Feng; Dan Su; Dong Yu", "journal": "", "ref_id": "b41", "title": "SpeechMoE: Scaling to Large Acoustic Models with Dynamic Routing Mixture of Experts", "year": "2021" }, { "authors": "Ofir Zafrir; Guy Boudoukh; Peter Izsak; Moshe Wasserblat", "journal": "", "ref_id": "b42", "title": "Q8bert: Quantized 8bit bert", "year": "2019" }, { "authors": "Simiao Zuo; Xiaodong Liu; Jian Jiao; Jin Young; Hany Kim; Ruofei Hassan; Jianfeng Zhang; Tuo Gao; Zhao", "journal": "", "ref_id": "b43", "title": "Taming sparsely activated transformer with stochastic experts", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 379.67, 499.6, 145.47, 13.05 ], "formula_id": "formula_0", "formula_text": "L KL ≤ t sep ,(1)" }, { "formula_coordinates": [ 4, 112.05, 564.65, 177.81, 33.58 ], "formula_id": "formula_1", "formula_text": "L = - 1 T T t=1 log P(y t |x).(2)" }, { "formula_coordinates": [ 4, 92.17, 704.29, 197.7, 14.63 ], "formula_id": "formula_2", "formula_text": "L KL = D KL (y∥y ′ ) + D KL (y ′ ∥y).(3)" }, { "formula_coordinates": [ 4, 128.7, 759.62, 156.93, 13.76 ], "formula_id": "formula_3", "formula_text": "L ′ = L + α • L KL , (4" }, { "formula_coordinates": [ 4, 285.63, 763.92, 4.24, 9.46 ], "formula_id": "formula_4", "formula_text": ")" } ]
10.18653/v1/2021.emnlp-main.532
2023-10-12
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b23", "b45", "b26", "b13", "b20", "b2", "b24", "b13", "b40", "b21", "b6", "b38", "b25", "b37", "b33", "b4" ], "table_ref": [], "text": "While recent progress of conditional generation has improved the text summarization performance dramatically (Lewis et al., 2020;Zhang et al., 2020;Liu et al., 2022), the factuality problem -where models often yield summaries that are not grounded by the source input -remains prominent and critical in abstractive summarization systems. For example, prior research found that 30% of automatic summaries could contain hallucinations (Goodrich et al., 2019;Kryscinski et al., 2019), and this phenomenon persists even in the state-of-the-art pretraining-based models (Cao and Wang, 2021). Unfortunately, such factuality errors cannot be reflected by the traditional summarization metrics An example of prompting the text-davinci-003 language model for factuality evaluation. Chain-of-thought prompting adds more instructions to ask the model to find the evidence. like ROUGE scores (Lin, 2004). Thus, a reliable and effective factual consistency2 evaluation method is desired.\nDifferent from previous work that focuses on training specific natural language inferences or question answering models for factuality evaluation (Goodrich et al., 2019;Wang et al., 2020;Kryscinski et al., 2020;Durmus et al., 2020;Scialom et al., 2021), we explore an alternate approach through directly prompting LLMs. This work is inspired by the recent success of zero/fewshot prompting with LLMs (Liu et al., 2021), particularly on instructing tuning (Sanh et al., 2022;Wei et al., 2022a;Ouyang et al., 2022;Chung et al., 2022) and chain-of-thought prompting (Wei et al., 2022b) which greatly boost the prompt understanding and reasoning abilities of LLMs. Given these latest advances, in this paper, we aim to answer: are LLMs off-the-shelf factuality evaluators for summarization? Vanilla Prompting: Q: Can the following statement be inferred from the above document? Yes or No? \" space invaders \" is the founder of the new japan pro wrestling organization . inoki has appeared in the u.s.-based wwe -which describes him as \" among the most respected men in sports-entertainment \" . A:\nNo." }, { "figure_ref": [], "heading": "Chain of thought prompting:", "publication_ref": [ "b4", "b10", "b27", "b12" ], "table_ref": [], "text": "Q: Can the following statement be inferred from the above document? Please answer with the following structure. 1. Try to find the supporting evidence from the document. 2. Answer Yes or No. \" space invaders \" is the founder of the new japan pro wrestling organization . inoki has appeared in the u.s.-based wwe -which describes him as \" among the most respected men in sports-entertainment \" . A: 1.\nAntonio Inoki is described as \"among the most respected men in sports-entertainment\" and is the founder of the New Japan Pro Wrestling organization. 2. No, Space Invaders is not the founder of the New Japan Pro Wrestling organization.\nSentence-by-sentence prompting: Q: Can the following statement be inferred from the above document? Yes or No? 1. \" space invaders \" is the founder of the new japan pro wrestling organization .\n2. inoki has appeared in the u.s.-based wwe -which describes him as \" among the most respected men in sports-entertainment \" . A: 1.\nNo. 2. Yes.\nTable 1: A zero-shot prompting example with three prompting methods. We bold the content from the source that supports the prediction. 
The listed prompts do not show the prepended documents for ease of space. We use the exact wording of instructions here in our experiments.\nTo this end, we cast factual consistency evaluation as an entailment task and directly query LLMs whether the summary can be inferred by the source document or not as shown in Figure 1. We incorporate recent developments of LLMs and prompting techniques: (1) Beyond the vanilla prompting, we examine chain-of-thought prompting, which encourages LLMs to articulate a reasoning process for their predictions. Furthermore, we introduce a sentence-by-sentence prompting method designed for long summaries, breaking down lengthy generations into multiple abbreviated summaries, thereby simplifying the task. (2) We consider five powerful LLMs with different sizes and accessibility: GPT-4 (OpenAI, 2023), ChatGPT (OpenAI, 2022), text-davinci-003, code-davinci-002, and the open-source Flan-T5 (Chung et al., 2022) model that has 11 billion parameters and can be deployed with reasonable hardware requirements.\nConcurrent research investigates the use of LLMs to evaluate generated text. While Fu et al. (2023) emphasize general text generation tasks and different aspects, we focus specifically on the factuality of summarization. Furthermore, their approach necessitates access to the logits of generated sequences, which are often unavailable in current LLMs' access. Luo et al. (2023) assess ChatGPT as the factuality evaluator in a manner similar to our method. In comparison, however, our study spans a broader range of LLMs that outperform ChatGPT significantly and introduces a novel sentence-bysentence prompting approach that yields considerable improvements in most settings. Gekhman et al. (2023) have proposed a novel approach to gen-erate synthetic data using LLMs. By evaluating the performance of LLMs fine-tuned on the generated data, the authors successfully demonstrated the efficiency of both the data and the model. This study highlights the significant contribution of LLMs in advancing the field of summary generation and evaluation.\nIn the experiments, we evaluate factuality evaluators on five factuality benchmarks that cover different quality levels of summaries, which are generated from a variety of summarization systems. Empirical results show that prompting LLMs outperforms the existing factuality evaluation methods in all settings. The improvement over best of the compared methods is up to 12.2 absolute points in terms of binary inconsistency classification accuracy. Our results imply that LLMs are better factual consistency evaluators of summaries, which provides supportive evidence for a potential shift in the methodology of factuality evaluation to prompting LLMs." }, { "figure_ref": [], "heading": "Prompting Methods and Models", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Prompting", "publication_ref": [ "b21", "b14", "b47" ], "table_ref": [], "text": "Following previous work, we cast factual consistency evaluation as an entailment task (Kryscinski et al., 2020;Goyal and Durrett, 2020;Zhao et al., 2020), and ask the model whether the summary could be inferred from the source document. 3 Denote the document as x, the summary as y, and the human-written instruction as i. Then the prompt p ≡ ⟨x, i, y⟩, where ⟨⟩ represents concatenation of the document, instruction, and summary. We feed p as input to the model that is expected to produce short, yes or no answers. 
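A minimal sketch of this pipeline is shown below; the query_llm wrapper, sentence splitter, and aggregation rule are illustrative placeholders rather than the interface of any particular API, and the sentence-level helper anticipates the sentence-by-sentence prompting introduced next (the actual prompts number all summary sentences within a single query, as in Table 1).

```python
# Sketch of assembling the prompt p = <x, i, y> and reading off a yes/no verdict.
INSTRUCTION = ('Q: Can the following statement be inferred from the '
               'above document? Yes or No?')

def build_vanilla_prompt(document: str, summary: str) -> str:
    # Concatenate document, instruction, and the statement to verify.
    return f"{document}\n\n{INSTRUCTION}\n{summary}\nA:"

def is_consistent(document: str, summary: str, query_llm) -> bool:
    # `query_llm` is a hypothetical wrapper around whichever LLM is evaluated.
    answer = query_llm(build_vanilla_prompt(document, summary))
    # The model is expected to reply with a short "Yes" or "No".
    return answer.strip().lower().startswith("yes")

def is_consistent_sentence_by_sentence(document: str, summary: str,
                                       query_llm, split_sentences) -> bool:
    # One simple realization of the sentence-by-sentence idea: verify each
    # summary sentence and accept the summary only if every sentence passes.
    return all(is_consistent(document, sent, query_llm)
               for sent in split_sentences(summary))
```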
We term such a vanilla version of prompting as vanilla prompting. In the following, we describe another two more advanced prompting techniques, while an example of all the prompt formats we studied is illustrated in Table 1: • Chain-of-thought prompting: Wei et al.\n(2022b) demonstrate that it is helpful to ask the model to generate a thought process along with the final prediction. Therefore, we design a chain-of-thought instruction, which aims to guide the model to output the supporting evidence from the document before making the final judgment.\n• Sentence-by-sentence prompting: Summaries often consist of multiple sentences and facts that need to be verified. We design a framework to decompose the summary into smaller text blocks first and then evaluate them one by one. For simplicity, we just decompose the summary by sentence boundary that is already effective empirically, while we leave the study of other decomposition methods (e.g. decomposition through prompting LLMs) as future work.\nTable 1 only exemplifies prompts under zeroshot prompting setting, while we also experiment with few-shot prompting for the three prompting methods. The few-shot demo examples are randomly picked from the validation set. We have included more details on our prompt engineering practice in Appendix A." }, { "figure_ref": [], "heading": "Models", "publication_ref": [ "b4" ], "table_ref": [], "text": "We study five LLMs: GPT-4 (OpenAI, 2023), ChatGPT (OpenAI, 2022), text-davinci-003, code-davinci-002, and Flan-T5 (Chung et al., 2022). Details are described in Appendix B." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Benchmark Datasets", "publication_ref": [ "b30", "b44", "b2", "b39", "b8", "b15", "b2", "b21", "b34", "b23", "b36", "b39" ], "table_ref": [], "text": "A summarization faithfulness benchmark is composed of source documents, model-generated summaries, and annotated faithfulness labels. The formal faithfulness benchmarks mainly use two popular summarization datasets, CNN/Dailymail (CNNDM, Hermann et al. (2015)) and XSum (Nallapati et al., 2016). CNNDM is a multi-sentence summarization dataset for CNN and Dailymail articles. Its reference summaries highly overlap with the source articles, resulting in a low degree of abstractiveness (Zhang et al., 2018). In contrast, summaries in XSum are typically more abstractive, consequently leading summarization models to be more susceptible to generating factual errors within XSum (Cao and Wang, 2021). Due to the disparate characteristics of CNNDM and XSum, we assess factual evaluators on them separately.\nRecently, Tang et al. (2022) aggregates existing faithfulness benchmarks to form a new benchmark, AggreFact. We manually investigate the benchmarks included in AggreFact, and select a subset of them as described below as our testbed which are either commonly used or annotated by the authors or experts:4 SummEval (Fabbri et al., 2021), XsumFaith (Maynez et al., 2020a), Goyal21 (Goyal and Durrett, 2021), CLIFF (Cao and Wang, 2021), FactCC (Kryscinski et al., 2020), and Frank (Pagnoni et al., 2021). These benchmarks cover summaries generated from a wide range of models ranging from pre-transformer methods to SOTA pretrained models. 
In this paper, we combine Goyal21 and CLIFF -where the summaries are produced by BART (Lewis et al., 2020) or T5 (Raffel et al., 2020) -as a benchmark to study the ability of evaluators to assess highquality summaries produced by SOTA models, we refer to this combined benchmark as XSumSota. We distinguish XSumSota and XSumfaith to echo the findings in Tang et al. (2022) that the performance of faithfulness evaluation methods degrades dramatically as the summaries are from more effective models. Details of these benchmarks are described in Appendix C." }, { "figure_ref": [], "heading": "Setup", "publication_ref": [ "b39", "b0", "b22", "b39", "b39" ], "table_ref": [], "text": "We apply greedy decoding to obtain output from LLMs in all settings unless otherwise specified. We run sentence-by-sentence prompting in CNNDM datasets only since the summary of the XSum dataset is a single sentence. For ease of spaces, we move the results on FactCC and Frank to Appendix F. In few-shot settings, within a benchmark SummEval only since the summary in XSum contains just one sentence. We bold the numbers that exceed all the previous approaches, and underline the best accuracies on each dataset. We exclude DAE on XSumFaith for a fair comparison since it is trained on the human-annotated data from XSumFaith. GPT-4 † is assessed with zero-shot vanilla prompting on XSum datasets and 2-shot sbs promtping on CNNDM datasets, due to cost consideration. Numbers of previous approaches are from Tang et al. (2022).\nwe prepend the same two randomly picked demo examples (one is a positive example and the other is negative) from the validation set to the original prompt. 5 We perform analysis on the effect of the number of demo examples as well as error types in Appendix D. Metric: We use balanced accuracy (Brodersen et al., 2010) as the evaluation metric following previous work (Laban et al., 2022;Tang et al., 2022), which is defined as:\nBAcc = 1 2 T P T P + F N + T N T N + F P ,(1)\nwhere TP stands for True Positive, FN is False Negative, TN is True Negative, and FP is False Positive. Random predictions would obtain a 50% balanced accuracy score. Different from most prior approaches which need to tune a threshold hyperparameter to convert raw output scores into binary labels (Tang et al., 2022), the LLMs directly produce discrete, yes or no predictions as shown in Table 1." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b14", "b38", "b7", "b22", "b22" ], "table_ref": [], "text": "We compare LLMs with five top-performing evaluators: DAE (Goyal and Durrett, 2020), QuestEval (Scialom et al., 2021), QAFactEval (Fabbri et al., 2022), SummC-ZS (Laban et al., 2022) and SummaC-Conv (Laban et al., 2022). Detailed description can be found at Appendix E." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_1", "tab_1", "tab_1" ], "text": "Are LLMs better factual consistency evaluators? The full results comparing different evaluators across the three benchmarks are illustrated in Table 2. We see that LLMs achieved the stateof-the-art performance on all benchmarks. The improvements over the previous best on SummEval, XsumFaith, XsumSota are 12.2, 3.7, and 2.4 absolute points respectively. text-davinci-003 and GPT-4 are the most effective models overall, outperforming the non-LLM approaches on all the three benchmarks. Flan-T5, code-davinci-002 and ChatGPT beat the previous best on two out of three benchmarks. 
Therefore, we conclude that LLMs are indeed better factual consistency evaluators when properly prompted.\nComparing different prompting methods: As shown in Table 2, chain-of-thought (cot) prompting hurts performance dramatically compared to vanilla prompting in most cases. This is probably because the factual consistency task is less reasoning-intensive compared to numerical and symbolic reasoning tasks where cot archives success. Sentence-by-sentence (sbs) prompting clearly improves over vanilla prompting on SummEval for code-davinci-002 and text-davinci-003, particularly in code-davinci-002, sbs is around or over 10 points better than vanilla prompting in both zero-and few-shot settings. This verifies that decomposing a long summary into smaller blocks makes factual consistency evaluation easier.\nComparing few-shot with zero-shot: While few-shot prompting fails to yield consistent gains over zero-shot prompting on all settings, it especially helps code-davinci-002, for example, it outperforms zero-shot prompting by ∼15/16/10 points on SummEval with vanilla/cot/sbs prompting, by 6.3 points on XSumSota with vanilla prompting, and by ∼12/16 points on XSumFaith with vanilla/cot prompting. After examining the model output, we found that code-davinci-002 often fails to understand and follow the instructions in the zero-shot setting, but is able to do so when provided with exemplars.\nComparing different LLMs: Based on the previous comparisons, we summarize that text-davinci-003 and GPT-4 are two best models for the factual consistency task, while being less sensitive to the availability of exemplars.\nOn the other hand, code-davinci-002 requires providing a few demo examples to potentially work well. Importantly, Flan-T5 achieves surprising results in general -under a zero-shot setting in SummEval and XSumSota, Flan-T5 not only beats all the baselines, but also outperforms both the GPT-3.5 variants that are orders of magnitude larger. The unsatisfying performance of Flan-T5 on XSumFaith may be due to a lack of per-dataset prompt tuning for XSumFaith.\nAre we there yet? While LLMs make great progress in factual consistency evaluation as shown in Table 2, we observe distinct patterns. Taking text-davinci-003 as an example, it has pushed the ACC of the CNNDM benchmark SummEval to 88%, but its performance on the two XSum benchmarks is no more than 75%. These results imply that it remains challenging to evaluate the faithful-ness of highly abstractive summaries. Therefore, faithfulness evaluation has to continue relying on human labor in practice at the current stage, and automatic metrics still have a long way to go." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "In this paper, we focus on one of the central tasks in summarization, factual consistency evaluation of summaries, and explore to prompt large language models to address it. We perform a comprehensive empirical study demonstrating large language models are better factual consistency evaluators when properly prompted. We note that prompting LLMs is a highly flexible approach and could go beyond the usage in this paper for factuality consistency evaluation." }, { "figure_ref": [], "heading": "A Analysis on Different Prompts", "publication_ref": [], "table_ref": [], "text": "Q: Can the following statement be inferred from the above document? Yes or No? Q: Is the following statement factually consistent with the above document? Yes or No? 
Q: Does the above document entail the following statement? Yes or No?\nTable 3: The three different instructions we use to conduct the robustness experiment.\nPrompt Engineering: LLMs are notoriously known to be sensitive to the precise wording of prompts, and thus prompt engineering is required in the prompting process. We try several instructions in our experiments and select the best one in terms of the validation performance, while we perform robustness analysis for different prompts.\nNote that we deliberately avoid the use of the term \"summary\" in the instruction but replace it with \"statement\", it is because we found that the term \"summary\" would reveal that the generated text is intended to be a summary of the source, and consequently, the model is inclined to function as a general summarization evaluation task rather than focusing only on factual consistency. We emphasize that we use the same instructions across all models and benchmarks without tuning them separately for each dataset.\nRobustness on different prompts: We run experiments using the three prompts listed in Table 3, and report the mean and the standard deviation of balanced accuracy. We evaluate Flan-T5, text-davinci-003, and code-davinci-002 on SummEval, XSumFaith, and XSumSota. For fewshot settings, we also randomly shuffle the order of the exemplars in addition to varying the instruction. We utilize vanilla prompting on XSumFaith and XSumSota, and sentence-by-sentence prompting on SummEval. Results are reported in Table 4. text-davinci-003 exhibits the smallest variance, demonstrating strong abilities to follow different, synonymous instructions. code-davinci-002 and Flan-T5 are more sensitive to the wording of prompts." }, { "figure_ref": [], "heading": "B Models", "publication_ref": [ "b49", "b36", "b43" ], "table_ref": [], "text": "Below we describe the 5 LLMs that we study. GPT-4 (OpenAI, 2023) are not released and we use it through the OpenAI API.\nChatGPT (OpenAI, 2022) is a sibling model for instructGPT. It's trained on plenty of instructions and responses by supervised learning and it then went through Reinforcement Learning from Human Feedback (RLHF). These two techniques enable it to follow human instructions efficiently. We also examine ChatGPT through API.\ntext-davinci-003 is a variant in the GPT-3.5 series. It is obtained after joint pretraining of text and code and then further tuned with annotated instruction data6 -consequently, text-davinci-003 is much more powerful than the original GPT-3 model (Brown et al., 2020). text-davinci-003 is likely to have 175 billion parameters following GPT-3 but cannot be verified from public information. We also use it through API.\ncode-davinci-002 is another GPT-3.5 variant. While it was intended to be used in the code domain, recent work indicates that code-davinci-002 is more effective than the text davinci models on text numerical reasoning tasks (Zhou et al., 2022).\nSimilar to text-davinci-003, we examine code-davinci-002 through API.\nFlan-T5 (Chung et al., 2022) is fine-tuned from the T5 model (Raffel et al., 2020) on around 2000 NLP datasets with instruction tuning, demonstrating great performance on a variety of tasks through prompting. We experiment with the largest Flan-T5 model, flan-t5-xxl. Notably, flan-t5-xxl has released model weights and is 11-billion-parameter large, a much smaller size compared to GPT-3.5. 
Flan-T5 is probably the most capable language model with prompting that is open-source and could be deployed in relatively common hardware conditions (e.g. two 40GB GPUs). We select Flan-T5 in our study to indicate what open-source, easyto-deploy LLMs can achieve in factual consistency evaluation. We use the transformers package (Wolf et al., 2020) to evaluate Flan-T5." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b2", "b15", "b8", "b23", "b45", "b36", "b7", "b22", "b48", "b35", "b5", "b15", "b2", "b21", "b34", "b22", "b9", "b17", "b39", "b39", "b39" ], "table_ref": [], "text": "Table 5: Metadata of the three benchmarks that we focus on. XSumSota is a combined benchmark of Cao and Wang (2021) and Goyal and Durrett (2021) for summaries generated by the state-of-the-art summarization models.\nSummEval (Fabbri et al., 2021) is the most complete faithfulness benchmark for CNNDM as far as we know. The summaries are from 16 different models including both pre-transformer models and the state-of-the-art summarization models such as BART (Lewis et al., 2020), PEGASUS (Zhang et al., 2020), and T5 (Raffel et al., 2020). The annotations are from 5 crowd-sourced annotators and 3 authors of the benchmark.\nXSumFaith (Maynez et al., 2020b) is the most commonly used faithfulness benchmark for XSum (Fabbri et al., 2022;Laban et al., 2022;Zhou et al., 2021). It contains summaries generated from 5 models which do not include the SOTA summarization models. The transformer-based models studied in XSumFaith are GPT-2 (Radford et al., 2019) and BERT (Devlin et al., 2019). The annota-tions are from 3 trained annotators.\nGoyal21 (Goyal and Durrett, 2021) contains both the CNNDM and the XSum samples. The two authors of this work manually annotated the summaries. We use the XSum split in this dataset where all the annotated summaries are generated from a tuned BART model.\nCLIFF (Cao and Wang, 2021) consists of summaries generated by SOTA models (including T5 and BART). The annotations are from 2 experts. Similar to Goyal21, we take the XSum split from this dataset.\nFactCC (Kryscinski et al., 2020) consists of summaries generated by pre-transformer models. Two authors of this work annotated this dataset. It contains only CNNDM samples.\nFrank (Pagnoni et al., 2021) consists of summaries generated by various summarization models from pre-transformer models and SOTA models. Three crowd-sourced annotators annotated this dataset. It contains both CNNDM and Xsum samples.\nWe have noticed that there is another benchmark SummaC (Laban et al., 2022), which is an integration of six datasets including CoGenSumm (Falke et al., 2019), XsumFaith, Polytope (Huang et al., 2020), FactCC, SummEval, and Frank. Here we do not include SummaC as a whole since the CoGen-Summ benchmark ranks pairs of generated summaries rather than detecting factually consistent summaries, and pairs of summaries can be both factually consistent or inconsistent. Also, some annotated error types in Polytope such as addition, omission, or duplication are unrelated to factuality errors by definition. As a result, we think that the SummaC benchmark as a whole may not be suitable for factuality evaluation, as mentioned in Tang et al. (2022) as well. 
Note that we do not separate the SOTA summaries out in SummEval since there are only 3 negative samples out of 200 SOTA test samples in total -SOTA models rarely make factual errors on less abstractive summarization, and we think it is not representative either to use just 3 negative samples to characterize the inconsistency detection ability of evaluators. The benchmark metadata is shown in Table 5. accuracy results, we perform a more fine-grained analysis on the different types of factuality errors detected to obtain a deeper understanding of the evaluators' predictions. We resort to the error type annotations from AggreFact that aggregates error type definitions from prior work and establishes a unified factuality error type scheme (Tang et al., 2022). Specifically, it defines six factuality error types as a set {intrinsic, extrinsic} × {nounphrase, predicate, sent}. Intrinsic errors denote hallucinated content using the information in the source document, while extrinsic errors are synthesized generations that ignore the source document altogether. For example, introducing new nouns or verbs not related to the source text. {nounphrase, predicate, sent} indicates the errors happen at a noun phrase, a predicate, or the entire summary. We refer the readers to Tang et al. (2022) for more detailed explanations and examples of these error types. We report recall of the identified errors on XSumFaith and XSumSota, 7 and compare text-davinci-003 with the best prompting method (in terms of the overall balanced accuracy) on each dataset against the baselines. As shown in Figure 2, text-davinci-003 identifies 7 There is no error type annotation for SummEval. We use up to 4 exemplars for text-davinci-003 due to the context window size limit (4000 tokens)." }, { "figure_ref": [], "heading": "D Analysis", "publication_ref": [], "table_ref": [], "text": "more errors in XSumFaith than all the baselines on 5 out of 6 error types. The results on XSumSota are more mixed, where text-davinci-003 outperforms the baselines on 3 out of 6 error types. These findings suggest a similar conclusion as in Tang et al. ( 2022) that current factuality systems cannot be uniformly good at identifying every error type across datasets." }, { "figure_ref": [], "heading": "Effect of number of exemplars:", "publication_ref": [], "table_ref": [], "text": "We vary the number of exemplars on SummEval and XsumFaith where few-shot learning helps the most.\nWe study text-davinci-003 and code-davinci-002. Flan-T5 is excluded since more than 2 exemplars do not fit within its context window. We adopt the best prompting method on each benchmark for the analysis -sentence-bysentence prompting and chain-of-thought prompting for SummEval and XSumFaith respectively. Results are shown in Figure 3. The balanced accuracy is not monotonically increasing as we increase the number of shots. While the best performance of code-davinci-002 is achieved with 4 shots on SummEval, 2 shots is the best configuration in other settings. This may be due to the long context of few-shot prompts in summarization." }, { "figure_ref": [], "heading": "E Our Baselines", "publication_ref": [ "b14", "b38", "b7", "b22", "b22" ], "table_ref": [], "text": "DAE (Goyal and Durrett, 2020) is an arcgrained entailment-based evaluation method. 
It evaluates the factuality of each dependency arc in the generated summary separately and combines them as the final result.\nQuestEval (Scialom et al., 2021) is a QA-based approach which aggregates the answer overlap scores from questions generated from the summary and answered with the document, and from questions generated from the document and answered with the summary.\nQAFactEval (Fabbri et al., 2022) is another QA-based approach that computes the answer overlap scores from questions generated from the summary and answered with the document, but with improved components at each stage. SummaC-ZS (Laban et al., 2022) is an entailement-based method which computes the maximum entailment score for each summary sentence, and aggregates all the scores through an averaging operation to obtain the final score. SummaC-Conv (Laban et al., 2022) is an extension of SummaC-ZS where for each summary sentence, SummaC-Conv computes the entailment scores with respect to all the source sentences, passes the obtained scores as features to a convolution layer to produce the summary sentence score, and then averages as in SummaC-ZS.\nWe emphasize that we evaluate all the baselines in a threshold-per-dataset setting -the baselines use a different threshold hyperparameter (detailed in §3.2 on each benchmark tuned separately, while the LLMs use the same instructions across all datasets." }, { "figure_ref": [], "heading": "F Results on another two benchmarks", "publication_ref": [ "b39", "b22" ], "table_ref": [], "text": "We report results of three most powerful LLMs on another two benchmarks: FactCC and Frank. We utilize sentence-by-sentence prompting for the CNNDM samples and vanilla prompting for XSum samples in both benchmarks. The results are shown in Table 6. Here the numbers of previous approaches for FactCC are from Tang et al. (2022), the numbers for Frank are from Laban et al. (2022). We can see the best performance is also achieved by LLMs on both two benchmarks." }, { "figure_ref": [], "heading": "G Related Work", "publication_ref": [ "b13", "b21", "b14", "b47", "b40", "b6", "b38", "b35", "b1", "b25", "b37", "b33", "b4", "b18", "b19", "b11", "b3" ], "table_ref": [], "text": "Factual consistency evaluation: Prior factuality evaluation approaches can be divided into entailment-based methods and question answering (QA) methods. Entailment-based methods aim to determine whether a summary is entailed by the original document or not. They often apply both semantically-variant and semanticallyinvariant transformations to the summaries to construct a classification dataset to train the evaluation model (Goodrich et al., 2019;Kryscinski et al., 2020;Goyal and Durrett, 2020;Zhao et al., 2020). Relying on heuristic transformations cannot cover all types of factual errors in summarization, limiting its performance as mentioned in Kryscinski 2020). 
On the other hand, QA methods automatically yield questions to probe the facts in the document or summary, and then assess whether the facts in the document and the summary are consistent by answering these questions (Wang et al., 2020;Durmus et al., 2020;Scialom et al., 2021) These approaches need to train additional models for question generation, question answering, or answer comparison, where corresponding annotations are required and the errors of intermediate components could propagate to the final predictions.\nPrompt-based learning: Prompts in the context of language models refer to text instructions concatenated with the test input (zero-shot) or with few exemplars and the test input (few-shot). Large language models are able to perform various tasks without tuning (Radford et al., 2019;Brown et al., 2020;Liu et al., 2021) when the prompts are fed into as the input to signal the task. Instruction tuning futher lifts the ability of LLMs to follow instructions in the prompts (Wei et al., 2022a;Sanh et al., 2022;Ouyang et al., 2022;Chung et al., 2022;Iyer et al., 2022). Recently, chain-of-thought prompting is proposed to trigger the reasoning abilities of LLMs, by asking the model to explain the thinking process during generation (Kojima et al., 2022;Wei et al., 2022b). In addition to text prompts, Gao et al. (2022) and Chen et al. (2022) introduce programof-thought prompting to generate executable code to perform numerical reasoning tasks." } ]
Detecting factual errors in summaries has been an important and challenging subject in summarization research. Inspired by the emergent ability of large language models (LLMs), we explore evaluating factual consistency of summaries by directly prompting LLMs. We present a comprehensive empirical study to assess the ability of LLMs as factual consistency evaluators, which consists of (1) analyzing different LLMs such as the GPT model series and Flan-T5; (2) investigating a variety of prompting methods including vanilla prompting, chain-of-thought prompting, and a sentence-by-sentence prompting method to tackle long summaries; and (3) evaluating on diverse summaries generated by multiple summarization systems, ranging from pre-transformer methods to SOTA pretrained models. Our experiments demonstrate that prompting LLMs is able to outperform the previous best factuality systems in all settings, by up to 12.2 absolute points in terms of the binary classification accuracy on inconsistency detection.
Evaluating Factual Consistency of Summaries with Large Language Models
[ { "figure_caption": "Figure 1:An example of prompting the text-davinci-003 language model for factuality evaluation. Chain-of-thought prompting adds more instructions to ask the model to find the evidence.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "DatasetModelsPromptSummEvalXsumFaithXsumSotaPrevious ApproachesDAE-69.9-72.8QuestEval-71.359.766.6SummaC-ZS-67.752.156.5SummaC-Conv-73.766.063.1QAFactEval-76.660.266.0vanilla85.2 / 78.758.6 / 58.075.1 / 74.7Flan-T5cot67.7 / 52.655.0 / 57.661.5 / 60.3sbs70.9 / 75.3--vanilla61.3 / 76.453.2 / 65.253.5 / 59.8code-davinci-002cot56.1 / 72.152.3 / 68.851.6 / 54.0sbs76.6 / 86.3--vanilla81.5 / 84.660.3 / 65.274.1 / 67.2text-davinci-003cot62.2 / 72.666.8 / 69.065.5 / 59.2sbs83.4 / 88.0--vanilla65.3 / 68.967.5 / 67.263.3 / 65.2ChatGPTcot59.9 / 68.569.7 / 66.070.1 / 67.0sbs83.3 / 80.0--GPT-4 †-88.867.275.2", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "is the newest and most powerful model in GPT-family. The model weights", "figure_data": "ModelSettingDatasetSumEXSFXSSFlan-T50-shot 2-shot76.7±5.0 60.1±2.4 72.2±4.2 79.5±4.2 58.5±1.0 73.4±1.1code0-shot 65.0±10.4 51.8±2.0 47.7±6.3 2-shot 78.9±6.7 59.8±5.3 58.5±2.3text0-shot 2-shot84.9±2.0 60.5±0.5 73.9±0.6 87.8±1.0 65.2±0.3 66.6±1.4Table 4: Mean and standard deviation of balanced ac-curacy (%) on SummEval (SumE), XSumFaith (XSF),and XSumSota (XSS). We use sentence-by-sentenceprompting for SumE and vanilla prompting for XSF andXSS. code and text are short for code-davinci-002and text-davinci-003 respectively.", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
Shiqi Chen; Siyang Gao; Junxian He
[ { "authors": "Kay Henning Brodersen; Soon Cheng; Klaas Ong; Joachim M Enno Stephan; Buhmann", "journal": "IEEE", "ref_id": "b0", "title": "The balanced accuracy and its posterior distribution", "year": "2010" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020-12-06" }, { "authors": "Shuyang Cao; Lu Wang", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "CLIFF: Contrastive learning for improving faithfulness and factuality in abstractive summarization", "year": "2021" }, { "authors": "Wenhu Chen; Xueguang Ma; Xinyi Wang; William W Cohen", "journal": "", "ref_id": "b3", "title": "Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks", "year": "2022" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma", "journal": "", "ref_id": "b4", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Esin Durmus; He He; Mona Diab", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization", "year": "2020" }, { "authors": "Alexander Fabbri; Chien-Sheng Wu; Wenhao Liu; Caiming Xiong", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "QAFactEval: Improved QAbased factual consistency evaluation for summarization", "year": "2022" }, { "authors": "Alexander R Fabbri; Wojciech Kryściński; Bryan Mc-Cann; Caiming Xiong; Richard Socher; Dragomir Radev", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b8", "title": "SummEval: Re-evaluating summarization evaluation", "year": "2021" }, { "authors": "Tobias Falke; Leonardo Fr Ribeiro; Prasetya Ajie Utama; Ido Dagan; Iryna Gurevych", "journal": "", "ref_id": "b9", "title": "Ranking generated summaries by correctness: An interesting but challenging application for natural language inference", "year": "2019" }, { "authors": "Jinlan Fu; See-Kiong Ng; Zhengbao Jiang; Pengfei Liu", "journal": "", "ref_id": "b10", "title": "GPTScore: Evaluate as you desire", "year": "2023" }, { "authors": "Luyu Gao; Aman Madaan; Shuyan Zhou; Uri Alon; Pengfei Liu; Yiming Yang; Jamie Callan; Graham Neubig", "journal": "", "ref_id": "b11", "title": "Pal: Program-aided language models", "year": "2022" }, { "authors": "Zorik Gekhman; Jonathan Herzig; Roee Aharoni; Chen Elkind; Idan Szpektor", "journal": "", "ref_id": "b12", "title": "Trueteacher: Learning factual consistency evaluation with large language models", "year": "2023" }, { "authors": "Ben Goodrich; Vinay Rao; Peter J Liu; Mohammad Saleh", 
"journal": "ACM", "ref_id": "b13", "title": "Assessing the factual accuracy of generated text", "year": "2019-08-04" }, { "authors": "Tanya Goyal; Greg Durrett", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Evaluating factuality in generation with dependency-level entailment", "year": "2020" }, { "authors": "Tanya Goyal; Greg Durrett", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Annotating and modeling fine-grained factuality in summarization", "year": "2021" }, { "authors": "Karl Moritz Hermann; Tomás Kociský; Edward Grefenstette; Lasse Espeholt; Will Kay; Mustafa Suleyman; Phil Blunsom", "journal": "", "ref_id": "b16", "title": "Teaching machines to read and comprehend", "year": "2015-12-07" }, { "authors": "Dandan Huang; Leyang Cui; Sen Yang; Guangsheng Bao; Kun Wang; Jun Xie; Yue Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "What have we achieved on text summarization?", "year": "2020" }, { "authors": "Srinivasan Iyer; Xi Victoria Lin; Ramakanth Pasunuru; Todor Mihaylov; Dániel Simig; Ping Yu; Kurt Shuster; Tianlu Wang; Qing Liu; Punit Singh Koura", "journal": "", "ref_id": "b18", "title": "OPT-IML: Scaling language model instruction meta learning through the lens of generalization", "year": "2022" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "", "ref_id": "b19", "title": "Large language models are zero-shot reasoners", "year": "2022" }, { "authors": "Wojciech Kryscinski; Nitish Shirish Keskar; Bryan Mc-Cann; Caiming Xiong; Richard Socher", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Neural text summarization: A critical evaluation", "year": "2019" }, { "authors": "Wojciech Kryscinski; Bryan Mccann; Caiming Xiong; Richard Socher", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Evaluating the factual consistency of abstractive text summarization", "year": "2020" }, { "authors": "Philippe Laban; Tobias Schnabel; Paul N Bennett; Marti A Hearst", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b22", "title": "SummaC: Re-visiting NLIbased models for inconsistency detection in summarization", "year": "2022" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Pengfei Liu; Weizhe Yuan; Jinlan Fu; Zhengbao Jiang; Hiroaki Hayashi; Graham Neubig", "journal": "", "ref_id": "b25", "title": "Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing", "year": "2021" }, { "authors": "Yixin Liu; Pengfei Liu; Dragomir Radev; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "BRIO: Bringing order to abstractive summarization", "year": "2022" }, { "authors": "Zheheng Luo; Qianqian Xie; Sophia Ananiadou", "journal": "", "ref_id": "b27", "title": "Chatgpt as a factual inconsistency evaluator for abstractive text 
summarization", "year": "2023" }, { "authors": "Joshua Maynez; Shashi Narayan; Bernd Bohnet; Ryan Mcdonald; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "On faithfulness and factuality in abstractive summarization", "year": "2020" }, { "authors": "Joshua Maynez; Shashi Narayan; Bernd Bohnet; Ryan Mcdonald", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "On faithfulness and factuality in abstractive summarization", "year": "2020" }, { "authors": "Ramesh Nallapati; Bowen Zhou; Caglar Cicero Dos Santos; Bing Gulcehre; Xiang", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Abstractive text summarization using sequence-to-sequence RNNs and beyond", "year": "2016" }, { "authors": " Openai", "journal": "OpenAI Blog", "ref_id": "b31", "title": "Chatgpt: Optimizing language models for dialogue", "year": "2022" }, { "authors": " Openai", "journal": "", "ref_id": "b32", "title": "", "year": "2023" }, { "authors": "Long Ouyang; Jeff Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "", "ref_id": "b33", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Artidoro Pagnoni; Vidhisha Balachandran; Yulia Tsvetkov", "journal": "", "ref_id": "b34", "title": "Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics", "year": "2021" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b35", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. 
Res", "ref_id": "b36", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Arun Raja; Manan Dey; M Saiful Bari; Canwen Xu; Urmish Thakker; Shanya Sharma Sharma; Eliza Szczechla; Taewoon Kim; Gunjan Chhablani; Nihal Nayak; Debajyoti Datta; Jonathan Chang; Mike Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Fevry; Alan Fries; Ryan Teehan; Le Teven; Stella Scao; Leo Biderman; Thomas Gao; Alexander M Wolf; Rush", "journal": "", "ref_id": "b37", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2022" }, { "authors": "Thomas Scialom; Paul-Alexis Dray; Sylvain Lamprier; Benjamin Piwowarski; Jacopo Staiano; Alex Wang; Patrick Gallinari", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "QuestEval: Summarization asks for fact-based evaluation", "year": "2021" }, { "authors": "Liyan Tang; Tanya Goyal; Alexander R Fabbri; Philippe Laban; Jiacheng Xu; Semih Yahvuz; Wojciech Kryściński; Justin F Rousseau; Greg Durrett", "journal": "", "ref_id": "b39", "title": "Understanding factual errors in summarization: Errors, summarizers, datasets, error detectors", "year": "2022" }, { "authors": "Alex Wang; Kyunghyun Cho; Mike Lewis", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Asking and answering questions to evaluate the factual consistency of summaries", "year": "2020" }, { "authors": "Jason Wei; Maarten Bosma; Vincent Zhao; Kelvin Guu; Adams Wei Yu; Brian Lester; Nan Du; Andrew M Dai; Quoc V Le", "journal": "", "ref_id": "b41", "title": "a. 
Finetuned language models are zero-shot learners", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b42", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Remi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Fangfang Zhang; Jin-Ge Yao; Rui Yan", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "On the abstractiveness of neural document summarization", "year": "2018" }, { "authors": "Jingqing Zhang; Yao Zhao; Mohammad Saleh; Peter J Liu", "journal": "", "ref_id": "b45", "title": "PEGASUS: pre-training with extracted gap-sentences for abstractive summarization", "year": "2020-07" }, { "authors": " Pmlr", "journal": "", "ref_id": "b46", "title": "", "year": "" }, { "authors": "Zheng Zhao; Shay B Cohen; Bonnie Webber", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Reducing quantity hallucinations in abstractive summarization", "year": "2020" }, { "authors": "Chunting Zhou; Graham Neubig; Jiatao Gu; Mona Diab; Francisco Guzmán; Luke Zettlemoyer; Marjan Ghazvininejad", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "Detecting hallucinated content in conditional neural sequence generation", "year": "2021" }, { "authors": "Denny Zhou; Nathanael Schärli; Le Hou; Jason Wei; Nathan Scales; Xuezhi Wang; Dale Schuurmans; Olivier Bousquet; Quoc Le; Ed Chi", "journal": "", "ref_id": "b49", "title": "Least-to-most prompting enables complex reasoning in large language models", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 84.45, 593.06, 205.41, 24.43 ], "formula_id": "formula_0", "formula_text": "BAcc = 1 2 T P T P + F N + T N T N + F P ,(1)" } ]
2023-06-07
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "uated the results considering the best candidates and the top ten candidates. The evaluation was done both automatically (for MWEs) and manually (for grammatical structures). The results obtained for MWEs show that BERTimbau Large surpassed both the other models in predicting the correct masked element. However, the average accuracy of the best model was only 52% when only the best candidates were considered for each sentence, going up to 66% when the top ten candidates were taken into account. As for the grammatical tasks, results presented better prediction, but also varied depending on the type of morphosyntactic agreement. Cases such as connectors and impersonal verbs, which do not require any agreement in the produced candidates, had precision of 100% and 98.78% among the best candidates, while other tasks that require morphosyntactic agreement to produce good candidates had results consistently below 90% overall precision, with the lowest scores being reported for nominal agreement and verb agreement, both having below 80% overall precision among the best candidates. Therefore, we identified that a critical and widely adopted resource for Brazilian Portuguese NLP although mostly proficient in these tests, also presents issues concerning MWE vocabulary and morphosyntactic agreement. These models are the de-facto core component in current NLP systems, and our findings demonstrate the need of additional improvements in these models and the importance of widely evaluating computational representations of language." }, { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b30", "b3", "b11", "b23", "b33", "b28", "b5", "b4", "b17", "b6", "b38", "b36", "b19", "b2", "b4", "b15", "b26", "b40", "b13", "b32", "b0", "b29" ], "table_ref": [], "text": "Learning computational representations that accurately model language is a critical goal of Natural Language Processing (NLP) research. These representations, or word embeddings, are not only helpful for language technology tasks like machine translation, question answering and text adaptation. They can also improve our understanding of human cognition, for instance, correlating well with human predictions about words (Mandera, Pawel and Keuleers, Emmanuel and Brysbaert, Marc, 2017;Schrimpf et al., 2020), and also helping to highlight the existence of explicit and implicit social constructs, including gender and racial bias (Bolukbasi et al., 2016;Kumar et al., 2020), or helping to detect misinformation (Oshikawa et al., 2018;Su et al., 2020) and cultural change (Savoldi et al., 2021;Dinan et al., 2020), among others. With the recent popularisation of NLP and the availability of large scale computing resources, we have witnessed a quick evolution of many new computational architecture proposals for language representation. As a consequence, there are currently pre-trained language models available in an unprecedented scale, for over a hundred languages (Devlin et al., 2018), including low-resourced ones. However, these models lack transparency and interpretability (Marcus, 2020) and are often seen as blackboxes, all of which limit their application in tasks that require explainability. This fast-paced evolution raises the need for careful assessment of the kinds of information that a representation incorporates, both in terms of linguistic and common-sense or world information. 
Moreover, the evaluation procedures need to adopt standard protocols that are independent of the architecture proposed, and be applicable to classes of models in general, setting the standards for what we expect them to have learned and be proficient in.\nIn fact, much recent effort has been devoted to defining evaluation protocols that allow us to have an insight into the knowledge that the model incorporates (Ettinger, 2020;Warstadt et al., 2020;Vulić et al., 2020). These evaluations may target different phenomena, but their main distinction is between intrinsic and extrinsic evaluations. Intrinsic evaluations usually involve comparing predictions made by a model to human judgements, existing resources like WordNet (Miller et al., 1990) or psycholinguistic norms. Extrinsic evaluations are often based on applications (Bakarov, 2018) and examine the ability of embeddings to be used as the feature vectors of supervised machine learning algorithms for tasks like natural language entailment and question answering. The assumption is that better quality and accurate embeddings lead to better quality applications. Recently, the BERT (Devlin et al., 2018) architecture (and its variations, such as RoBERTa (Liu et al., 2019), Distil-BERT (Sanh et al., 2019), XLNet (Yang et al., 2019) and ALBERT (Lan et al., 2019)) has been used as a language encoding component in several applications that enhanced the state-of-the-art in their several NLP fields. These developments can be seen as an extrinsic evaluation of the BERT architecture, as they seem to suggest that BERT more successfully encodes syntactic and semantic information of the language than alternative models. However, what these evaluations do not indicate is which specific information is encoded, and if it is used for a given task in ways that are compatible with human use of language. In this paper we focus on intrinsic evaluations with an aim at gaining a better understanding of how linguistically proficient these models really are.\nFor different languages, the stage of development and proficiency of the models varies considerably, and so do the datasets available for evaluation. For Portuguese, in particular, only a few evaluation initiatives are available, such as Souza et al. (2020); Abdaoui et al. (2020); Schneider et al. (2020), and, despite being valuable evaluation contributions to NLP in Portuguese, these correspond to only extrinsic evaluations and target very specific tasks. Overall, the Portuguese versions of BERT have not been subjected to much intrinsic evaluations in comparison to their English counterparts, and there is great uncertainty about the coverage and quality of the linguistic information about Portuguese encoded in these models. As a consequence, there is also uncertainty about the stability and quality of the output produced by systems built using these models. In this paper, we address some of these issues, assessing the linguistic generalisation of language models for Portuguese. In particular, we propose a model-agnostic test set to measure the ability of a model to capture expected linguistic patterns found in Brazilian Portuguese. The main contributions of this study are:\n-A dataset for testing generalisation both related to multiword expression (MWE) and grammatical information. The MWE dataset targets 33 noncompositional MWEs, while the set for testing grammatical information is divided into 6 different tasks. 
-An analysis that generated a language profile of BERT models for Brazilian\nPortuguese.\n-A comparison of the BERT models' quality considering only the best generated candidates and the ten best candidates, also highlighting cases where the model puts more confidence on wrong candidates.\nThe proposed dataset will be available on github.\nThe following sections are organised as follows: Section 2 summarises the most influential model-agnostic proposals for intrinsic evaluation, including those targeting Portuguese. Section 3 presents the BERT models for Portuguese, describing their characteristics, and the models used in this work. Then we detail the methodology employed to create seven tasks comprised on our test set in Section 4. The results of the different models and their analyses are presented in Section 5. Finally, in Section 6 we discuss the conclusions of our findings as well as future work." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b2", "b38", "b6", "b9", "b14", "b8", "b18", "b18", "b8", "b14", "b18", "b7", "b20", "b18", "b20", "b18", "b1", "b22", "b1" ], "table_ref": [], "text": "The performance that state-of-the-art models achieve on language tasks is seen as an indication of their ability to capture linguistic information. However, their lack of transparency and interpretability prevents an in-depth analysis of the linguistic patterns and generalisations that they capture. Therefore, considerable attention has been devoted to the development of intrinsic evaluation protocols for inspecting the information captured by these models.\nIntrinsic evaluations usually involve experiments in which word embeddings are compared with human judgements about word relations, and may be divided into various categories. For example, Bakarov (2018) arranges them as follows: a conscious evaluation including tasks like semantic similarity, analogy, thematic fit, concept categorisation, synonym detection, and outlier word detection; b subconscious evaluation in tasks like semantic priming, neural activation patterns, and eye movement data; c thesaurus-based evaluations, including thesaurus vectors, dictionary definition graph, cross-match test, semantic difference and semantic networks; and d language-driven of phonosemantic analysis and bi-gram co-occurrence frequency.\nIn this paper, we concentrate mainly on the conscious evaluation tasks. These vary from testing morphosyntactic agreement patterns, like number and gender (Warstadt et al., 2020), to more semantic-related information, like the implications of negation (Ettinger, 2020;Kassner and Schütze, 2020). As an approach to evaluate sensitivity to syntactic structures in English, Linzen et al. (2016) proposed an evaluation focusing on number agreement for subjectverb dependencies, which is something that we also address in this paper. They generated a dataset of 1.35 million number prediction problems based on Wikipedia (9% for training, 1% for validation, and 90% for testing). 
These problems were used to evaluate four tasks: a number prediction, with a binary prediction task for the number of a verb; b verb inflection, with a variation of the number prediction when the singular form of the upcoming verb is also given to the model; c grammaticality judgements as another classification task, but the model must indicate the grammaticality of a given sentence; and d language modelling in which the model must predict a correct word with the highest probability than a wrong word in a given sentence.\nA related dataset, the \"colorless green ideas\" test set, was proposed by Gulordava et al. (2018) including four languages (Italian, English, Hebrew, Russian). The test items focus on long-distance number agreement evaluation, evaluating the accuracy in cases of subject-verb agreement with an intervening embedded clause and agreement between conjoined verbs separated by a complement of the first verb. Their test set is composed of syntactically correct sentences from a dependency treebank which are converted into nonce sentences by replacing all content words with random words with the same morphology, aiming to avoid semantic cues during the evaluation.\nTo avoid very implausible sentences that violate selectional restrictions (e.g., the apple laughs), Marvin and Linzen (2018) present an automatically constructed dataset for evaluating the grammaticality of the predictions of a language model. Marvin and Linzen (2018) use templates to automatically generate 350,000 English sentence pairs, consisting of pairs of grammatical and ungrammatical sentences, for examining subject-verb agreement, reflexive anaphora and negative polarity.\nTo evaluate the performance obtained by BERT in these datasets (Gulordava et al., 2018;Linzen et al., 2016;Marvin and Linzen, 2018), Goldberg (2019) fed a masked version of a complete sentence into BERT, then compared the score assigned to the original correct verb to the score assigned to the incorrect one. Mueller et al. (2020) evaluated LSTM language models and the monolingual and multilingual BERT versions on the subject-verb agreement challenge sets. They used CLAMS (Cross-Linguistic Assessment of Models on Syntax), which extends Marvin and Linzen (2018) to include English, French, German, Hebrew and Russian. To construct their challenge sets, Mueller et al. (2020) use a lightweight grammar engineering framework (attribute-varying grammars), aiming at more flexibility than the hard-coded templates of Marvin and Linzen (2018). One may argue that this approach is related to ours because they use a grammar with attributes to guide the sentence generation while our work uses pre-selected seeds. Our seeds are related to their attributes, and our generation of grammatical sentences is the combination of seed and original target word while the ungrammatical sentences may be generated by a crosscombination of seed and original target word. Their experiments on English BERT and mBERT suggest that mBERT seems to learn syntactic generalisations in multiple languages, but not in the same way in all languages. In addition, its sensitivity to syntax is lower than that of monolingual BERT.\nRegarding evaluations for Portuguese, Bacon and Regier (2019) evaluated BERT's sensitivity to four types of structure-dependent agreement relations for 26 languages, including Portuguese. They showed that both the monolingual and multilingual BERT models capture syntax-sensitive agreement patterns. 
Bacon and Regier (2019) evaluated the models in a cloze task, in which target words would share the morphosyntactic features of the word with which they agree. In the cloze task, the data comes from version 2.4 of the Universal Dependencies (UD) treebanks (Nivre et al., 2017), using the part-of-speech and dependency information to identify potential agreement relations. BERT responses are then annotated with morphosyntactic information from both the UD and the UniMorph projects (Sylak-Glassman, 2016) to be compared with the morphosyntactic features of the word with which they agree. With respect to Portuguese, Bacon and Regier (2019) evaluate 47,038 sentences in the cloze test and 2,107 in the feature bundles using mBERT, and the results obtained show that it performed remarkably well, achieving an accuracy close to 100%.\nIn another multilingual evaluation, S ¸ahin et al. ( 2020) introduced 15 probing tasks at type level for 24 languages, finding that some probing tests correlate to downstream tasks, especially for morphologically rich languages. They performed an extrinsic evaluation using downstream tasks, assessing the performance in POS tagging, dependency parsing, semantic role labelling, named entity recognition, and natural language inference targeting German, Russian, Turkish, Spanish and Finnish. Moreover, they propose the contextless prediction of the morphological feature of a given word, aiming to identify the feature indicated in the UniMorph dictionary (Sylak-Glassman, 2016). For that, S ¸ahin et al. ( 2020) removed ambiguous forms and partially filtered infrequent words. This evaluation included features like the following for Portuguese: number, gender, person, and tense. However, they did not report the models' performance for Portuguese." }, { "figure_ref": [], "heading": "BERT models for Portuguese", "publication_ref": [ "b0", "b0", "b31", "b32", "b37", "b29", "b29", "b37", "b31", "b32" ], "table_ref": [], "text": "Large pre-trained language models are valuable assets for language processing. These models support technological and scientific improvements. For English, for example, we can easily find 2,795 models accessing Huggingface website1 .\nHowever, this abundance of models is not true for most languages. For example, there are only 67 models for Portuguese available in the same repository2 , and most of these are for Machine Translation and Automatic Speech Recognition, and Text2Text Generation. In this section, we highlight three models dedicated to language processing: smallermBert and BERTimbau, which are based on Portuguese for general purposes, and BioBERTpt, which is based on Portuguese for special Purposes.\nImproving the mBERT model, Abdaoui et al. (2020) proposed to generate smaller models based on mBERT that are language-specific. In other words, remove the parameters related to the non-target language. Then, they identified the vocabulary of each language and rebuilt the embedding layer to generate the corresponding models. Following the original mBERT, Abdaoui et al. (2020) started from the entire Wikipedia dump of each language, initially covering 15 languages, but since the work's publication, they have been extending it to other languages, including Portuguese3 .\nIn a different perspective, Souza et al. (2019Souza et al. ( , 2020) ) trained languagespecific models for Brazilian Portuguese (nickname BERTimbau), using data from brWaC (Wagner Filho et al., 2018). 
Their models were trained over 1,000,000 steps using the BERT-Base and BERT-Large Cased variants architectures. In order to evaluate their models, they resorted to three probing tasks (sentence textual similarity, recognising textual entailment, and named entity recognition) in which BERTimbau improved the state-of-the-art compared to multilingual models and previous monolingual approaches.\nTargeting clinical NLP, Schneider et al. (2020) trained BioBERTpt, a deep contextual embedding model for Portuguese, by fine-tuning mBERT. BioBERTpt is trained using clinical narratives and biomedical-scientific papers in Brazilian Portuguese. Schneider et al. (2020) trained three versions of BioBERTpt: BioBERTpt(bio) trained only on biomedical literature from scientific papers from Pubmed and Scielo; BioBERTpt(clin) trained on clinical narratives from electronic health records from Brazilian Hospitals; and BioBERTpt(all) trained on clinical narratives and biomedical literature in the Portuguese language.\nAiming to evaluate the BERT models for Brazilian Portuguese, we selected two models that were trained using brWaC (Wagner Filho et al., 2018) and multilingual BERT, allowing a comparison point with BERT models trained in other languages. We used both the BERTimbau Base and Large models (Souza et al., 2019(Souza et al., , 2020)), which have, respectively, 12 layers and 110M parameters, and 24 layers and 335M parameters. These models were chosen because they are uniquely trained in Brazilian Portuguese, thus avoiding issues from different variations, and they are also widely adopted, being the most popular models for Portuguese on the Huggingface webpage (BERTimbau Base was downloaded 17k times and BERTimbau Large was downloaded 11k times). Therefore, the findings of this paper may have a wide impact in downstream applications for Brazilian Portuguese." }, { "figure_ref": [ "fig_0" ], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we describe the steps performed for the development of the dataset proposed in this work. We first compiled the MWE test set (see Subsection 4.1) by selecting idiomatic MWEs and five context sentences for each of them. For the grammar tests (Subsection 4.2) we also selected context sentences, but we looked for specific grammar patterns in this step. Once we obtained the sentences for both tasks, we created seeds for the sentences, aiming to use them as cues to the specific target. Finally, we fed BERT model with our evaluation sentences, and manually evaluate the model's output (see Section 5\n). An overview of this methodology is presented on Figure 1." }, { "figure_ref": [], "heading": "Multiword Expression Tests", "publication_ref": [ "b24", "b24", "b39", "b27", "b10" ], "table_ref": [], "text": "Multiword Expressions (MWEs), like nominal compounds (NCs) and idioms, can be defined as linguistic structures that cross word boundaries (Sag et al., 2002). Due to their many potential linguistic and statistical idiosyncrasies (Sag et al., 2002), knowledge about MWEs is often considered a mark of proficiency for language learners. In the MWE task, our goal is to assess the MWE inventory of a model by measuring the models' ability to identify an MWE given a context. For that, we use 33 highly idiomatic two-word NCs4 from LexSubNC (Wilkens et al., 2017), such as pão duro (stingy person; lit. hard bread ) and pé quente (person who brings good luck; lit. 
hot foot), and then we select 5 sentences from Corpus Brasileiro (Sardinha, 2010) for each MWE. The sentence selection was done by shuffling concordance results on Sketch Engine (Kilgarriff et al., 2004) and selecting 5 full sentences of different lengths. This process resulted in 165 sentences for testing MWEs.\nFor assessing the models' capacity to retrieve the target NC given a compatible context, we adopt a masking protocol, in which we feed the model an input sentence using a mask to replace one word of the NC. The model output is then evaluated in terms of whether or not the model produces the correct word for replacing the mask. Although the mask expects only one possible answer, different words may also be used to replace the mask, and we analyse whether the target word is among the responses produced by the model. Moreover, as the NCs are composed of two words (an adjective and a noun), for each NC we generated two test items, keeping one of the NC components as a cue for the model and masking the other one. For example, the sentence Presidente, trago uma notícia em primeira mão. (President, I bring you first-hand news.), which contains the NC primeira mão (first-hand), is processed twice:\n-Presidente, trago uma notícia em [MASK] mão.\n-Presidente, trago uma notícia em primeira [MASK].\nWe expect that, together, these sentences, along with one of the NC components, should provide enough context about the NC to trigger the model to produce the missing word. The output of this process resulted in 10 candidate words for each test item, in other words, 20 candidate words for each MWE, with 10 for the first component word of the compound and 10 for the second." }, { "figure_ref": [], "heading": "Grammatical Tests", "publication_ref": [ "b12", "b21", "b35", "b21", "b35" ], "table_ref": [], "text": "To assess the models' grammatical information, we developed a broad range of cloze tests, each one targeting a specific phenomenon, with several sub-phenomena, where each sub-phenomenon was tested using a set of sentences and a series of variations. The targets of these tests were: impersonal verbs, subject agreement, verb agreement, nominal agreement, passive and connectors. These tests were inspired by the work of Kurita et al. (2019), who propose a template-based method to quantify social bias in BERT. Each test sentence is composed of a mask and a set of seeds as parameters. Like in the MWE test, the mask is the \"blank\", and the seeds work as a linguistic cue to the expected answer. For example, in the nominal agreement test, the sentence \"É precisamente na revelação <SEED>[MASK] que reside o valor dessas cartas, diz.\" (It is precisely in the revelation <SEED>[MASK] that lies the value of those letters, he/she says.) is used 40 times, each one using a different seed such as das relações (of the relations), das capacidades (of the capacities), da alma (of the soul), da condição (of the condition), dos seres (of the beings), dos segredos (of the secrets), do ser (of the being), and do espírito (of the spirit). In this example, we use four types of seeds, each one targeting a different output in the masked word (masculine singular, masculine plural, feminine singular and feminine plural). The different seed sets are specific to each sentence, because they have to take into consideration its syntactic and semantic restrictions.\nFor nominal agreement, we automatically generated seed sets using a mask where the original seed would be, which allowed us to have more extensive coverage.
In this process, we selected the top 10 candidates from base-BERT and then we manually evaluated the candidates to eliminate bad fits (i.e., we removed the candidates with agreement errors or a different meaning). For verb and subject agreement, and for impersonal verbs, we automatically generated a set of seeds based on the UNITEX-PB dictionary (Muniz et al., 2005;Vale and Baptista, 2015), and then also validated each seed in context. In the case of passive voice, due to the complexity of the seeds, we had to manually generate sets of seeds for each sentence. Finally, in the case of connectors, we could not use any type of seed, because this would require us to generate full clauses, so we only used five sentences per connector that was tested.\nUsing the manually validated templates composed of sentences and their seeds, we generated 10 candidate answers for each masked word using BERTimbau Large5 . These candidates were also annotated with part-of-speech using the Unitex-PB dictionary (Muniz et al., 2005;Vale and Baptista, 2015). 6 Then, all candidate words from BERTimbau Large were evaluated by a linguist according to their syntactic and semantic suitability in context. Although we used seeds as cues for the expected type of answer (for instance, we used generic passive structures to induce the model to produce a nominal participle as candidate in the passive voice tests), the actual answer of the system could be from a different category, and still the sentence would be grammatically and semantically correct. As such, the evaluation took into account any answer that would fit in the context, not necessarily only the ones that were expected. The ones that were correct, but deviated from the expected target answer, were identified, and are further discussed in the results.\nIn total, the dataset for grammatical evaluation consists of 6 dimensions that contain 1,231 tests and 688 seeds." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "This section presents the evaluation results considering the two different subsets of tests: MWEs and grammatical information. The MWE results consider three models: BERTimbau Large, BERTimbau Base and mBERT. These models were evaluated in terms of accuracy of the best prediction and accuracy among the top ten candidates. The grammatical evaluation was done on BERTimbau Large, and considered results in terms of precision of the best candidate (or accuracy) and precision at the top ten candidates." }, { "figure_ref": [], "heading": "MWE Tests", "publication_ref": [], "table_ref": [ "tab_0", "tab_1", "tab_2" ], "text": "Aiming to assess the model generalisation concerning MWE vocabulary, we calculate the accuracy of the output. However, this information might not be fully representative of its generalisation capacities. Thus, for each masked sentence we looked for the correct word in the top ten predictions of the model. In the evaluation step, we resorted to accuracy at ten (acc@10). In other words, we evaluated the presence or absence of the expected word in the top ten predictions.\nThe multilingual model, mBERT, performed consistently worse than the dedicated model trained for Portuguese. Moreover, BERTimbau Large performs better than the base version. In both these cases, it was the larger models that performed better, suggesting that the ability to learn MWEs is related to the size of the model. 
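The candidate-generation step used throughout these tests (mask one word, take the model's ten most likely fillers, and check them against the target word or a manual judgement) can be sketched with the HuggingFace fill-mask pipeline. This is a minimal illustration rather than the authors' released code: the model identifier assumes the publicly available BERTimbau Base checkpoint, and the helper names are ours.

```python
from transformers import pipeline

# Assumption: the public BERTimbau Base checkpoint; swap in
# "neuralmind/bert-large-portuguese-cased" for the Large model.
fill = pipeline("fill-mask", model="neuralmind/bert-base-portuguese-cased")

def top_candidates(sentence: str, k: int = 10) -> list[str]:
    """Return the k most likely fillers for the [MASK] token, best first."""
    return [pred["token_str"].strip() for pred in fill(sentence, top_k=k)]

def evaluate_item(sentence: str, target: str) -> tuple[bool, bool]:
    """acc (top prediction only) and acc@10 for a single masked test item."""
    candidates = top_candidates(sentence)
    return candidates[0] == target, target in candidates

# One of the two test items built from the NC "primeira mão" (first-hand):
# the first component is masked and the second component is kept as a cue.
acc, acc_at_10 = evaluate_item("Presidente, trago uma notícia em [MASK] mão.", "primeira")
print(acc, acc_at_10)
```

For the grammatical templates, the same candidate-generation step is applied after substituting each seed into the <SEED> slot, and the returned candidates are then judged manually instead of being compared to a single gold word.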
In terms of the difficulty of the task, as shown in Table 1, apart from the multilingual BERT, the models were able to predict the missing MWE component as the first alternative in 40-52.73% of the cases. However, using this more strict evaluation scenario where only the top choice is considered, the quality of the prediction is poor. In a more lenient scenario, When we analyse the ability of a model to predict the missing word among the top 10 most likely words, the results substantially improve. While mBERT has an average increase of 9% accuracy in comparison with only the top prediction, the BERTimbau models have a much more substantial increase: BERTimbau Base has an improvement of 17.58% in relation to the top candidate and BERTimbau Large of 15.75% from 51.52% to 67.27% accuracy. Although mBERT shows good capacities in different NLP tasks, the model captured little to no information to predict idiomatic items that are specific to Brazilian Portuguese. Additionally, we observed that the gain of including more candidate words is mainly restricted to the top two candidates.\nThe changes from BERTimbau Base to BERTimbau Large allowed the model to learn a larger MWE inventory, considering the target MWEs, and have more accurate prediction given the clues in the masking task (Table 2). However, there were two cases in which performance decreased using the larger model: sangue azul (blue blood) and pão duro (stingy person; lit. hard bread ); and other cases that are still not accurately represented by either model (Table 3). Overall, the difference in performance between BERTimbau Large and Base was not as big as the difference between them and mBERT, but BERTimbau Large was able to learn more MWE and displayed more confidence in correct predictions. However, we also noticed that the values of accuracy at ten for BERTimbau Base were similar to BERTimbau Large, indicating that exploring more outputs from the Base model might have the same performance of BERTimbau Large, which require more processing power. " }, { "figure_ref": [], "heading": "Grammatical Tests", "publication_ref": [], "table_ref": [], "text": "This section presents the results of the grammatical tests to which BERTimbau Large was submitted. As there were six very different grammatical tests, we report and discuss their results individually as precision at 1 (or accuracy) and precision at 10, aiming to analyse the model's proficiency. At the end of this section, we make a brief comment on the overall performance of the model. " }, { "figure_ref": [], "heading": "Impersonal Verbs", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "The set of sentences with impersonal verbs tested whether the model would produce candidates that were not subjects. The verbs used as cue represented meteorological verbs (such as to rain, to snow) and existential verbs (such as to exist, to have been), which do not accept any subject, and are therefore defective in their conjugation. As such, we expected the model to produce answers that were not nouns or pronouns.\nAs we can see in Table 4, the models perform well in this task, with meteorological verbs having slightly worse results, as the model still produced some answers that did not fit. Results considering the top candidate were near 100%, and the average precision considering the top 10 candidates decreases to 76.04%. For existential verbs, results were much higher, with a precision of 97.50% among the top 10 candidates and 100% for the top 1. 
The model was able to generate punctuation marks, such as parentheses or quotation marks, in most of the templates, which were a good fit for the test sentences.\nLooking at the different tenses, to check whether there is any impact of the verb form used as a cue for the model, we see that only the pluperfect form produced results below 100% for the top candidate, and was also the worst cue in the precision at 10 evaluation, with 65.33%. This could be a reflection of the fact that the pluperfect tense in Brazilian Portuguese is usually written in a compound form, using \"ter\" (have) as an auxiliary verb, conjugated in the imperfect tense, associated with the past participle of the main verb, so that the pluperfect simple form, which was the form used in the test sentences, is not widely used anymore. Assuming that lower frequencies have an impact on the quality of the representation, they might not have been learned well by the model. Nonetheless, the performance confirms the proficiency of the model with impersonal verbs. " }, { "figure_ref": [], "heading": "Subject Agreement", "publication_ref": [], "table_ref": [ "tab_5", "tab_6" ], "text": "For the subject agreement task, the model was expected to produce a subject that would agree in person and number with the provided verb seeds. Given that Portuguese allows for hidden or indeterminate subjects, we expected that the system would produce some results that would not fit as subjects, but that would fit in the sentences.\nConsidering the type of subject that was expected given the verb conjugation, as seen in Table 6, results both in the top 1 and in the top 10 were varied, ranging from 64.10% for the second person singular to 100% for the third person singular. The model had a hard time producing good candidates when the expected subject should fit a second person singular or plural, which are less commonly used conjugations, but it is interesting to see that the model has higher confidence in wrong answers when we look at the results for the third person plural.\nWhen we look at the tense and mode of the conjugated seed (Table 7), results also varied, and it is interesting to notice that the tense that yielded worst results in the top 1 candidates (barely above 75% precision) was present indicative, which is one of the most common tenses, while future indicative was the tense with best results (above 92% precision). Among the top 10 results, pluperfect indicative was the worst, with 71.78%, and the best result was again the future indicative, with above 90% precision.\nTo sum up, the model has a fairly precise capacity for generating subjects that fit in the sentences, but some verb conjugations (second persons, and third person plural) and tenses (present and pluperfect indicative) proved to be a challenge. " }, { "figure_ref": [], "heading": "Nominal Agreement", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "For the task of nominal agreement, we looked into the adjective agreement using a noun as a cue. Since Portuguese adjectives agree with nouns in gender and number, we had four different test categories: masculine singular, masculine plural, feminine singular and feminine plural.\nThe results (Table 8) show that performance was good across all categories, reaching up to 93.22% in masculine plural when all 10 candidates were considered for each mask.
Here we see that the suggestion in which the model has most confidence is not always the best, as the results on p@1 were consistently worse then in the top 10, especially for feminine singular, which achieved only 78% precision on the top 1 candidates. Nonetheless, the model displays fairly good proficiency in nominal agreement." }, { "figure_ref": [], "heading": "Verb Agreement", "publication_ref": [], "table_ref": [ "tab_8", "tab_0" ], "text": "The task of verb agreement was designed to check whether the model can produce verb candidates that correctly agree with the cue. The sentences used for this test had temporal cues or specific language patterns that would induce or require certain verb conjugations. As seeds, we used pronouns and nouns in singular and plural, but also used some verb structures to check for the production of infinitives and gerunds.\nTable 9 shows the results for each expected verb form. Indicative forms (first two rows) had much better results than subjunctive forms (last three rows), while the non-conjugated forms (rows 3 and 4) had the best results overall, reaching up to 100% precision considering the top 1 candidate.\nIn this specific case, it was observed that some cues were not as effective as others, so we investigated the results based on the different pronouns and nouns that were used as cues. In Table 10, pronouns such as \"tu\" (you singular ), \"nós\" (we) and \"vós\" (you plural ) presented a much worse cue, as the model did not seem to be able to produce forms that agree with them. Even when considering only the top 1 candidates, \"tu\" had a result barely above 50%, while \"vós\", a very formal and infrequent pronoun, did not reach 10% of precision. While \"tu\" and \"vós\" are pronouns that are less commonly used in Brazilian Portuguese, being frequently substituted, respectively, by \"você\" and \"vocês\", it is hard to explain why the model does not produce good candidates for a cue like \"nós\". One possibility is that this form may be replaced by a more informal option (a gente, we; lit. the people), which uses a conjugation that is homograph with the third person singular. Although responses with the expected verb tenses were induced in most cases, 5.00% of the responses provided by the model were correct, meaning that they fitted well in the sentence, but were not verbs or did not agree with the cue provided." }, { "figure_ref": [], "heading": "Connectors", "publication_ref": [], "table_ref": [ "tab_9", "tab_9", "tab_10" ], "text": "In this task the goal was to check whether the model could produce cohesive elements to link sentences together. We used specific connectors, which are shown in Table 11, as seeds for selecting the original set of five sentences for each connector. In this task we had no cues for an expected connector, as usually it is the connectors that establish a meaningful relation between clauses, either by coordination or subordination. This means that, although we had an original connector in the test sentences, the evaluation accepted as correct other forms of cohesion that could change the semantics of the sentence, as long as they produced a sentence with meaning.\nAs Table 11 shows, the model was able to predict connectors with very good precision, reaching 100% in all cases among the top 1 candidates, and then varying in precision among the top 10 candidates. 
In terms of correct candidates, 10.77% were not conjunctions in the traditional sense, but most of these were textual connectors, such as \"finalmente\" (finally) and \"também\" (also). Some of the sentences had a very specific requirement in terms of connector, such as the ones with \"ora\" (now), which is a dual connector (\"ora..., ora\" ∼ now..., now), and thus had no margin for many other connector options, which explains the poor precision among the top 10 candidates in some cases.\nThe results obtained suggest that these models can proficiently use connectors." }, { "figure_ref": [], "heading": "Passive", "publication_ref": [], "table_ref": [ "tab_10" ], "text": "Passive voice in Portuguese has an important characteristic, shared with other Romance languages, which is a somewhat long-distance agreement of the nominal participle with the subject. This agreement is illustrated in the following example: A escolha foi feita. (The choice was made.; lit. The [fem. sg.] choice [fem. sg.] was made [fem. sg.]). This is different to what happens, for instance, with compound verb tenses, such as the compound pluperfect, where the verb participle of the main verb is used and thus there is no requirement for agreement. An example of this can be seen in the following sentence: Ela tinha escolhido aquela cor. (She had chosen that colour.; lit. She [fem. sg.] had chosen [no agreement] that [fem. sg.] colour [fem. sg.]).\nTo test this case, we used 13 sentences with a varied number of cues that were made up of verbal constructions with the verb \"ser\" (to be). Results in Table 12 show that the model was able to produce correct candidates most of the time, with 86.65% precision among the top 1 candidates and 78.38% among the top 10 candidates.\nInterestingly, among the candidates that were not correct, 44.99% were a participle, but had incorrect nominal agreement with the subject of the sentence. The results in the table also point to the model having trouble producing good candidates for the feminine cases, in particular for the singular form, since for the plural it had better confidence in the correct candidates.\nFinally, within the list of correctly generated candidates, not many deviated from the expected word form, as only 5.07% of the candidates were adjectives that fit the context, resulting in a grammatically correct option, instead of the target nominal participle.\nFor this test too, the model is proficient in generating the agreement for the passive form in most cases, apart from the feminine singular. " }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b1", "b8" ], "table_ref": [ "tab_11" ], "text": "Summing up the results of the grammatical tests, we can see in Table 13 that tasks that require no agreement had the best results, with 100% precision for connectors and 98.78% for impersonal verbs. Where morphosyntactic characteristics of the language played a role, we see that the results fall below 90% even considering only the candidate in which the model has the most confidence.\nInterestingly, the nominal agreement test was the only one that showed better results among the top 10 candidates in comparison to the top 1. This could possibly mean that, for selecting adjectives, lexical cues in the context are stronger than the morphosyntactic information in the cue word. Considering the results reported by Bacon and Regier (2019) for Portuguese on mBERT, we see that the model here had much worse performance for nominal agreement. This could be because we evaluated all candidates, and not only the ones that had the same part-of-speech as the masked word.\nAs the only closed-class task, we expected connectors to show the largest difference between the evaluation of the top 1 and the top 10 candidates, as the model would eventually run out of options to fit in the context.
However, the worst overall performance was seen in the task of verb agreement, where the model had problems finding good candidates for a few personal pronouns, both at the top 1 and at the top 10. Moreover, frequency also seems to play a role in model performance, and prediction accuracy decreases for less frequent forms, including very formal pronouns or those that are frequently omitted.\nIn this work we addressed the problem of intrinsic evaluation of large-scale language models and proposed a battery of model-agnostic tests to assess linguistic proficiency in Portuguese. This paper focused on some widely adopted neural language models of Brazilian Portuguese, and evaluated their performance in lexical, syntactic and semantic tasks. We developed a dataset with evaluation items that cover two particular areas: MWE inventory of idiomatic items, and proficiency in 6 grammatical tasks.\nOverall, the larger and language-specific models performed better in the MWE task. Although mBERT shows good capacities in different NLP tasks, the model captured little to no information to predict idiomatic items of Brazilian Portuguese. Despite the small difference in performance between BERTimbau Large and Base, the larger version presented a better recognition of MWEs. However, exploring more outputs from BERTimbau Base might yield the same performance as BERTimbau Large.\nThe grammatical tests showed that BERTimbau Large has a good overall precision in the generation of candidates for masked items, especially when we looked only at the top 1 candidates, going up (or very close) to 100% precision both for connectors and impersonal verbs. Even so, for tasks that required morphosyntactic agreement, there was a fall in precision, with the worst results (below 80% among the top 1 candidates) being reported for nominal and verb agreement. The case of verb agreement was especially challenging for the model, because it consistently failed to produce good results for certain personal pronouns (first person plural, and second person singular and plural), which could be a sign of poor morphosyntactic generalisation, or be a side effect of the training corpus.\nWe adopted two evaluation criteria, one more strict, considering only the best candidate, and a more lenient one, which includes the top 10 candidates. Moreover, by considering not only the expected target word forms during the evaluation, but also considering alternative, grammatically correct outputs, we were able to detect the capacity of the model to produce different types of word forms for different contexts. Although deviant word forms did not represent much of the correct responses, amounting to around 5% in a few tasks, they showed that a syntactic cue might not be as strong as the overall context of the sentence, as argued by Gulordava et al. (2018). The results obtained confirm that the model achieves proficiency levels in tasks that do not require morphosyntactic agreement. However, it still lacks quality in certain items, in particular related to feminine singular forms. We also observed that there are instances (e.g. nominal agreement) in which the model has higher confidence (i.e. higher probability) in inadequate responses. All these evaluations led to a profile of the model's linguistic information.\nAs future work, we intend to extend the battery of tests to other linguistic aspects, such as selectional preferences for verbs, and inventory of collocations and terminology.
We also intend to investigate whether possible biases in the distribution of the training data can affect the performance on these patterns.\nFinally, we plan to develop a multilingual version of the test, adapting it to closely related languages that share some of these linguistic patterns, and assessing whether language proximity can be beneficial for few-shot learning scenarios." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements This work has been developed in the framework of the project COURAGE (no. 95567), funded by the Volkswagen Foundation in the topic Artificial Intelligence and the Society of the Future. It is also partly funded by the EPSRC project MIA: Modeling Idiomaticity in Human and Artificial Language Processing (EP/T02450X/1) and by Research England, in the form of the Expanding Excellence in England (E3) programme." }, { "figure_ref": [], "heading": "Conflict of interest", "publication_ref": [], "table_ref": [], "text": "The authors declare that they have no conflict of interest." } ]
Much recent effort has been devoted to creating large-scale language models. Nowadays, the most prominent approaches are based on deep neural networks, such as BERT. However, they lack transparency and interpretability, and are often seen as blackboxes, which affect their applicability in downstream tasks as well as the comparison of different architectures or even the same model trained on different corpora or hyperparameters. In this paper, we propose a set of intrinsic evaluation tasks that inspect the linguistic information encoded in models developed for Brazilian Portuguese. These tasks are designed to evaluate how different language models generalise information related to grammatical structures and multiword expressions (MWEs), thus allowing for an assessment of whether the model has learned different linguistic phenomena. The dataset that was developed for these tasks is composed of a series of sentences with a single masked word and a cue that narrows down the context. This dataset is divided into MWEs and grammatical structures, and the latter is subdivided into 6 tasks: impersonal verbs, subject agreement, verb agreement, nominal agreement, passive and connectors. The subset for MWEs was used to test BERTimbau Large, BERTimbau Base and mBERT. For the grammatical structures, we used only BERTimbau Large, because it yielded the best results in the MWE task. In both cases, we eval-
Assessing Linguistic Generalisation in Language Models: A Dataset for Brazilian Portuguese
[ { "figure_caption": "Fig. 11Fig. 1 Methodology for creating the test dataset.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Accuracy of different sentences for Word1 and Word2. acc@10 means the accuracy considering the 10 most likely candidates. For example, if we consider only the word with the highest probability from BERTMbau-Base, it achieves an accuracy of 40%, however, if we consider all 10 candidates, evaluating only if the correct one is on the list, it goes to 57.58%", "figure_data": "ModelWordACCACC@10BERTimbau Base140.00%57.58%BERTimbau Base245.45%64.24%BERTimbau Large152.73%65.45%BERTimbau Large251.52%67.27%mBERT16.67%16.97%mBERT23.64%11.52%", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance comparison between BERTimbau Large and BERTimbau Base. MWEs which were predicted by the BERTimbau Large, but not by BERTimbau Base.", "figure_data": "Word 1Word 2livro abertolivro abertomontanha russamontanha russanó cegoolho mágicoolho mágicopau mandadopão duropavio curtopé friopé friopé quentepé quentepeso mortopeso mortoplanta baixaplanta baixasangue azulsaia justasangue friosangue frio", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance comparison between BERTimbau Large and BERTimbau Base: MWEs were not predicted by either model.", "figure_data": "Word 1Word 2bode expiatório bode expiatóriocheiro verdegato pingadoelefante brancolonga metragemovelha negranó cegopavio curtoolho gordopente finopente finoroleta russavista grossasaia justa", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Impersonal verbs: Verb types", "figure_data": "Case nameP@1P@10Meteorological Verbs 97.56%76.04%Existential Verbs100.00% 97.50%", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Impersonal verbs: Results by tense and mode", "figure_data": "Case nameP@1P@10Past Indicative100.00% 82.00%Present Indicative100.00% 85.63%Future Indicative100.00% 80.00%Imperfect Indicative100.00% 95.71%Pluperfect Indicative86.67%65.33%Future Subjunctive100.00% 79.00%", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Subject agreement: Results by expected person and number", "figure_data": "Case nameP@1P@10First Person Singular97.06%93.46%Second Person Singular 64.81%61.73%Third Person Singular100.00% 95.85%First Person Plural94.59%92.33%Second Person Plural66.67%63.70%Third Person Plural85.00%93.52%", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Subject agreement: Results by tense and mode", "figure_data": "Case nameP@1P@10Present Indicative75.61% 75.13%Past Indicative78.72% 87.10%Pluperfect Indicative81.25% 71.78%Future Indicative92.86% 90.46%Past Tense Future Indicative82.05% 79.78%Imperfect Indicative79.49% 75.28%", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Nominal agreement: Gender and number", "figure_data": "Case nameP@P@10Feminine Singular78.00% 92.00%Masculine Singular86.84% 91.58%Feminine Plural87.18% 88.46%Masculine Plural91.89% 93.22%", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Verb agreement: Different expected tenses and modes", "figure_data": "Case nameP@1P@10Past Indicative87.51%78.44%Present or Future Indicative94.63%68.59%Infinitive100.00% 96.10%Gerunds94.81%74.88%Present 
Subjunctive58.00%33.40%Past Subjunctive62.78%25.08%Future Subjunctive/Conditional 68.15%57.10%Table 10 Verb Agreement: Breakdown of pronounsCase nameP@1P@10Eu (I )80.00% 52.67%Tu (You singular ) 51.61% 34.84%Ele/Ela (He/She)79.41% 60.59%Nós (We)30.77% 13.08%Vós (You plural)7.69%6.92%Eles/Elas (They)85.71% 44.29%", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Connector PredictionPassive voice in Portuguese has an important characteristic, shared with other Romance languages, which is a somewhat long distance agreement of the nominal participle with the subject. This agreement is illustrated in the following example: A escolha foi feita. (The choice was made.; lit.The F emSing choice F emSing was made F emSing", "figure_data": "Case nameP@1P@10Caso (If )100.00% 26.67%Conforme (According to) 100.00% 46.67%Contudo (However )100.00% 86.67%Enquanto (While)100.00% 66.67%Nem (Nor )100.00% 50.00%Ora (Now )100.00% 20.00%Pois (Because)100.00% 50.00%Porque (Because)100.00% 40.00%Portanto (Therefore)100.00% 90.00%Quando (When)100.00% 26.67%Se (If )100.00% 50.00%Todavia (However )100.00% 96.67%5.2.6 Passive", "figure_id": "tab_9", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Passive: Gender and number breakdown", "figure_data": "Case nameP@1P@10Feminine Singular17.86%50.71%Masculine Singular95.89%89.32%Feminine Plural100.00% 55.00%Masculine Plural98.53%81.47%", "figure_id": "tab_10", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Summary of Grammatical Proficiency Tests", "figure_data": "TestP@1P@10Nominal Agreement 77.53%89.67%Verb Agreement79.56%59.55%Subject Agreement83.38%75.70%Connectors100.00% 54.17%Impersonal Verbs98.78%86.77%Passive86.65%78.38%", "figure_id": "tab_11", "figure_label": "13", "figure_type": "table" } ]
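The P@1 and P@10 figures reported in the tables above can be derived from the per-item manual annotations with a small helper. This is a sketch under assumptions: the input structure (one ordered list of boolean acceptability judgements per masked test item) and the averaging over items are ours, since the paper does not spell out the exact aggregation.

```python
from typing import Iterable

def precision_at_k(items: Iterable[list[bool]], k: int) -> float:
    """items: for each test item, the judgements for its ranked candidates
    (True = the candidate fits the context). P@1 looks only at the top
    candidate; P@k averages the proportion of acceptable candidates in the
    top k, then averages over all items."""
    scores = []
    for judgements in items:
        top = judgements[:k]
        scores.append(sum(top) / len(top))
    return sum(scores) / len(scores)

# Hypothetical annotations for three masked items (the paper uses 10
# candidates per item; shortened here for readability).
annotations = [
    [True, True, False],
    [True, False, False],
    [False, True, True],
]
print(precision_at_k(annotations, 1))  # P@1 -> 0.667
print(precision_at_k(annotations, 3))  # P@3 -> 0.556
```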
Rodrigo Wilkens; Leonardo Zilio; Aline Villavicencio
[ { "authors": "A Abdaoui; C Pradel; G Sigel", "journal": "", "ref_id": "b0", "title": "Load what you need: Smaller versions of mutlilingual bert", "year": "2020" }, { "authors": "G Bacon; T Regier", "journal": "", "ref_id": "b1", "title": "Does bert agree? evaluating knowledge of structure dependence through agreement relations", "year": "2019" }, { "authors": "A Bakarov", "journal": "", "ref_id": "b2", "title": "A survey of word embeddings evaluation methods", "year": "2018" }, { "authors": "T Bolukbasi; K.-W Chang; J Zou; V Saligrama; A Kalai", "journal": "", "ref_id": "b3", "title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings", "year": "2016" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b4", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "E Dinan; A Fan; L Wu; J Weston; D Kiela; A Williams", "journal": "", "ref_id": "b5", "title": "Multi-dimensional gender bias classification", "year": "2020" }, { "authors": "A Ettinger", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b6", "title": "What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models", "year": "2020" }, { "authors": "Y Goldberg", "journal": "", "ref_id": "b7", "title": "Assessing bert's syntactic abilities", "year": "2019" }, { "authors": "K Gulordava; P Bojanowski; E Grave; T Linzen; M Baroni", "journal": "", "ref_id": "b8", "title": "Colorless green recurrent networks dream hierarchically", "year": "2018" }, { "authors": "N Kassner; H Schütze", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly", "year": "2020" }, { "authors": "A Kilgarriff; P Rychly; P Smrz; D Tugwel", "journal": "", "ref_id": "b10", "title": "The sketch engine", "year": "2004" }, { "authors": "V Kumar; T S Bhotia; V Kumar; T Chakraborty", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b11", "title": "Nurse is closer to woman than surgeon? 
mitigating gender-biased proximities in word embeddings", "year": "2020" }, { "authors": "K Kurita; N Vyas; A Pareek; A W Black; Y Tsvetkov", "journal": "", "ref_id": "b12", "title": "Quantifying social biases in contextual word representations", "year": "2019" }, { "authors": "Z Lan; M Chen; S Goodman; K Gimpel; P Sharma; R Soricut", "journal": "", "ref_id": "b13", "title": "ALBERT: A lite BERT for self-supervised learning of language representations", "year": "2019" }, { "authors": "T Linzen; E Dupoux; Y Goldberg", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b14", "title": "Assessing the ability of lstms to learn syntax-sensitive dependencies", "year": "2016" }, { "authors": "Y Liu; M Ott; N Goyal; J Du; M Joshi; D Chen; O Levy; M Lewis; L Zettlemoyer; V Stoyanov", "journal": "", "ref_id": "b15", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Pawel Mandera; Emmanuel Keuleers; Marc Brysbaert", "journal": "JOURNAL OF MEMORY AND LANGUAGE", "ref_id": "b16", "title": "Explaining human performance in psycholinguistic tasks with models of semantic similarity based on prediction and counting : a review and empirical validation", "year": "2017" }, { "authors": "G Marcus", "journal": "", "ref_id": "b17", "title": "The next decade in AI: four steps towards robust artificial intelligence", "year": "2020" }, { "authors": "R Marvin; T Linzen", "journal": "", "ref_id": "b18", "title": "Targeted syntactic evaluation of language models", "year": "2018" }, { "authors": "G A Miller; R Beckwith; C Fellbaum; D Gross; K J Miller", "journal": "International journal of lexicography", "ref_id": "b19", "title": "Introduction to wordnet: An on-line lexical database", "year": "1990" }, { "authors": "A Mueller; G Nicolai; P Petrou-Zeniou; N Talmina; T Linzen", "journal": "", "ref_id": "b20", "title": "Cross-linguistic syntactic evaluation of word prediction models", "year": "2020" }, { "authors": "M C Muniz; Maria Das Graças; V N Laporte; E ", "journal": "", "ref_id": "b21", "title": "Unitexpb, a set of flexible language resources for brazilian portuguese", "year": "2005" }, { "authors": "J Nivre; Ž Agić; L Ahrenberg; L Antonsen; M J Aranzabe; M Asahara; L Ateyah; M Attia; A Atutxa; L Augustinus", "journal": "", "ref_id": "b22", "title": "Universal dependencies 2", "year": "2017" }, { "authors": "R Oshikawa; J Qian; W Y Wang", "journal": "", "ref_id": "b23", "title": "A survey on natural language processing for fake news detection", "year": "2018" }, { "authors": "I A Sag; T Baldwin; F Bond; A Copestake; D Flickinger", "journal": "Springer", "ref_id": "b24", "title": "Multiword expressions: A pain in the neck for NLP", "year": "2002" }, { "authors": "G G S ¸ahin; C Vania; I Kuznetsov; I Gurevych", "journal": "Computational Linguistics", "ref_id": "b25", "title": "LIN-SPECTOR: multilingual probing tasks for word representations", "year": "2020" }, { "authors": "V Sanh; L Debut; J Chaumond; T Wolf", "journal": "", "ref_id": "b26", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "T B Sardinha", "journal": "Informática", "ref_id": "b27", "title": "Corpus brasileiro", "year": "2010" }, { "authors": "B Savoldi; M Gaido; L Bentivogli; M Negri; M Turchi", "journal": "", "ref_id": "b28", "title": "Gender bias in machine translation", "year": "2021" }, { "authors": "E T R Schneider; J V A De Souza; J Knafou; L E S E Oliveira; J Copara; Y B Gumiel; L F A D 
Oliveira; E C Paraiso; D Teodoro; C M C M Barra", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "BioBERTpt -a Portuguese neural language model for clinical named entity recognition", "year": "2020" }, { "authors": "M Schrimpf; I Blank; G Tuckute; C Kauf; E A Hosseini; N Kanwisher; J Tenenbaum; E Fedorenko", "journal": "bioRxiv", "ref_id": "b30", "title": "Artificial neural networks accurately predict language processing in the brain", "year": "2020" }, { "authors": "F Souza; R Nogueira; R Lotufo", "journal": "", "ref_id": "b31", "title": "Portuguese named entity recognition using bert-crf", "year": "2019" }, { "authors": "F Souza; R Nogueira; R Lotufo", "journal": "", "ref_id": "b32", "title": "BERTimbau: pretrained BERT models for Brazilian Portuguese", "year": "2020-10-20" }, { "authors": "Q Su; M Wan; X Liu; C.-R Huang", "journal": "Natural Language Processing Research", "ref_id": "b33", "title": "Motivations, methods and metrics of misinformation detection: An nlp perspective", "year": "2020" }, { "authors": "J Sylak-Glassman", "journal": "", "ref_id": "b34", "title": "The composition and use of the universal morphological feature schema (unimorph schema)", "year": "2016" }, { "authors": "O Vale; J Baptista", "journal": "", "ref_id": "b35", "title": "Novo dicionário de formas flexionadas do unitex-pb: avaliação da flexão verbal (new dictionary of inflected forms of unitex-pb: Evaluation of verbal inflection)", "year": "2015" }, { "authors": "I Vulić; S Baker; E M Ponti; U Petti; I Leviant; K Wing; O Majewska; E Bar; M Malone; T Poibeau; R Reichart; A Korhonen", "journal": "Computational Linguistics", "ref_id": "b36", "title": "Multi-SimLex: A large-scale evaluation of multilingual and crosslingual lexical semantic similarity", "year": "2020" }, { "authors": "J A Wagner Filho; R Wilkens; M Idiart; A Villavicencio", "journal": "", "ref_id": "b37", "title": "The brwac corpus: A new open resource for brazilian portuguese", "year": "2018" }, { "authors": "A Warstadt; A Parrish; H Liu; A Mohananey; W Peng; S.-F Wang; S R Bowman", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b38", "title": "BLiMP: The benchmark of linguistic minimal pairs for English", "year": "2020" }, { "authors": "R Wilkens; L Zilio; S R Cordeiro; F Paula; C Ramisch; M Idiart; A Villavicencio", "journal": "", "ref_id": "b39", "title": "LexSubNC: A dataset of lexical substitution for nominal compounds", "year": "2017" }, { "authors": "Z Yang; Z Dai; Y Yang; J Carbonell; R Salakhutdinov; Q V Le", "journal": "", "ref_id": "b40", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "year": "2019" } ]
[]
10.1145/3539618.3592086
2023-05-23
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b3", "b0", "b3", "b23", "b6", "b4" ], "table_ref": [], "text": "The Tip-of-the-tongue (ToT) retrieval task involves identifying a previously encountered item for which a searcher was unable to recall a reliable identifier. ToT information needs are characterized by verbosity, use of hedging language, and false memories, making retrieval challenging [1,4]. As a consequence, searchers resort to communities like r/TipOfMyTongue and WatzatSong, where they can post descriptions of items that they know exist but cannot find, relying on other users for help. Recent research of ToT information needs explored how searchers pose these requests in specific domains like movies [1,4], or games [24]. Music-ToT, however, is under-explored despite being frequent: it represents 18% of all posts made in a five-year period in the r/TipOfMyTongue community (cf. §3.1). Our work is motivated by the need to understand how such requests are expressed in the music domain.\nWe examined the r/TipOfMyTongue community, focusing on requests looking for musical entities like albums, artists or songs. We show that these requests often refer to multiple modalities (cf. §4) and thus encompass a broad set of retrieval tasks-audio fingerprinting, audio-as-a-query, lyric search, etc. In our work, we focus on song search. We create ToT 1 : the dataset consists of 2,278 solved information needs pertaining to a song, each of which is linked to the corresponding correct answer in the publicly available Wasabi Corpus [7]. Using ToT , we develop a schema for Music-ToT information needs to reveal what information is contained in them (cf. §3.2). In addition, we are interested in the extent to which standard text retrieval approaches are able to deal with ToT queries. To this end, we benchmark a subset of ToT information needs 2 on the Wasabi corpus, as well as Spotify search. Across both settings, the low effectiveness-compared to non-ToT queries-of our evaluated retrieval methods underscores the necessity of novel methods to tackle this task. Lastly, we conduct a preliminary study on reformulating Music-ToT queries using GPT-3 [5]; we find that the task remains very challenging." }, { "figure_ref": [], "heading": "BACKGROUND", "publication_ref": [ "b30", "b0", "b3", "b15", "b23", "b2", "b12", "b25", "b18", "b19", "b3", "b23", "b0", "b15", "b0", "b3", "b10", "b27", "b29", "b34", "b35", "b36", "b21", "b31", "b37", "b9", "b11", "b26", "b28", "b39", "b13", "b20", "b16", "b24", "b38", "b14", "b32", "b22", "b33", "b0", "b0", "b23" ], "table_ref": [], "text": "Tip-of-the-tongue (ToT) retrieval is related to known-item retrieval (KIR) or item-re-finding [31], however ToT queries are typically issued only once-not multiple times-and importantly, lack concrete identifiers, instead relying on verbose descriptions, frequently expressed uncertainty and possible false memories [1,4,16,24]. Approaches for simulating such queries [3,13,26] may lack realistic phenomena like false memories [19,20], necessitating the collection of real world data. Data on a large scale is available for only one domain, movies [4]; smaller scale datasets are available for games [24] and movies [1]. Hagen et al. [16] collect a corpus of general known-item queries, including music; however their focus was on general known-item queries and false-memories, and lacked retrieval experiments. 
Our focus is on the music domain, examining modalities employed by searchers and how they express Music-ToT queries. We build upon Arguello et al. [1] and Bhargav et al. [4], with key differences in (1) the domain-music, (2) the corpus size-millions of items instead of thousands, and, (3) reformulation experiments utilizing an LLM. Music-ToT relates to several research areas in Music IR (MIR).\nLyric-and text-based retrieval involves retrieving a song using lyrics or text [11,28]. Techniques to handle misheard lyrics are common [30,[35][36][37], including modeling speech sounds [22], which may be insufficient, since ToT queries can contain descriptions of lyrics, requiring semantic methods [32], or utilizing the audio itself [38]. Apart from lyrics, Music-ToT queries are frequently free-form natural language queries (cf. §4), requiring methods that can retrieve audio using text, as well as tags, genre or humangenerated descriptions [10,12,27,29,40].\nContent-based audio retrieval [14] includes query-by-example (QBE) [21], where the audio is being queried as-is, e.g. audio fingerprinting [17]. Alternatively, users can imitate the wanted audio by vocalizing it, termed query-by-vocal-imitation (QBV) [25,39], which includes query-by-humming (QBH) [15]. ToT queries frequently contain references to user created audio-clips as well as existing media like audio contained in videos (cf. §4).\nOther modalities like videos may need to be handled as well, necessitating multi-modal or cross-modal (retrieving one modality using another) methods [33], e.g. retrieving audio using video [23,34]. Approaches to solve Music-ToT have to account for multiple modalities and free-form natural language including noise, e.g., uncertainty [1] and/or false memories [1,24]." }, { "figure_ref": [], "heading": "METHODOLOGY 3.1 Data Collection", "publication_ref": [], "table_ref": [], "text": "Gathering ToT . We gathered posts made across 2017-2021 in the r/TipOfMyTongue community, yielding 503,770 posts (after filtering out posts not marked Solved or Open), each containing two fields: title and description. We extracted text categories from the title, e.g. SONG from \"[SONG] Slow dance song about the moon?\". We manually identified a set of 11 overarching music-focused categories (e.g. Music Video, Band, Rap Music). We discarded the remaining non-music posts, resulting in ToT : 94,363 (60,870 solved and 33,493 unsolved) Music-ToT posts. These posts form a large proportion-18.73%-of the 503K posts we started out with." }, { "figure_ref": [], "heading": "Extracting ToT", "publication_ref": [ "b3", "b6", "b4" ], "table_ref": [], "text": ". We extracted answers from Solved posts following Bhargav et al. [4], retaining Solved posts which have a URL as an answer. If the URL points to a track on Spotify, obtaining the answer was trivial. Otherwise, the title portion of the markdown inline URLs, formatted as [title](url) (with title often formatted as 'Artist-Song') was used as a query to the Spotify search API. Since the API returns multiple results, we created a classifier3 with 31 features based on the scores of the retriever, the edit distances between title and artist name, song title, etc. We used the classifier to predict if a title matches the track and artist, scoring 100% on precision on a held out set of 100 samples. Low-confidence candidates were filtered out. This left us with a set of 4,342 posts with Spotify tracks as answers. 
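As a rough illustration of the answer-linking step just described, the snippet below pulls [title](url) links out of a solved comment and scores a Spotify candidate with simple string-similarity features. The actual classifier uses 31 features including retriever scores; the two features, the regular expression, and all names here are illustrative assumptions.

```python
import re
from difflib import SequenceMatcher

LINK_RE = re.compile(r"\[([^\]]+)\]\((https?://[^)]+)\)")  # markdown inline links

def extract_links(comment_text):
    """Return (title, url) pairs from markdown such as [Artist - Song](url)."""
    return LINK_RE.findall(comment_text)

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def candidate_features(link_title, candidate):
    """Two example similarity features for one Spotify search hit."""
    artist_song = f"{candidate['artist']} - {candidate['track']}"
    return {
        "title_vs_artist_song": similarity(link_title, artist_song),
        "title_vs_track_only": similarity(link_title, candidate["track"]),
    }

# Hypothetical solved comment and API hit, not real data.
links = extract_links("Solved! [Toto - Africa](https://open.spotify.com/track/xyz)")
print(candidate_features(links[0][0], {"artist": "Toto", "track": "Africa"}))
```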
Lastly, we only retained those posts where the ISRC 4 of the answer track is also present in the Wasabi Corpus [7]: a total of 2,278 posts. We call this collection ToT .\nGathering reformulations. We gathered reformulations for all posts in ToT by prompting GPT-3 [5] 5 with the respective post description and a word count limit: <description> Summarize the query above to <N> words, focusing on musical elements. We used = {10, 25, 50}. 6 We also employed a prompt without a specific word limit: <post description> Shorten the query above, focusing on musical elements." }, { "figure_ref": [], "heading": "Music-ToT Schema", "publication_ref": [ "b0", "b1", "b8" ], "table_ref": [ "tab_0" ], "text": "Our annotation process involved three steps. We first developed and then refined a schema to describe Music-ToT information needs; in the final step, we annotated 100 samples from ToT .\nDeveloping the schema in 2 steps. A preliminary study conducted with one author (self-rated music expertise 7 out of 10) and two volunteers (music expertise 8/10 and 7/10 respectively) involved assigning one or more labels to 78 sentences from 25 randomly sampled posts from ToT . We focused on developing new labels specific to Music-ToT, while also re-using labels from Arguello et al. [1]: specifically the Context labels, pertaining to the context an item was encountered in (Temporal Context, Physical Medium, Cross Media, Contextual Witness, Physical Location, Concurrent Events), and Other annotations (Previous Search, Social, Opinion, Emotion, Relative Comparison). The latter are generally applicable across ToT information needs. This preliminary study revealed 25 new music labels, in addition to 11 labels from prior work (6 × Context and 5 × Other). In the second step, the three authors (self-rated musical expertise 7, 6 and 5 respectively) of this paper labeled 110 sentences (20 posts from ToT ) to validate the schema. Based on our results and discussions, we combined a few finer-grained categories with low support into more general categories, e.g. specific musical elements like Rhythm / Repetition, Melody, Tempo, etc., were combined to Composition, resulting in 28 labels in total.\nAnnotating. Lastly, in step 3, two authors employed the final schema to annotate 536 sentences corresponding to 100 posts. The resulting labels, their frequency, category, inter-rater agreement (Cohen's [2,9]) along with their description and an example, are presented in Table 1." }, { "figure_ref": [], "heading": "DATA ANALYSIS", "publication_ref": [ "b0", "b0" ], "table_ref": [ "tab_0" ], "text": "We now first discuss Table 1, followed by a brief discussion about the modalities present in the whole collection, ToT .\nAnnotation results. Among the music-focused annotations, Genre and Composition, a description of musical elements and how they fit together, are the two most frequent labels. This is followed by Music Video Description, and either direct quotes (Lyric Quote) or a description of the lyrics (Story/Lyric Description) further highlighting the different information needs that need to be addressed i.e., lyric search, text search and multi-modal search. However, a simple extraction of Genre and metadata such as Time Period/Recency, Instrument, etc., may not be useful without considering the most frequent label, Uncertainty. Search systems therefore would have to handle these elements, as well as consider potential false memories. 
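The GPT-3 reformulation prompts quoted in §3.1 above can be assembled as sketched below. `llm_complete` is a placeholder for whichever completion endpoint is used; it is an assumption, not a real API call.

```python
def build_reform_prompts(description, word_limits=(10, 25, 50)):
    """Fixed-length and unbounded reformulation prompts, following the wording in §3.1."""
    prompts = {
        n: f"{description} Summarize the query above to {n} words, focusing on musical elements."
        for n in word_limits
    }
    prompts["inf"] = f"{description} Shorten the query above, focusing on musical elements."
    return prompts

def reformulate(description, llm_complete):
    """llm_complete: an assumed callable str -> str wrapping the chosen LLM."""
    return {key: llm_complete(prompt) for key, prompt in build_reform_prompts(description).items()}

print(build_reform_prompts("Upbeat 80s track, saxophone riff, female vocals ...")[25])
```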
Furthermore, annotations like Social, Opinion are also fairly common occurrences in our data, which may have limited utility for retrieval [1], motivating reformulations (cf. §3.1). Searchers also express their queries in terms of other music entities in a Relative Comparison, and describe Previous Search attempts, explicitly ruling out certain candidates. References to other modalities like user created clips (Recording) or existing media (Embedded Music) also pose a challenge. We now explore this challenge with a brief study of references to external content in the entire collection, ToT .\nCross-modal references Music-ToT, like other ToT domains, contains cross-modal and media references [1], where a searcher refers to external content. We here show that Music-ToT posts in particular contain such references frequently. To this end, we gathered frequent websites that appear in ToT . One author manually labeled these as one of: ( with a small number of posts containing references to both types (1.1%). Therefore, Music-ToT information needs are inherently multimodal. We characterize the remaining 57.7% of queries as descriptive queries, which include references to lyrics, or story descriptions (cf. §3.2). In summary, Music-ToT information needs are characterized by uncertainty and multi-modality, requiring methods like text-based audio retrieval, content based audio retrieval/fingerprinting and multi-or cross-modal retrieval." }, { "figure_ref": [], "heading": "BENCHMARKS 5.1 Experimental Setup", "publication_ref": [ "b5", "b6", "b1", "b17" ], "table_ref": [], "text": "Corpora. We run experiments on two corpora. The first is the Wasabi 2.0 Corpus [6,7]. It consists of 2M commercial songs from 77K artists and 200K albums. Crucially, (1) songs have the ISRC linked, enabling linking to data in Spotify; (2) it is an open dataset, consisting of rich information that includes lyrics, extensive metadata, and music snippets. We index the Song Name, Artist Name and Lyrics7 of all songs using Elasticsearch (BM25 with default parameters). The second corpus corresponds to the Spotify US catalog, consisting of hundreds of millions of tracks. The Spotify search system [18] utilizes multiple retrieval stages (including lexical-and semantic search) and incorporates historic log data for retrieval purposes.\nQueries. We conducted experiments on the 1,256 posts (849 train, 191 validation, and 216 test) from ToT that contain no URLs in the post title or post text; we make this choice as in the most extreme case, the entire post may contain just a URL, requiring audio-based search while we focus on text-based methods. From each post, we create different queries and label them as follows:\n(1) Title: using the post title only; Evaluation. We report Recall@K, equivalent to Success@K (i.e., one correct answer) for = {10, 100, 1000} on Wasabi. All reported results are on the test set. For Spotify search we describe the observed trends (due to the proprietary nature of the system)." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_2", "tab_2" ], "text": "Table 2 provides an overview of our Wasabi results.\nPost parts as query. The low success across queries and underscores the difficulty of the task. On Wasabi, Title queries are more effective than Text queries-increased verbosity leads to retrieval failure. 
However, the text may indeed contain data useful in retrieval, with comparable or higher effectiveness scores for Title+Text over Title at = {100, 1000}, motivating keyword extraction: crucial details might be present in the text, but including the entire On Spotify search we observe a different trend: Title+Text is the most effective query followed by Title.\nLLM reformulations as query. Examining Table 2, reformulations have limited success compared to Title queries. Reform 25 and Reform 50 perform as well as Title on S@1000, with Reform ∞ outperforming it. While Keywords beat all but Reform 25 on S@10, it is outperformed by reformulations on S@100 and S@1000. On Spotify search, we find that reformulations fare worse than Title queries for S@10, but see limited success on S@100, with Reform 25 and Reform 50 achieving higher effectiveness. Most importantly, there is no ideal on either index, with varying success across metrics. We thus conclude that in our study, reformulations generated using state-of-the-art LLMs have only mixed success." }, { "figure_ref": [], "heading": "CONCLUSIONS", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "We explored Tip-of-the-Tongue retrieval for music. Of the 94K posts corresponding to Music-ToT information needs from an online community for ToT requests, we linked 2,278 posts to the corresponding answers in the Wasabi corpus, resulting in ToT , thus enabling further research for this challenging task.\nWe iteratively developed and refined a Music-ToT schema that contains 28 fine-grained labels as shown in Table 1. Labeling 100 posts using this schema, we showed that users express uncertainty frequently, and almost as often refer to other modalities. We benchmarked a subset of 1.2K descriptive queries from ToT , and highlight the difficulty of the task. Future work should leverage cross-and multi-modal retrieval as well as better approaches for reformulations." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "The authors would like to thank Gulfaraz Rahman and Ruben van Heusden for helping with the preliminary annotation work. The authors also thank Daniel Lazarovski and Humberto Corona Pampín for their input. Part of this research was supported by the NWO Innovational Research Incentives Scheme Vidi (016.Vidi.189.039). All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors." } ]
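A sketch of the lexical baseline from §5.1 and the Success@K numbers in Table 2: BM25 over the three indexed attributes via the elasticsearch Python client (7.x-style body argument), followed by the metric itself. Index name, field names, and host are assumptions; the toy run/qrels at the end are placeholders, not real results.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local instance

def search_wasabi(query_text, k=1000):
    """BM25 multi_match over the assumed field names; returns ranked song ids."""
    body = {
        "size": k,
        "query": {
            "multi_match": {
                "query": query_text,
                "fields": ["song_name", "artist_name", "lyrics"],
            }
        },
    }
    resp = es.search(index="wasabi", body=body)
    return [hit["_id"] for hit in resp["hits"]["hits"]]

def success_at_k(run, qrels, k):
    """run: {query_id: ranked doc ids}; qrels: {query_id: single correct doc id}."""
    hits = sum(1 for qid, answer in qrels.items() if answer in run.get(qid, [])[:k])
    return hits / len(qrels)

# Toy example for the metric only.
run = {"q1": ["a", "b", "c"], "q2": ["x", "y"]}
qrels = {"q1": "c", "q2": "z"}
print({k: success_at_k(run, qrels, k) for k in (10, 100, 1000)})
```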
We present a study of Tip-of-the-tongue (ToT) retrieval for music, where a searcher is trying to find an existing music entity, but is unable to succeed as they cannot accurately recall important identifying information. ToT information needs are characterized by complexity, verbosity, uncertainty, and possible false memories. We make four contributions. (1) We collect a dataset, ToT, of 2,278 information needs and ground truth answers. (2) We introduce a schema for these information needs and show that they often involve multiple modalities encompassing several Music IR sub-tasks such as lyric search, audio-based search, audio fingerprinting, and text search. (3) We underscore the difficulty of this task by benchmarking a standard text retrieval approach on this dataset. (4) We investigate the efficacy of query reformulations generated by a large language model (LLM), and show that they are not as effective as simply employing the entire information need as a query, leaving several open questions for future research.
When the Music Stops: Tip-of-the-Tongue Retrieval for Music
[ { "figure_caption": "( 2 )2Text: post text; (3) Title+Text: title & text concatenated; and finally, (4) Keywords: extracting up to ten keywords from the post text 8 with Yake [8]; (5) Reform : reformulations with = {10, 25, 50, ∞}.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Annotation Schema: Label, frequency of occurrence in 100 submissions / 536 sentences (F), annotator agreement ( ) and description of label, along with an example for each label. Conveys an opinion or judgment about some aspect of the music. I don't remember the lyrics or title, only that it was a kind of angsty teen \"I want to set the world on fire\" Describes other people involved in the listening experience. A few years back, a friend of mine showed me an . . .", "figure_data": "LabelFDescriptionExampleComposition870.74 Describes (part of) the composition of a piece of music including. . . playing the same major-key pattern over each chord in arhythm, melody, tempo, pitch, chords, notes, and keys; or howfairly simple repeating loop.they are composed into a cohesive piece of music.Genre770.92 References a genre.It sounded like a reggae/ska type beatMusic Video Description750.89 Describes a music video associated with a song.However, once the music starts, the store is lit up and the toneshifts completely as everything in that store has a pastel colourscheme.Lyric Quote650.89 Directly quotes lyrics that the user overheard, not including. . . it wasn't until he said something about the \"just somebodysounds / vocalizationsthat I used to know\" song that I . . .Story/Lyric Description600.71 Describes either the story conveyed by the lyrics, or the gist ofThe song is a woman singing to/about a man that she was inMUSIC ANNOTATIONSArtist Description Time Period / Recency Instrument Vocals Name Popularity54 49 30 28 23 18the lyrics instead of directly quoting it. 0.92 Describes the artist. 0.89 References the time period the user thought the music was pro-duced. 0.86 Mentions instruments that were overheard. 0.69 Describes the voice or vocal type. 0.81 Describes a song/artist/album name, what it resembles/contains, or what the searcher remembers of it. 0.83 Describes the popularity of the music, artist, album or musiclove with and died, I think he was in the military and got killed and she had a baby at home? He was maybe a tad overweight, shaggy hair, maybe curly. Late 90s-early 2000s hip hop song that sounds similar to clip The guy performing was at a keyboard/piano . . . High pitched but kind of floaty female vocals, a bit . . . I'm surprised I can't find it since I can remember many spe-video.cific lyrics, I guess it's more obscureRecording150.80 A description or reference to user-created contentI did a vocaroo of the tune, sorry about my voice and any pos-sible background guinea pig noises: URLLanguage / Region140.92 Either mentions the language of the piece of music and/or refer-A Japanese song that I don't remember any words to or howences a particular region like state, country, etc.the tune goes at all,Album Cover51.00 Describes the album cover.Social540.77 Communicates a social nicety.Any help appreciated!Opinion 0.44 Temporal Context 43 36 0.87 Describes when the music was heard, either in absolute terms orrelative terms.Listening Medium260.75 References the medium associated with the item. (e.g., radio,I heard it on the radio a couple of times in . . 
.streaming service, etc)Embedded Music260.58 References or describes extant media (e.g., Youtube / Twitch URL),I do have a video with the song (this video at around minuteincluding timestamps.4:21: URL)Other Cross Media260.19 Describes exposure to the piece of music through different media,. . . I'm pretty sure was performed on one of the early seasonsexcluding other Cross Modal labelsof Glee or maybe Smash.Previous Search250.67 Describes a previous attempt to find the item, including negativeI've tried humming it into shazam and other sites, looking upresults (i.e., it is not song X).the two generic lyrics I remember, even doing those rhythmtapping things and nadaRelative Comparison250.77 Describes a characteristic of the music in relative (vs. absolute)The melody I remember resembles the beginning of the songterms, by explicitly comparing it with another song / artist / al-\"Run to the hills\" by Metallicabum.Emotion250.05 Conveys or describes how a piece of music made the viewer feelEven talking about it makes me tear up.Concurrent Events180.09 Describes events relevant to the time period when music was en-. . . when I was driving down the country but for the life of mecountered, but excluding descriptions of the music itself.can't remember the name.Physical Location90.61 Describes physical location where music was encountered.. . . record a 9 second portion of this song at a Marriott hotelbar in downtown Chicago . . .Contextual Witness90.49", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "1) User Created: a clip uploaded by a user, e.g., Vocaroo, Clyp.it, Google Drive, Dropbox, Instaudio, musiclab, Onlinesequencer, Streamable, Speakpipe. (2) Extant Media: a clip unlikely to be uploaded by a user, e.g. an existing clip, corresponding to content/social media websites like Spotify, Twitch, Tiktok, or YouTube. (3) Other URL: Not belonging to the previous two categories. We find that Extant Media forms a larger proportion of queries (19K, 20.9%) compared to User Created queries (14K, 15.3%),", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Overview of retrieval experiments on Wasabi, using Elasticsearch (BM25). query might harm effectiveness. Our keyword selection method though fails to outperform other queries except for Text on S@10.", "figure_data": "Query S@10 S@100 S@1000Title 0.0370 0.08330.1389Keywords 0.0231 0.04630.0787Text 0.0139 0.06480.0926Title+Text 0.0324 0.08330.1713Reform 10 0.0139 0.05090.1204Reform 25 0.0278 0.06020.1389Reform 50 0.0185 0.07410.1389Reform ∞ 0.0139 0.07410.1574need as a", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
Samarth Bhargav
[ { "authors": "Jaime Arguello; Adam Ferguson; Emery Fine; Bhaskar Mitra; Hamed Zamani; Fernando Diaz", "journal": "Association for Computing Machinery", "ref_id": "b0", "title": "Tip of the Tongue Known-Item Retrieval: A Case Study in Movie Identification", "year": "2021" }, { "authors": "Ron Artstein; Massimo Poesio", "journal": "Computational linguistics", "ref_id": "b1", "title": "Inter-coder agreement for computational linguistics", "year": "2008" }, { "authors": "Leif Azzopardi; Maarten De Rijke; Krisztian Balog", "journal": "Association for Computing Machinery", "ref_id": "b2", "title": "Building Simulated Queries for Known-Item Topics: An Analysis Using Six European Languages", "year": "2007" }, { "authors": "Samarth Bhargav; Georgios Sidiropoulos; Evangelos Kanoulas", "journal": "Association for Computing Machinery", "ref_id": "b3", "title": "It's on the Tip of My Tongue': A New Dataset for Known-Item Retrieval", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b4", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Michel Buffa; Elena Cabrio; Michael Fell; Fabien Gandon; Alain Giboin; Romain Hennequin; Fabrice Jauvat; Elmahdi Korfed; Franck Michel; Johan Pauwels; Guillaume Pellerin; Maroua Tikat; Marco Winckler", "journal": "", "ref_id": "b5", "title": "The WASABI Dataset and RDF Knowledge Graph", "year": "2020" }, { "authors": "Michel Buffa; Elena Cabrio; Michael Fell; Fabien Gandon; Alain Giboin; Romain Hennequin; Franck Michel; Johan Pauwels; Guillaume Pellerin; Maroua Tikat; Marco Winckler", "journal": "Springer International Publishing", "ref_id": "b6", "title": "The WASABI Dataset: Cultural, Lyrics and Audio Analysis Metadata About 2 Million Popular Commercially Released Songs", "year": "2021" }, { "authors": "Ricardo Campos; Vítor Mangaravite; Arian Pasquali; Alípio Mário Jorge; Célia Nunes; Adam Jatowt", "journal": "Springer International Publishing", "ref_id": "b7", "title": "YAKE! 
Collection-Independent Automatic Keyword Extractor", "year": "2018" }, { "authors": "Jacob Cohen", "journal": "Educational and psychological measurement", "ref_id": "b8", "title": "A coefficient of agreement for nominal scales", "year": "1960" }, { "authors": "Seungheon Doh; Minz Won; Keunwoo Choi; Juhan Nam", "journal": "", "ref_id": "b9", "title": "Toward Universal Text-to-Music Retrieval", "year": "2022" }, { "authors": "J S Downie; Sally Jo Cunningham", "journal": "", "ref_id": "b10", "title": "Toward a Theory of Music Information Retrieval Queries: System Design Implications", "year": "2002" }, { "authors": "Benjamin Elizalde; Shuayb Zarar; Bhiksha Raj", "journal": "", "ref_id": "b11", "title": "Cross Modal Audio Search and Retrieval with Joint Embeddings Based on Text and Audio", "year": "2019" }, { "authors": "David Elsweiler; David E Losada; C José; Ronald T Toucedo; Fernandez", "journal": "Association for Computing Machinery", "ref_id": "b12", "title": "Seeding Simulated Queries with User-Study Data for Personal Search Evaluation", "year": "2011" }, { "authors": "Jonathan T Foote", "journal": "Multimedia storage and archiving systems II", "ref_id": "b13", "title": "Content-based retrieval of music and audio", "year": "1997" }, { "authors": "Asif Ghias; Jonathan Logan; David Chamberlin; Brian C Smith", "journal": "", "ref_id": "b14", "title": "Query by humming: Musical information retrieval in an audio database", "year": "1995" }, { "authors": "Matthias Hagen; Daniel Wägner; Benno Stein", "journal": "Springer International Publishing", "ref_id": "b15", "title": "A Corpus of Realistic Known-Item Topics with Associated Web Pages in the ClueWeb09", "year": "2015" }, { "authors": "Jaap Haitsma; Ton Kalker", "journal": "Ismir", "ref_id": "b16", "title": "A highly robust audio fingerprinting system", "year": "2002" }, { "authors": "Helia Hashemi; Aasish Pappu; Mi Tian; Praveen Chandar; Mounia Lalmas; Benjamin Carterette", "journal": "Association for Computing Machinery", "ref_id": "b17", "title": "Neural Instant Search for Music and Podcast", "year": "2021" }, { "authors": "Claudia Hauff; Matthias Hagen; Anna Beyer; Benno Stein", "journal": "Association for Computing Machinery", "ref_id": "b18", "title": "Towards Realistic Known-Item Topics for the ClueWeb", "year": "2012" }, { "authors": "Claudia Hauff; Geert-Jan Houben", "journal": "Springer", "ref_id": "b19", "title": "Cognitive Processes in Query Generation", "year": "2011" }, { "authors": "Marko Helén; Tuomas Virtanen", "journal": "IEEE", "ref_id": "b20", "title": "Query by example of audio signals using Euclidean distance between Gaussian mixture models", "year": "2007" }, { "authors": "Hussein Hirjee; Daniel G Brown", "journal": "", "ref_id": "b21", "title": "Solving Misheard Lyric Search Queries Using a Probabilistic Model of Speech Sounds", "year": "2010" }, { "authors": "Sungeun Hong; Woobin Im; Hyun S Yang", "journal": "", "ref_id": "b22", "title": "Deep learning for contentbased, cross-modal retrieval of videos and music", "year": "2017" }, { "authors": "Ida Kathrine; Hammeleff Jørgensen; Toine Bogers", "journal": "Association for Computing Machinery", "ref_id": "b23", "title": "A Qualitative Analysis of Video Game Re-Finding Requests on Reddit", "year": "2020" }, { "authors": "Bongjun Kim; Bryan Pardo", "journal": "IEEE", "ref_id": "b24", "title": "Improving content-based audio retrieval by vocal imitation feedback", "year": "2019" }, { "authors": "Jinyoung Kim; W Bruce Croft", "journal": "Association for Computing 
Machinery", "ref_id": "b25", "title": "Retrieval Experiments Using Pseudo-Desktop Collections", "year": "2009" }, { "authors": "Andreea-Maria Sophia Koepke; Joao Oncescu; Zeynep Henriques; Samuel Akata; Albanie", "journal": "IEEE Transactions on Multimedia", "ref_id": "b26", "title": "Audio retrieval with natural language queries: A benchmark study", "year": "2022" }, { "authors": "Meinard Müller; Frank Kurth; David Damm; Christian Fremerey; Michael Clausen", "journal": "Springer", "ref_id": "b27", "title": "Lyrics-Based Audio Retrieval and Multimodal Navigation in Music Collections", "year": "2007" }, { "authors": "Andreea-Maria Oncescu; Joao F Koepke; Zeynep Henriques; Samuel Akata; Albanie", "journal": "", "ref_id": "b28", "title": "Audio retrieval with natural language queries", "year": "2021" }, { "authors": "Nicholas Ring; Alexandra L Uitdenbogerd", "journal": "Springer", "ref_id": "b29", "title": "Finding 'Lucy in Disguise': The Misheard Lyric Matching Problem", "year": "2009" }, { "authors": "Sargol Sadeghi; Roi Blanco; Peter Mika; Mark Sanderson; Falk Scholer; David Vallet", "journal": "Association for Computing Machinery", "ref_id": "b30", "title": "Identifying Re-Finding Difficulty from User Query Logs", "year": "2014" }, { "authors": "Shoto Sasaki; Kazuyoshi Yoshii; Tomoyasu Nakano; Masataka Goto; Shigeo Morishima", "journal": "", "ref_id": "b31", "title": "LyricsRadar: A Lyrics Retrieval System Based on Latent Topics of Lyrics", "year": "2014" }, { "authors": "Federico Simonetta; Stavros Ntalampiras; Federico Avanzini", "journal": "", "ref_id": "b32", "title": "Multimodal Music Information Processing and Retrieval: Survey and Future Challenges", "year": "2019" }, { "authors": "K Wang; Qiyue Yin; Wei Wang; Shu Wu; Liang Wang", "journal": "", "ref_id": "b33", "title": "A Comprehensive Survey on Cross-modal Retrieval", "year": "2016" }, { "authors": "Xin Xu; Tsuneo Kato", "journal": "Springer", "ref_id": "b34", "title": "Robust and Fast Two-Pass Search Method for Lyric Search Covering Erroneous Queries Due to Mishearing", "year": "2012" }, { "authors": "Xin Xu; Masaki Naito; Tsuneo Kato; Hisashi Kawai", "journal": "", "ref_id": "b35", "title": "Robust and Fast Lyric Search based on Phonetic Confusion Matrix", "year": "2009" }, { "authors": "Hongliang Ye; Wanning Zhu; Yue Yu; Lei Hong", "journal": "", "ref_id": "b36", "title": "A Cross-language Music Retrieval Method by Using Misheard Lyrics", "year": "2020" }, { "authors": "Yi Yu; Suhua Tang; Francisco Raposo; Lei Chen", "journal": "ACM Trans. Multimedia Comput. Commun. Appl", "ref_id": "b37", "title": "Deep Cross-Modal Correlation Learning for Audio and Lyrics in Music Retrieval", "year": "2019-02" }, { "authors": "Yichi Zhang; Zhiyao Duan", "journal": "IEEE", "ref_id": "b38", "title": "Visualization and interpretation of Siamese style convolutional neural networks for sound search by vocal imitation", "year": "2018" }, { "authors": "Tiange Zhu; Raphaël Fournier-S'niehotta; Philippe Rigaux; Nicolas Travers", "journal": "Big Data and Cognitive Computing", "ref_id": "b39", "title": "A Framework for Content-Based Search in Large Music Collections", "year": "2022" } ]
[]
10.18653/v1/S19-2007
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b23", "b24", "b25", "b14", "b5", "b31", "b14", "b20", "b10", "b31", "b15", "b20", "b10", "b1" ], "table_ref": [], "text": "The wide spread of social media allowed us to communicate and share our opinions quickly and conveniently. However, it gives place to abusive content as well, which leaves some groups of people vulnerable. To push back abusive online content, various automated systems, and more importantly datasets, were introduced covering various text genres such as forum (de Gibert et al., 2018), Twitter (Struß et al., 2019) or Instagram posts (Suryawanshi et al., 2020) of various languages (Vidgen and Derczynski, 2020), user groups such as women (Fersini Figure 1: Two-step approach: M E is trained on the (external) datasets we already have followed by its adaptation to the target task (M t ) with only a few-shots. Labels not directly used for the target task are underlined, target labels not contained in the external datasets are bolded. et al., 2018) or LGBTQ+ (Leite et al., 2020) and tasks including hate speech (de Gibert et al., 2018), offensive language (Zampieri et al., 2019) or toxicity (Leite et al., 2020) detection, etc. However, there is constantly a need to annotate new datasets supporting novel target scenarios. To reduce annotation costs previous work leveraged transfer learning to build systems across languages (Ranasinghe and Zampieri, 2020) and domains (Glavaš et al., 2020). But finding the right source datasets is often challenging, since the label sets or even the definition of the same label could differ, e.g., the offensive label of the OLID dataset includes profane language (Zampieri et al., 2019), while the same label does not in HASOC (Mandl et al., 2019). To alleviate the problem previous work manually altered the label sets of the source datasets in order to match them to the target requirements (Ranasinghe and Zampieri, 2020;Glavaš et al., 2020;Bigoulaeva et al., 2023). However, this requires expertise in abusive language datasets, since the already developed rules for manual label matching are not reusable due to the rapid change in the application scenarios. Additionally, novel labels do not have alternatives to be transferred from. Our goal is to eliminate the need for such rules making information transfer more flexible.\nIn this paper, we introduce a method leveraging multiple already existing (external) datasets in order to build an abusive language aware foundation model which can cheaply be adapted to the target requirements across languages and text genres without the need for manual dataset modifications. As shown in Figure 1, different datasets can inform the model about different types of abusive content. Some labels can directly be leveraged for the target task due to their matching abusive content definitions, while others, which we call external only labels, i.e., labels of the external datasets which are not contained in the target dataset, contribute to the general abusive language awareness of the foundation model for easy adaptability to future data sources. Our approach consists of two steps: jointly training a language model on multiple datasets using prompt-learning. We then adapt the resulting model to the target requirements in the second step, using only a few samples per label from the target task (4-shots in the main experiments), which could even be created on-the-fly as moderators or affected people face them. 
Since the target task can contain unseen labels, i.e., labels which are not contained in any of the external datasets, at least a few annotated samples are needed.\nWe test our method on various tasks (e.g., hate, abuse and misogyny detection or target identification) in both monolingual and cross-lingual (English to German, Italian, Brazilian Portuguese and Hindi) setups. Additionally, our datasets cover multiple platforms including longer forum posts and shorter Twitter messages. On top of improved performance compared to the baseline systems, our analysis shows that not only seen but unseen target labels are positively affected indicating that our models can better understand abusive content in general due to the use of datasets we already have. Our contributions are the following: 1\n• a multi-dataset learning (MDL) approach, using prompt-learning based fine-tuning, for an efficient few-shot adaptation which supports the ever-changing nature of abusive language detection, 1 Code at: https://github.com/hangyav/multi_hs\n• applicability across languages and text genres to support a wide range of target tasks, • and various ablation studies including the analysis of the effect of external only and unseen labels as well as various few-shot training sizes and the use of related datasets in the case of cross-lingual application." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b20", "b29", "b17", "b2", "b22", "b19", "b13", "b28", "b16", "b12", "b26" ], "table_ref": [], "text": "To alleviate the issues of missing datasets for a given target task previous work leveraged transfer learning techniques. Ranasinghe and Zampieri (2020) built hate speech classifiers for Hindi, Spanish and Bengali by relying only on an English training dataset, while Glavaš et al. ( 2020) followed a similar approach for cross-domain experiments. In order to make the train and test corpora compatible, the labels of the training dataset were adapted based on a few rules. Although the rules are simple, a deep understanding of the annotation methodology is needed in order to create them. Furthermore, Wiegand et al. (2018) showed that by adding seemingly similar English samples to a small amount of German training data the results decreased, while Nozza (2021) found that in zero-shot cross-lingual models language specific interjections are often misinterpreted which leads to errors. Thus, such approaches have limited real-world applicability.\nIn this work, we leverage external datasets without any modifications and use only a few-shots from the target dataset to learn its specificities. The goal of our approach is to transfer abusive language related general knowledge from external datasets having different label sets to the target task. In contrast to transfer learning, the goal of multitask learning is to build a single shared model using various tasks in order to improve the performance on all of them by exploiting common information in some tasks (Caruana, 1997). Stickland and Murray (2019) proposed a multitask method based on pre-trained language models by introducing task specific parameters in each layer achieving better results than single task learning. Due to negative task interference however, single task models perform best in many cases. Similarly, Pfeiffer et al. (2021) used adapters (Houlsby et al., 2019) in a multitask setting showing that fusing information learned by task specific adapters can further boost the performance on a target task. 
To mitigate the issues of task interference, a set of auxiliary tasks were used to improve the performance on the target task in (Watanabe et al., 2022). Similarly, Mehmood et al. (2020) perform a final training step on the target biomedical NER task after multitask learning. However, these methods rely on a large set of training data for the target task in order to improve performance. In comparison, our work differs in that i) we consider strongly related tasks, i.e., various abusive language datasets, ii) we only leverage external datasets in the multitask training step (for which achieving the best possible performance is not the goal), and iii) most importantly we only assume a few training samples for the target tasks, since our goal is to be able to cheaply build systems for novel abusive language scenarios in contrast to improving the performance for tasks for which we already have proper training corpora.\nOur approach is also related to meta-learning (Hospedales et al., 2021) where the goal is to build a general model that is cheaply adaptable to a target task. Wang et al. (2021) showed that meta-learning has similar performance to multitask learning and meta-learning usually does not involve numerous closely related tasks as in our setup, so we do not consider it in this work." }, { "figure_ref": [], "heading": "Approach", "publication_ref": [ "b21" ], "table_ref": [], "text": "We consider two sets of training corpora: external datasets (D E = {D e i : i = 1..N }) which are not directly related to the target task and the target dataset (D t ) which is the target task for which we aim to build a classifier. The former are off-theshelf datasets created for other tasks and/or languages containing a few thousands or sometimes tens of thousands of samples. In contrast, since our main goal is to reduce the costs of building systems for novel target tasks, D t contains only a few samples, 4-shots per label in our main experiments. We build abusive language classifiers in two steps (Figure 1). First, we train a single model by fine-tuning a pre-trained LM (M 0 ) using only the external datasets in order to learn general abusive language understanding (resulting in M E ), which we adapt to the specificities of the target task in the second step (resulting in M t ). In contrast to multitask learning where the final model supports multiple tasks, our final models (M t ) are built for a single target task. This imitates the use cases of social media platforms which need to build a specialized model supporting their own specific requirements. On a technical level, we use promptlearning to build our models, since it was shown to be effective in low-resource settings (Schick and Schütze, 2021). First, we discuss prompt-learning followed by the introduction of our proposed approach." }, { "figure_ref": [], "heading": "Prompt-Learning", "publication_ref": [ "b21" ], "table_ref": [], "text": "Prompt-learning was shown to be effective when only a small training set is available (Schick and Schütze, 2021). Instead of using classification heads on top of pre-trained LMs, it relies on the masked language modeling task (MLM) to perform text classification. Using pattern-verbalizer-pairs (PVPs) an input sentence is first transformed using the pattern, e.g., I'll kill you. → I'll kill you. It was [MASK], and the task is to predict the masked token. 
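A minimal sketch of scoring such a pattern with a masked LM and mapping the predicted token back to a label (the verbalizer side, described next). It uses xlm-roberta-base, the backbone adopted later in the paper; treating each verbalizer word as its first sub-token is a simplifying assumption, and no fine-tuning is shown.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

verbalizer = {"threatening": "threat", "neutral": "normal"}  # vocabulary word -> label
# Simplification: represent each verbalizer word by its first sub-token id.
word_ids = {w: tok(" " + w, add_special_tokens=False)["input_ids"][0] for w in verbalizer}

def classify(text):
    prompt = f"{text} It was {tok.mask_token}"  # the pattern from the example above
    enc = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**enc).logits
    mask_pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero()[0, 0]
    scores = {w: logits[0, mask_pos, idx].item() for w, idx in word_ids.items()}
    return verbalizer[max(scores, key=scores.get)]

print(classify("I'll kill you."))  # during training, cross-entropy on the mask position updates the LM
```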
Finally, the verbalizer maps the highest probability token, out of a set of valid tokens, to labels of a given dataset, e.g., threatening → threat or neutral → normal. During training the model is fine-tuned using the MLM objective." }, { "figure_ref": [], "heading": "Multi-Dataset Training", "publication_ref": [], "table_ref": [], "text": "Step 1: General Model Training (M 0 → M E ) In each step of the training process we randomly select an external dataset D e i and a batch of samples from it. Other than the shared model core, i.e., the pre-trained LM, we use the PVP related to D e i for the forward-backward pass. For each dataset D e i we use cross-entropy loss as the objective function L e i to update the model. We run this process until convergence.\nStep 2: Model Specialization (M E → M t ) In order to adapt M E to the target task we simply continue training it on D t by using the D t specific PVP. Similarly as above, we use cross-entropy loss L t to update the model until convergence. As shown by our experiments, the general abusive language understanding learned by M E helps this step to build a better model using only very few training samples. Used PVPs: In our multi-dataset setup, we define PVPs for all external datasets and the target dataset separately, (P V P E = {P V P e i : i = 1..N } and P V P t ), i.e., each dataset has its own pattern and verbalizer, which makes our approach easy to be specialized for each dataset and at the same time easy to use, since it is not needed to select a single pattern that works well across all datasets and a verbalizer that can handle all of their labels. Furthermore, no additional parameters to the base LM architecture are introduced, i.e., only the LM's parameters are used all of which are shared across and target (D t ) datasets. Note that only a single model is trained on the D E set which is fine-tuned for each D t separately in step 2 of our approach. We remove external datasets from D E which are from the same source as a given target dataset (in case of AMI, HASOC and HatEval). Additionally, we consider similarly defined but differently named labels to be the same, such as hate and hateful, sexism and misogyny or individual and active. We consider external only and unseen labels accordingly the above points. More details are in Table 6.\ntasks. Although our method allows using different patterns for each dataset, we kept them uniform for simplicity. On the other hand, the used verbalizers are specific for the label set of each dataset. We refer to Table 6 of the Appendix for more details about the used PVPs.\n4 Experimental Setup" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b31", "b27", "b5", "b14" ], "table_ref": [], "text": "We selected a wide range of datasets for our experiments, covering various abusive language detection tasks, languages and text genres. We give a short overview in the following and further details, such as labels, number of samples, used PVPs, etc., in 2019) introduced a multilingual and multi-aspect hate speech dataset of English, French and Arabic Tweets. We leveraged the fine-grained hostility labels in English. OLID: The Offensive Language Identification Dataset contains English tweets annotated with offensive labels on three layers (Zampieri et al., 2019). We used its binary offensive text and target identification subsets. 
SRW: is an English Twitter set created for sexism and racism detection (Waseem and Hovy, 2016).\nStormfront: was created for hate speech detection containing English forum posts from the Stormfront white supremacist forum (de Gibert et al., 2018). It is annotated with binary labels. ToLD-Br: is a Brazilian Portuguese Twitter dataset annotated for toxicity detection (Leite et al., 2020). We used its fine-grained label set containing a wide range of labels, including misogyny." }, { "figure_ref": [], "heading": "Multi-Dataset Setup", "publication_ref": [], "table_ref": [], "text": "In the following we describe our multi-dataset setup, i.e., the 9 corpora in the external set (D E ) and the 13 target (D t ) datasets. For a high-level overview of the setup, including labels of the external and target datasets, we refer to " }, { "figure_ref": [], "heading": "Compared Systems", "publication_ref": [ "b4" ], "table_ref": [], "text": "We compare our multi-dataset learning (MDL) approach to three types of baseline systems. We use off-the-shelf pre-trained LMs and train them using the few-shot setup as in the second step of our proposed approach without training them on the external datasets (LM-base). As shown by Gururangan et al. ( 2020) fine-tuning LMs on the domain of the task of interest by further MLM training on unlabeled data can improve down-stream task performance. In order to test the effectiveness of this step in contrast to our approach which leverages labels instead, we run MLM on the external datasets of the above-mentioned setups for one epoch (MLM). Finally, to test the importance of the two separate steps of our approach we perform multitask learning, i.e., both the external datasets and the target dataset are used in a single step similarly as in step 1 in Section 3 (MTL). We use xlm-roberta-base as our base LM (Conneau et al., 2020) for both baselines and our MDL setups as well. For evaluation we used macro averaged F 1 scores averaged over 5 different seeds3 in order to reduce the high variance" }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_8", "tab_5", "tab_6", "tab_8", "tab_7" ], "text": "First, we present our main results followed by the analysis of different few-shot sizes, the model performance on each label separately, then we discuss an ablation study for a better understanding of how the external datasets affect the final performance, and finally, we test the cross-lingual performance in case closely related English datasets are available.\nOur main results with 4-shot training are presented in Table 2. On a higher level it can be seen that leveraging external datasets improved the fewshot performance, and our multi-dataset learning approach (MDL) improved over all baselines (LMbase, MLM, and MTL) in 11 out of 13 cases. The MLM and MTL baselines also improve over the LM-base system, however, not as consistently and to a lesser extent than our approach. MTL even achieves lower performance than LM-base when the averaged performance over all datasets is considered. This indicates that i) relying on the labels other than only the domain adaptation effect of MLM is beneficial and ii) the two-step approach of MDL is more effective since the very low number of samples of the target dataset are suppressed by the external data samples when they are added directly into MTL. 
Additionally, training on the external datasets makes the models more consistent over different runs, as shown by the decreased standard deviation values on most of the datasets.\nMDL achieves comparable average improvements on the fine-grained and the binary target datasets. Looking at the former set, not only seen but unseen labels as well were improved (even ToLD-Br with more than half of its labels unseen), suggesting that the general abusive language aware M E model helps learning the fine-grained label sets of these datasets even with only a few-shots being available. We discuss the improvements on the different labels in more details below in the Table 2: Macro averaged F 1 scores and standard deviation (%) on the fine-grained and binary target datasets of our multi-dataset approach using 4-shot training. In case there are unseen labels in a given target dataset, we highlight them with the overall number of labels in parentheses. The best result for each target dataset is in bold.\nper label analysis section. The only exception is the HASOC fine-grained abusive Hi dataset, where the MLM baseline achieved the best results, although MDL also improved over LM-base. Our conjecture is that it is partly due to the high ratio of English content in the dataset caused by its code-mixed nature, and as Table 2 suggests, MLM tends to perform better on English target datasets compared to non-English sets. Although all labels of the binary target datasets are seen, as mentioned the definitions of some labels are different. For example, the offensive label of the OLID binary offensive En target dataset includes profanity, while the same label in the external HASOC fine-grained abusive En dataset does not. However, due to the inclusion of external training samples that are directly labeled as profane, the model is trained on all the necessary information. It only has to learn to combine them in the final model of a given target dataset, such as profanity and the more restrictive offensive label of HASOC into the general offensive label of OLID.\nAll the used external datasets are English. Comparing the improvements of the monolingual and cross-lingual setups of MDL, i.e., English and non-English target datasets, we found that the external datasets are more beneficial monolingually. The average improvements are 9.51% (ignoring AMI misogyny En) and 6.09% respectively. This is not surprising given that cross-lingual transfer learning is almost always less effective. Still, it shows that the combination with English external datasets is beneficial to non-English test corpora as well. This is an important use case for reducing costs by dramatically reducing the need for human annota- the 12, 833 training samples in the ToLD-Br dataset only 11 (0.08%) are labeled as racism, meaning that a given annotator has to look at more than a thousand text inputs to increase the number of the minority label with just one sample. For a more complete picture of the performance of our approach however, we present experiments with different n values on a few selected datasets in Figure 2, while results on all the datasets are in Table 7. Similarly to Zhao et al. (2021) we find that although we average our results over 5 seeds, the performance can be unstable at lower n values, it even decreases with the increase of training samples in same cases. However, MDL steadily outperforms LM-base, the gap only decreases at higher n values. Still, even at n = 64 the baseline performs worse on of 2 of the 3 datasets. 
In contrast, MDL has the largest improvements compared to the baseline at lower n values, which shows the strong advantage of using external datasets, especially for target datasets, such as ToLD-Br, for which acquiring even one sample of a given label is expensive.\nPer label analysis We present per label F 1 scores on a few selected datasets in Table 3. In case of the fine-grained HASOC dataset in subtable (a), the label hate was significantly improved, while the performance on offensive and profane were improved and decreased respectively with a similar margin. All labels were improved on the OLID dataset in subtable (b), even the unseen other label by almost 5 percentage points. Most interestingly, 3 out 4 unseen labels (5 out of 7 overall) were improved on the ToLD-Br dataset in subtable (c). Our conjecture is that the unseen insult label is related to the fearful label of the external MLMA, since texts causing fear often involve insults as well, which leads to this improvement. Additionally, as stated by the authors of ToLD-Br (Leite et al., 2020), the unseen insult and obscene labels were often confused by the annotators, indicating their similarity, thus the latter could have also benefited from the fearful instances of MLMA. Similarly, the LMBTQ+phobia label is to some extent related to sexism external instances, thus MDL can leverage their similarity automatically without the need for manual label modifications. In contrast, xenophobia, which is somewhat related to racism, was not improved. However, as LM-base only achieves 0.79F 1 , we believe that xenophobia is simply too hard to classify, which is the reason for no improvements in MDL. On the binary target datasets in subtables (d) and (e) the results are similar as in case of the finegrained datasets, however all labels were improved on Stormfront, while only misogyny on AMI.\nSpecializing M E We were interested in the performance gain that can be reached by building M E specifically for a set of target datasets. Thus, we removed the external only labels from the external datasets in step 1 (see underlined labels in Table 1),4 and performed step 2 as normal (MDL spec ).\nWe present experiments on datasets where MDL outperformed LM-base in Table 4. We found that although MLD spec improved the results on the majority of the datasets, on average the results are only around 1 percentage points better. On the other hand, specializing the model to a given set of target datasets requires a careful investigation of the labels of both the external and the target datasets in order to decide which label definitions match and which labels can be removed. Furthermore, it prevents us to build a general abusive language aware model (M E ) which can be easily adapted to future target sources. Additionally, removing external-only labels decreased the performance on 4 datasets, indicating that although these labels are not of direct use in the target datasets, they still can be beneficial to achieve a better general abusive language understanding. At a closer inspection, we found that the unseen insult label decreased in MDL spec compared to MDL in both fine-grained GermEval and ToLD-Br datasets (see Table 7). As discussed in the per-label analysis, insult can be similar to the fearful label of MLMA, which was removed from MLM spec . Additionally, we found that in case of both HASOC Hi and De for which MDL spec performed better, the unseen profanity label got worse. 
These findings indicate the difficulty of finding the right instances from the right datasets and that external-only labels could be helpful for some target labels indirectly.\nCross-lingual related datasets In Table 5 we present a more traditional cross-lingual transfer learning setup where we assume that on top of the 4-shots of target language samples (D t ) we have a closely related English dataset (D r ). Other than MDL we present three setups: joint where we train MDL on the union of the previously defined external datasets (D E ) and D r , 3 steps where we train MDL first on D E as before followed by a training step on D r only and finally on D t , and single where we omit the first step (D E ) from 3 steps and only train on D r followed by the 4-shots of D t .\nAs D r we take HASOC fine-grained abusive En for HASOC fine-grained abusive Hi and De, and HatEval binary hate En for HatEval binary hate Es. The results show the importance of using related datasets which are from the same data source (gathered using the same keywords in our case), time period and annotated using the same guidelines, since on HASOC Hi and HatEval Es the three methods leveraging D r outperformed MDL. Interestingly, single outperformed joint and 3 steps on these two datasets, which indicates that if enough training data from the same distribution as the target samples is available then the external datasets can confuse the model. MDL achieved the best results on HASOC De which needs further investigations. In summary, the results show the importance of strongly related datasets from crosslingual transfer learning, however we argue that such datasets rarely exist in real world applications, especially if the temporal aspect is considered." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "Due to the large variety of the abusive content to be filtered, lack of resources is a major problem, larger than for many other NLP tasks. In order to eliminate the need for expensive dataset annotation for novel application scenarios, and thus reduce costs, we proposed a two-step multi-dataset approach (MDL) which exploits datasets we already have to learn general abusive language understanding and requires only a few annotated samples for the target task. Our experiments on various datasets showed that external datasets can improve few-shot classification across tasks, text genres and languages, not only of seen but unseen labels as well. Additionally, our analysis also shows that specialized foundation models, built either by careful external label selection or by involving rarely available cross-lingual related data sources, can further improve the performance. However, such models can be built with the use of abusive language expertise in the former case, or specialized data annotation in the latter case, thus making the final model more expensive. We argue for a general foundation model which can cheaply be adapted to yet unforeseen setups." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b19", "b3" ], "table_ref": [], "text": "As shown in the ablation (Tabel 4) and model specialization (Tabel 5) experiments, the performance of MDL can further be improved by employing domain expertise or target setup specific data annotation. 
In case of the former, the elimination of labels which do not contribute positively to the performance directly nor indirectly, indicates that some labels interfere with each other, and that MDL cannot separate some abusive language phenomenon. Furthermore, the latter case where using only the English related dataset (D r ) in the single setup outperformed the combination of the external datasets (D E ) and the related dataset in joint and 3 steps, indicates a similar label and dataset interference.\nAlthough, we argue that such related datasets are rarely available, especially for future target data sources for which, e.g., the targeted groups, discussed topics, etc., can easily differ from currently available dataset, MLD should be able to exploit both information sources. To mitigate the issues of dataset and label interference, in future work we plan to investigate modular approaches, such as adapters (Pfeiffer et al., 2021) or soft-prompts (Chen et al., 2022), and their dynamic combination for each input samples." }, { "figure_ref": [], "heading": "A Model Parameters", "publication_ref": [ "b4", "b6", "b30", "b7", "b34" ], "table_ref": [ "tab_8" ], "text": "We use xlm-roberta-base as our base LM (Conneau et al., 2020). In our early experiments we tested bert-base-multilingual-cased (Devlin et al., 2019) as well, which resulted in similar conclusions. However, XLM-R benefited slightly more from MDL which suggest that even larger models might be able to exploit general information from external datasets to a higher degree. The used hyperparameters are: batch size 1, gradient accumulation steps 16, warm-up steps 10, learning rate 5 × 10 -5 , dropout 0.1 with early stopping on the validation set. 5 Due to limited GPU memory, we could not test on larger batch sizes. Additionally, we run step 1 for a single epoch only on the external datasets, since we found that longer training made our models biased towards some of the most frequent labels. We used the same parameters for all datasets. We kept PVPs simple and uniform across datasets using English PVPs even for non-English datasets as well. We note however that in our initial experiments we tested machine translated PVPs which did not lead to significantly different results (Zhao and Schütze, 2021). Additionally, if a given token related to a label is split by the tokenizer, e.g., dominance → [domina, #nce], we take the averaged probabilities of the subwords at the [MASK] position as the probability of the related label.\nWe use the full training and validation sets of the external datasets in step 1, while only 4 training samples per label for a given target dataset in step 2 (except for the experiments in Figure 2 and Table 7). Due to high label imbalance of abusive language datasets, we use an overall 16 validation samples following the label distribution in case of the latter. 6 For all datasets we use the official train, validation and test splits if given, otherwise we take 80/20 train/test split of the full dataset and/or an additional 80/20 split of the train set for final training and validation if the latter is not given. For the implementation we used the Huggingface transformers (Wolf et al., 2020) and OpenPrompt (Ding et al., 2022) libraries for prompt-learning. 
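As a minimal sketch of the prompting mechanics (not the released OpenPrompt-based code), the per-label scoring with subword averaging can be written directly with transformers; the verbalizer below is only an illustrative subset of the label words in Table 6, and `label_logits` is a hypothetical helper:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative verbalizer (label -> label word); per-dataset verbalizers are in Table 6.
VERBALIZER = {"hate": "hate", "offensive": "offensive", "profane": "profane", "normal": "normal"}
PATTERN = "{x} It was {mask}"

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

def label_logits(text: str) -> torch.Tensor:
    """Score each label by the MLM probability of its verbalizer word at the [MASK] position."""
    prompt = PATTERN.format(x=text, mask=tokenizer.mask_token)
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits[0]                       # (seq_len, vocab)
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    probs = logits[mask_pos].softmax(-1)                         # distribution over the vocabulary
    scores = []
    for word in VERBALIZER.values():
        # a label word may be split into several subwords; average their
        # probabilities at the [MASK] position, as described above
        sub_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
        scores.append(probs[sub_ids].mean())
    return torch.stack(scores)                                   # one score per label
```

During training, a cross-entropy loss over such per-label scores is computed with the PVP of whichever dataset the current batch was drawn from, updating all parameters of the shared LM.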
To evaluate our models we used F 1 scores averaged over 5 different seeds in order to reduce the high variance issue of few-shot classification (Zheng et al., 2022).7 " }, { "figure_ref": [], "heading": "B Additional Details", "publication_ref": [], "table_ref": [], "text": "We present details of the used datasets in Table 6, such as source platform, number of samples (we only used 4 samples per label for training in the main experiments and an overall 16 samples following the original label distribution for validation in case of the target datasets) and used PVPs. Additionally, we show complete results of all setups in Table 6: Dataset statistics for each (dataset, label configuration, language) triple. From left to right we indicate the source platform of the dataset, the number of total train, validation and test samples, used verbalizers (<predicted word> → <label>) which also indicates the labels of a given dataset, and patterns (where X is the input sentence). We kept our PVPs simple, i.e., most labels are mapped 1-to-1 to the same word, and we defined only two patterns. Note that we also used English PVPs for non-English datasets, since it was shown to perform well (Zhao and Schütze, 2021). Since different datasets often name the negative abuse class differently (e.g. no-hate, not-offensive, normal, etc.), we unified them by using the frequent normal label name. Additionally, similarly defined but differently named labels, such as hate and hateful or sexism and misogyny, are united by using the same verbalizers for them." } ]
Due to the broad range of social media platforms and their user groups, the requirements of abusive language detection systems are varied and ever-changing. A large set of annotated corpora with different properties and label sets has already been created, e.g., for hate or misogyny detection, but the form and targets of abusive speech keep evolving. Since the annotation of new corpora is expensive, in this work we leverage the datasets we already have, covering a wide range of tasks related to abusive language detection, in order to build models cheaply for a new target label set and/or language, using only a few training examples of the target domain. We propose a two-step approach: we first train our model in a multitask fashion, and then carry out few-shot adaptation to the target requirements. Our experiments show that leveraging already existing datasets and only a few shots of the target task improves model performance not only monolingually but across languages as well. Our analysis also shows that our models acquire a general understanding of abusive language, since they improve the prediction of labels which are present only in the target dataset. We also analyze the trade-off between specializing the already existing datasets to a given target setup for best performance and its negative effect on model adaptability.
How to Solve Few-Shot Abusive Content Detection Using the Data We Actually Have
[ { "figure_caption": "Table 6 of the Appendix.AMI: was created for the Evalita 2018 shared task on Automatic Misogyny Identification(Fersini et al., 2018), containing English and Italian tweets. We use both the binary and fine-grained misogyny labels as well as the target identification labels. GermEval: was introduced for the shared task on the Identification of Offensive Language in German tweets(Struß et al., 2019). We used both binary and fine-grained label sets. HASOC: The shared task on Hate Speech and Offensive Content Identification(Mandl et al., 2019) introduces datasets for English, German and Hindi containing Twitter and Facebook posts. We used its fine-grained abuse and target identification labels. HatEval: was built for SemEval 2019 Task 5 about the detection of hate speech against immigrants and women in Spanish and English Twitter messages(Basile et al., 2019). We used its binary hate speech and target identification label sets.", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The goal of the setup is to include a wide range of datasets related to abusive language detection, such as hate speech, offense, abuse, sexism, racism detection as well as target identification. Additionally, we included datasets from the same task category but with different label sets, e.g., HASOC fine-grained abusive (hate, offensive, profane) and SRW fine-grained abusive (sexism, racism, normal). We only included English datasets in the external data set, while we used both English and non-English corpora (De, Hi, It, Pt-Br) as the target datasets to test cross-lingual transfer as well. Furthermore, we test on Stormfront which contains forum posts instead of Twitter and Facebook messages as the datasets in D E do. To avoid data leakage between the external train and the target test sets, i.e., to filter samples which have the same input samples but with different labels or inputs from different languages with the same labeling methodology, we remove all datasets from D E which are from the same authors as the test set, e.g., we omit all AMI external datasets when training M E in step 1 in case we test on AMI binary misogyny It. 2", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "±5.94 33.59±5.68 32.55±11.71 21.48±2.22 8.25±3.54 36.94±3.80 27.59 MLM 35.98 ±7.70 35.81 ±3.79 28.70±9.94 21.97 ±3.89 8.98 ±2.76 40.88 ±5.38 28.72 MTL 13.22 ±0.00 16.38 ±0.00 24.10 ±0.00 19.84 ±0.00 10.69 ±0.00 14.11 ±0.00 16.39 MDL 40.48 ±5.37 34.36 ±2.59 33.96 ±7.83 27.70 ±3.78 12.83 ±1.78 49.55 ±2.92 33.14 ±6.36 57.31 ±4.98 45.60 ±6.78 52.70 ±7.18 50.89 ±5.33 57.31 ±4.25 60.33 ±10.45 53.85 MLM 54.88 ±4.89 58.91 ±2.53 46.37 ±9.00 53.63 ±7.09 52.81 ±1.16 48.68 ±8.20 64.24 ±5.82 54.22 MTL 54.22 ±0.00 54.40 ±0.00 48.85 ±0.00 56.00 ±0.00 47.05 ±0.00 49.73 ±0.00 45.32 ±0.00 50.80 MDL 60.41 ±5.75 60.20 ±5.52 54.47 ±0.79 64.81 ±8.55 65.02 ±4.17 47.77 ±1.36 66.98 ±4.03 59.95", "figure_data": "fine-grainedabusiveoffensivetoxicitytargetHASOC (1/3)GermEval ToLD-BrOLIDEnHiDeDe (1/4)Pt-Br (4/7) En (1/3)avg.LM-base 32.76 binaryhateoffensivemisogynyStormfrontHatEvalOLIDGermEvalAMIEnEnEsEnDeEnItavg.LM-base 52.82", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Per label 4-shot F 1 scores (%). 
Unseen labels are bolded.", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Macro averaged 4-shot F 1 scores of our M E specialization study which have labels removed from the external datasets that are not needed in the target datasets (MDL spec ).", "figure_data": "MDL MDL specHASOC abusive En 40.4848.73fine-grainedHASOC abusive Hi 34.36 HASOC abusive De 33.96 GermEval offensive De 27.70 ToLD-Br toxicitx Pt-Br 12.8337.92 35.47 26.56 10.58OLID target EN 49.5553.45Stormfront hate En 60.4160.88binaryHatEval hate En 60.20 HatEval hate Es 54.47 OLID offensive En 64.8162.48 54.31 66.23GermEval offensive De 65.0259.09AMI misogyny It 66.9868.04avg. 47.5648.65MDLjoint 3 steps singleHASOC abusive Hi 34.36 43.9443.95 47.28HASOC abusive De 35.47 24.3335.07 32.54HatEval hate Es 54.47 55.9760.36 61.07", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Macro averaged 4-shot F 1 scores of crosslingual related dataset setups. joint: adding D r to the D E set, 3 steps: training on D r separately after D E and single: training only on D r without D E . The best result for each test dataset is", "figure_data": "", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "containing each target dataset in a separate subtable stretching over multiple pages).", "figure_data": "source#train #valid #test verbalizerPatternstereotypical → stereotypedominance → dominanceAMI fine-grained misogyny En Twitter1,428357460derailing → derailingX → X It was [MASK]harassment → sexual_harassmentdiscrediting → discreditAMI binary misogyny En Twitter3,200800 1,000sexist → misogyny neutral → normalX → X It was [MASK]AMI binary misogyny It Twitter3,200800 1,000sexist → misogyny neutral → normalX → X It was [MASK]profane → profanityGermEval fine-grained offensive De Twitter4,007 1,002 3,532insulting → insult abusive → abusiveX → X It was [MASK]neutral → normalGermEval binary offensive De Twitter4,007 1,002 3,532offensive → offensive neutral → normalX → X It was [MASK]hate → hateHASOC fine-grained abusive En Twitter, Facebook1,808453288offensive → offensiveX → X It was [MASK]profane → profanityhate → hateHASOC fine-grained abusive De Twitter, Facebook32582136offensive → offensiveX → X It was [MASK]profane → profanityhate → hateHASOC fine-grained abusive Hi Twitter, Facebook1,975494605offensive → offensiveX → X It was [MASK]profane → profanityHatEval binary hate En Twitter3,055764850hate → hateful neutral → normalX → X It was [MASK]abusive → abusiveLSA fine-grained abusive En Twitter29,728 7,433 9,291hate → hateful spam → spamX → X It was [MASK]neutral → normalabusive → abusivehate → hatefulMLMA fine-grained hostility En Twitter5,549 1,388 1,735offensive → offensive disrespectful → disrespectfulX → X It was [MASK]fearful → fearfulneutral → normalOLID binary offensive En Twitter10,592 2,648860offensive → offensive neutral → normalX → X It was [MASK]sexist → sexismSRW fine-grained abusive En Twitter6,504 1,626 2,033racist → racismX → X It was [MASK]neutral → normalStormfront binary hate En Stormfront forum6,849 1,713 2,141hate → hate neutral → normalX → X It was [MASK]homophobic → LGBTQ+phobiaobscene → obsceneinsulting → insultToLD-Br fine-grained toxicity Pt-Br Twitter12,833 3,209 4,011racist → racismX → X It was [MASK]sexist → misogynyxenophobic → xenophobianeutral → normalHASOC binary target En Twitter, Facebook4,681 1,171 1,153targeted → targeted general → untargetedX → X It was [MASK]AMI binary 
target En Twitter1,428357460individual → active group → passiveX → X It was targeted at [MASK]HatEval binary target En Twitter3,732933 1,318individual → individual group → groupX → X It was targeted at [MASK]individual → individualOLID fine-grained target En Twitter3,100776213group → groupother → other", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" } ]
Viktor Hangya; Alexander Fraser
[ { "authors": "Cristina Valerio Basile; Elisabetta Bosco; Debora Fersini; Viviana Nozza; Francisco Patti; Manuel Rangel; Paolo Pardo; Manuela Rosso; Sanguinetti", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter", "year": "2019" }, { "authors": "Irina Bigoulaeva; Viktor Hangya; Iryna Gurevych; Alexander Fraser", "journal": "Language Resources and Evaluation", "ref_id": "b1", "title": "Label modification and bootstrapping for zero-shot cross-lingual hate speech detection", "year": "2023" }, { "authors": "Rich Caruana", "journal": "Machine learning", "ref_id": "b2", "title": "Multitask learning", "year": "1997" }, { "authors": "Hailin Chen; Amrita Saha; Shafiq Joty; Steven C H Hoi", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Learning label modular prompts for text classification in the wild", "year": "2022" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Ona De Gibert; Naiara Perez; Aitor García-Pablos; Montse Cuadros", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Hate speech dataset from a white supremacy forum", "year": "2018" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Ning Ding; Shengding Hu; Weilin Zhao; Yulin Chen; Zhiyuan Liu; Haitao Zheng; Maosong Sun", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "OpenPrompt: An open-source framework for promptlearning", "year": "2022" }, { "authors": "Elisabetta Fersini; Debora Nozza; Paolo Rosso", "journal": "", "ref_id": "b8", "title": "Overview of the Evalita 2018 Task on Automatic Misogyny Identification (AMI)", "year": "2018" }, { "authors": "Maria Antigoni; Constantinos Founta; Despoina Djouvas; Ilias Chatzakou; Jeremy Leontia Dis; Gianluca Blackburn; Athena Stringhini; Mic Vakali; Nicolas Hael Sirivianos; Kourtellis", "journal": "", "ref_id": "b9", "title": "Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior", "year": "2018" }, { "authors": "Goran Glavaš; Mladen Karan; Ivan Vulić", "journal": "", "ref_id": "b10", "title": "XHate-999: Analyzing and Detecting Abusive Language Across Domains and Languages", "year": "2020" }, { "authors": "Suchin Gururangan; Ana Marasović; Swabha Swayamdipta; Kyle Lo; Iz Beltagy; Doug Downey; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "year": "2020" }, { "authors": "Timothy Hospedales; Antreas Antoniou; Paul Micaelli; Amos Storkey", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b12", "title": "Meta-learning in neural networks: A survey", "year": "2021" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "", "ref_id": "b13", "title": 
"Parameter-Efficient Transfer Learning for NLP", "year": "2019" }, { "authors": "Augusto João; Diego Leite; Kalina Silva; Carolina Bontcheva; Scarton", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Toxic language detection in social media for Brazilian Portuguese: New dataset and multilingual analysis", "year": "2020" }, { "authors": "Thomas Mandl; Sandip Modha; Prasenjit Majumder; Daksh Patel; M Ohana Dave; Chintak Mandlia; Aditya Patel", "journal": "", "ref_id": "b15", "title": "Overview of the HASOC track at FIRE 2019: Hate speech and offensive content identification in Indo-European languages", "year": "2019" }, { "authors": "Tahir Mehmood; Alfonso E Gerevini; Alberto Lavelli; Ivan Serina", "journal": "Procedia Computer Science", "ref_id": "b16", "title": "Combining multi-task learning with transfer learning for biomedical named entity recognition", "year": "2020" }, { "authors": "Debora Nozza", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Exposing the limits of zero-shot cross-lingual hate speech detection", "year": "2021" }, { "authors": "Nedjma Ousidhoum; Zizheng Lin; Hongming Zhang; Yangqiu Song; Dit-Yan Yeung", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Multilingual and multi-aspect hate speech analysis", "year": "2019" }, { "authors": "Jonas Pfeiffer; Aishwarya Kamath; Andreas Rücklé; Kyunghyun Cho; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "AdapterFusion: Non-destructive task composition for transfer learning", "year": "2021" }, { "authors": "Tharindu Ranasinghe; Marcos Zampieri", "journal": "", "ref_id": "b20", "title": "Multilingual offensive language identification with crosslingual embeddings", "year": "2020" }, { "authors": "Timo Schick; Hinrich Schütze", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Exploiting cloze-questions for few-shot text classification and natural language inference", "year": "2021" }, { "authors": "Asa ; Cooper Stickland; Iain Murray", "journal": "", "ref_id": "b22", "title": "BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Lea rning", "year": "2019" }, { "authors": "Julia Maria Struß; Melanie Siegel; Josef Ruppenhofer; Michael Wiegand; Manfred Klenner", "journal": "", "ref_id": "b23", "title": "Overview of Germeval Task 2, 2019 Shared Task on the Identification of Offensive Language", "year": "2019" }, { "authors": "Shardul Suryawanshi; Raja Bharathi; Mihael Chakravarthi; Paul Arcan; Buitelaar", "journal": "European Language Resources Association (ELRA", "ref_id": "b24", "title": "Multimodal meme dataset (MultiOFF) for identifying offensive content in image and text", "year": "2020" }, { "authors": "Bertie Vidgen; Leon Derczynski", "journal": "PLoS ONE", "ref_id": "b25", "title": "Directions in abusive language training data, a systematic review: Garbage in, garba ge out", "year": "2020" }, { "authors": "Haoxiang Wang; Han Zhao; Bo Li", "journal": "", "ref_id": "b26", "title": "Bridging multi-task learning and meta-learning: Towards efficient training and effective adaptation", "year": "2021" }, { "authors": "Zeerak Waseem; Dirk Hovy", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Hateful symbols or hateful people? 
predictive features for hate speech detection on Twitter", "year": "2016" }, { "authors": "Taiki Watanabe; Tomoya Ichikawa; Akihiro Tamura; Tomoya Iwakura; Chunpeng Ma; Tsuneo Kato", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Auxiliary learning for named entity recognition with multiple auxiliary biomedical training data", "year": "2022" }, { "authors": "Michael Wiegand; Anastasija Amann; Tatiana Anikina; Aikaterini Azoidou; Anastasia Borisenkov; Kirstin Kolmorgen; Insa Kröger; Chris Tine Schäfer", "journal": "", "ref_id": "b29", "title": "Saarland University's Participation in the Ger-mEval Task 2018 (UdSW) -Examining Different Types of Classifiers and Features", "year": "2018" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Remi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Marcos Zampieri; Shervin Malmasi; Preslav Nakov; Sara Rosenthal; Noura Farra; Ritesh Kumar", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Predicting the type and target of offensive posts in social media", "year": "2019" }, { "authors": "Mengjie Zhao; Hinrich Schütze", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Discrete and soft prompting for multilingual models", "year": "2021" }, { "authors": "Mengjie Zhao; Yi Zhu; Ehsan Shareghi; Ivan Vulić; Roi Reichart; Anna Korhonen; Hinrich Schütze", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "A closer look at few-shot crosslingual transfer: The choice of shots matters", "year": "2021" }, { "authors": "Yanan Zheng; Jing Zhou; Yujie Qian; Ming Ding; Chonghua Liao; Li Jian; Ruslan Salakhutdinov; Jie Tang; Sebastian Ruder; Zhilin Yang", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "FewNLU: Benchmarking state-of-the-art methods for few-shot natural language understanding", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b35", "title": "mAMI binary misogyny It Table 7: Per label and macro averaged F 1 scores for each target dataset", "year": "" } ]
[]
2024-01-09
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b2", "b3", "b4", "b5", "b3", "b6", "b7", "b8", "b2", "b9", "b10", "b12" ], "table_ref": [], "text": "Semantic segmentation of 3D scenes holds significant research value due to its broad range of applications such as robot navigation [1], object localization [2], autonomous driving [3], 3D scene editing [4], augmented/virtual reality, etc. Given the super-rich semantics in 3D scenes, a crucial aspect of this task is achieving open-vocabulary segmentation that can handle regions and objects of various semantics including those with long-tail distributions. This is a grand challenge as it necessitates a comprehensive understanding of natural language and the corresponding objects in the 3D world.\nThe main challenge in open-vocabulary 3D scene segmentation is the lack of large-scale and diverse 3D segmentation datasets. Existing 3D segmentation datasets like ScanNet [5] primarily focus on restricted scenes with limited object classes, making them unsuitable for training open-vocabulary models. An alternative is to distill knowledge from pre-trained 2D open-vocabulary segmentation models to 3D representations as learned with NeRF [6] or point clouds, by fitting the feature maps or segmentation probability outputs from the 2D models [4,7]. Though this approach circumvents the need for the 3D datasets, it inherits the limitations of the 2D models which are usually finetuned with close-vocabulary datasets of limited text labels [8,9], thereby compromising the open-vocabulary property, especially for text labels with long-tail distributions [2,3]. We achieve precise and annotation-free 3D open-vocabulary segmentation by distilling knowledge from two pre-trained foundation models into NeRF in a weakly supervised manner, supervised only by the open-vocabulary text descriptions of the objects in a scene, as illustrated in Fig. 1. One foundation model is CLIP [10] which is trained with Internet-scale text-image pairs [11] capturing extensive open-vocabulary multimodal knowledge. The other is DINO [12,13] which is trained with largescale unlabelled images capturing superb scene layout and object boundary information. However, CLIP yields image-level features which are not suitable for pixel-level semantic segmentation. Thus certain mechanisms should be designed to extract pixel-level CLIP features without fine-tuning. Additionally, the image patches' CLIP features may have ambiguities for segmentation, which need to be regularized for accurate open-vocabulary segmentation. At the other end, DINO produces feature maps instead of explicit segmentation maps. Certain distillation techniques should be designed to extract the necessary information from DINO features to facilitate precise segmentation.\nWe construct a hierarchical set of image patches to extract pixel-level features from image-level CLIP features and design a 3D Selection Volume to identify the appropriate hierarchical level for each 3D point, effectively aligning CLIP features with pixel-level features without fine-tuning. In addition, we introduce a Relevancy-Distribution Alignment (RDA) loss to address CLIP feature ambiguities, aligning segmentation probability distribution with class relevancies that capture similarities between class text features and corresponding CLIP features. Moreover, we propose a novel Feature-Distribution Alignment (FDA) loss to distill object boundary information from DINO features. 
The FDA loss encourages close segmentation probability distributions for points with similar DINO features and distant distributions for dissimilar features. To address the training instability due to diverse distribution shapes, we further re-balance weights associated with similar and dissimilar DINO features.\nOur method enables weakly supervised open-vocabulary segmentation of 3D scenes with accurate object boundaries. By distilling knowledge from CLIP without fine-tuning, our approach preserves its open-vocabulary knowledge and effectively handles text labels with long-tail distributions. A notable aspect of our approach is that it does not require any manual segmentation annotations for either the foundation models or the distillation process. Remarkably, our experiments demonstrate that our method surpasses fully supervised models trained with segmentation annotations in certain scenes, highlighting the possibility that 3D open-vocabulary segmentation can be effectively learned from large amounts of 2D images and text-image pairs. In summary, the contributions of this work are three-fold. Firstly, we propose an innovative pipeline for weakly supervised 3D open-vocabulary segmentation by distilling knowledge from pre-trained foundation models into NeRF without requiring any annotations in training. Secondly, we introduce a Selection Volume to align image-level CLIP features with pixel-level features, supplemented by novel Relevancy-Distribution Alignment and Feature-Distribution Alignment losses that respectively resolve CLIP features' ambiguities and effectively distill DINO features for 3D scene segmentation. Lastly, extensive experiments demonstrate that our method effectively recognizes long-tail classes and produces accurate segmentation maps, even with limited input data." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b7", "b13", "b15", "b16", "b17", "b18", "b9", "b19", "b21", "b10", "b2", "b23", "b4", "b24", "b26", "b29", "b30", "b31", "b32", "b33", "b34", "b35", "b2", "b6", "b38", "b39", "b4", "b26", "b5", "b3", "b42", "b45", "b49", "b3", "b7", "b9", "b12", "b19", "b53", "b54", "b55", "b56", "b56", "b59", "b62", "b63", "b9", "b12", "b19", "b21", "b9", "b12", "b8", "b9", "b64", "b12", "b66", "b68" ], "table_ref": [], "text": "Open-vocabulary Segmentation. In recent years, the field of 2D open-vocabulary segmentation has garnered significant attention, driven by the availability of extensive text-image datasets and vast computational resources. Predominant approaches [8,[14][15][16][17][18][19] typically distill knowledge from large-scale pre-trained models, such as image-text contrastive learning models [10,[20][21][22] and diffusion models [23]. However, the distillation process requires fine-tuning on close-vocabulary datasets, contrasting with massive datasets used for large-scale pre-trained models [11]. This leads to limited performance in recalling infrequent classes with long-tail distributions [2,3], compromising the open-vocabulary property. OpenSeg [24] is not finetuned on a closed set of classes but is weakly supervised via image captions. However, OpenSeg has a smaller vocabulary and knowledge than CLIP as it is trained on a much smaller dataset. Our method, without fine-tuning CLIP, effectively handles such classes. 3D Scenes Segmentation. 3D scene segmentation has been a long-standing challenge in computer vision. 
Traditional approaches focus on point clouds or voxels with limited class variety in datasets, restricting generalizability to unseen classes [5,[25][26][27][28][29][30][31][32][33][34][35][36]. Recently, numerous point-cloud-based techniques have emerged to explore open-vocabulary 3D scene segmentation by encoding 2D openvocabulary models' features into 3D scene points [3,7,[37][38][39][40]. However, these methods are also mostly evaluated on datasets with restricted scenes and limited class ranges [5,27,28,41], not fully exhibiting the open-vocabulary property. Moreover, point clouds have compromised geometric details, making them less suitable for precise segmentation compared to NeRF representations [6,42]. Consequently, there has been a surge in NeRF-based 3D segmentation techniques that mainly address interactive segmentation [4,43,44], panoptic segmentation [45,46], moving part segmentation [47], object part segmentation [48], object co-segmentation [49], unsupervised object segmentation [50,51], etc. FFD [4] attempts to segment unseen text labels during training by fitting LSeg's [8] feature maps to a NeRF, but inherits LSeg's limitations, hindering generalization to long-tail distribution classes. Our method overcomes these challenges by directly using CLIP image features and distilling them into a NeRF representation [42] without fine-tuning on close-vocabulary datasets.\nFoundation Models. Pre-trained foundation models [52,53] have become a powerful paradigm in computer science due to their ability to capture general knowledge and adapt to various downstream tasks [10,12,13,20,23,[54][55][56][57]. These models are trained using various paradigms in natural language processing, such as masked language modeling [57,58], denoising autoencoder [59], replaced token detection [60], and sentence prediction tasks[61], as well as in computer vision, including data generation [23,62,63], data reconstruction [64], and data contrastive learning [10,12,13,[20][21][22]. Foundation models acquire emergent capabilities for exceptional performance on downstream tasks, either in a zero-shot manner or with fine-tuning. In this work, we harness the capabilities of two prominent foundation models, CLIP [10] and DINO [12,13]. CLIP learns associations between images and texts by mapping them to a shared space, facilitating applications in tasks like image classification, object detection, visual question-answering, and image generation [9,10,23,62,65,66]. DINO, trained in a self-supervised manner, extracts scene layout information, particularly object boundaries, and has been successfully employed in tasks such as classification, detection, segmentation, keypoint estimation, depth estimation, and image editing [12,13,49,[67][68][69]." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "We propose a novel method for weakly supervised open-vocabulary segmentation of reconstructed NeRF. Given the multi-view images of a scene and the open-vocabulary text description for each class, we aim to segment the reconstructed NeRF such that every 3D point is assigned a corresponding class label.\nTo achieve this, we exploit the CLIP model's multimodal knowledge by mapping each 3D point to a CLIP feature representing its semantic meaning. As CLIP only generates image-level features, we extract a hierarchy of CLIP features from image patches and learn a 3D Selection Volume for pixel-level feature extraction, as described in Sec. 3.1. 
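A rough sketch of this hierarchical patch-feature extraction (detailed as Alg. 1 in Sec. 3.1) is given below; the patch sizes, the fixed stride, and the `clip_image_encoder` callable (assumed to resize each crop to CLIP's input resolution) are illustrative placeholders rather than the actual implementation, which additionally randomizes the window size:

```python
import torch

def pixel_level_clip_features(image, clip_image_encoder,
                              patch_sizes=(128, 256, 512), stride_ratio=0.5):
    """Illustrative multi-scale, multi-spatial CLIP feature extraction.

    image:              (3, H, W) tensor
    clip_image_encoder: callable mapping a (1, 3, h, w) crop to a (1, D) feature
    returns:            (num_scales, D, H, W) per-pixel features
    """
    _, H, W = image.shape
    per_scale = []
    for p in patch_sizes:                                  # multi-scale: several patch sizes
        feat_sum, count = None, torch.zeros(1, H, W)
        stride = max(1, int(p * stride_ratio))             # sliding window over positions
        for top in range(0, max(H - p, 0) + 1, stride):    # multi-spatial positions
            for left in range(0, max(W - p, 0) + 1, stride):
                crop = image[:, top:top + p, left:left + p].unsqueeze(0)
                f = clip_image_encoder(crop)[0]            # (D,) patch-level CLIP feature
                if feat_sum is None:
                    feat_sum = torch.zeros(f.shape[0], H, W)
                # every pixel covered by the crop receives this patch feature
                feat_sum[:, top:top + p, left:left + p] += f[:, None, None]
                count[:, top:top + p, left:left + p] += 1
        per_scale.append(feat_sum / count.clamp(min=1))    # average over window positions
    return torch.stack(per_scale)                          # (num_scales, D, H, W)
```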
\nF multi_spatial /= count F I [scale_idx] = F multi_spatial end\nHowever, the image patches' CLIP features may have ambiguities for segmentation, showing inaccurate absolute relevancy values which lead to misclassification. Take an image patch capturing an apple lying on a lawn as an example. The corresponding CLIP feature contains both apple and lawn information. If the apple is relatively small, the patch could be classified as lawn because lawn dominates the patch's CLIP features. Then the class apple would be ignored. To address this issue, we introduce a Relevancy-Distribution Alignment (RDA) loss, aligning the segmentation probability distribution with each class's normalized relevancy map, as described in Sec. 3.2. For precise object boundaries, we align the segmentation probability distribution with the images' DINO features. Previous segmentation approaches utilizing DINO features have focused on unsupervised segmentation [49, 68], lacking semantic meaning for segmented parts. In the open-vocabulary context, assigning accurate text labels to segmented regions requires aligning DINO feature clusters with correct text labels. To overcome this challenge, we propose a Feature-Distribution Alignment (FDA) loss, segmenting the scene based on DINO features' distribution and assigning appropriate text labels, as described in Sec. 3.3." }, { "figure_ref": [], "heading": "Distilling Pixel-level CLIP Features with a 3D Selection Volume", "publication_ref": [ "b3", "b5" ], "table_ref": [], "text": "We propose a method based on multi-scale and multi-spatial strategies to adapt CLIP's image-level features for pixel-level segmentation, motivated by the observation that a pixel's semantic meaning should remain invariant to its surrounding pixels. The multi-scale component extracts features from patches of varying sizes around each pixel, and the multi-spatial component extracts from patches in which each pixel is at different positions. We average the multi-spatial features in each scale, attributing the primary direction of these features to the pixel's semantic meaning. We utilize a sliding-window algorithm for multi-spatial feature extraction. To prevent checkerboard patterns in the features and potential segmentation artifacts, we introduce randomness in the window size. The algorithm's pseudo-code is in Alg. 1.\nAfter extracting pixel-level features from CLIP, we now have the multi-view RGB images and their corresponding multi-scale pixel-level features. Each ray r is assigned its multi-scale feature F (r) ∈ R Ns×D for supervision. Rather than simply averaging each ray's multi-scale features across scales, we introduce a 3D Selection Volume S to determine the most suitable scale indicative of the object size within a patch. Each 3D point x ∈ R 3 yields a selection vector S x ∈ R Ns from S. Following [4,45], we introduce an additional branch to render the CLIP feature. We can then render the RGB value, the CLIP feature, and the selection vector of each ray r using volume rendering [6]: \nĈ(r) = i T i α i C i ∈ R 3 , F (r) = i T i α i F i ∈ R D ,(1)\nS(r) = Softmax i T i α i S i ∈ [0, 1] Ns ,(2)\nwhere C i , F i , S i are the color, feature, and selection vector of each sampled point along the ray,\nT i = Π i-1 j=0 (1 -α i )\nis the accumulated transmittance and α i = 1exp(-δ i σ i ) is the opacity of the point. 
We apply a Softmax function to the selection vector of each ray such that the sum of the probability of each scale is equal to 1.\nFor a set of rays R in each training batch, the supervision loss can then be formulated as the combination of the L2 distance between rendered and ground truth RGB values and the cosine similarities cos⟨, ⟩ between the rendered features and the selected multi-scale CLIP features:\nL supervision = r∈R Ĉ(r) -C(r) 2 2 -cos⟨ F (r), S(r)F (r)⟩ .(3)\nGiven a set of text descriptions {[CLASS] i } C i=1 of C classes and the CLIP text encoder E t , we can get the classes' text features T = E t ([CLASS]) ∈ R C×D . Then we can get the segmentation logits z(r) of the ray r by computing the cosine similarities between the rendered CLIP feature and the classes' text features:\nz(r) = cos⟨T, F (r)⟩ ∈ R C .(4)\nWe can then get the class label of the ray l(r) = argmax(z(r))." }, { "figure_ref": [ "fig_2" ], "heading": "Relevancy-Distribution Alignment for Ambiguity Mitigation", "publication_ref": [], "table_ref": [], "text": "To mitigate the ambiguities of the CLIP features, we propose to align the segmentation probability distribution with the spatially normalized relevancy maps of each class, enabling our method to identify specific image regions described by each class text, as illustrated in Fig. 2. The segmentation probability of each ray P (r) can be derived from the segmentation logits with a Softmax function:\nP (r) = Softmax (z(r)) ∈ [0, 1] C . (5\n)\nThe relevancy of a given class is determined by the similarity between the class's text feature and the selected feature from the hierarchy of image patches' CLIP features. Given an image I, we can get its multi-scale pixel-level CLIP feature F I ∈ R Ns×D×H×W using Alg. 1 and selection vector S I ∈ R Ns×H×W using Eq. (2). And then we can get the image's relevancy map R I ∈ R C×H×W as:\nR I hw = S I hw cos⟨T, F I hw ⟩,(6)\nwhere where h, w denotes the index in the H and W channel. We normalize each class's relevancy independently within an input view to [0, 1] to mitigate the ambiguities of CLIP features, making our method discern image regions described by each class text: where min() and max() are the functions getting the lowest and highest values across the spatial dimensions (i.e. H and W ). We apply a Softmax function to RI to make it a probability vector. Then we can assign each ray r its normalized relevancy with all the classes R(r) ∈ [0, 1] C . We employ the Jensen-Shannon (JS) divergence to measure the discrepancy between the normalized relevancy R(r) and the segmentation probability distribution P (r) of each ray, formulating the Relevancy-Distribution Alignment (RDA) loss:\nRI = (R I -min(R I )) / (max(R I ) -min(R I )) ∈ [0, 1] C×H×W ,(7)\nL RDA = r∈R c∈C P (r) c log P (r) c M P R(r) c + R(r) c log R(r) c M P R(r) c /2,(8)\nwhere M P R(r) = (P (r) + R(r))/2 is the average of the two distributions, and the subscript c denotes the probability of the cth class. By aligning the normalized relevancies and the segmentation probability distributions, our method can effectively identify the specific region corresponding to the text description of each class." 
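A simplified per-view sketch of the RDA loss is given below; the tensor layout and helper names are illustrative assumptions, and in practice the loss is evaluated on randomly sampled rays rather than full images:

```python
import torch
import torch.nn.functional as F

def js_divergence(p, q, eps=1e-8):
    """Jensen-Shannon divergence along the last (class) dimension."""
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * ((a + eps) / (b + eps)).log()).sum(-1)
    return 0.5 * (kl(p, m) + kl(q, m))

def rda_loss(seg_logits, clip_feats, selection, text_feats):
    """Illustrative Relevancy-Distribution Alignment loss for one view.

    seg_logits: (H, W, C) rendered segmentation logits z
    clip_feats: (S, D, H, W) multi-scale pixel-level CLIP features
    selection:  (S, H, W) rendered scale-selection probabilities
    text_feats: (C, D) CLIP text features of the class prompts
    """
    p = seg_logits.softmax(-1)                                   # Eq. (5)
    # relevancy: selection-weighted cosine similarity to each class text (Eq. 6)
    feats = F.normalize(clip_feats, dim=1)
    texts = F.normalize(text_feats, dim=-1)
    rel = torch.einsum("cd,sdhw->schw", texts, feats)            # (S, C, H, W)
    rel = (selection.unsqueeze(1) * rel).sum(0)                  # (C, H, W)
    # normalize each class map to [0, 1] over the spatial dimensions (Eq. 7)
    lo = rel.amin(dim=(1, 2), keepdim=True)
    hi = rel.amax(dim=(1, 2), keepdim=True)
    rel = (rel - lo) / (hi - lo + 1e-8)
    r = rel.permute(1, 2, 0).softmax(-1)                         # (H, W, C) target distribution
    return js_divergence(p, r).mean()                            # Eq. (8)
```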
}, { "figure_ref": [ "fig_3" ], "heading": "Feature-Distribution Alignment for Precise Object Boundary Segmentation", "publication_ref": [ "b12" ], "table_ref": [], "text": "To ensure the segmentation exhibits precise object boundaries, we align the segmentation probability distribution with the images' DINO features, which have been shown to capture superb scene layouts and object boundary information [12,13]. Following [49, 68], we extract the scene layout information with a DINO feature correlation tensor. Given a patch of size H p × W p , we can get the correlation tensor Corr_F ∈ R HpWp×HpWp as:\nCorr_F hwij = cos⟨f hw , f ij ⟩,(9)\nwhose entries represent the cosine similarity between the DINO features f at spatial positions (h, w) and (i, j) of the patch. In order to construct the correlation tensor for the segmentation probability distribution, we propose utilizing the JS divergence to assess the similarity between segmentation probabilities at two distinct spatial positions. The choice of JS divergence offers several advantages, including its symmetric nature and a bounded range of [0, 1], which contribute to improved numerical stability. However, since we only care about the class label of each point, i.e. the entry with the highest probability, we use a low temperature τ < 1 to get a sharper version of the segmentation probability distribution Ṕ to let the model focus on the entry with the largest probability:\nṔ = Softmax (z/τ ) ∈ [0, 1] C . (10\n)\nThe distribution correlation tensor Corr_D ∈ R HpWp×HpWp can thus be computed with:\nCorr_D hwij = c∈C Ṕhwc log Ṕhwc M Ṕ Ṕ c + Ṕijc log Ṕijc M Ṕ Ṕ c /2,(11)\nwhere Ṕhwc , Ṕijc are the segmentation probabilities of the cth class at spatial locations (h, w) and (i, j) of the patch, M Ṕ Ṕ = ( Ṕhw + Ṕij )/2 is the average of the two distributions. Thus the correlation loss [68] can be expressed as: where b is a hyper-parameter denoting that we consider the segmentation probabilities of two spatial locations (h, w) and (i, j) to be similar if their DINO features' similarity is larger than b and distant if less than b. Nonetheless, the correlation loss L corr introduces significant instability due to the diverse shapes of distributions with large divergence from a target distribution, making the loss assign wrong labels to the segmented parts. Conversely, when a distribution displays a low JS divergence with the target distribution, it consistently demonstrates a similar shape to the target distribution, as shown in Fig. 3. Based on this observation, we propose re-balancing the weights associated with similar and dissimilar DINO features. Specifically, we allocate a much greater weight to the correlation loss arising from similar DINO features and a smaller weight to that of dissimilar DINO features, thereby mitigating the instability caused by the correlation loss. 
Thus the Feature-Distribution Alignment (FDA) loss can be formulated with:\nL corr = hwij (Corr_F hwij -b) × Corr_D hwij ,(12)\npos_F = clamp(Corr_F -b, min = 0), neg_F = clamp(Corr_F -b, max = 0),(13)\nL F DA = λ pos hwij (pos_F hwij × Corr_D hwij )/count_nonzero(pos_F )+ λ neg hwij (neg_F hwij × Corr_D hwij )/count_nonzero(neg_F ),(14)\nwhere clamp(, min/max = 0) is to clamp all the elements smaller/greater than 0 to 0, thus making pos_F ≥ 0 and neg_F ≤ 0, count_nonzero() is to count the number of non zero elements, and λ pos , λ neg are the weights associated with similar and dissimilar DINO features, which are set as λ pos = 200 ≫ λ neg = 0.2 by default.\nThe total training loss is: L = L supervision + L RDA + L F DA , where L supervision , L RDA are calculated with randomly sampled rays and L F DA is calculated with randomly sampled patches." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b4", "b70", "b72", "b3", "b15", "b18" ], "table_ref": [], "text": "We evaluate our method on 3D open-vocabulary segmentation, showing that our method can recognize long-tail classes and produce highly accurate object boundaries even with limited input data. We employ TensoRF [42] as the backbone and extract 3 scales of pixel-level CLIP features. More implementation details and experiments are in the appendix.\nDataset. Existing 3D segmentation datasets predominantly focus on either restricted scenes with a narrow range of object classes [5,70], or individual objects [71][72][73], thereby limiting their capacity to fully assess the task of 3D open-vocabulary segmentation. Thus following [2], we create a dataset comprising 10 distinct scenes. Each scene features a set of long-tail objects situated in various poses and backgrounds. Ground truth masks for the test views are manually annotated, enabling both qualitative and quantitative evaluation of our segmentation methods. We also evaluate our method on more diverse datasets which include human body, human head, indoor scenes with low-quality images [70, 74], and a complex scene from LERF datasets [2], as shown in the appendix.\nBaselines. We benchmark our method with three NeRF-based methods capable of 3D openvocabulary segmentation: FFD [4 [16], and the CLIP-based OV-Seg [19]. LERF closely aligns with our proposed approach due to its use of knowledge distillation from CLIP and DINO. However, its primary focus is on object localization rather than segmentation. We use the same scale level number and patch sizes in LERF for fair comparisons. We also include results obtained by independently segmenting each test view using the aforementioned 2D models. Note that under our settings, FFD, Sem are fully supervised methods using segmentation annotations." }, { "figure_ref": [ "fig_4", "fig_6" ], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "We present the qualitative results in Fig. 4 and quantitative results in Tab. 1. Our proposed method outperforms all other techniques, including those which heavily rely on extensive segmentation annotations, such as LSeg, ODISE, OV-Seg. In particular, ODISE and FFD underperform in our evaluation, as they are unable to identify many long-tail classes, suggesting that the fine-tuning process of LSeg and ODISE may have led to a significant loss of the open-vocabulary knowledge originally encapsulated by CLIP and Stable Diffusion [23]. 
OV-Seg attempts to retain CLIP's knowledge by leveraging a mined dataset, however, it requires a mask proposal model which produces inconsistent segmentation across views, making Sem(OV-Seg) produce noisy and imprecise segmentation. LERF also fails to capture precise object boundaries due to its usage of a relatively naïve regularization loss, which fails to fully exploit the object boundary information within the DINO features. In contrast, our method exhibits robust performance, successfully recognizing long-tail classes and generating accurate and well-defined boundaries for each class. However, LERF allows querying any object without the need for running optimization again, which is an advantage over our method. Ablations. We conduct ablation studies to evaluate the individual contributions of the RDA loss and the FDA loss to the overall performance of our proposed method. As shown in Tab. 2, both RDA loss and FDA loss are crucial to our method, without each of which can result in severe performance degradation. As illustrated in Fig. 5, without the RDA loss, the model does not resolve the ambiguities of the CLIP features, leading to misclassifications. For instance, it fails to distinguish between an orange cat and a Portuguese egg tart, and confuses a mini offroad car with wood. Without the FDA loss, although our method can correctly locate each class, it fails to segment precise object boundaries. When discarding the re-balancing in the FDA loss, i.e. using the correlation loss [68], the model produces accurate boundaries but assigns each cluster the wrong label due to the instability brought by diverse distribution shapes." }, { "figure_ref": [ "fig_6" ], "heading": "Studies", "publication_ref": [], "table_ref": [], "text": "Limited input. Given the substantial computational and storage demands of extracting hierarchical CLIP features for each view (exceeding 1GB for 3 scales in the full model), we explore whether reducing input CLIP features would yield similar results, as shown in Tab. 2 and Fig. 5. We test two modifications: reducing views for feature extraction and using a single scale of CLIP features rather than three. Halving the input views for feature extraction leads to negligible performance degradation (< 1%) and minor visual differences compared to the full model. When reducing to only 10% of input views, equivalent to 2-3 views in our dataset, we observe a modest 9% drop in the mIoU score and a 1% decrease in the Accuracy score, while retaining accurate segmentation across most classes. Using a single scale of CLIP features also only incurs minimal degradation (< 1%). Even under extreme conditions, i.e., extracting a single scale of features from 10% of input views (total only 1 /30 input of the full model), performance degradation is limited to 10%. This efficient approach even outperforms LERF [2] which utilizes all input views and scales, highlighting our method's robustness." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The limitations of our method are twofold. First, unlike LERF [2], our method requires text labels before training. To perform segmentation with new text labels, our method needs to be retrained.\nInferring accurate boundaries with open vocabularies is challenging for implicit representations like NeRF, as NeRF learns a continuous representation rather than a discrete one. 
It is promising to learn object-level discrete representations using NeRF in future work.\nSecond, since our method has never seen any segmentation maps during training (it is only weakly supervised by the text labels), it fails to segment complex scenes like the indoor datasets [70, 74] with high precision, as shown in the appendix. Our method distills pixel-level CLIP features in a patch-based fashion with a strong inductive bias for compact objects with an aspect ratio close to 1. For objects with large complex shapes and unobvious textures, our method would fail to recognize them." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this study, we address the challenge of 3D open-vocabulary segmentation by distilling knowledge from the pre-trained foundation models CLIP and DINO into reconstructed NeRF in a weakly supervised manner. We distill the open-vocabulary multimodal knowledge from CLIP with a Selection Volume and a novel Relevancy-Distribution Alignment loss to mitigate the ambiguities of CLIP features. In addition, we introduce a novel Feature-Distribution Alignment loss to extract accurate object boundaries by leveraging the scene layout information within DINO features. Our method successfully recognizes long-tail classes and produces precise segmentation maps, even when supplied with limited input data, suggesting the possibility of learning 3D segmentation from 2D images and text-image pairs." }, { "figure_ref": [ "fig_7" ], "heading": "A.4 Dataset", "publication_ref": [ "b74", "b75" ], "table_ref": [], "text": "We capture 10 scenes using smartphones and use Colmap [75] to extract camera parameters for each image. We capture 20 ∼ 30 images for each scene and the resolution of each image is 4032 × 3024.\nWe follow the data structure of LLFF [76]. We manually annotate the segmentation maps of 5 views for each scene as the ground truth for evaluation.\nWe list the text labels used in our experiments in Tab. 3."
}, { "figure_ref": [], "heading": "B More Ablations", "publication_ref": [], "table_ref": [], "text": "We perform two more ablation studies on the Selection Volume and the FDA loss, as shown in Tab. 4 and Fig. 7. Without the Selection Volume, we simply average the multi-scale CLIP features rather than learning the appropriate scale. We can see that both the mIoU score and the Accuracy score are inferior to the full model. We could discard the dissimilar part neg_F, since dissimilar DINO features often impair the stability of the correlation loss. However, neg_F encourages different segmentation probabilities for different semantic regions and plays a crucial role in precise object boundary extraction." }, { "figure_ref": [ "fig_8", "fig_1" ], "heading": "C More Evaluations", "publication_ref": [ "b3", "b3" ], "table_ref": [], "text": "We additionally perform evaluations on human body, human head, indoor datasets with low-quality images [70, 74], and a complex scene from the LERF datasets [2]. We compare with the concurrent work LERF qualitatively due to the lack of labels or the defective annotations, as pointed out in [4]. We also perform experiments with different text prompts. We use the same scale level number and patch sizes in all comparisons.\nHuman body and head. As shown in Fig. 8, our method segments more precise parts than LERF. Specifically, LERF fails to segment the \"head\" in the human body and the \"black T-shirt\" in the human head. In contrast, our method can recognize and segment these parts correctly because our designed RDA loss addresses the ambiguity of the CLIP features effectively.\nIndoor scenes with low-quality images. Fig. 9 shows experiments on the indoor datasets [70,74], where many images are unrealistically rendered with less photorealistic appearances (as indicated in [4]) and have limited spatial resolution (640 × 480 or 1024 × 768). Due to these data constraints, our method sometimes confuses labels with similar appearances. However, we can see that our method still outperforms LERF by successfully segmenting more labels.\nComplex scenes. Fig. 10 (left) shows the segmentation of one challenging sample from the LERF dataset, where the scene has complex geometry as well as many objects of varying sizes. It can be observed that LERF cannot segment most objects due to the ambiguities of CLIP features, while our method can segment more objects correctly with more precise boundaries. Fig. 10 (right) shows a scene with multiple instances of the same class. Since instances of the same class often share similar appearance, texture, etc., they also have similar DINO features. As a result, FDA will not mistakenly segment them into different classes. The RDA loss will further help by assigning all these instances to the same text label. In the experiment, we observed that our method successfully segments all three apples into the same class with accurate boundaries.\nSegmentation with different text prompts. We conduct experiments to segment scenes with different text prompts. In the experiments, we replaced the original texts with different languages (e.g., Portuguese egg tart -> Pastel de Nata), names (e.g., dressing doll -> Barbie), and actions (e.g., hand soap -> wash hand, black headphone -> listen to music). As Fig. 11 shows, with the rephrased text prompts, our method can still segment the scenes reliably. The experiments are well aligned with the quantitative experiments as shown in Tab. 5. " }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "D More Results", "publication_ref": [], "table_ref": [], "text": "We show more segmentation visualizations of our method in Fig. 12 (bed), Fig. 13 (sofa), Fig. 14 (lawn), Fig. 15 (room), Fig. 16 (bench), Fig. 17 (table), Fig. 18 (office desk), Fig. 19 (blue sofa), Fig. 20 (snacks), and Fig. 21 (covered desk). The quantitative results are listed in Tab. 6." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "We sincerely thank Zuhao Yang, Zeyu Wang, Weijing Tao, and Kunyang Li for collecting the dataset. This project is funded by the Ministry of Education Singapore, under the Tier-1 project scheme with project number RT18/22." }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "This document provides supplementary materials for Weakly Supervised 3D Open-vocabulary Segmentation in implementation details (Appendix A), more ablations (Appendix B), more evaluations (Appendix C), and more results (Appendix D)." }, { "figure_ref": [], "heading": "A Implementation Details", "publication_ref": [], "table_ref": [], "text": "We use TensoRF [42] as our base NeRF architecture for efficiency; the plane size is the same as the default setting of TensoRF. The RGB and CLIP feature branches share the same volume and use the same intermediate features. The selection volume and density volume are two other independent volumes. We directly use the features extracted from the selection volume and density volume as the selection vector and the density value, as they have low dimensions and are view-independent.
We use the original MLP architecture in TensoRF to extract the view-dependent RGB value and use another MLP, which discards the view-direction input, to extract the rendered CLIP feature. The architecture is illustrated in Fig. 6." }, { "figure_ref": [], "heading": "A.2 Hyperparameters", "publication_ref": [], "table_ref": [], "text": "We set τ = 0.2 to get the sharper segmentation probability distribution Ṕ. The offset b is set to 0.7 to measure the similarities of the DINO features, meaning that two DINO features are considered similar if their cosine similarity is larger than 0.7, and different if it is less than 0.7. We use 3 scales of CLIP features, and the patch sizes of the scales are set as s/5, s/7, and s/10, where s is the smaller value of the width and height of the input image I. In the ablation studies, we use s/7 as the patch size of the single-scale CLIP feature input. The weights associated with similar and dissimilar DINO features in L_FDA are set as λ_pos = 200 and λ_neg = 0.2 by default. In certain scenes, we find that setting λ_neg to 0.22 or 0.18 can produce better results. We use the ViT-B/16 CLIP model to extract the image and text features and the version 1 dino_vitb8 model to extract the DINO features because it employs the smallest downsampling factor of 8, which is advantageous for high-precision segmentation." }, { "figure_ref": [], "heading": "A.3 Training", "publication_ref": [], "table_ref": [], "text": "To reconstruct a NeRF from multiview images of a scene, we follow the same training settings as TensoRF. For segmentation training, we train the model for 15k iterations. In the first 5k iterations, we freeze the shared volume and density volume, and train the selection volume and the CLIP feature branch. For the remaining 10k iterations, we further finetune the shared volume and the RGB branch. We use the Adam optimizer with betas = (0.9, 0.99). The learning rates for training the volume and the MLP branch are respectively set to 0.02 and 1e-4. For finetuning the volume and the MLP, the learning rates are set to 5e-3 and 5e-5. We also employ a learning rate decay with a factor of 0.1.\nThe multi-scale pixel-level CLIP features of the training views are pre-computed before training and the DINO features are computed with sampled patches on the fly during training. When computing L_supervision and L_RDA, we randomly sample rays with a batch size of 4096. When computing L_FDA, we randomly sample patches of size 256 × 256 with a batch size of 8. We use a downsampling factor of 8 when sampling rays and a factor of 5 when sampling patches. The model is trained on an NVIDIA A5000 GPU with 24G memory for ∼1h30min for each scene." } ]
Open-vocabulary segmentation of 3D scenes is a fundamental function of human perception and thus a crucial objective in computer vision research. However, this task is heavily impeded by the lack of large-scale and diverse 3D open-vocabulary segmentation datasets for training robust and generalizable models. Distilling knowledge from pre-trained 2D open-vocabulary segmentation models helps, but it compromises the open-vocabulary feature, as the 2D models are mostly finetuned with close-vocabulary datasets. We tackle the challenges in 3D open-vocabulary segmentation by exploiting the pre-trained foundation models CLIP and DINO in a weakly supervised manner. Specifically, given only the open-vocabulary text descriptions of the objects in a scene, we distill the open-vocabulary multimodal knowledge and object reasoning capability of CLIP and DINO into a neural radiance field (NeRF), which effectively lifts 2D features into view-consistent 3D segmentation. A notable aspect of our approach is that it does not require any manual segmentation annotations for either the foundation models or the distillation process. Extensive experiments show that our method even outperforms fully supervised models trained with segmentation annotations in certain scenes, suggesting that 3D open-vocabulary segmentation can be effectively learned from 2D images and text-image pairs.
Weakly Supervised 3D Open-vocabulary Segmentation
[ { "figure_caption": "Figure 1 :1Figure 1: Weakly Supervised 3D Open-vocabulary Segmentation. Given the multi-view images of a 3D scene and the open-vocabulary text descriptions, our method distills open-vocabulary multimodal knowledge from CLIP and object reasoning ability from DINO into the reconstructed NeRF, producing accurate object boundaries for the 3D scene without requiring any segmentation annotations during training.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 :1Extracting pixel-level features of an image from CLIP Input: RGB image I ∈ R 3×H×W , number of scale N s , CLIP image encoder E i Output: multi-scale pixel-level features F I ∈ R Ns×D×H×W Initialize: the patch size of each scale patch_sizes ∈ R Ns , F I = zeros(N s , D, H, W ) /* Loop over all the scales */ for scale_idx, patch_size in enumerate(patch_sizes) do stride = patch_size/4 count = zeros(1, 1, H, W ) /* Record the patch count for each pixel */ F multi_spatial = zeros(1, D, H, W ) /* Multi-spatial feature of current scale */ /* Loop over all the patches */ for x_idx in range((H -patch_size)/stride + 1) do start_x = x_idx × stride for y_idx in range((W -patch_size)/stride + 1) do start_y = y_idx × stride /* Get image patch's coordinates with randomness */ (lef t, upper, right, lower) = (max(start_yrandint(0, stride), 0), max(start_xrandint(0, stride), 0), min(start_y + patch_size + randint(0, stride), W ), min(start_x + patch_size + randint(0, stride), H)) /* Get image patch's CLIP feature */ F patch = E i (I.crop(lef t, upper, right, lower)) F multi_spatial [:, :, upper : lower, lef t : right] += F patch count[:, :, upper : lower, lef t : right] += 1 end end", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: Mitigating CLIP features' ambiguities with normalized relevancy maps. For original relevancy maps r a , r b of classes a and b, we note a higher relevancy for class b in Region 2 than in other image regions. Despite this, the ambiguities of CLIP features lead to Region 2's classification as a due to the higher absolute relevancy of a in Region 2, even as a is located in Region 1. To rectify this, we normalize each class's relevancy maps to a fixed range. These normalized relevancy maps, ra and rb , reduce such ambiguities, facilitating accurate region-class assignments.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Difference between similar and distant distributions. Distributions having large divergence from the target distribution exhibit significantly diverse shapes, increasing the training instability (left). Conversely, distributions displaying low divergence with the target distribution consistently demonstrate a similar shape (right).", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Qualitative comparisons. Visualization of the segmentation results in 3 scenes. Our method successfully recognizes long-tail classes and produces the most accurate segmentation maps.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Studies. 
Visualization of the studies on ablations and limited input.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: More ablations.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Evaluation on human body dataset (left). Evaluation on human head dataset (right).", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "14 (lawn), Fig. 15 (room), Fig. 16 (bench), Fig. 17 (table), Fig. 18 (office desk), Fig. 19 (blue sofa), Fig. 20 (snacks), and Fig. 21 (covered desk). The quantitative results are listed in Tab. 6.", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 12 Figure 17 :1217Figure 12: bed.", "figure_data": "", "figure_id": "fig_10", "figure_label": "1217", "figure_type": "figure" }, { "figure_caption": "Figure 18 :Figure 19 :Figure 21 :181921Figure 18: office desk.", "figure_data": "", "figure_id": "fig_11", "figure_label": "181921", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Segmentation ProbabilityClassTarget Distribution", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Quantitative comparisons. We report the mIoU(↑) scores and the Accuracy(↑) scores of the following methods in 6 scenes and highlight the best , second-best , and third-best scores. Our method outperforms both 2D and 3D methods without any segmentation annotations in training.", "figure_data": "Methodsbed mIoU Accuracy mIoU Accuracy mIoU Accuracy mIoU Accuracy mIoU Accuracy mIoU Accuracy sofa lawn room bench tableLSeg [8]56.087.604.516.517.577.519.246.106.042.707.629.92DODISE [16]52.686.548.335.439.882.552.559.724.139.039.734.5OV-Seg [19]79.840.466.169.681.292.171.449.188.989.280.665.3FFD [4]56.686.903.709.542.982.625.151.406.142.807.930.1Sem(ODISE) [45] 50.386.527.722.224.280.529.561.525.656.418.430.83DSem(OV-Seg) [45] 89.396.766.389.087.695.453.881.994.298.583.894.6LERF [2]73.586.927.043.873.793.546.679.853.279.733.441.0Ours89.596.774.091.688.297.392.898.989.396.388.896.5", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Studies. We report the mean mIoU(↑) scores and the Accuracy(↑) scores in our studies.", "figure_data": "mIoU Accuracyw/o RDA loss 57.279.4w/o FDA loss58.282.7w/o re-balance 44.974.350% views85.795.710% views79.194.6single scale85.295.5single & 10%77.194.6full model86.295.8", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Dataset. We list the collected 10 scenes and the corresponding text labels. 
The background labels are in Italic font.", "figure_data": "SceneText Labelsbedred bag, black leather shoe, banana, hand, camera, white sheetsofaa stack of UNO cards, a red Nintendo Switch joy-con controller, Pikachu, Gundam, Xbox wireless controller, grey sofalawnred apple, New York Yankees cap, stapler, black headphone, hand soap, green lawnroomshrilling chicken, weaving basket, rabbit, dinosaur, baseball, wood wallbenchPortuguese egg tart, orange cat, green grape, mini offroad car, dressing doll, pebbled concrete wall, woodtablea wooden ukulele, a beige mug, a GPU card with fans, a black Nike shoe, a Hatsune Miku statue, lime walloffice deskthe book of The Unbearable Lightness of Being, a can of red bull drink, a white keyboard, a pack of pocket tissues, desktop, blue partitionblue sofaa bottle of perfume, sunglasses, a squirrel pig doll, a JBL bluetooth speaker, an aircon controller, blue-grey sofasnacksCoke Cola, orange juice drink, calculator, pitaya, Glico Pocky chocolate, biscuits sticks box, desktopcovered deskWinnie-the-Pooh, Dove body wash, gerbera, electric shaver, canned chili sauce, desktop", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "More Ablations.", "figure_data": "mIoU Accuracyw/o neg_F76.992.4w/o Selection 84.895.3full model86.295.8", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Evaluation on rephrased texts.", "figure_data": "mIoU Accuracy mIoU Accuracyoriginal88.297.389.396.3rephrased 89.397.288.496.6", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Quantitative results.", "figure_data": "bedsofalawnroombenchmIoU Accuracy mIoU Accuracy mIoU Accuracy mIoU Accuracy mIoU Accuracy89.596.774.091.688.297.392.898.989.396.3tableoffice deskblue sofasnackscovered deskmIoU Accuracy mIoU Accuracy mIoU Accuracy mIoU Accuracy mIoU Accuracy88.896.591.796.282.897.795.899.188.697.2", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" } ]
Kunhao Liu; Fangneng Zhan; Jiahui Zhang; Muyu Xu; Yingchen Yu; Abdulmotaleb El Saddik; Christian Theobalt; Eric Xing; Shijian Lu
[ { "authors": "Nur Muhammad; Mahi Shafiullah; Chris Paxton; Lerrel Pinto; Soumith Chintala; Arthur Szlam", "journal": "", "ref_id": "b0", "title": "Clipfields: Weakly supervised semantic fields for robotic memory", "year": "2022" }, { "authors": "Justin Kerr; Chung ; Min Kim; Ken Goldberg; Angjoo Kanazawa; Matthew Tancik", "journal": "", "ref_id": "b1", "title": "Lerf: Language embedded radiance fields", "year": "2023" }, { "authors": "Krishna Murthy; Jatavallabhula ; Alihusein Kuwajerwala; Qiao Gu; Mohd Omama; Tao Chen; Shuang Li; Ganesh Iyer; Soroush Saryazdi; Nikhil Keetha; Ayush Tewari", "journal": "", "ref_id": "b2", "title": "Conceptfusion: Open-set multimodal 3d mapping", "year": "2023" }, { "authors": "Sosuke Kobayashi; Eiichi Matsumoto; Vincent Sitzmann", "journal": "", "ref_id": "b3", "title": "Decomposing nerf for editing via feature field distillation", "year": "2022" }, { "authors": "Angela Dai; X Angel; Manolis Chang; Maciej Savva; Thomas Halber; Matthias Funkhouser; Nießner", "journal": "", "ref_id": "b4", "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes", "year": "2017" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Communications of the ACM", "ref_id": "b5", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2021" }, { "authors": "Songyou Peng; Kyle Genova; Chiyu Jiang; Andrea Tagliasacchi; Marc Pollefeys; Thomas Funkhouser", "journal": "", "ref_id": "b6", "title": "Openscene: 3d scene understanding with open vocabularies", "year": "2022" }, { "authors": "Boyi Li; Q Kilian; Serge Weinberger; Vladlen Belongie; René Koltun; Ranftl", "journal": "", "ref_id": "b7", "title": "Language-driven semantic segmentation", "year": "2022" }, { "authors": "Yiwu Zhong; Jianwei Yang; Pengchuan Zhang; Chunyuan Li; Noel Codella; Liunian Harold Li; Luowei Zhou; Xiyang Dai; Lu Yuan; Yin Li", "journal": "", "ref_id": "b8", "title": "Regionclip: Region-based language-image pretraining", "year": "2022" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b9", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Christoph Schuhmann; Richard Vencu; Romain Beaumont; Robert Kaczmarczyk; Clayton Mullis; Aarush Katta; Theo Coombes; Jenia Jitsev; Aran Komatsuzaki", "journal": "", "ref_id": "b10", "title": "Laion-400m: Open dataset of clip-filtered 400 million image-text pairs", "year": "2021" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Julien Herv'e J'egou; Piotr Mairal; Armand Bojanowski; Joulin", "journal": "", "ref_id": "b11", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Maxime Oquab; Timothée Darcet; Théo Moutakanni; Huy Vo; Marc Szafraniec; Vasil Khalidov; Pierre Fernandez; Daniel Haziza; Francisco Massa; Alaaeldin El-Nouby", "journal": "", "ref_id": "b12", "title": "Dinov2: Learning robust visual features without supervision", "year": "2023" }, { "authors": "Huaishao Luo; Junwei Bao; Youzheng Wu; Xiaodong He; Tianrui Li", "journal": "", "ref_id": "b13", "title": "Segclip: Patch aggregation with learnable centers for open-vocabulary semantic segmentation", "year": "2022" }, { "authors": "Jiarui Xu; Shalini De Mello; Sifei Liu; Wonmin Byeon; Thomas Breuel; Jan Kautz; Xiaolong Wang", 
"journal": "", "ref_id": "b14", "title": "Groupvit: Semantic segmentation emerges from text supervision", "year": "2022" }, { "authors": "Jiarui Xu; Sifei Liu; Arash Vahdat; Wonmin Byeon; Xiaolong Wang; Shalini De Mello", "journal": "", "ref_id": "b15", "title": "Openvocabulary panoptic segmentation with text-to-image diffusion models", "year": "2023" }, { "authors": "Xueyan Zou; Zi-Yi Dou; Jianwei Yang; Zhe Gan; Linjie Li; Chunyuan Li; Xiyang Dai; Harkirat Behl; Jianfeng Wang; Lu Yuan", "journal": "", "ref_id": "b16", "title": "Generalized decoding for pixel, image, and language", "year": "2022" }, { "authors": "Jian Ding; Nan Xue; Gui-Song Xia; Dengxin Dai", "journal": "", "ref_id": "b17", "title": "Decoupling zero-shot semantic segmentation", "year": "2022" }, { "authors": "Feng Liang; Bichen Wu; Xiaoliang Dai; Kunpeng Li; Yinan Zhao; Hang Zhang; Peizhao Zhang; Peter Vajda; Diana Marculescu", "journal": "", "ref_id": "b18", "title": "Open-vocabulary semantic segmentation with mask-adapted clip", "year": "2022" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig", "journal": "PMLR", "ref_id": "b19", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "Yangguang Li; Feng Liang; Lichen Zhao; Yufeng Cui; Wanli Ouyang; Jing Shao; Fengwei Yu; Junjie Yan", "journal": "", "ref_id": "b20", "title": "Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm", "year": "2021" }, { "authors": "Norman Mu; Alexander Kirillov; David Wagner; Saining Xie", "journal": "Springer", "ref_id": "b21", "title": "Slip: Self-supervision meets languageimage pre-training", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b22", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Golnaz Ghiasi; Xiuye Gu; Yin Cui; Tsung-Yi Lin", "journal": "Springer", "ref_id": "b23", "title": "Scaling open-vocabulary image segmentation with image-level labels", "year": "2022" }, { "authors": "Iro Armeni; Sasha Sax; Silvio Amir R Zamir; Savarese", "journal": "", "ref_id": "b24", "title": "Joint 2d-3d-semantic data for indoor scene understanding", "year": "2017" }, { "authors": "Jens Behley; Martin Garbade; Andres Milioto; Jan Quenzel; Sven Behnke; Cyrill Stachniss; Jurgen Gall", "journal": "", "ref_id": "b25", "title": "Semantickitti: A dataset for semantic scene understanding of lidar sequences", "year": "2019" }, { "authors": "Holger Caesar; Varun Bankiti; Alex H Lang; Sourabh Vora; Venice Erin Liong; Qiang Xu; Anush Krishnan; Yu Pan; Giancarlo Baldan; Oscar Beijbom", "journal": "", "ref_id": "b26", "title": "nuscenes: A multimodal dataset for autonomous driving", "year": "2020" }, { "authors": "Angel Chang; Angela Dai; Thomas Funkhouser; Maciej Halber; Matthias Niessner; Manolis Savva; Shuran Song; Andy Zeng; Yinda Zhang", "journal": "", "ref_id": "b27", "title": "Matterport3d: Learning from rgb-d data in indoor environments", "year": "2017" }, { "authors": "Dave Zhenyu; Chen ; Angel X Chang; Matthias Nießner", "journal": "Springer", "ref_id": "b28", "title": "Scanrefer: 3d object localization in rgb-d scans using natural language", "year": "2020" }, { "authors": "Andreas Geiger; Philip Lenz; Raquel Urtasun", "journal": "IEEE", "ref_id": "b29", "title": "Are we ready for autonomous driving? 
the kitti vision benchmark suite", "year": "2012" }, { "authors": "Binh-Son Hua; Quang-Hieu Pham; Duc ; Thanh Nguyen; Minh-Khoi Tran; Lap-Fai Craig Yu; Sai-Kit Yeung", "journal": "", "ref_id": "b30", "title": "Scenenn: A scene meshes dataset with annotations", "year": "2016" }, { "authors": "Yiyi Liao; Jun Xie; Andreas Geiger", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b31", "title": "Kitti-360: A novel dataset and benchmarks for urban scene understanding in 2d and 3d", "year": "2021" }, { "authors": "Kaichun Mo; Shilin Zhu; X Angel; Li Chang; Subarna Yi; Leonidas J Tripathi; Hao Guibas; Su", "journal": "", "ref_id": "b32", "title": "Partnet: A large-scale benchmark for fine-grained and hierarchical part-level 3d object understanding", "year": "2019" }, { "authors": "Pei Sun; Henrik Kretzschmar; Xerxes Dotiwalla; Aurelien Chouard; Vijaysai Patnaik; Paul Tsui; James Guo; Yin Zhou; Yuning Chai; Benjamin Caine", "journal": "", "ref_id": "b33", "title": "Scalability in perception for autonomous driving: Waymo open dataset", "year": "2020" }, { "authors": "P Lyne; Christopher Bongsoo Tchapmi; Iro Choy; Junyoung Armeni; Silvio Gwak; Savarese", "journal": "", "ref_id": "b34", "title": "Segcloud: Semantic segmentation of 3d point clouds", "year": "2017" }, { "authors": "Loic Landrieu; Martin Simonovsky", "journal": "", "ref_id": "b35", "title": "Large-scale point cloud semantic segmentation with superpoint graphs", "year": "2018" }, { "authors": "Runyu Ding; Jihan Yang; Chuhui Xue; Wenqing Zhang; Song Bai; Xiaojuan Qi", "journal": "", "ref_id": "b36", "title": "Language-driven open-vocabulary 3d scene understanding", "year": "2022" }, { "authors": "Kirill Mazur; Edgar Sucar; Andrew J Davison", "journal": "", "ref_id": "b37", "title": "Feature-realistic neural fusion for real-time, open set scene understanding", "year": "2022" }, { "authors": "Huy Ha; Shuran Song", "journal": "", "ref_id": "b38", "title": "Semantic abstraction: Open-world 3d scene understanding from 2d visionlanguage models", "year": "2022" }, { "authors": "Runnan Chen; Youquan Liu; Lingdong Kong; Xinge Zhu; Yuexin Ma; Yikang Li; Yuenan Hou; Yu Qiao; Wenping Wang", "journal": "", "ref_id": "b39", "title": "Clip2scene: Towards label-efficient 3d scene understanding by clip", "year": "2023" }, { "authors": "Iro Armeni; Ozan Sener; Helen Amir Roshan Zamir; Ioannis K Jiang; Martin Brilakis; Silvio Fischer; Savarese", "journal": "", "ref_id": "b40", "title": "3d semantic parsing of large-scale indoor spaces", "year": "2016" }, { "authors": "Anpei Chen; Zexiang Xu; Andreas Geiger; Jingyi Yu; Hao Su", "journal": "", "ref_id": "b41", "title": "Tensorf: Tensorial radiance fields", "year": "2022" }, { "authors": "Rahul Goel; Dhawal Sirikonda; Saurabh Saini; Narayanan", "journal": "", "ref_id": "b42", "title": "Interactive segmentation of radiance fields", "year": "2022" }, { "authors": "Vadim Tschernezki; Iro Laina; Diane Larlus; Andrea Vedaldi", "journal": "", "ref_id": "b43", "title": "Neural feature fusion fields: 3d distillation of self-supervised 2d image representations", "year": "2022" }, { "authors": "Shuaifeng Zhi; Tristan Laidlow; Stefan Leutenegger; Andrew J Davison", "journal": "", "ref_id": "b44", "title": "In-place scene labelling and understanding with implicit scene representation", "year": "2021" }, { "authors": "Yawar Siddiqui; Lorenzo Porzi; Samuel Rota Buló; Norman Müller; Matthias Nießner; Angela Dai; Peter Kontschieder", "journal": "", "ref_id": "b45", "title": "Panoptic 
lifting for 3d scene understanding with neural fields", "year": "2022" }, { "authors": "Vadim Tschernezki; Diane Larlus; Andrea Vedaldi", "journal": "", "ref_id": "b46", "title": "Neuraldiff: Segmenting 3d objects that move in egocentric videos", "year": "2021" }, { "authors": "Jesus Zarzar; Sara Rojas; Silvio Giancola; Bernard Ghanem", "journal": "", "ref_id": "b47", "title": "Segnerf: 3d part segmentation with neural radiance fields", "year": "2022" }, { "authors": "Zhiwen Fan; Peihao Wang; Yifan Jiang; Xinyu Gong; Dejia Xu; Zhangyang Wang", "journal": "", "ref_id": "b48", "title": "Nerf-sos: Any-view self-supervised object segmentation on complex scenes", "year": "2022" }, { "authors": "Shengnan Liang; Yichen Liu; Shangzhe Wu; Yu-Wing Tai; Chi-Keung Tang", "journal": "", "ref_id": "b49", "title": "Onerf: Unsupervised 3d object segmentation from multiple views", "year": "2022" }, { "authors": "Karl Stelzner; Kristian Kersting; Adam R Kosiorek", "journal": "", "ref_id": "b50", "title": "Decomposing 3d scenes into objects via unsupervised volume segmentation", "year": "2021" }, { "authors": "Rishi Bommasani; Drew A Hudson; Ehsan Adeli; Russ Altman; Simran Arora; Sydney Von Arx; Jeannette Michael S Bernstein; Antoine Bohg; Emma Bosselut; Brunskill", "journal": "", "ref_id": "b51", "title": "On the opportunities and risks of foundation models", "year": "2021" }, { "authors": "Ce Zhou; Qian Li; Chen Li; Jun Yu; Yixin Liu; Guangjing Wang; Kai Zhang; Cheng Ji; Qiben Yan; Lifang He", "journal": "", "ref_id": "b52", "title": "A comprehensive survey on pretrained foundation models: A history from bert to chatgpt", "year": "" }, { "authors": "Alec Radford; Jeff Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b53", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b54", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": " Openai", "journal": "", "ref_id": "b55", "title": "", "year": "2023" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b56", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Mandar Joshi; Danqi Chen; Yinhan Liu; Luke Daniel S Weld; Omer Zettlemoyer; Levy", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b57", "title": "Spanbert: Improving pre-training by representing and predicting spans", "year": "2020" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Ves Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b58", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2019" }, { "authors": "Kevin Clark; Minh-Thang Luong; Quoc V Le; Christopher D Manning", "journal": "", "ref_id": "b59", "title": "Electra: Pre-training text encoders as discriminators rather than generators", "year": "2020" }, { "authors": "Zhenzhong Lan; Mingda Chen; Sebastian Goodman; Kevin Gimpel; Piyush Sharma; Radu Soricut", "journal": "", "ref_id": "b60", "title": "Albert: A lite bert for self-supervised learning of language representations", "year": "2019" 
}, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b61", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b62", "title": "Photorealistic text-toimage diffusion models with deep language understanding", "year": "2022" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b63", "title": "Masked autoencoders are scalable vision learners", "year": "2022" }, { "authors": "Zhaoqing Wang; Yu Lu; Qiang Li; Xunqiang Tao; Yandong Guo; Mingming Gong; Tongliang Liu", "journal": "", "ref_id": "b64", "title": "Cris: Clip-driven referring image segmentation", "year": "2022" }, { "authors": "Haoyu Song; Li Dong; Weinan Zhang; Ting Liu; Furu Wei", "journal": "", "ref_id": "b65", "title": "Clip models are few-shot learners: Empirical studies on vqa and visual entailment", "year": "2022" }, { "authors": "Narek Tumanyan; Omer Bar-Tal; Shai Bagon; Tali Dekel", "journal": "", "ref_id": "b66", "title": "Splicing vit features for semantic appearance transfer", "year": "2022" }, { "authors": "Mark Hamilton; Zhoutong Zhang; Bharath Hariharan; Noah Snavely; William T Freeman", "journal": "", "ref_id": "b67", "title": "Unsupervised semantic segmentation by distilling feature correspondences", "year": "2022" }, { "authors": "Shir Amir; Yossi Gandelsman; Shai Bagon; Tali Dekel", "journal": "", "ref_id": "b68", "title": "Deep vit features as dense visual descriptors", "year": "2021" }, { "authors": "Julian Straub; Thomas Whelan; Lingni Ma; Yufan Chen; Erik Wijmans; Simon Green; Jakob J Engel; Raul Mur-Artal; Carl Ren; Shobhit Verma", "journal": "", "ref_id": "b69", "title": "The replica dataset: A digital replica of indoor spaces", "year": "2019" }, { "authors": "Matt Deitke; Dustin Schwenk; Jordi Salvador; Luca Weihs; Oscar Michel; Eli Vanderbilt; Ludwig Schmidt; Kiana Ehsani; Aniruddha Kembhavi; Ali Farhadi", "journal": "", "ref_id": "b70", "title": "Objaverse: A universe of annotated 3d objects", "year": "2022" }, { "authors": "Tong Wu; Jiarui Zhang; Xiao Fu; Yuxin Wang; Jiawei Ren; Liang Pan; Wayne Wu; Lei Yang; Jiaqi Wang; Chen Qian", "journal": "", "ref_id": "b71", "title": "Omniobject3d: Large-vocabulary 3d object dataset for realistic perception, reconstruction and generation", "year": "2023" }, { "authors": "Jeremy Reizenstein; Roman Shapovalov; Philipp Henzler; Luca Sbordone; Patrick Labatut; David Novotny", "journal": "", "ref_id": "b72", "title": "Common objects in 3d: Large-scale learning and evaluation of real-life 3d category reconstruction", "year": "2021" }, { "authors": "Mike Roberts; Jason Ramapuram; Anurag Ranjan; Atulit Kumar; Miguel Angel Bautista; Nathan Paczan; Russ Webb; Joshua M Susskind", "journal": "", "ref_id": "b73", "title": "Hypersim: A photorealistic synthetic dataset for holistic indoor scene understanding", "year": "2021" }, { "authors": "Johannes Lutz; Schönberger ; Jan-Michael Frahm", "journal": "", "ref_id": "b74", "title": "Structure-from-motion revisited", "year": "2016" }, { "authors": "Ben Mildenhall; P Pratul; Rodrigo Srinivasan; Nima Ortiz-Cayon; Ravi Khademi Kalantari; Ren Ramamoorthi; Abhishek Ng; Kar", "journal": "ACM Transactions on Graphics (TOG)", 
"ref_id": "b75", "title": "Local light field fusion: Practical view synthesis with prescriptive sampling guidelines", "year": "2019" } ]
[ { "formula_coordinates": [ 4, 109.9, 271.89, 107.73, 24.15 ], "formula_id": "formula_0", "formula_text": "F multi_spatial /= count F I [scale_idx] = F multi_spatial end" }, { "formula_coordinates": [ 4, 191.25, 702.91, 313.42, 22.43 ], "formula_id": "formula_1", "formula_text": "Ĉ(r) = i T i α i C i ∈ R 3 , F (r) = i T i α i F i ∈ R D ,(1)" }, { "formula_coordinates": [ 5, 219.42, 273.29, 285.25, 21.98 ], "formula_id": "formula_2", "formula_text": "S(r) = Softmax i T i α i S i ∈ [0, 1] Ns ,(2)" }, { "formula_coordinates": [ 5, 108, 310.08, 80.85, 13.15 ], "formula_id": "formula_3", "formula_text": "T i = Π i-1 j=0 (1 -α i )" }, { "formula_coordinates": [ 5, 174.16, 390.53, 330.5, 27.12 ], "formula_id": "formula_4", "formula_text": "L supervision = r∈R Ĉ(r) -C(r) 2 2 -cos⟨ F (r), S(r)F (r)⟩ .(3)" }, { "formula_coordinates": [ 5, 250.25, 470.07, 254.42, 11.81 ], "formula_id": "formula_5", "formula_text": "z(r) = cos⟨T, F (r)⟩ ∈ R C .(4)" }, { "formula_coordinates": [ 5, 238.39, 582.93, 262.41, 11.03 ], "formula_id": "formula_6", "formula_text": "P (r) = Softmax (z(r)) ∈ [0, 1] C . (5" }, { "formula_coordinates": [ 5, 500.8, 585.32, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 5, 250.31, 653.99, 254.36, 10.32 ], "formula_id": "formula_8", "formula_text": "R I hw = S I hw cos⟨T, F I hw ⟩,(6)" }, { "formula_coordinates": [ 5, 176.58, 710.68, 328.09, 12.17 ], "formula_id": "formula_9", "formula_text": "RI = (R I -min(R I )) / (max(R I ) -min(R I )) ∈ [0, 1] C×H×W ,(7)" }, { "formula_coordinates": [ 6, 154.83, 314.21, 349.84, 29.43 ], "formula_id": "formula_10", "formula_text": "L RDA = r∈R c∈C P (r) c log P (r) c M P R(r) c + R(r) c log R(r) c M P R(r) c /2,(8)" }, { "formula_coordinates": [ 6, 245.66, 486.17, 259.01, 9.65 ], "formula_id": "formula_11", "formula_text": "Corr_F hwij = cos⟨f hw , f ij ⟩,(9)" }, { "formula_coordinates": [ 6, 247.56, 592.85, 252.96, 11.47 ], "formula_id": "formula_12", "formula_text": "Ṕ = Softmax (z/τ ) ∈ [0, 1] C . (10" }, { "formula_coordinates": [ 6, 500.52, 595.69, 4.15, 8.64 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 6, 165.67, 625.26, 339, 29.32 ], "formula_id": "formula_14", "formula_text": "Corr_D hwij = c∈C Ṕhwc log Ṕhwc M Ṕ Ṕ c + Ṕijc log Ṕijc M Ṕ Ṕ c /2,(11)" }, { "formula_coordinates": [ 6, 207.73, 703.84, 296.94, 20.14 ], "formula_id": "formula_15", "formula_text": "L corr = hwij (Corr_F hwij -b) × Corr_D hwij ,(12)" }, { "formula_coordinates": [ 7, 125.9, 345.55, 378.77, 8.96 ], "formula_id": "formula_16", "formula_text": "pos_F = clamp(Corr_F -b, min = 0), neg_F = clamp(Corr_F -b, max = 0),(13)" }, { "formula_coordinates": [ 7, 117.96, 376.56, 386.7, 48.92 ], "formula_id": "formula_17", "formula_text": "L F DA = λ pos hwij (pos_F hwij × Corr_D hwij )/count_nonzero(pos_F )+ λ neg hwij (neg_F hwij × Corr_D hwij )/count_nonzero(neg_F ),(14)" } ]
2023-10-14
[ { "figure_ref": [ "fig_0", "fig_2" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b29", "b30", "b17", "b12" ], "table_ref": [], "text": "Draping virtual garments on body models has many applications in fashion design, movie-making, virtual try-on, virtual and augmented reality, among others. Traditionally, garments are represented by 3D meshes and the draping relies on physics-based simulations (PBS) [1,2,3,4,5,6,7,8,9,10,11,12] to produce realistic interactions between clothes and body. Unfortunately, PBS is often computationally expensive and rarely differentiable, which limits the scope of downstream applications. Hence, many recent techniques use neural networks to speed up the draping and to make it differentiable. The garments can be represented by 3D mesh templates [13,14,15,16,17,18,19,20], point clouds [21,22], UV maps [23,24,25,26,27], or implicit surfaces [28,29,30]. Draping can then be achieved by Linear Blend Skinning (LBS) from the shape and pose parameters of a body model, such as SMPL [31].\nEven though all these methods can realistically drape individual garments over human bodies, none can handle multiple clothing layers, even though they are prevalent in everyday dress. To address overlapping clothing layers such as those in Fig. 1 while preserving expressivity, we introduce an Implicit Sewing Pattern (ISP), a new representation inspired by the way fashion designers represent clothes. As shown in Fig. 2, a sewing pattern is made of several 2D panels implicitly represented by signed distance functions (SDFs) that model their 2D extent and are conditioned on a latent vector z, along with information about how to stitch them into a complete garment. To each panel is associated a 2D to 3D mapping representing its 3D shape, also conditioned on z. The 2D panels make it easy to detect collisions between surfaces and to prevent interpenetrations. In other words, the garment is made of panels whose 2D extent is learned instead of having to be carefully designed by a human, and whose 3D shape is expressed in a unified uv space in which a loss designed to prevent interpenetrations can easily be written.\nThis combination enables us to model layered garments such as those of Fig. 1 while preserving end-to-end differentiability. This lets us drape them realistically over bodies in arbitrary poses, to recover them from images, and to edit them easily. Doing all this jointly is something that has not yet been demonstrated in the Computer Vision or Computer Graphics literature. Furthermore, most data-driven draping methods rely on synthetic data generated with PBS for supervision purposes. In contrast, ISPs rely on the physics-based self-supervision of [18,13]. As a result, at inference time, our approach can handle arbitrary body poses, while only requiring garments draped over bodies in a canonical pose at training time. Our code is available at https://github.com/liren2515/ISP."
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b2", "b31", "b32", "b33", "b34", "b11", "b12", "b13", "b14", "b16", "b18", "b17", "b19", "b15", "b17", "b12", "b21", "b20", "b35", "b23", "b24", "b28", "b29", "b36", "b37", "b38", "b39", "b40", "b41", "b42", "b43", "b44", "b26", "b45", "b27", "b28", "b29", "b46" ], "table_ref": [], "text": "Garment Draping. Garment draping approaches can be classified as physics-based or data-driven. Physics-based ones [3,32,33,34,35,12] produce high-quality results but are computationally demanding, while data-driven approaches are faster, sometimes at the cost of realism. Most data-driven methods are template-based [13,14,15,17,19,18,20,16,18,13], with a triangulated mesh modeling a specific garment and a draping function trained specifically for it. As this is impractical for large garment collections, some recent works [22,21,36] use 3D point clouds to represent garments instead. Unfortunately, this either prevents differentiable changes in garment topology or loses the point connections with physical meanings. The approaches of [24,25] replace the clouds with UV maps that encode the diverse geometry of garments and predict positional draping maps. These UV maps are registered to the body mesh, which restricts their interpretability and flexibility for garment representation and manipulation. The resulting garments follow the underlying body topology, and the ones that should not, such as skirts, must be post-processed to remove artifacts. Yet other algorithms [29,30,37] rely on learning 3D displacement fields or hierarchical graphs for garment deformation, which makes them applicable to generic garments. While many of these data-driven methods deliver good results, they are typically designed to handle a single garment or a top and a bottom garment with only limited overlap. An exception is the approach of [38], which augments SDFs with covariant fields to untangle multi-layered garments. However, the untangling is limited to garments in T-pose. In contrast, our proposed method can perform multi-layered garment draping for bodies in complex poses and of diverse shapes.\nGarments as Sets of Panels. Sewing patterns are widely used in garment design and manufacturing [39,40,41,42,43]. Typically, flat sewing patterns are assembled and then draped using PBS. To automate pattern design, recent works introduce a parametric pattern space. For example, the methods of [44,45] introduce a sparse set of parameters, such as sleeve length or chest circumference, while [27] relies on principal component analysis (PCA) to encode the shape of individual panels. It requires hierarchical graphs on groups of panels to handle multiple garment styles. By contrast, our approach relies on the expressivity of 2D Signed Distance Functions (SDFs) to represent the panels.\nTo promote the use of sewing patterns in conjunction with deep learning, a fully automatic dataset generation tool is proposed in [46]. It randomly samples parameters to produce sewing patterns and uses PBS to drape them on a T-posed human body model to produce training pairs of sewing patterns and 3D garment meshes.\nGarments as Implicit Surfaces. SDFs have become very popular to represent 3D surfaces. However, they are primarily intended for watertight surfaces. A standard way to use them for open surfaces such as garments is to represent the garment as a thin volume surrounding the actual surface [28,29], which can be captured by SDFs but with an inherent accuracy loss.
To address this issue, in [30], the SDFs are replaced by unsigned distance functions (UDFs), while relying on the differentiable approach of [47] for meshing in case an actual mesh is required for downstream tasks. However, this requires extra computations for meshing and makes training more difficult because the surfaces lie at a singularity of the implicit field. This can result in unwanted artifacts that our continuous UV parameterization over 2D panels eliminates." }, { "figure_ref": [ "fig_2" ], "heading": "Method", "publication_ref": [ "b38", "b39", "b40" ], "table_ref": [], "text": "In the clothing software used industrially [39,40,41], garments are made from sets of 2D panels cut from pieces of cloth which are then stitched together. Inspired by this real-world practice, we introduce the Implicit Sewing Patterns (ISP) model depicted by Fig. 2. It consists of 2D panels whose shape is defined by the zero crossings of a function that takes as input a 2D location x = (x_u, x_v) and a latent vector z, which is specific to each garment. A second function that also takes x and z as arguments maps the flat 2D panel to the potentially complex 3D garment surface within the panel, while enforcing continuity across panels. Finally, we train draping networks to properly drape multiple garments on human bodies of arbitrary shapes and poses, while avoiding interpenetrations of successive clothing layers." }, { "figure_ref": [], "heading": "Modeling Individual Garments", "publication_ref": [ "b45", "b45", "b47", "b47", "b48", "b49", "b50" ], "table_ref": [], "text": "We model garments as sets of 2D panels stitched together in a pre-specified way. Each panel is turned into a 3D surface using a uv parameterization, and the networks that implement this parameterization are trained to produce properly stitched 3D surfaces. To create the required training databases, we use the PBS approach of [46]. Because it also relies on 2D panels, using its output to train our networks has proved straightforward, with only minor modifications required. Flat 2D Panels. We take a pattern P to be a subset of Ω = [-1, 1]^2 whose boundary is the zero crossing of an implicit function. More specifically, we define\nI_Θ : Ω × R^{|z|} → R × N, (x, z) → (s, c) , (1)\nwhere I_Θ is implemented by a fully connected network. It takes as input a local 2D location x and a global latent vector z and returns two values. The first, s, should approximate the signed distance to the panel boundary. The second, c, is a label associated to boundary points and is used when assembling the garment from several panels. Then, boundaries with the same labels should be stitched together. Note that the c values are only relevant at the panel boundaries, that is, when s = 0. However, training the network to produce such values only there would be difficult because this would involve a very sparse supervision. So, instead, we train our network to predict, at every location x, the label of the closest seamed edge. In this paper, we model every garment G using two panels P_f and P_b, one for the front and one for the back. E_f and E_b are labels assigned to boundary points. Label 0 denotes unstitched boundary points like collar, waist, or sleeve ends. Labels other than zero denote points in the boundary of one panel that should be stitched to a point with the same label in the other.
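As an illustration of Eq. (1), the following minimal PyTorch sketch shows one possible fully connected network mapping a uv location x and a garment latent code z to a signed distance s and stitching-label logits c. The layer widths, latent dimension, and number of labels are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class PanelSDF(nn.Module):
    # A possible instantiation of I_Theta in Eq. (1): (x, z) -> (s, c)
    def __init__(self, z_dim=32, hidden=256, n_labels=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + n_labels),      # signed distance + label logits
        )

    def forward(self, x, z):
        # x: (N, 2) uv locations in [-1, 1]^2, z: (z_dim,) latent code of one garment
        h = torch.cat([x, z.expand(x.shape[0], -1)], dim=-1)
        out = self.mlp(h)
        return out[:, :1], out[:, 1:]             # s: (N, 1), c logits: (N, n_labels)

Training would then minimize Eq. (2) below, with an L1 term on s, a cross-entropy term on the label logits, and the latent regularizer, using one such network for the front panel and one for the back.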
In practice, given the database of garments and corresponding 2D panels we generated using publicly available software designed for this purpose [46], we use an auto-decoding approach to jointly train I_Θ and to associate a latent vector z to each training garment. To this end, we minimize the loss function obtained by summing over all training panels\nL_I = Σ_{x∈Ω} |s(x, z) − s_gt(x)| + λ_CE CE(c(x, z), c_gt(x)) + λ_reg ∥z∥_2^2 , (2)\nwith respect to a separate latent code z per garment and the network weights Θ that are used for all garments. CE is the cross-entropy loss, s_gt and c_gt are the ground-truth signed distance value and the label of the closest seamed edge of x, and λ_CE and λ_reg are scalars balancing the influence of the different terms. We handle each garment having a front and a back panel P_f and P_b by two separate networks I_Θf and I_Θb with shared latent codes, so that they produce two separate sets of (s, c) values at each (x_u, x_v) location, one for the front and one for the back.\nFrom 2D panels to 3D Surfaces. The sewing patterns described above are flat panels whose role is to provide stitching instructions. To turn them into 3D surfaces that can be draped on a human body, we learn a mapping from the 2D panels to the 3D garment draped on the neutral SMPL body. To this end, we introduce a second function, similar to the one used in AtlasNet [48],\nA_Φ : Ω × R^{|z|} → R^3, (x, z) → X . (3)\nIt is also implemented by a fully connected network and takes as input a local 2D location x and a global latent vector z. It returns a 3D position X for every 2D location x in the pattern. A key difference with AtlasNet is that we only evaluate A_Φ for points x within the panels, that is, points for which the signed distance returned by I_Θ of Eq. (1) is not positive. Hence, there is no need to deform or stretch uv patterns, in contrast to the square patches of AtlasNet that had to be deformed, which simplifies the training.\nGiven the latent codes z learned for each garment in the training database, as described above, we train a separate set of weights Φ_f and Φ_b for the front and back of the garments. To this end, we minimize a sum over all front and back training panels of\nL_A = L_CHD + λ_n L_normal + λ_c L_consist , (4)\nL_consist = Σ_{c>0} Σ_{x∈E_c} ∥A_Φf(x, z) − A_Φb(x, z)∥_2^2 . (5)\nwhere L_CHD, L_normal, and L_consist are the Chamfer distance, a normal consistency loss, and a loss term whose minimization ensures that the points on the edges of the front and back panels sewn together (E_c for c > 0) are aligned when folding the panels in 3D; λ_n and λ_c are scalar weights. This assumes that the front and back panels have their seamed edges aligned in 2D, i.e., for c > 0 and x ∈ Ω, if x has label c on the front panel, then it should also have label c on the back panel, and vice-versa. Our experiments show that L_consist reduces the gap between the front and back panels.
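To make the seam-consistency term of Eq. (5) concrete, a minimal sketch is given below. It assumes that A_front and A_back are the front and back mapping networks of Eq. (3) and that the uv samples lying on each seamed edge E_c are already available; the dictionary layout and the plain summation are illustrative choices rather than the paper's implementation.

import torch

def seam_consistency_loss(A_front, A_back, seam_uv, z):
    # seam_uv: dict mapping each stitching label c > 0 to a (M_c, 2) tensor of uv
    # samples on the edge E_c, assumed identical on the front and back panels
    loss = torch.zeros(())
    for uv in seam_uv.values():
        diff = A_front(uv, z) - A_back(uv, z)      # (M_c, 3) front/back mismatch in 3D
        loss = loss + (diff ** 2).sum(dim=-1).sum()
    return loss

Minimizing this term together with the Chamfer and normal terms keeps the two lifted panels attached along their seams.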
Meshing and Preserving Differentiability. A_Φ continuously deforms 2D patches that are implicitly defined by I_Θ. To obtain triangulated meshes from these, we lift a regular triangular 2D mesh defined on Ω by querying A_Φ on each of its vertices, as in [48]. More specifically, we first create a square 2D mesh T = {V_Ω, F_Ω} for Ω, where the vertices V_Ω are grid points evenly sampled along the orthogonal axes of Ω and the faces F_Ω are created with Delaunay triangulation. Given the latent code z of a specific garment, for each vertex v ∈ V_Ω, we can obtain its signed distance value s and edge label c with (s, c) = I_Θ(v, z). We construct the 2D panel mesh T_P = {V_P, F_P} by discarding the vertices of T whose SDF is positive. To get cleaner panel borders, we also keep the vertices v ∈ V_Ω with s(v, z) > 0 that belong to mesh edges crossing the 0 iso-level, and adjust their positions to v − s(v, z)∇s(v, z) to project them onto the zero level set [49]. The front and back panel 2D meshes are then lifted to 3D by querying A_Φ on each vertex. During post-processing, their edges are sewn together to produce the final 3D garment mesh T_G(z). More details can be found in the Supplementary Material.\nI_Θ acts as an indicator function to keep the vertices with s ≤ 0, which breaks automatic differentiability. To restore it, we rely on the gradients derived in [50,51] for implicit surface extraction. More formally, assume v is a vertex of the extracted mesh T_G and x is the point in UV space that satisfies v = A_Φ(x, z); then, as proved in the Supplementary Material,\n∂v/∂z = ∂A_Φ/∂z (x, z) − (∂A_Φ/∂x) ∇s(x, z) ∂s/∂z (x, z) . (6)\nHence, ISP can be used to solve inverse problems using gradient descent, such as fitting garments to partial observations."
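The extraction procedure above can be sketched in a few lines of PyTorch, shown below. This is a simplified illustration: it projects every outside grid vertex towards the zero level set, whereas the paper only keeps and projects the vertices of edges that cross the boundary, and it omits the Delaunay triangulation and the sewing of the front and back panels.

import torch

def lift_panel(I_theta, A_phi, z, res=128):
    # Regular grid on Omega = [-1, 1]^2; triangular faces would come from a
    # Delaunay triangulation of these grid points (omitted here).
    u = torch.linspace(-1.0, 1.0, res)
    uv = torch.stack(torch.meshgrid(u, u, indexing="ij"), dim=-1).reshape(-1, 2)
    uv.requires_grad_(True)

    s, _ = I_theta(uv, z)                          # (res*res, 1) signed distances
    grad_s = torch.autograd.grad(s.sum(), uv)[0]   # gradient of s w.r.t. uv

    inside = s.squeeze(-1) <= 0                    # vertices lying on the panel
    uv_proj = uv - s * grad_s                      # projection towards the zero level set
    uv_kept = torch.where(inside.unsqueeze(-1), uv, uv_proj).detach()
    return A_phi(uv_kept, z), inside               # 3D vertices and the panel mask

Keeping the dependence on z instead of detaching is what allows the gradient of Eq. (6) to flow during latent-code optimization.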
The output is reshaped to two N × N × 3 arrays to produce the front and back displacement maps.\nTo learn the deformation for various garments without collecting any ground-truth data and train D s in a self-supervised fashion, we minimize the physics-based loss from [18] \nL phy = L strain + L bend + L gravity + L BGcol ,(8)\nwhere L strain is the membrane strain energy caused by the deformation, L bend the bending energy raised from the folding of adjacent faces, L gravity the gravitational potential energy, and L BGcol the penalty for body-garment collisions.\nMulti-Layer Draping. When draping independently multiple garments worn by the same person using the network D s introduced above, the draped garments can intersect, which is physically impossible and must be prevented. We now show that our ISP model makes that straightforward first in the case of two overlapping garments, and then in the case of arbitrarily many.\nConsider an outer garment T o G and an inner garment T i G with rest state vertices V o and V i . We first drape them independently on a target SMPL body with D s , as described for single layer draping. This yields the deformed outer and underlying garments with vertices Ṽo and Ṽi , respectively. We then rely on a second network D m to predict corrective displacements, conditioned on the outer garment geometry and repulsive virtual forces produced by the intersections with the inner garment.\nIn ISP, garments are represented by mapping 2D panels to 3D surfaces. Hence their geometry can be stored on regular 2D grids on which a convolutional network can act. In practice, we first encode the rest state of the outer garment T o G into a 2D position map M r . The grid M r records the 3D location of the vertex v o ∈ V o at its (x u , x v ) coordinate as M r [x u , x v ] = v o within the panel boundaries, M r [x u , x v ] = (0, 0, 0) elsewhere. Concatenating both front and back panels yields a N × N × 6 array, that is, a 2D array of spatial dimension N with 6 channels. After draping, the same process is applied to encode the geometry of T o G into position map M d , using vertices Ṽo instead of V o this time. Finally, for each vertex ṽo ∈ Ṽo , we take the repulsive force acting on it to be\nf (ṽ o ) = max(0, (ṽ i -ṽo ) • n i )n i , (9\n)\nwhere ṽi is the closest vertex in Ṽi , n i is the normal of ṽi , and • represents the dot product. The repulsive force is also recorded in the UV space to generate the 2D force map M f . Note that it is 0 for vertices that are already outside of the inner garment, for which no collision occurs. Given the forces M f , the garment geometry in the rest state M r and after draping M d , the network D m predicts a vertex displacements map D m = D m (M r , M d , M f ) for the outer garment to resolve intersections, as shown in Fig. 3. We replace the vertex ṽo ∈ Ṽo with coordinates \n(x u , x v ) by ṽ * o = ṽo + D m [x u , x v ]. D m is\nL Dm = L phy + λ g L GGcol + λ r L reg ,(10)\nL GGcol = ṽ * o max(0, ϵ -(ṽ * o -ṽi ) • n i ) 3 , and L reg = ∥D m (M r , M d , M f )∥ 2 2 ,\nwhere L phy is the physics-based loss of Eq. ( 8), λ g and λ r are weighting constants, and ϵ is a safety margin to avoid collisions. L GGcol penalizes the intersections between the outer and underlying garments, and L reg is an L 2 regularization on the output of D m .\nGiven a pair of garments, our layering network D m only deforms the outer garment to resolve collisions, leaving the inner one untouched. 
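As an illustration, the repulsive force of Eq. (9) can be computed as follows; this is a simplified PyTorch sketch in which a brute-force nearest-neighbour search stands in for whatever spatial acceleration structure an actual implementation would use.

import torch
import torch.nn.functional as F

def repulsive_forces(outer_verts, inner_verts, inner_normals):
    # Eq. (9): f(v_o) = max(0, (v_i - v_o) . n_i) * n_i, where v_i is the inner
    # vertex closest to the outer vertex v_o and n_i is its unit normal.
    # The force is zero for outer vertices already outside the inner garment.
    # outer_verts: (No, 3), inner_verts: (Ni, 3), inner_normals: (Ni, 3).
    dist = torch.cdist(outer_verts, inner_verts)    # (No, Ni) pairwise distances
    nn_idx = dist.argmin(dim=1)                      # closest inner vertex per outer vertex
    v_i, n_i = inner_verts[nn_idx], inner_normals[nn_idx]
    penetration = ((v_i - outer_verts) * n_i).sum(dim=1, keepdim=True)
    return torch.clamp(penetration, min=0.0) * n_i   # (No, 3) repulsive forces

forces = repulsive_forces(torch.randn(100, 3), torch.randn(80, 3),
                          F.normalize(torch.randn(80, 3), dim=1))
print(forces.shape)  # torch.Size([100, 3])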
Given more than 2 overlapping garments, the process can be iterated to layer them in any desired order over a body, as detailed in the Supplementary Material. To create training and test sets, we used the software of [46] to generate sewing patterns and the corresponding 3D garment meshes in their rest state, that is draped over a T-Posed body. Our training set comprises 400 shirts, 300 skirts, and 200 pairs of trousers. For testing, we use 20 shirts, 20 skirts, and 20 pairs of trousers. As discussed in Section 3.2, we trained the draping models using SMPL body poses Θ randomly sampled from the AMASS [52] dataset and body shapes B uniformly sampled from [-2, 2] 10 . We use the Chamfer Distance (CHD) and Normal Consistency (NC) to quantify garment reconstruction accuracy, along with the percentage of the garment mesh area that undergoes interpenetrations (Intersection), as in [30].\nWe compare our approach against recent and state-of-the-art methods DIG [29] and DrapeNet [30], which, like ours, can drape various garments over bodies of different shapes in arbitrary poses. Like our ISPs, DrapeNet is self-supervised using a physics-based loss and can prevent unwanted intersections between top (shirts) and bottom (trousers) garments. By contrast, DIG is a fully supervised method and cannot handle garment intersections." }, { "figure_ref": [ "fig_0", "fig_6", "fig_7", "fig_7", "fig_7" ], "heading": "Garment Reconstruction", "publication_ref": [ "b29", "b52", "b53", "b46", "b37", "b12", "b29", "b29" ], "table_ref": [], "text": "Train CHD (×10 We first consider 3D garments in their rest pose and compare the accuracy of our ISPs against that of the UDF-based representation [30]. In Fig. 4, we provide qualitative results in a specific case and for resolutions of Marching Cubes [53] ranging from 512 to 128. Our result is visually superior and more faithful to the ground-truth garment, with UDF producing more artifacts and uneven borders, especially at resolution 512. The quantitative results reported in Tab. 1 for the training and test sets confirm this. For the test set garments, we reconstruct them by optimizing the latent code to minimize the Chamfer distance between the predicted and ground truth meshes, using the gradients of Eq. ( 6). Our approach consistently outperforms UDF at all resolutions while being faster, mostly because we evaluate our network on a 2D implicit field while UDF does it on a 3D one. For example, at resolution 256, our reconstruction time is 77 ms on an Nvidia V100 GPU, while UDF requires 2379 ms. Interestingly, increasing the resolution from 256 to 512 improves the reconstruction accuracy of our method, whereas that of UDF's drops. This is because precisely learning the 0 iso-surface of a 3D UDF is challenging, resulting in potentially inaccurate normals near the surface [54,47]. We present similar results for trousers and skirts in the supplementary material. Figs. 1 and5 showcase our method's ability to realistically drape multiple garments with diverse geometry and topology over the body. Our approach can handle multi-layered garments, which is not achievable with DrapeNet. Additionally, unlike the method of [38] that is limited to multi-layered garments on the T-posed body, our method can be applied to bodies in arbitrary poses.\nAs pointed out in [13,30], there are no objective metrics for evaluating the realism of a draping. Therefore, we conducted a human evaluation. 
As in [30], we designed a website displaying side-by-side draping results generated by our method and DrapeNet for the same shirts and trousers, on the same bodies. Participants were asked to select the option that seemed visually better to them, with the third option being \"I cannot decide\". 64 participants took part in the study, providing a total of 884 responses. As shown in Fig. 6(a), the majority of participants preferred our method (62.69% vs. 25.00%), demonstrating the higher fidelity of our results. The lower intersection ratio reported in Fig. 6(b) further demonstrates the efficacy of our draping model. In Figs. 6(c) and (d), we qualitatively compare our method against DrapeNet and DIG." }, { "figure_ref": [ "fig_9" ], "heading": "Recovering Multi-Layered Garments from Images", "publication_ref": [ "b55", "b56", "b27", "b54", "b29", "b57", "b58", "b27", "b54", "b29" ], "table_ref": [], "text": "Thanks to their differentiability, our ISPs can be used to recover multi-layered garments from image data, such as 2D garment segmentation masks. Given an image of a clothed person, we can obtain an estimate of the SMPL body parameters (B, Θ) and the segmentation mask S using the algorithms of [56,57]. To each detected garment, we associate a latent code z and reconstruct the garment meshes by minimizing, with respect to the latent codes z 1:N , the loss
L IoU ( R ( G(B, Θ, z 1:N ) ⊕ M ), S ) , with G(B, Θ, z 1:N ) = D(B, Θ, z 1 , T G (z 1 )) ⊕ • • • ⊕ D(B, Θ, z N , T G (z N )) , (11)
where L IoU is the IoU loss [58] over the rendered and the given mask, R(•) is a differentiable renderer [59], N is the number of detected garments, and ⊕ represents the operation of mesh concatenation. M is the SMPL body mesh, while G(B, Θ, z 1:N ) is the concatenation of garment meshes reconstructed from the implicit sewing pattern as T G (z i ) and then draped by our draping model D = D m • D s . In practice, the minimization is performed from the outermost garment to the innermost, one by one. Further details can be found in the Supplementary Material.
Fig. 7 depicts the results of this minimization. Our method outperforms the state-of-the-art methods SMPLicit [28], ClothWild [55] and DrapeNet [30], given the same garment masks. The garments we recover exhibit higher fidelity and have no collisions between them or with the underlying body." }, { "figure_ref": [ "fig_10" ], "heading": "Garment Editing", "publication_ref": [ "b29" ], "table_ref": [], "text": "As the panel shape of the sewing pattern determines the shape of the garment, we can easily manipulate the garment mesh by editing the panel. Fig. 8 depicts this editing process. We begin by moving the sleeve edges of the panel inwards to shorten the sleeves of the mesh, then move the bottom edges up to shorten the length of the jacket, and finally remove the edges for the opening to close it. This is achieved by minimizing
L(z) = d(E(z), Ẽ) , where E(z) = {x | s(x, z) = 0, x ∈ Ω} ,
w.r.t. z. d(•) computes the Chamfer distance, Ẽ represents the edges of the modified panel, and E(z) represents the 0 iso-level extracted from I Θ , that is, the edges of the reconstructed panel.
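An illustrative optimization loop for this edit is sketched below; panel_sdf is a hypothetical callable wrapping the trained network I Θ , and, for brevity, only the direction of the Chamfer distance that measures how far the edited edge points lie from the current panel boundary is used, since |s(x, z)| gives that distance in closed form.

import torch

def edit_latent_code(panel_sdf, z_init, target_edge_pts, steps=200, lr=1e-2):
    # panel_sdf(x, z) -> s: hypothetical wrapper around the trained implicit
    # panel network; returns signed distances of 2D points x to the panel boundary.
    # target_edge_pts: (M, 2) points sampled on the edited panel edges.
    # |s(x, z)| is the distance from an edge point x to the current zero level
    # set, so driving it to zero pulls the panel boundary onto the edited edges.
    z = z_init.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = panel_sdf(target_edge_pts, z).abs().mean()  # one-sided Chamfer term
        loss.backward()
        optimizer.step()
    return z.detach()

# Toy usage with a dummy SDF of a disc whose radius plays the role of the latent code.
dummy_sdf = lambda x, z: x.norm(dim=-1) - z.squeeze()
edge_pts = 0.5 * torch.nn.functional.normalize(torch.randn(64, 2), dim=1)
print(edit_latent_code(dummy_sdf, torch.tensor([0.3]), edge_pts))  # approaches 0.5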
Our sewing pattern representation makes it easy to specify new edges by drawing and erasing lines in the 2D panel images, whereas the fully implicit garment representation of [30] requires an auxiliary classifier to identify directions in the latent space for each garment modification. As shown in the supplementary material, the texture of the garment can also be edited by drawing on the UV panels." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We have introduced a novel representation for garments to be draped on human bodies. The garments are made of flat 2D panels whose boundary is defined by a 2D SDF. To each panel is associated a 3D surface parameterized by the 2D panel coordinates. Hence, different articles of clothing are represented in a standardized way. This allows the draping of multi-layer clothing on bodies and the recovery of such clothing from single images. Our current implementation assumes quasi-static garments and only deforms the outer garment to solve collisions in multi-layer draping. In future work, we will introduce garment dynamics and focus on more accurate physical interactions between the outer and inner garments." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement. This project was supported in part by the Swiss National Science Foundation." } ]
Many approaches to draping individual garments on human body models are realistic, fast, and yield outputs that are differentiable with respect to the body shape on which they are draped. However, they are either unable to handle multilayered clothing, which is prevalent in everyday dress, or restricted to bodies in T-pose. In this paper, we introduce a parametric garment representation model that addresses these limitations. As in models used by clothing designers, each garment consists of individual 2D panels. Their 2D shape is defined by a Signed Distance Function and 3D shape by a 2D to 3D mapping. The 2D parameterization enables easy detection of potential collisions and the 3D parameterization handles complex shapes effectively. We show that this combination is faster and yields higher quality reconstructions than purely implicit surface representations, and makes the recovery of layered garments from images possible thanks to its differentiability. Furthermore, it supports rapid editing of garment shapes and texture by modifying individual 2D panels.
ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns
[ { "figure_caption": "Figure 1 :1Figure 1: Multi-layered garment draping. Top: Draping multiple layers of garments over one body (left) and modifying the body's shape (right). Bottom: Draping the same set of 5 garments over bodies with varying poses and shapes.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Implicit Sewing Pattern (ISP). (a) The 3D mesh surface for a shirt with the front surface in gray and the back one in blue. (b) The front and back 2D panels of the ISP. The numbers denote the labels of seamed edges E c , and indicate how to stitch them when c > 0. (c) We use an implicit neural representation for the signed distance and for the edge labels, denoted here by the different colors. (d) Interpolation in the latent space allows topology changes, here from a sleeveless shirt to a long-sleeve open jacket. The top rows show the front and back panels, the bottom row the reconstructed meshes. (e) To parameterize a two-panel garment, we train the implicit network I Θ to predict the signed distance field (s f /s b ) and the edge label field (c f /c b ) that represent the two panels. They are mapped to 3D surfaces by the network A Φ .", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "implemented as a CNN, and it can capture the local geometry and force information of vertices from the input 2D maps. The training of D m is self-supervised by", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Figure 4 :34Figure 3: Multi-layer Draping. The network D m uses the garment geometry and forces encoded in the input UV maps to resolve garment intersections for multilayered draping.", "figure_data": "", "figure_id": "fig_4", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": ", Evaluation Metrics, and Baseline", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Realistic garment draping with our method, for different combinations of garments.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Draping evaluation. (a) Human evaluation results. None refers to no preference. (b) Percentage of intersecting areas. (c) For each method, left is the draping results for a shirt and a pair of trousers, and right with only the pants. (d) The draping results of our method, DIG and DrapeNet.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure7: Garment recovery from images. We compare the meshes recovered by our method and the state of the art methods SMPLicit[28], ClothWild[55], DrapeNet[30] (unavailable for skirts).", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :L8Figure 8: Shape editing. 
Garment attributes can be edited by modifying the sewing pattern.", "figure_data": "", "figure_id": "fig_10", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "-4 , ↓) NC (%, ↑) Time (ms, ↓) Comparison of our method to UDF on shirts under the resolutions of 128, 256 and 512.", "figure_data": "TestCHD (×10 -4 , ↓) NC (%, ↑)UDF -1280.64198.86601UDF -1280.80397.71UDF -2560.33899.202379UDF -2560.49398.44UDF -5120.26298.7713258UDF -5120.42498.09Ours -1280.45499.2025Ours -1280.57998.70Ours -2560.29099.3777Ours -2560.39298.83Ours -5120.25099.41261Ours -5120.34998.89", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Ren Li; Benoît Guillard; Pascal Fua
[ { "authors": "Xavier Provot", "journal": "", "ref_id": "b0", "title": "Deformation constraints in a mass-spring model to describe rigid cloth behaviour", "year": "1995" }, { "authors": "X Provot", "journal": "", "ref_id": "b1", "title": "Collision and Self-Collision Handling in Cloth Model Dedicated to Design Garments", "year": "1997" }, { "authors": "D Baraff; A Witkin", "journal": "", "ref_id": "b2", "title": "Large Steps in Cloth Simulation", "year": "1998" }, { "authors": "T Vassilev; B Spanlang; Y Chrysanthou", "journal": "Computer Graphics Forum", "ref_id": "b3", "title": "Fast cloth animation on walking avatars", "year": "2001" }, { "authors": "C Zeller", "journal": "", "ref_id": "b4", "title": "Cloth simulation on the gpu", "year": "2005" }, { "authors": "M Tang; R Tong; R Narain; C Meng; D Manocha", "journal": "Computer Graphics Forum", "ref_id": "b5", "title": "A GPU-based streaming algorithm for high-resolution cloth simulation", "year": "2013" }, { "authors": "T Liu; S Bouaziz; L Kavan", "journal": "ACM Transactions on Graphics", "ref_id": "b6", "title": "Quasi-newton methods for real-time simulation of hyperelastic materials", "year": "2017" }, { "authors": " Nvidia; Nvcloth", "journal": "", "ref_id": "b7", "title": "", "year": "2018" }, { "authors": "", "journal": "", "ref_id": "b8", "title": "Optitext Fashion Design Software", "year": "2018" }, { "authors": " ", "journal": "", "ref_id": "b9", "title": "NVIDIA Flex", "year": "2018" }, { "authors": "M Designer", "journal": "", "ref_id": "b10", "title": "", "year": "2018" }, { "authors": "Tongkui Su; Yan Zhang; Yu Zhou; Yao Yu; Sidan Du", "journal": "", "ref_id": "b11", "title": "GPU-based Real-time Cloth Simulation for Virtual Try-on", "year": "2018" }, { "authors": "H Bertiche; M Madadi; S Escalera", "journal": "ACM Transactions on Graphics", "ref_id": "b12", "title": "PBNS: Physically Based Neural Simulation for Unsupervised Garment Pose Space Deformation", "year": "2021" }, { "authors": "B L Bhatnagar; G Tiwari; C Theobalt; G Pons-Moll", "journal": "", "ref_id": "b13", "title": "Multi-Garment Net: Learning to Dress 3D People from Images", "year": "2019" }, { "authors": "B Jiang; J Zhang; Y Hong; J Luo; L Liu; H Bao", "journal": "", "ref_id": "b14", "title": "Bcnet: Learning body and cloth shape from a single image", "year": "2020" }, { "authors": "X Pan; J Mai; X Jiang; D Tang; J Li; T Shao; K Zhou; X Jin; D Manocha", "journal": "", "ref_id": "b15", "title": "Predicting loose-fitting garment deformations using bone-driven motion networks", "year": "2022" }, { "authors": "C Patel; Z Liao; G Pons-Moll", "journal": "", "ref_id": "b16", "title": "Tailornet: Predicting clothing in 3d as a function of human pose, shape and garment style", "year": "2020" }, { "authors": "I Santesteban; M A Otaduy; D Casas", "journal": "", "ref_id": "b17", "title": "SNUG: Self-Supervised Neural Dynamic Garments", "year": "2022" }, { "authors": "I Santesteban; N Thuerey; M A Otaduy; D Casas", "journal": "", "ref_id": "b18", "title": "Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On", "year": "2021" }, { "authors": "G Tiwari; B L Bhatnagar; T Tung; G Pons-Moll", "journal": "", "ref_id": "b19", "title": "Sizer: A Dataset and Model for Parsing 3D Clothing and Learning Size Sensitive 3D Clothing", "year": "2020" }, { "authors": "H Bertiche; M Madadi; E Tylson; S Escalera", "journal": "", "ref_id": "b20", "title": "DeePSD: Automatic Deep Skinning and Pose Space Deformation for 3D Garment Animation", "year": "2021" 
}, { "authors": "E Gundogdu; V Constantin; S Parashar; A Seifoddini; M Dang; M Salzmann; P Fua", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b21", "title": "Garnet++: Improving Fast and Accurate Static 3D Cloth Draping by Curvature Loss", "year": "2022" }, { "authors": "Z Lahner; D Cremers; T Tung", "journal": "", "ref_id": "b22", "title": "Deepwrinkles: Accurate and Realistic Clothing Modeling", "year": "2018-09" }, { "authors": "Y Shen; J Liang; M C Lin", "journal": "", "ref_id": "b23", "title": "Gan-based garment generation using sewing pattern images", "year": "2020" }, { "authors": "Z Su; T Yu; Y Wang; Y Liu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b24", "title": "DeepCloth: Neural Garment Representation for Shape and Style Editing", "year": "2022" }, { "authors": "M Zhang; D Ceylan; N Mitra", "journal": "ACM Transactions on Graphics", "ref_id": "b25", "title": "Motion Guided Deep Dynamic 3D Garments", "year": "2022" }, { "authors": "X Chen; G Wang; D Zhu; X Liang; P Torr; L Lin", "journal": "", "ref_id": "b26", "title": "Structure-Preserving 3D Garment Modeling with Neural Sewing Machines", "year": "2022" }, { "authors": "E Corona; A Pumarola; G Alenya; G Pons-Moll; F Moreno-Noguer", "journal": "", "ref_id": "b27", "title": "Smplicit: Topology-Aware Generative Model for Clothed People", "year": "2021" }, { "authors": "R Li; B Guillard; E Remelli; P Fua", "journal": "", "ref_id": "b28", "title": "DIG: Draping Implicit Garment over the Human Body", "year": "2022" }, { "authors": "Luca Deluigi; Ren Li; Benoît Guillard; Mathieu Salzmann; Pascal Fua", "journal": "", "ref_id": "b29", "title": "DrapeNet: Generating Garments and Draping them with Self-Supervision", "year": "2023" }, { "authors": "F Bogo; A Kanazawa; C Lassner; P Gehler; J Romero; M J Black", "journal": "", "ref_id": "b30", "title": "Keep It SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image", "year": "2016" }, { "authors": "Y Li; M Habermann; B Thomaszewski; S Coros; T Beeler; C Theobalt", "journal": "", "ref_id": "b31", "title": "Deep physicsaware inference of cloth deformation for monocular human performance capture", "year": "2021" }, { "authors": "J Liang; M Lin; V Koltun", "journal": "", "ref_id": "b32", "title": "Differentiable Cloth Simulation for Inverse Problems", "year": "2019" }, { "authors": "R Narain; A Samii; J F O'brien", "journal": "ACM Transactions on Graphics", "ref_id": "b33", "title": "Adaptive anisotropic remeshing for cloth simulation", "year": "2012" }, { "authors": "R Narain; T Pfaff; J F O'brien", "journal": "ACM Transactions on Graphics", "ref_id": "b34", "title": "Folding and crumpling adaptive sheets", "year": "2013" }, { "authors": "I Zakharkin; K Mazur; A Grigorev; V Lempitsky", "journal": "", "ref_id": "b35", "title": "Point-based modeling of human clothing", "year": "2021" }, { "authors": "A Grigorev; B Thomaszewski; M J Black; O Hilliges", "journal": "", "ref_id": "b36", "title": "HOOD: Hierarchical Graphs for Generalized Modeling of Clothing Dynamics", "year": "2023" }, { "authors": "I Santesteban; M A Otaduy; N Thuerey; D Casas", "journal": "", "ref_id": "b37", "title": "Ulnef: Untangled layered neural fields for mix-and-match virtual try-on", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b38", "title": "Clo 3d", "year": "" }, { "authors": " Browzwear", "journal": "", "ref_id": "b39", "title": "", "year": "" }, { "authors": "", "journal": "", "ref_id": "b40", 
"title": "Marvellous designer", "year": "" }, { "authors": "N Umetani; D M Kaufman; T Igarashi; E Grinspun", "journal": "ACM SIGGRAPH", "ref_id": "b41", "title": "Sensitive Couture for Interactive Garment Editing and Modeling", "year": "2011" }, { "authors": "A Kaspar; K Wu; Y Luo; L Makatura; W Matusik", "journal": "ACM Transactions on Graphics", "ref_id": "b42", "title": "Knit Sketching: from Cut & Sew Patterns to Machine-Knit Garments", "year": "2021" }, { "authors": "T Y Wang; D Ceylan; J Popovic; N J Mitra", "journal": "", "ref_id": "b43", "title": "Learning a Shared Shape Space for Multimodal Garment Design", "year": "2018" }, { "authors": "R Vidaurre; I Santesteban; E Garces; D Casas", "journal": "", "ref_id": "b44", "title": "Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On", "year": "2020" }, { "authors": "M Korosteleva; S Lee", "journal": "", "ref_id": "b45", "title": "Generating Datasets of 3D Garments with Sewing Patterns", "year": "2021" }, { "authors": "B Guillard; F Stella; P Fua", "journal": "", "ref_id": "b46", "title": "MeshUDF: Fast and Differentiable Meshing of Unsigned Distance Field Networks", "year": "2022" }, { "authors": "T Groueix; M Fisher; V Kim; B Russell; M Aubry", "journal": "", "ref_id": "b47", "title": "Atlasnet: A Papier-Mâché Approach to Learning 3D Surface Generation", "year": "2018" }, { "authors": "J Chibane; A Mir; G Pons-Moll", "journal": "", "ref_id": "b48", "title": "Neural Unsigned Distance Fields for Implicit Function Learning", "year": "2020" }, { "authors": "E Remelli; A Lukoianov; S Richter; B Guillard; T Bagautdinov; P Baque; P Fua", "journal": "", "ref_id": "b49", "title": "Meshsdf: Differentiable Iso-Surface Extraction", "year": "2020" }, { "authors": "B Guillard; E Remelli; A Lukoianov; S Richter; T Bagautdinov; P Baque; P Fua", "journal": "", "ref_id": "b50", "title": "Deepmesh: Differentiable Iso-Surface Extraction", "year": "2022" }, { "authors": "N Mahmood; N Ghorbani; N F Troje; G Pons-Moll; M J Black", "journal": "", "ref_id": "b51", "title": "AMASS: Archive of Motion Capture as Surface Shapes", "year": "2019" }, { "authors": "W E Lorensen; H E Cline", "journal": "", "ref_id": "b52", "title": "Marching Cubes: A High Resolution 3D Surface Construction Algorithm", "year": "1987" }, { "authors": "R Venkatesh; T Karmali; S Sharma; A Ghosh; R V Babu; L A Jeni; M Singh", "journal": "", "ref_id": "b53", "title": "Deep Implicit Surface Point Prediction Networks", "year": "2021" }, { "authors": "G Moon; H Nam; T Shiratori; K M Lee", "journal": "", "ref_id": "b54", "title": "3d clothed human reconstruction in the wild", "year": "2022" }, { "authors": "Yu Y Rong; T Shiratori; H Joo", "journal": "", "ref_id": "b55", "title": "Frankmocap: Fast monocular 3d hand and body motion capture by regression and integration", "year": "2021" }, { "authors": "P Li; Y Xu; Y Wei; Y Yang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b56", "title": "Self-Correction for Human Parsing", "year": "2020" }, { "authors": "R Li; M Zheng; S Karanam; T Chen; Z Wu", "journal": "", "ref_id": "b57", "title": "Everybody Is Unique: Towards Unbiased Human Mesh Recovery", "year": "2021" }, { "authors": "Nikhila Ravi; Jeremy Reizenstein; David Novotny; Taylor Gordon; Wan-Yen Lo; Justin Johnson; Georgia Gkioxari", "journal": "", "ref_id": "b58", "title": "PyTorch3D", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 220.54, 626.85, 284.13, 11.72 ], "formula_id": "formula_0", "formula_text": "I Θ : Ω × R |z| → R × N, (x, z) → (s, c) ,(1)" }, { "formula_coordinates": [ 4, 129.01, 184.96, 312.73, 105.5 ], "formula_id": "formula_1", "formula_text": "Latent Space Interpolation 𝐱 ∈ ℝ ! ! ×# 𝐳 𝐱 ∈ ℝ ! ! ×# 𝐳 (𝑠 \" , 𝑐 \" ) (𝑠 # , 𝑐 # ) ℐ % 𝒜 &" }, { "formula_coordinates": [ 4, 164.05, 555.26, 340.62, 23.03 ], "formula_id": "formula_2", "formula_text": "L I = x∈Ω s(x, z) -s gt (x) + λ CE CE(c(x, z), c gt (x)) + λ reg ∥z∥ 2 2 ,(2)" }, { "formula_coordinates": [ 4, 241.14, 707.85, 263.52, 11.72 ], "formula_id": "formula_3", "formula_text": "Φ : Ω × R |z| → R 3 , (x, z) → X .(3)" }, { "formula_coordinates": [ 5, 223.08, 183.78, 281.59, 9.65 ], "formula_id": "formula_4", "formula_text": "L A = L CHD + λ n L normal + λ c L consist ,(4)" }, { "formula_coordinates": [ 5, 203.71, 197.58, 297.08, 24.03 ], "formula_id": "formula_5", "formula_text": "L consist = c>0 x∈Ec A Φ f (x, z) -A Φ b (x, z) 2 2 . (5" }, { "formula_coordinates": [ 5, 500.8, 201.86, 3.87, 8.64 ], "formula_id": "formula_6", "formula_text": ")" }, { "formula_coordinates": [ 5, 214.49, 481.49, 290.17, 22.31 ], "formula_id": "formula_7", "formula_text": "∂v ∂z = ∂A Φ ∂z (x, z) - ∂A Φ ∂x ∇s(x, z) ∂s ∂z (x, z) .(6)" }, { "formula_coordinates": [ 5, 227.96, 669.21, 276.71, 9.99 ], "formula_id": "formula_8", "formula_text": "v init = W (v (B,Θ) , B, Θ, w(v)W) ,(7)" }, { "formula_coordinates": [ 5, 219.58, 684.4, 172.85, 9.96 ], "formula_id": "formula_9", "formula_text": "v (B,Θ) = v + w(v)B s (B) + w(v)B p (Θ) ," }, { "formula_coordinates": [ 6, 209.15, 233.67, 295.52, 9.65 ], "formula_id": "formula_10", "formula_text": "L phy = L strain + L bend + L gravity + L BGcol ,(8)" }, { "formula_coordinates": [ 6, 234.73, 493.46, 266.06, 9.68 ], "formula_id": "formula_11", "formula_text": "f (ṽ o ) = max(0, (ṽ i -ṽo ) • n i )n i , (9" }, { "formula_coordinates": [ 6, 500.8, 493.81, 3.87, 8.64 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 6, 108, 568.53, 397.74, 20.56 ], "formula_id": "formula_13", "formula_text": "(x u , x v ) by ṽ * o = ṽo + D m [x u , x v ]. D m is" }, { "formula_coordinates": [ 6, 201.34, 606.99, 303.33, 9.65 ], "formula_id": "formula_14", "formula_text": "L Dm = L phy + λ g L GGcol + λ r L reg ,(10)" }, { "formula_coordinates": [ 6, 143.82, 622.63, 324.35, 23.67 ], "formula_id": "formula_15", "formula_text": "L GGcol = ṽ * o max(0, ϵ -(ṽ * o -ṽi ) • n i ) 3 , and L reg = ∥D m (M r , M d , M f )∥ 2 2 ," }, { "formula_coordinates": [ 9, 158.39, 435.7, 346.28, 30.87 ], "formula_id": "formula_16", "formula_text": "G(B, Θ, z 1:N ) = D(B, Θ, z 1 , T G (z 1 )) ⊕ • • • ⊕ D(B, Θ, z N , T G (z N )),(11)" } ]
2023-05-23
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b10", "b9", "b18", "b11", "b35", "b42", "b3", "b38", "b8", "b41", "b37" ], "table_ref": [], "text": "Generalized category discovery (GCD) seeks to categorize unlabeled samples from known and unknown classes by leveraging labeled data of known classes. As a more practical extension of novel category discovery (NCD) Han et al. (2019); Fini et al. (2021); Joseph et al. (2022); Zhong et al. (2021a,b); Han et al. (2021); Zhang et al. (2022b); Roy et al. (2022); Yu et al. (2022); Chi et al. (2022), GCD has attracted increasing attention. While existing GCD methods Vaze et al. (2022); Fei et al. (2022); Yang et al. (2022); Sun & Li (2022) have achieved promising performance, they always require centralized training, where the training data need to be accessed at once. However, this strategy violates many practical application scenarios: the GCD data are distributively collected by different local clients and the data in each client cannot be shared with others due to the privacy concerns. For instance, as shown in Fig. 1, a global species research center plans to discover the new species of global birds through the collaboration of local stations located around the world. Each local station is responsible for capturing and partially annotating bird images. Due to the difference in local policies and laws, it is hard to make an agreement to share the local data between stations. Thus, a decentralized system is required to handle this pragmatic GCD scenario.\nTo meet this requirement, we propose a practical yet challenging task, namely Federated GCD (Fed-GCD), in which the GCD data are individually collected and partially annotated by local clients as well as cannot be shared with other clients. The objective of Fed-GCD is to train a generic GCD model via the collaboration across local clients without sharing local samples, which can recognize both known and unknown categories in the unlabeled data. Compared with the conventional federated learning (FL) setups Li et al. (2021c) In addition, clients may share some common categories since some species of birds could live in different continents as shown in Fig. 1, and the different clients may have distinct client-specific categories. Attributed to such a complicated yet real situation, Fed-GCD suffers from 1) additional difficulties caused by open-set learning on limited local data, and 2) more severe data heterogeneity problems due to the inconsistent label space between clients.\nTo tackle the challenges in Fed-GCD, we propose a novel Associated Gaussian Contrastive Learning (AGCL) framework, which unifies the discriminative representation learning on the limited local data and the heterogeneous category aggregation on the central server, benefiting from learnable GMMs. Specifically, we propose to represent the potential classes by a learnable Gaussian mixture model (GMM), which brings two advantages. First, the learnable mechanism enables us to perform class-aware contrastive learning with dynamic Mahalanobis distance, which can reduce the side effects of inaccurate clustering. Second, modeling the classes as GMMs is favorable for generating informative feature-level samples of each category on server, without assessing the raw data.\nTo this end, we propose a client semantics association (CSA) features with a set of prototypes. However, PCL needs an instance-level memory buffer to produce the prototype set, which is computationally and memory-intensive. 
In contrast to the PCL that focus on the learning of prototypes, our GCL considers additional class-aware variances to comprehensively model data distributions without instance buffer, by incorporating the classical GMM model and contrastive learning in a unified framework. This allows models to be insensitive to outliers, especially for unreliable clusters.\n3 Federated Generalized Category Discovery" }, { "figure_ref": [], "heading": "Problem Definition and Formulation", "publication_ref": [ "b13" ], "table_ref": [], "text": "Given the practical requirements of generalized category discovery (GCD) applications (e.g., species distribution and data privacy), it is necessary to build a generic GCD model via collaborative decentralized training across clients without sharing their local data. To meet these requirements, we propose a federated generalized category discovery (Fed-GCD) task. In Fed-GCD task, the local training data collected by each client are partially labeled, where the labeled data belong to known categories, and the unlabeled data may come from known or unknown novel categories. Additionally, each client learns on its distinct label set, which contains client-specific categories and may include some shared common categories. Compared to the semi-supervised federated learning \n(b) Local GMM Initialization FedAvg < l a t e x i t s h a 1 _ b a s e 6 4 = \" M O m A I f 9 2 C G z J 0 S z G 9 4 T q O d c h D L 4 = \" > A A A B 9 X i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 B I v g q S T i 1 7 H o x Y O H C v 2 C N i 2 b 7 b R d u t m E 3 Y 1 S Q v 6 H F w + K e P W / e P P f u G 1 z 0 N Y H A 4 / 3 Z p i Z 5 0 e c K e 0 4 3 1 Z u Z X V t f S O / W d j a 3 t n d K + 4 f N F Q Y S 4 p 1 G v J Q t n y i k D O B d c 0 0 x 1 Y k k Q Q + x 6 Y / v p 3 6 z U e U i o W i p i c R e g E Z C j Z g l G g j d T u 1 E W r S T e 7 T X i L S X r H k l J 0 Z 7 G X i Z q Q E G a q 9 4 l e n H 9 I 4 Q K E p J 0 q 1 X S f S X k K k Z p R j W u j E C i N C x 2 S I b U M F C V B 5 y e z q 1 D 4 x S t 8 e h N K U 0 P Z M / T 2 R k E C p S e C b z o D o k V r 0 p u J / X j v W g 2 s v Y S K K N Q o 6 X z S I u a 1 D e x q B 3 W c S q e Y T Q w i V z N x q 0 x G R h G o T V M G E 4 C 6 + v E w a Z 2 X 3 s n z x c F 6 q 3 G R x 5 O E I j u E U X L i C C t x B F e p A Q c I z v M K b 9 W S 9 W O / W x 7 w 1 Z 2 U z h / A H 1 u c P 5 E q S y Q = = < / l a t e x i t > ⇥ L n < l a t e x i t s h a 1 _ b a s e 6 4 = \" V z v r V N D k b I 1 8 x e b B 2 1 Y L 5 A l 1 Q e A = \" > A A A B 9 X i c b V D J S g N B E K 2 J W 4 x b 1 K O X x i B 4 C j P i d g x 6 8 e A h Q j Z I J q G n U 0 m a 9 C x 0 9 y h h m P / w 4 k E R r / 6 L N / / G T j I H T X x Q 8 H i v i q p 6 X i S 4 0 r b 9 b e V W V t f W N / K b h a 3 t n d 2 9 4 v 5 B Q 4 W x Z F h n o Q h l y 6 M K B Q + w r r k W 2 I o k U t 8 T 2 P T G t 1 O / + Y h S 8 T C o 6 U m E r k + H A R 9 w R r W R u p 3 a C D X t J U 7 a T e 7 T X r F k l + 0 Z y D J x M l K C D N V e 8 a v T D 1 n s Y 6 C Z o E q 1 H T v S b k K l 5 k x g W u j E C i P K x n S I b U M D 6 q N y k 9 n V K T k x S p 8 M Q m k q 0 G S m / p 5 I q K / U x P d M p 0 / 1 S C 1 6 U / E / r x 3 r w b W b 8 C C K N Q Z s v m g Q C 6 J D M o 2 A 9 L l E p s X E E M o k N 7 c S N q K S M m 2 C K p g Q n M W X l 0 n j r O x c l i 8 e z k u V m y y O P B z B M Z y C A 1 d Q g T u o Q h 0 Y S H i G V 3 i z n q w X 6 9 3 6 m L f m r G z m E P 7 A + v w B h z G S j A = = < / l a t e x i t > ⇥ L 1 < l a t e x i t s h a _ b a s e = \" N P t N / J m E A G + H b f Y i K r X t I k = \" > A A A B X i c b 
In our Fed-GCD setup, we assume that for i-th and j-th client, i ̸ = j, L L i and L L j might be partially overlapping or completely non-overlapping, but their label space cannot be same (i.e.,
6 c y K l o d b j M L C d I T V D v e h N x f + 8 d m L 6 l 3 7 K Z Z w Y l G y + q J 8 I Y i I y f Z / 0 u E J m x N g S y h S 3 t x I 2 p I o y Y 0 M q 2 B C 8 x Z f / k s Z J 2 T s v n 9 2 d l i p X W R x 5 O I B D O A Y P L q A C t 1 C F O j C Q 8 A Q v 8 O p o 5 9 l 5 c 9 7 n r T k n m 9 m H X 3 A + v g G H u 5 D X < / l a t e x i t > ⇥ G < l a t e x i t s h a 1 _ b a s e 6 4 = \" j c W D K q Q o b V i b Q B T Q 0 t t W C Q R 8 L k 0 = \" > A A A B 8 X i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e y K r 2 P Q g x 4 j 5 I X J G m Y n n W T I 7 O w y M y u E J X / h x Y M i X v 0 b b / 6 N k 2 Q P G i 1 o K K q 6 6 e 4 K Y s G 1 c d 0 v J 7 e 0 v L K 6 l l 8 v b G x u b e 8 U d / c a O k o U w z q L R K R a A d U o u M S 6 4 U Z g K 1 Z I w 0 B g M x h d T / 3 m I y r N I 1 k z 4 x j 9 k A 4 k 7 3 N G j Z X u O 7 U h G v q Q 3 k y 6 x Z J b d m c g f 4 m X k R J k q H a L n 5 1 e x J I Q p W G C a t 3 2 3 N j 4 K V W G M 4 G T Q i f R G F M 2 o g N s W y p p i N p P Z x d P y J F V e q Q f K V v S k\nL L i L L j ̸ = L L i orL L j ).\nTo simulate such data distribution that often exists in real-world GCD applications, we adopt the parametric Dirichlet distribution Hsu et al. (2020) to control the degree of data heterogeneity." }, { "figure_ref": [ "fig_7", "fig_7", "fig_7" ], "heading": "Baseline", "publication_ref": [ "b32", "b38" ], "table_ref": [], "text": "We employ the commonly-used FedAvg McMahan et al. (2017) algorithm, as our basic framework. Due to the inconsistent label spaces between local clients, we follow the previous FL work Li et al. (2021a) that only sends the feature extractor to the server. Given a feature extractor f parameterized by Θ, the extracted representation is defined as v = f (x). As illustrated in Fig. 2 (a),(d) and (e), the steps of the baseline for collaborative training by server and clients are as follows.\nStep I. In the t-th communication round, the server first aggregates the client models Θ L i uploaded from the last communication round, by taking a weighted average of them:\nΘ G t+1 = N L n=1 N L n N • Θ L n , N = N L i=1 N L i . (3.1)\nThen, the averaged model is distributed to each client.\nStep II. Based on the received global model, the i-th client trains its model by using local data D L i with the instance contrastive learning loss L I proposed in Vaze et al. (2022) (in Fig. 2 (d)). Specifically, we define that x i and x i are two views of random augmentations for the same image in a mini-batch B = B L ∪ B U , consisting of the labeled subset B L and unlabeled subset B U . The extracted representation v i is further projected by a MLP projection head h to high-dimensional embedding space for instance-level contrastive learning. The loss function is formulated as:\nL n ins = (λ -1) i∈B log S ins (v i , v i , τ S ) j∈B,j̸ =i S ins (v i , v j , τ S ) i∈B L -λ |P(i)| p∈P(i) log S ins (v i , v p , τ L ) j∈N (i) S ins (v i , v j , τ L ) , (3.2) S ins (v, v, τ ) = exp (h (v) • h ( v) /τ ) ,(3.3)\nwhere P(i) and N (i) are the positive and the negative index set for the anchor image i ∈ B L , respectively. λ is a trade-off factor to balance the contributions of self-supervised and supervised learning.\nStep III. The updated global model will be transmitted to each client.\nStep I and II are repeated until convergence. Ultimately, we use the final global model to discover new categories (in Fig. 2 (e))." 
}, { "figure_ref": [], "heading": "Limitations and Motivations", "publication_ref": [], "table_ref": [], "text": "Although the baseline approach works on our Fed-GCD benchmark, it shows unsatisfactory performance compared with centralized training, especially on fine-grained GCD datasets (see Tab. 4). We argue that the main reasons are attributed to two aspects: 1) the GCD 2022), the Fed-GCD fails to collaboratively train a robust global GCD model without discriminative local models; 2) sharing only the backbone network is inefficient to leverage the comprehensive category relationship that may not be observed in local clients. Moreover, although the label space of each client in Fed-GCD might potentially share some common semantic information (e.g., a specie of bird distributed on different continents), the server has no explicit knowledge to align or leverage such class-level relationships under privacy protection constraints.\nTo overcome these limitations, we consider representing the class-level knowledge by a learnable Gaussian mixture model (GMM), which is initialized by a parameter-free clustering approach. Each component of the GMM models a potential class/cluster with class-specific mean and variance, which naturally results in a concentration-based distance metric for robust contrastive learning. This idea enables models to 1) mitigate the negative effects caused by inaccurate clustering and enforce class-level supervision into local training, and 2) generate informative feature-level samples of each category for knowledge aggregation on the server without leaking original data. Revisiting GMM in Fed-GCD setup. We assume that the n-th client generates a GMM" }, { "figure_ref": [ "fig_7" ], "heading": "Federated Gaussian Contrastive Learning", "publication_ref": [], "table_ref": [], "text": "G L n = {N (µ i , σ i )} M L n i=1\nwith M L n components, where the µ i and σ i are the mean and variance of the i-th component. We use a component to model a potential class/category. For simplicity, we assume that the covariance matrix σ is diagonal and each cluster has the equal prior probability. By maximizing the posterior of v i belonging to the y i -th cluster, the GMM loss on the n-th client is derived as:\nL gmm (G L n , v i , y i ) = -log |σ i | -1 2 S gmm (v i , y i ) M L n j=1,j̸ =y i |σ j | -1 2 S gmm (v j , y j ) , (4.1) S gmm (v, y) = exp - 1 2 v -µ y σ y 2 • (1 + m) , (4.2)\nwhere the m is a non-negative margin factor to increase the inter-class dispersion.\nSemi-FINCH for Local GMM Initialization. Due to the fact that the ground-truth number of classes is often unknown in practical GCD applications, we first propose to improve the parameter-free hierarchical clustering algorithm, FINCH Sarfraz et al. (2019), to a semi-supervised extension. Then, we use the improved semi-FINCH to assign pseudo labels for local data, and then estimate the cluster-specific mean and covariance to initialize the learnable GMM, as shown in Fig. 2 (b). Semi-FINCH can capture the potential semantic relationships among both labeled and unlabeled samples with the guidance of labeled data. Specifically, we search the first neighbor of the unlabeled sample by the cosine similarity, while enforcing the first neighbor of the labeled sample to be the corresponding hardest positive sample. Connections between GCL and PCL Li et al. (2021b). Prototypical contrastive learning (PCL) Li et al. ( 2021b) is a pioneering method to introduce class-level supervision into unsupervised contrastive learning. 
PCL estimates a scalar concentration as the temperature parameter to scale the similarity between a feature and its prototype. Although it is efficient for learning discriminative representation, it fails to model a precise representation distribution that is supposed to generate reliable representations for the downstream clustering.\nHere, we discuss the differences between the PCL and the GCL. The similarly metric of the PCL is given by:\nS pcl (v i , y i ) = exp (v i • µ y i /ϕ y i ) ,(4.3)\nwhere ϕ i is the estimated temperature parameter for the i-th cluster. Comparing Eqs. (4.2) and (4.3), different from PCL, we model the clusters via the GMM with additional covariance matrices, and naturally derive the squared Mahalanobis distance as distance metric for contrastive learning. This allows models to dynamically control the contrastive temperatures in a dimension-wise way and to learn more reliable distributions of representations for the subsequent sampling.\nFurthermore, we introduce a regularization term to explicitly compact clusters and constrain covariance, to avoid trivial solutions. For example, GMM generates a high classification accuracy, but the sample embedding is far away from the center of the cluster due to the large class-specific variance. Using the regularization loss can constrain the distance between the sample embedding and its corresponding cluster center as well as reduce the overlarge variances. The regularization loss is:\nL reg (G L n , v i , y i ) = -log(S gcl (v i , y i )) + 1 2 log |σ y i | . (4.4)\nTaking Eqs. (4.1), (4.2) and (4.4), the overall GCL loss is defined by a weighted sum:\nL n gcl (G L n ) = N L n i=1 L gmm (G L n , v i , y i ) + αL reg (G L n , v i , y i ),(4.5)\nwhere α is a non-negative weighting coefficient. By optimizing this objective, the cluster-specific mean and variance can be learned. " }, { "figure_ref": [ "fig_7" ], "heading": "Client Semantic Association", "publication_ref": [], "table_ref": [], "text": "In Fed-GCD task, the data distributed to clients is highly heterogeneous. Moreover, due to privacy constraints, the central server is unreasonable to get prior knowledge to align the local clusters in practical scenarios. To overcome this limitation, we propose a sample yet efficient approach, namely client semantics association (CSA). The goal of CSA is to mine common semantic knowledge from the uploaded local GMMs, and aggregate diverse local knowledge for enriching category knowledge.\nClient-Agnostic Potential Semantic Association. Given a set of the uploaded GMMs G L = {G L n } N L n=1 , we sample N S instances from each Gaussian distribution, which results in a new representation set. By applying unsupervised FINCH clustering on the set, the central server generates a new global GMM, as illustrated in Fig. 2 (c). The global GMM will be sent to each client for the subsequent local training. Intuitively, the clusters with similar semantics will be grouped into new clusters. This type of clusters can be regarded as a super-class that contains more information with a large variance. This process implicitly associates common classes scattered in clients, thereby further enriching intra-class information. On the other hand, the clusters with relatively independent semantics will be preserved. By sampling, CSA augments the category knowledge contained in global GMM, which is beneficial for providing more negative classes in the GCL on local clients. 
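For illustration, the server-side association step can be sketched as follows; K-means is used here purely as a stand-in for the parameter-free FINCH clustering, and all shapes, the sampling count, and the number of global components are assumptions of the example.

import numpy as np
from sklearn.cluster import KMeans

def associate_clients(client_gmms, n_samples=1, n_global=8, seed=0):
    # client_gmms: list of (means, variances) pairs, one per client, where both
    # arrays have shape (M_i, D) and describe that client's local GMM components.
    # Draw n_samples features from every Gaussian component, pool them, cluster
    # the pool (K-means as a stand-in for FINCH), and fit a mean/variance per
    # resulting global component.
    rng = np.random.default_rng(seed)
    samples = []
    for means, variances in client_gmms:
        std = np.sqrt(variances)
        for _ in range(n_samples):
            samples.append(means + std * rng.standard_normal(means.shape))
    pool = np.concatenate(samples, axis=0)
    labels = KMeans(n_clusters=n_global, n_init=10, random_state=seed).fit_predict(pool)
    global_means = np.stack([pool[labels == k].mean(axis=0) for k in range(n_global)])
    global_vars = np.stack([pool[labels == k].var(axis=0) + 1e-4 for k in range(n_global)])
    return global_means, global_vars

# Toy usage: two clients, each uploading a small GMM over 16-D features.
gmms = [(np.random.randn(5, 16), np.ones((5, 16))), (np.random.randn(4, 16), np.ones((4, 16)))]
mu_g, var_g = associate_clients(gmms, n_samples=2)
print(mu_g.shape, var_g.shape)  # (8, 16) (8, 16)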
In short, by incorporating diverse knowledge from different clients, the global GMM establishes a bridge among different clients. This allows the isolated local knowledge to mutually transfer among clients, providing a complementary supervision for local GCL." }, { "figure_ref": [ "fig_7" ], "heading": "Federated Global-Local GCL", "publication_ref": [ "b13", "b38", "b24" ], "table_ref": [], "text": "As illustrated in Fig. 2 (d), taking the n-th client as an example, we consider both the distributed global GMM G G and the local GMM G L n , to guide the optimization of the local model. We use a convex combination of them to achieve an optimal balance between the local and the global knowledge learning. The objective of GCL on the n-th client is:\nL n = L n ins + (1 -γ)L n gcl (G G ) + γL n gcl (G L n ), (4.6)\nwhere the γ is a trade-off factor to control the strength of learning on global-local GMMs. When γ is equal to 1, GCL leverages only the local class-level supervision for representation learning. On the contrary, GCL relies on only the aggregated global category information. Evaluation Protocols. Due to the varying data distribution in different Fed-GCD applications, we present two evaluation protocols to separately simulate the normally heterogeneous (NH) and extremely heterogeneous (EH) scenarios by adjusting β in Dirichlet distribution Hsu et al. (2020). Specifically, we set β = 0.2 and β = 0.05 for NH and EH, respectively. The statistics of the dataset splits under the two evaluation protocols are described in Tab. 2, in which the NH setting exists few common classes but there is no labeled categories shared across all clients in the EH setting. For each dataset, we learn a global model in a decentralized training fashion. Following Vaze et al. (2022), during testing, we first estimate the number of the potential categories (i.e., k) in the non-overlapping test set by using the labeled data stored on server. Then we calculate the maximum of clustering accuracy between the ground truth labels and the label assignment with the estimated k over the set of permutations via Hungarian algorithm Kuhn (1955). Last, we measure the clustering accuracy for \"All\", \"Old\" and \"New\" categories, respectively." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b7", "b38", "b7" ], "table_ref": [], "text": "On each client, we adopt the same backbone network, a ViT Dosovitskiy et al. (2021) pre-trained by DINO Caron et al. (2021), and use its [CLS] token for GCL learning and new category discovery. Following GCD Vaze et al. (2022), the instance contrastive learning is implemented by a projection head with 65,536 dimensions and two randomly-augmented views of an image. For a fair comparison, we follow Vaze et al. (2022) and set λ, τ S and τ L to 0.35, 0.07 and 0.05, respectively. We fine-tune only the last block of the ViT Dosovitskiy et al. (2021) with an initial learning rate of 0.1 and upload it to the central server in each communication. The projection head and global-local GMMs are trained with an initial learning rate of 0.01.\nAll models are optimized by SGD Qian (1999) for 200 epochs with a cosine annealing schedule. The size of the mini-batch is set to 128. The hyper-parameters α, γ, m and N S are set to 0.01, 0.9, 0.3 and 1 in all experiments." 
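For the evaluation protocol above, the clustering accuracy with Hungarian matching is a standard computation; the sketch below uses SciPy's `linear_sum_assignment` in place of the Hungarian algorithm of Kuhn (1955) and is independent of any AGCL-specific code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    # Maximum accuracy over all one-to-one matchings between predicted clusters and classes.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                                  # co-occurrence counts
    rows, cols = linear_sum_assignment(cost, maximize=True)
    mapping = dict(zip(rows, cols))                      # predicted cluster -> ground-truth class
    return float(np.mean([mapping[p] == t for p, t in zip(y_pred, y_true)]))
```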
}, { "figure_ref": [], "heading": "Performance Evaluation", "publication_ref": [], "table_ref": [], "text": "Since this work is the first to explore GCD tasks under a federated learning challenge, there is no Fed-GCDspecific method used for comparison. Moreover, to the best of our knowledge, GCD Vaze et al. (2022) is the only state-of-the-art GCD method with official codes. Thus, we first adapt the GCD method into our Fed-GCD task as the strong baseline (\"FedAvg + GCD\"). Then, we separately implement the AGCL without global GCL (\"FedAvg + GCL\") and the full AGCL (\"FedAvg + AGCL\") to investigate the effects of our global-local GCL. Next, to provide a reference performance, we evaluate the centralized training performance of GCD (\"Centralized-GCD\") and GCL (\"Centralized-GCL\"). Finally, we adapt AGCL in the advanced heterogeneous federated learning framework Li et al. (2020b) (\"FedProx + AGCL\"), for a comprehensive comparison. We summarize the experimental results in Tabs. 3 and 4 and the main conclusions below.\nComparison on Generic Datasets. Analyzing the results in Tab. 3, we draw two-fold conclusions: 1) A significant accuracy drop between the \"Centralized-GCD\" and the \"FedAvg + GCD\" setup, especially by 7.6% in the EH setting on CIFAR100; 2) our AGCL consistently outperforms other setups. In the NH setting, AGCL outperforms the \"FedAvg+GCD\" by 6.5% on CIFAR-100 for \"All\" classes. Although there is no category shared across all clients in the EH setting, AGCL still achieves consistent improvements compared with the \"FedAvg+GCD\" by 6.9% on CIFAR-100, and 5.7% on ImageNet-100 for \"All\" classes.\nComparison on Fine-Grained Datasets. The experimental results in Tab. 4 show that AGCL outperforms other methods for \"All\" classes. Specifically, AGCL outperforms the baseline method on CUB-200 for \"All\" classes by 8.9% on the NH setting and by 9.8% on the EH setting. Compared with \"FedAvg + GCD\", AGCL shows less performance decrease under decentralized training challenge, owing to the advantage of leveraging the aggregated category information among different fine-grained clients.\nSummary. The experimental results demonstrate that 1) the proposed Fed-GCD task is challenging due to the severe data heterogeneity, which results in a large accuracy degradation between the centralized and decentralized training; 2) the fine-graded Fed-GCD exists a larger performance degradation caused by decentralized training, compared to the generic Fed-GCD. This is because the differences between different classes in fine-grained datasets are subtle and understanding the fine-grained visual is more challenging for GCD; 3) AGCL achieves consistent improvement in all settings. Benefiting from aggregating different categories scattered on clients, AGCL achieves better performance, especially on fine-grained tasks in the EH setting; 4) we verify the superiority of FedProx Li et al. (2020a) on more heterogeneous federated learning." }, { "figure_ref": [], "heading": "Effectiveness of Each Component of AGCL", "publication_ref": [], "table_ref": [], "text": "To verify the effectiveness of each component of AGCL, we conduct five group experiments on both CUB-200 Wah et al. (2011) andPet Parkhi et al. (2012) datasets, as shown in Tab. 5. The method (a) is the baseline method, i.e., \"FedAvg + GCD\".\nEffectiveness of local GCL. 
The results of the experiment (a) and (c) indicate that GCL outperforms the baseline with instance-level supervision by a large margin, demonstrating the importance of class-level supervision in GCD. Especially for the accuracy of new classes, GCL outperforms the baseline by 10.9% on the CUB-200 dataset.\nEffectiveness of regularization loss in Eq. (4.4). Comparing the experiment (b) and (c), we can find that enforcing the regularizing loss can achieve consistent improvement. This is because the regularization loss can encourage models to avoid trivial sub-optimal solutions.\nEffectiveness of CSA in Sec. 4.2. Based on the results of the experiment (d-e), we experimentally demonstrate that CSA can associate heterogeneous category knowledge even without commonly-shared categories in EH setting. The associated knowledge contained in the global GMM complements representation learning based on local GMM, thereby improving the model's ability to discover new categories." }, { "figure_ref": [ "fig_9", "fig_9" ], "heading": "Hyper-Parameter Analyses", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss the impacts of the hyper-parameters of AGCL on the CUB-200 dataset under the EH setting, including loss weights (α and γ), the margin parameter of GCL loss (m), and the number of sampling from each potential category of local GMMs (N S ) in CSA. For each experiment, we vary the value of the studied parameter while fixing the others with default values.\nImpact of regularization weight in Eq. (4.5) is illustrated in Fig. 3 (a). A large weight may lead to worse performance compared to the configuration without the regularization loss (see the dashed line on Fig. 3 (a)). We empirically set α as 0.01 to achieve optimal performance." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Impact of trade-off factor in Eq. (4.6) is illustrated in Fig. 3 (b).\nBased on the results, we find that local GCL plays a dominant role in training a discriminative representation. Meanwhile, introducing relatively few knowledge from the global GMM can complement the contrastive learning based on local GMMs. On the contrary, when AGCL mainly relies on the global GMM, the performance of local training will be largely degraded (e.g., γ=0.6).\nImpact of margin parameter in Eq. (4.2) is illustrated in Fig. 3 (c). We find that using margin consistently improves accuracy compared with the configuration without margin (see the dashed line on Fig. 3 (c)). We choose the optimal m=0.3 in all experiments.\nImpact of sampling parameter in CSA is illustrated in Fig. 3 (d). The results are summarized as: 1) our CSA can effectively aggregate heterogeneous knowledge even without sampling (see the dashed line on Fig. 3 (d)); 2) sampling only one sample for each category can further improve the performance; 3) sampling more samples leads to worse performance. Thus, N S is set to 1 in all experiments." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a new Federated Generalized Category Discovery (Fed-GCD) task, based on the practical requirement of decentralized training trends. To handle this task, we propose a novel Associated Gaussian Contrastive Learning (AGCL) framework specifically designed to overcome the unique challenges posed by Fed-GCD. Moreover, we build a benchmark based on six visual datasets to facilitate the study of Fed-GCD. 
Extensive experiments show that AGCL outperforms the FedAvg-based baseline on all datasets. In future work, we will attempt to relax the requirement of storing labeled data on the central server, to better match realistic Fed-GCD scenarios." } ]
Generalized category discovery (GCD) aims at grouping unlabeled samples from known and unknown classes, given labeled data of known classes. To meet the recent decentralization trend in the community, we introduce a practical yet challenging task, namely Federated GCD (Fed-GCD), where the training data are distributively stored in local clients and cannot be shared among clients. The goal of Fed-GCD is to train a generic GCD model by client collaboration under the privacy-protected constraint. The Fed-GCD leads to two challenges: 1) representation degradation caused by training each client model with fewer data than centralized GCD learning, and 2) highly heterogeneous label spaces across different clients. To this end, we propose a novel Associated Gaussian Contrastive Learning (AGCL) framework based on learnable GMMs, which consists of a Client Semantics Association (CSA) and a global-local GMM Contrastive Learning (GCL). On the server, CSA aggregates the heterogeneous categories of local-client GMMs to generate a global GMM containing more comprehensive category knowledge. On each client, GCL builds class-level contrastive learning with both local and global GMMs. The local GCL learns robust representation with limited local data. The global GCL encourages the model to produce more discriminative representation with the comprehensive category relationships that may not exist in local data. We build a benchmark based on six visual datasets to facilitate the study of Fed-GCD. Extensive experiments show that our AGCL outperforms the FedAvg-based baseline on all datasets.
Federated Generalized Category Discovery
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of the proposed Fed-GCD with the case of global bird species discovery. In Fed-GCD, the data are distributively collected from the different local stations (clients) over the world, which are partially annotated. Each client includes client-specific categories and may share some common categories with the other clients. Moreover, the raw data stored in local clients are not allowed to share with the central server or other clients, due to data privacy. The goal of Fed-GCD is to collaboratively train a generic GCD model under the federated privacy constraint, and then utilize it to discover novel categories in the unlabeled data on clients or the server during testing.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "t e x i t s h a 1 _ b a s e 6 4 = \" n + J E Z N n P Y N D 3 F b h J C 3 5 I G 1 c t 2 X w = \" > A A A B / H i c b V D L S s N A F L 2 p r 1 p f 0 S 7 d B I v g q i T i a 1 n U h Q s X F e w D 2 h g m 0 0 k 7 d D I J M x M h h P g r b l w o 4 t Y P c e f f O G m 7 0 N Y D A 4 d z 7 u W e O X 7 M q", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "H 9 5 k b S P 6 8 5 Z / f T u p N a 4 n N V R h n 0 4 g C N w 4 B w a c A N N a A G G F J 7 h F d 6 M J + P F e D c + p q M l Y 7 Z T h T 8 w P n 8 A c E y V T A = = < / l a t e x i t > D L n < l a t e x i t s h a 1 _ b a s e 6 4 = \" Q b j / x T g T H N G 6 c R T S w p p Y y T 0 r U u g = \" > A A A B / H i c b V D L S s N A F L 2 p r 1 p f 0 S 7 d B I v g q i T i a 1 l 0 o Q s X F e w D 2 h g m 0 0 k 7 d D I J M x M h h P g r b l w o 4 t Y P c e f f O G m 7 0 N Y D A 4 d z 7 u W e O X 7 M q", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "H 9 5 k b S P 6 8 5 Z / f T u p N a 4 n N V R h n 0 4 g C N w 4 B w a c A N N a A G G F J 7 h F d 6 M J + P F e D c + p q M l Y 7 Z T h T 8 w P n 8 A c E y V T A = = < / l a t e x i t > D L n (e) Global Category Discovery Unlabeled Labeled < l a t e x i t s h a 1 _ b a s e 6 4 = \" E p a / m f t 9 I c m 7 R S 7 I l c S v u N W W G 0", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "t e x i t s h a 1 _ b a s e 6 4 = \" i Z i D P h 8 G o G R U X O 4 9 3 x y r U C D l b Y c = \" > A A A B + H i c b V D L S s N A F L 2 p r 1 o f j b p 0 M 1 g E V y U R X 8 u i g i 4 r 2 A e 0 s U y m k 3 b o Z B J m J k I N + R I 3 L h R x 6 6 e 4 8 2 + c t F 1 o 6 4 G B w z n 3 c s 8 c P + Z M a c f 5 t g p L y y u r a 8 X 1 0 s b m 1 n b Z 3 t l t q i i R h D Z I x C P Z 9 r G i n A n a 0 E x z 2 o 4 l x a H P a c s f X e V + 6 5 F K x S J x r 8 c x 9 U I 8 E C x g B G s j 9 e x y N 8 R 6 S D B P r 7 O H 9 C b r 2 R W n 6 k y A F o k 7 I x W Y o d 6 z v 7 r 9", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "t z 5 L y + S 5 n H V P a u e 3 p 1 U a p e z O o q w D w d wB C 6 c Q w 1 u o Q 4 N I J D A M 7 z C m / V k v V j v 1 s d 0 t G D N d v b g D 6 z P H w o N k 1 o = < / l a t e x i t > D G < l a t e x i t s h a 1 _ b a s e 6 4 = \" j c W D K q Q o b V i b Q B T Q 0 t t W C Q R 8 L k 0 = \" > A A A B 8 X i c b V D L Sg N B E O y N r x h f U Y 9 e B o P g K e y K r 2 P Q g x 4 j 5 I X J G m Y n n W T I 7 O w y M y u E J X / h x Y M i X v 0 b b / 6 N k 2 Q P G i 1 o K K q 6 6 e 4 K Y s G 1 c d 0 v J 7 e 0 v L K 6 l l 8 v b G x u b e 8 U d / c a O k o U w z q 
L R K R a A d U o u M S 6 4 U Z g K 1 Z I w 0 B g M x h d T / 3 m I y r N I 1 k z 4 x j 9 k A 4 k 7 3", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "J n 6 c y K l o d b j M L C d I T V D v e h N x f + 8 d m L 6 l 3 7 K Z Z w Y l G y + q J 8 I Y i I y f Z / 0 u E J m x N g S y h S 3 t x I 2 p I o y Y 0 M q 2 B C 8 x Z f / k s Z J 2 T s v n 9 2 d l i p X W R x 5 O I B D O A Y P L q A C t 1 C F O j C Q 8 A Q v 8 O p o 5 9 l 5 c 9 7 n r T k n m 9 m H X 3 A + v g G H u 5 D X < / l a t e x i t > ⇥ G < l a t e x i t s h a 1 _ b a s e 6 4 = \" j c W D", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Diagram of the proposed federated Gaussian contrastive learning (FGCL) framework. We first apply FedAvg McMahan et al. (2017) to aggregate the uploaded local models, resulting in a global model that will be distributed to all clients. Then, after leveraging the distributed models to extract image features, local clients are required to cluster these features and initialize local GMMs. Next, the local GMMs are uploaded to the central server, and aggregated by the proposed CSA, to generate a global GMM before local training. Later, the server distributes the global GMM to each client. Based on the global-local GMMs, client models are collaboratively optimized by the proposed GCL. Finally, a generic model is trained for global category discovery.", "figure_data": "", "figure_id": "fig_7", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "To comprehensively evaluate the performance of Fed-GCD models, we reorganize three commonlyused generic image classification datasets (i.e., CIFAR-10 Krizhevsky et al. (2009), CIFAR-100 Krizhevsky et al. (2009) and ImageNet-100 Vaze et al. (2022)) and three more challenging fine-grained image classification datasets (i.e., CUB-200 Wah et al. (2011), Stanford Cars Krause et al. (2013), and Oxford-IIIT Pet Parkhi et al. (2012)) to construct a new Fed-GCD benchmark. For each dataset, first, we sample a subset of half the classes as \"Old\" categories in the original training set, and 50% of instances of each labeled class are drawn to form the labeled set, and all the remaining data form the unlabeled set. With the same rate of labeled-unlabeled splitting, we split the original testing set into labeled and unlabeled subsets for class number estimation and GCD testing on server. Then, we further leverage the β-Dirichlet distribution Hsu et al. (2020) to split the training set into N L subsets, where the N L subsets are regarded as local datasets individually stored in each client. We set N L =5 in all experiments. Experiments with different values of N L are studied in the supplementary materials.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Impact of hyper-parameters. The clustering accuracy on \"All\" categories is reported.", "figure_data": "", "figure_id": "fig_9", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Comparison between different federated learning (FL) setups. \"FS\", \"SS\" and \"SE\" denote fullysupervised, self-supervised and semi-supervised, respectively. learning into the FL framework. 
Later, since there is often partially-labeled data in real-world scenarios, some semi-supervised FL approachesLin et al. (2021);Kim et al. (2022) are proposed to exploit the partial supervision and learn better representations with few annotation costs. As summarized in Tab. 1, these works assume local clients share a common label space that is infeasible for GCD tasks. In contrast, our Fed-GCD is challenged by more severe issues of data heterogeneity, because the label space on clients may be non-overlapping or clients share only a few classes with each other.", "figure_data": "FL SetupOut of Category Distribution Annotation on ClientFS✗Fully LabeledSS Zhang et al. (2020)✗UnlabeledSE Lin et al. (2021); Kim et al. (2022)✗Partially LabeledFed-GCD✓Partially Labeledself-supervised", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Jeong et al. (2021) (semi-FL) setting that assumes both labeled and unlabeled data belong to known categories and share a common label space, Fed-GCD is more challenging due to highly-heterogeneous data issues attributed to inconsistent label spaces between clients and additional difficulties caused by open-set learning on local data. In light of this, Fed-GCD aims to 1) improve the local GCD model's representation learning ability on limited local data in open-set learning scenarios, and 2) associate the heterogeneous local label spaces to provide comprehensive category knowledge for local", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Based on the above analyses, we propose a novel Associated Gaussian contrastive learning (AGCL) framework to accomplish efficient Fed-GCD. AGCL consists of a global-local GMM Contrastive Learning (GCL) on local clients and a client semantics association (CSA) on the central server. The former enforces a class-level contrastive learning in local training by jointly using a global GMM and a local one, where the local GMM is created by clustering on local data and the global GMM is distributed from the central server. The latter serves to aggregate heterogeneous category knowledge contained in the local GMMs following a client-agnostic manner, and generates the global GMM to provide comprehensive category relationship for local training. The goal of AGCL is to improve representation learning by enforcing class-aware GCL and associating related semantic knowledge scattered across clients. As empirically demonstrated in Fei et al. (2022); Zhang et al. (2022a); Li et al. (2021b), class-level or prototypical contrastive learning is efficient for learning a clustering-friendly representation. Recently, openset contrastive learningSun & Li (2022) further indicates that such representation learning can significantly improve the GCD model's abilities to discover both known and unknown categories. However, these methods represent a class by using only the center or the mean of the class, which is insufficient and vulnerable to wrong pseudo-labeling caused by inaccurate clustering. To address this issue, we propose to employ a classical Gaussian mixture model (GMM) to model potential cluster distributions, and then perform class-level contrastive learning across the components of the GMM.", "figure_data": "4.1 Gaussian Contrastive Learning", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The statistics of our Fed-GCD benchmark. 
We simulate different degrees of data heterogeneity in real-world Fed-GCD scenarios by adjusting the β of parametric Dirichlet distribution to split the local training sets among clients.", "figure_data": "Client N L =5ServerDatasetβ# Labelled Classes # Unlabelled Classes # Classes Shared AcrossLabelledUnlabelledMaxMinMaxMinall clients ≥2 clients # Classes # Images # Classes # ImagesCIFAR10 Krizhevsky et al. (2009)0.2 0.055 54 110 109 42 05 45 52500 250010 107500 7500CIFAR100 Krizhevsky et al. (2009)0.2 0.0550 4433 1773 40100 9016 149 4350 502500 2500100 1007500 7500ImageNet-100 Deng et al. (2009)0.2 0.0550 4437 17100 9070 4016 150 4750 501250 1250100 1003750 3750CUB-200 Wah et al. (2011)0.2 0.05100 8936 2598 178200 595 097 84100 1001430 1430200 2004362 4362SCars Krause et al. (2013)0.2 0.0598 8747 29196 17795 575 096 8798 982001 2001196 1966040 6040Pet Parkhi et al. (2012)0.2 0.0519 1812 522 4337 163 019 1719 19940 94037 372729 2729", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results on generic datasets with two different degrees of data heterogeneity.", "figure_data": "NH setting (β = 0.2)EH setting (β = 0.05)SetupCIFAR10CIFAR100ImageNet-100CIFAR10CIFAR100ImageNet-100All Old New All Old New AllOld New AllOld New All Old New All Old NewCentralized-GCD 83.6 85.8 82.0 54.9 56.1 53.7 72.1 80.7 67.5 83.6 85.8 82.0 54.9 56.1 53.7 72.1 80.7 67.5Centralized-GCL 86.7 86.7 86.7 58.5 57.2 58.1 76.1 83.7 68.4 86.7 86.7 86.7 58.5 57.2 58.1 76.1 83.7 68.4FedAvg + GCD80.7 82.3 80.3 49.6 52.1 49.3 69.8 77.1 65.7 78.7 80.1 78.3 47.3 49.2 45.9 66.4 74.8 62.1FedAvg + GCL83.2 84.9 82.8 54.1 55.7 54.0 74.1 81.8 67.3 82.2 82.4 81.9 52.1 53.2 51.9 72.5 79.8 65.3FedAvg + AGCL 84.7 85.5 84.6 56.1 56.8 55.3 74.8 80.2 69.8 82.5 83.4 82.2 54.2 54.6 54.0 73.1 78.1 67.0FedProx + AGCL 84.8 85.8 84.7 55.9 56.5 54.9 74.7 80.3 69.5 83.0 84.1 82.8 54.7 55.1 54.2 74.9 78.8 67.7", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results on fine-grained datasets with two different degrees of data heterogeneity. Pet All Old New All Old New All Old New All Old New All Old New All Old New", "figure_data": "NH setting (β = 0.2)EH setting (β = 0.05)SetupCUB-200Stanford-CarsOxford-PetCUB-200Stanford-CarsOxford-", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The effectiveness of loss functions of the FedAvg based-AGCL on the EH setting (β=0.05).", "figure_data": "IndexComponentCUB-200 Wah et al. (2011) Oxford-Pet Parkhi et al. (2012)L I L L gmmL reg L G gmmAll OldNewAll OldNewa)✓43.3 52.838.972.1 76.471.5b)✓48.9 50.548.576.8 78.575.1c)✓✓50.6 51.849.878.0 80.777.4d)✓✓✓52.2 53.152.079.5 81.578.6e)✓✓✓✓53.1 52.954.281.4 82.080.7", "figure_id": "tab_9", "figure_label": "5", "figure_type": "table" } ]
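The β-Dirichlet client split used to build the benchmark statistics above can be reproduced approximately with the sketch below: per-class proportions are drawn from Dir(β) and distributed over N_L clients, so a smaller β yields more heterogeneous (EH-like) label distributions. Function and variable names are illustrative, not taken from the authors' code.

```python
import numpy as np

def dirichlet_split(labels, n_clients=5, beta=0.2, seed=0):
    # Partition sample indices into n_clients subsets with Dirichlet(beta) class proportions.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        proportions = rng.dirichlet(beta * np.ones(n_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in zip(client_indices, np.split(idx, cuts)):
            client.extend(part.tolist())
    return [np.array(ci) for ci in client_indices]
```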
Nan Pu; Zhun Zhong; Xinyuan Ji; Nicu Sebe
[ { "authors": "Durmus Alp; Emre Acar; Yue Zhao; Ramon Matas Navarro; Matthew Mattina; Paul N Whatmough; Venkatesh Saligrama", "journal": "", "ref_id": "b0", "title": "Federated learning based on dynamic regularization", "year": "2021" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b1", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "", "ref_id": "b2", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Haoang Chi; Feng Liu; Wenjing Yang; Long Lan; Tongliang Liu; Bo Han; Gang Niu; Mingyuan Zhou; Masashi Sugiyama", "journal": "", "ref_id": "b3", "title": "Meta discovery: Learning to discover novel classes given very limited data", "year": "2022" }, { "authors": "Ching-Yao Chuang; Joshua Robinson; Yen-Chen Lin; Antonio Torralba; Stefanie Jegelka", "journal": "Advances in neural information processing systems", "ref_id": "b4", "title": "Debiased contrastive learning", "year": "2020" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "", "ref_id": "b5", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Enmao Diao; Jie Ding; Vahid Tarokh", "journal": "", "ref_id": "b6", "title": "Heterofl: Computation and communication efficient federated learning for heterogeneous clients", "year": "2021" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b7", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Yixin Fei; Zhongkai Zhao; Siwei Yang; Bingchen Zhao", "journal": "BMVA Press", "ref_id": "b8", "title": "Xcon: Learning with experts for fine-grained category discovery", "year": "2022" }, { "authors": "Enrico Fini; Enver Sangineto; Stéphane Lathuilière; Zhun Zhong; Moin Nabi; Elisa Ricci", "journal": "", "ref_id": "b9", "title": "A unified objective for novel class discovery", "year": "2021" }, { "authors": "Kai Han; Andrea Vedaldi; Andrew Zisserman", "journal": "", "ref_id": "b10", "title": "Learning to discover novel visual categories via deep transfer clustering", "year": "2019" }, { "authors": "Kai Han; Sylvestre-Alvise Rebuffi; Sebastien Ehrhardt; Andrea Vedaldi; Andrew Zisserman", "journal": "IEEE TPAMI", "ref_id": "b11", "title": "Autonovel: Automatically discovering and learning novel visual categories", "year": "2021" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b12", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Tzu-Ming Harry Hsu; Hang Qi; Matthew Brown", "journal": "Springer", "ref_id": "b13", "title": "Federated visual classification with real-world data distribution", "year": "2020" }, { "authors": "Yen-Chang Hsu; Zhaoyang Lv; Zsolt Kira", "journal": "", "ref_id": "b14", "title": "Learning to cluster in order to transfer across domains and tasks", "year": "2018" }, { "authors": "Yen-Chang Hsu; Zhaoyang Lv; Joel Schlosser; Phillip Odom; Zsolt Kira", "journal": "", "ref_id": "b15", "title": "Multi-class classification without 
multi-class labels", "year": "2018" }, { "authors": "Ashish Jaiswal; Ramesh Ashwin; Mohammad Zaki Babu; Debapriya Zadeh; Fillia Banerjee; Makedon", "journal": "Technologies", "ref_id": "b16", "title": "A survey on contrastive self-supervised learning", "year": "2020" }, { "authors": "Wonyong Jeong; Jaehong Yoon; Eunho Yang; Sung Ju Hwang", "journal": "", "ref_id": "b17", "title": "Federated semi-supervised learning with inter-client consistency & disjoint learning", "year": "2021" }, { "authors": "Sujoy Joseph; Gaurav Paul; Soma Aggarwal; Piyush Biswas; Kai Rai; Han; Vineeth N Balasubramanian", "journal": "Springer", "ref_id": "b18", "title": "Novel class discovery without forgetting", "year": "2022" }, { "authors": "Mikhail Khodak; Maria-Florina Balcan; Ameet Talwalkar", "journal": "NeurIPS", "ref_id": "b19", "title": "Adaptive gradient-based meta-learning methods", "year": "2019" }, { "authors": "Prannay Khosla; Piotr Teterwak; Chen Wang; Aaron Sarna; Yonglong Tian; Phillip Isola; Aaron Maschinot; Ce Liu; Dilip Krishnan", "journal": "Advances in neural information processing systems", "ref_id": "b20", "title": "Supervised contrastive learning", "year": "2020" }, { "authors": "Woojung Kim; Keondo Park; Kihyuk Sohn; Raphael Shu; Hyung-Sin Kim", "journal": "", "ref_id": "b21", "title": "Federated semi-supervised learning with prototypical networks", "year": "2022" }, { "authors": "Jonathan Krause; Michael Stark; Jia Deng; Li Fei-Fei", "journal": "", "ref_id": "b22", "title": "3d object representations for fine-grained categorization", "year": "2013" }, { "authors": "Alex Krizhevsky; Geoffrey Hinton", "journal": "", "ref_id": "b23", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": " Harold W Kuhn", "journal": "Naval research logistics quarterly", "ref_id": "b24", "title": "The hungarian method for the assignment problem", "year": "1955" }, { "authors": "Chenglin Li; Di Niu; Bei Jiang; Xiao Zuo; Jianming Yang", "journal": "", "ref_id": "b25", "title": "Meta-har: Federated representation learning for human activity recognition", "year": "2021" }, { "authors": "Daliang Li; Junpu Wang", "journal": "", "ref_id": "b26", "title": "Fedmd: Heterogenous federated learning via model distillation", "year": "2019" }, { "authors": "Junnan Li; Pan Zhou; Caiming Xiong; Steven C H Hoi", "journal": "", "ref_id": "b27", "title": "Prototypical contrastive learning of unsupervised representations", "year": "2021" }, { "authors": "Tian Li; Anit Kumar Sahu; Ameet Talwalkar; Virginia Smith", "journal": "IEEE signal processing magazine", "ref_id": "b28", "title": "Federated learning: Challenges, methods, and future directions", "year": "2020" }, { "authors": "Tian Li; Anit Kumar Sahu; Manzil Zaheer; Maziar Sanjabi; Ameet Talwalkar; Virginia Smith", "journal": "", "ref_id": "b29", "title": "Federated optimization in heterogeneous networks", "year": "2020-03-02" }, { "authors": "Xiaoxiao Li; Meirui Jiang; Xiaofei Zhang; Michael Kamp; Qi Dou", "journal": "", "ref_id": "b30", "title": "Fedbn: Federated learning on non-iid features via local batch normalization", "year": "2021" }, { "authors": "Haowen Lin; Jian Lou; Li Xiong; Cyrus Shahabi", "journal": "", "ref_id": "b31", "title": "Semifed: Semi-supervised federated learning with consistency and pseudo-labeling", "year": "2021" }, { "authors": "Brendan Mcmahan; Eider Moore; Daniel Ramage; Seth Hampson; Blaise Aguera Y Arcas", "journal": "PMLR", "ref_id": "b32", "title": "Communication-efficient learning of deep networks 
from decentralized data", "year": "2017" }, { "authors": "Andrea Omkar M Parkhi; Andrew Vedaldi; Zisserman; Jawahar", "journal": "", "ref_id": "b33", "title": "Cats and dogs", "year": "2012" }, { "authors": " Ning Qian", "journal": "Neural networks", "ref_id": "b34", "title": "On the momentum term in gradient descent learning algorithms", "year": "1999" }, { "authors": "Subhankar Roy; Mingxuan Liu; Zhun Zhong; Nicu Sebe; Elisa Ricci", "journal": "Springer", "ref_id": "b35", "title": "Class-incremental novel class discovery", "year": "2022" }, { "authors": "Saquib Sarfraz; Vivek Sharma; Rainer Stiefelhagen", "journal": "", "ref_id": "b36", "title": "Efficient parameter-free clustering using first neighbor relations", "year": "2019" }, { "authors": "Yiyou Sun; Yixuan Li", "journal": "", "ref_id": "b37", "title": "Opencon: Open-world contrastive learning", "year": "2022" }, { "authors": "Sagar Vaze; Kai Han; Andrea Vedaldi; Andrew Zisserman", "journal": "", "ref_id": "b38", "title": "Generalized category discovery", "year": "2022" }, { "authors": "Catherine Wah; Steve Branson; Peter Welinder; Pietro Perona; Serge Belongie", "journal": "", "ref_id": "b39", "title": "The caltech-ucsd birds-200-2011 dataset", "year": "2011" }, { "authors": "Xin Wen; Bingchen Zhao; Xiaojuan Qi", "journal": "", "ref_id": "b40", "title": "Parametric classification for generalized category discovery: A baseline study", "year": "2022" }, { "authors": "Muli Yang; Yuehua Zhu; Jiaping Yu; Aming Wu; Cheng Deng", "journal": "", "ref_id": "b41", "title": "Divide and conquer: Compositional experts for generalized novel class discovery", "year": "2022" }, { "authors": "Qing Yu; Daiki Ikami; Go Irie; Kiyoharu Aizawa", "journal": "AAAI", "ref_id": "b42", "title": "Self-labeling framework for novel category discovery over domains", "year": "2022" }, { "authors": "Fengda Zhang; Kun Kuang; Zhaoyang You; Tao Shen; Jun Xiao; Yin Zhang; Chao Wu; Yueting Zhuang; Xiaolin Li", "journal": "", "ref_id": "b43", "title": "Federated unsupervised representation learning", "year": "2020" }, { "authors": "Lu Zhang; Lu Qi; Xu Yang; Hong Qiao; Ming-Hsuan Yang; Zhiyong Liu", "journal": "", "ref_id": "b44", "title": "Automatically discovering novel visual categories with self-supervised prototype learning", "year": "2022" }, { "authors": "Xinwei Zhang; Jianwen Jiang; Yutong Feng; Zhi-Fan Wu; Xibin Zhao; Hai Wan; Mingqian Tang; Rong Jin; Yue Gao; ; ; Bingchen Zhao; Kai Han", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b45", "title": "Novel visual category discovery with dual ranking statistics and mutual knowledge distillation", "year": "2021" }, { "authors": "Bingchen Zhao; Xin Wen; Kai Han", "journal": "", "ref_id": "b46", "title": "Learning semi-supervised gaussian mixture models for generalized category discovery", "year": "2023" }, { "authors": "Zhun Zhong; Enrico Fini; Subhankar Roy; Zhiming Luo; Elisa Ricci; Nicu Sebe", "journal": "", "ref_id": "b47", "title": "Neighborhood contrastive learning for novel class discovery", "year": "2021" }, { "authors": "Zhun Zhong; Linchao Zhu; Zhiming Luo; Shaozi Li; Yi Yang; Nicu Sebe", "journal": "", "ref_id": "b48", "title": "Openmix: Reviving known knowledge for discovering novel visual categories in an open world", "year": "2021" } ]
[ { "formula_coordinates": [ 5, 98.21, 113.19, 125.73, 122.01 ], "formula_id": "formula_0", "formula_text": "(b) Local GMM Initialization FedAvg < l a t e x i t s h a 1 _ b a s e 6 4 = \" M O m A I f 9 2 C G z J 0 S z G 9 4 T q O d c h D L 4 = \" > A A A B 9 X i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 B I v g q S T i 1 7 H o x Y O H C v 2 C N i 2 b 7 b R d u t m E 3 Y 1 S Q v 6 H F w + K e P W / e P P f u G 1 z 0 N Y H A 4 / 3 Z p i Z 5 0 e c K e 0 4 3 1 Z u Z X V t f S O / W d j a 3 t n d K + 4 f N F Q Y S 4 p 1 G v J Q t n y i k D O B d c 0 0 x 1 Y k k Q Q + x 6 Y / v p 3 6 z U e U i o W i p i c R e g E Z C j Z g l G g j d T u 1 E W r S T e 7 T X i L S X r H k l J 0 Z 7 G X i Z q Q E G a q 9 4 l e n H 9 I 4 Q K E p J 0 q 1 X S f S X k K k Z p R j W u j E C i N C x 2 S I b U M F C V B 5 y e z q 1 D 4 x S t 8 e h N K U 0 P Z M / T 2 R k E C p S e C b z o D o k V r 0 p u J / X j v W g 2 s v Y S K K N Q o 6 X z S I u a 1 D e x q B 3 W c S q e Y T Q w i V z N x q 0 x G R h G o T V M G E 4 C 6 + v E w a Z 2 X 3 s n z x c F 6 q 3 G R x 5 O E I j u E U X L i C C t x B F e p A Q c I z v M K b 9 W S 9 W O / W x 7 w 1 Z 2 U z h / A H 1 u c P 5 E q S y Q = = < / l a t e x i t > ⇥ L n < l a t e x i t s h a 1 _ b a s e 6 4 = \" V z v r V N D k b I 1 8 x e b B 2 1 Y L 5 A l 1 Q e A = \" > A A A B 9 X i c b V D J S g N B E K 2 J W 4 x b 1 K O X x i B 4 C j P i d g x 6 8 e A h Q j Z I J q G n U 0 m a 9 C x 0 9 y h h m P / w 4 k E R r / 6 L N / / G T j I H T X x Q 8 H i v i q p 6 X i S 4 0 r b 9 b e V W V t f W N / K b h a 3 t n d 2 9 4 v 5 B Q 4 W x Z F h n o Q h l y 6 M K B Q + w r r k W 2 I o k U t 8 T 2 P T G t 1 O / + Y h S 8 T C o 6 U m E r k + H A R 9 w R r W R u p 3 a C D X t J U 7 a T e 7 T X r F k l + 0 Z y D J x M l K C D N V e 8 a v T D 1 n s Y 6 C Z o E q 1 H T v S b k K l 5 k x g W u j E C i P K x n S I b U M D 6 q N y k 9 n V K T k x S p 8 M Q m k q 0 G S m / p 5 I q K / U x P d M p 0 / 1 S C 1 6 U / E / r x 3 r w b W b 8 C C K N Q Z s v m g Q C 6 J D M o 2 A 9 L l E p s X E E M o k N 7 c S N q K S M m 2 C K p g Q n M W X l 0 n j r O x c l i 8 e z k u V m y y O P B z B M Z y C A 1 d Q g T u o Q h 0 Y S H i G V 3 i z n q w X 6 9 3 6 m L f m r G z m E P 7 A + v w B h z G S j A = = < / l a t e x i t > ⇥ L 1 < l a t e x i t s h a _ b a s e = \" N P t N / J m E A G + H b f Y i K r X t I k = \" > A A A B X i c b V D J S g N B E K x j X G L e v T S G A R P Y S" }, { "formula_coordinates": [ 5, 124.85, 112.95, 14.65, 13.13 ], "formula_id": "formula_1", "formula_text": "V G F g o d Y w L b M U S a e A J b H q j n f f E S p e B T W D h G N C D k P u c U W k b q c R E a X n S T e n v U L R L t k z k G X i Z K Q I G a q w l e n H E k w F A z Q Z V q O a s Z R K z Z n A S b T K I w p G E B t g N a Y D K T W d X T i p U f r E j S p U J O Z + n s i p Y F S A z n Q H V Q X o T c X / v H a i / W s W G c a A z Z f J G f C K I j M o A L l E p s X Y E M o k N c S N q S S M m C y p s Q n M W X l m j X H I u S x c P X K T R Z H D o h B M A g S u o w B U o Q M J D z D K x Z T a L W z F t X r G z m C P A + v w B i L q S j Q = = < / l a t e x i t > ⇥ L" }, { "formula_coordinates": [ 5, 102.41, 184.46, 30.81, 6.22 ], "formula_id": "formula_2", "formula_text": "F S 2 / W 2 U l p Z X V t f K 6 5 W N z a 3 t H X N 3 r y 2 j R G D S w h G L R N d H k j D K S U t R x U g 3 F g S F P i M d f 3 x V + J 1 H I i S N + L 1 K Y + K G a M h p Q D F S W v L M a j 9 E a o Q R y 6 5 z L + P 5 Q 3 a b e 2 b N r t s T W I v E m Z E a z N D 0 z K / + I M J J S L j C D E n Z c + x Y u R k S i m J G 8 k o / k S R G e I y G p K c p R 
y G R b j Y J n 1 u H W h l Y Q S T 0 4 8 q a q L 8 3 M h R K m Y a + n i y i y n m v E P / z e o k K L t y M 8 j h R h O P p o S B h l o q s o g l r Q A X B i q W a I C y o z m r h E R I I K 9 1 X R Z f g z" }, { "formula_coordinates": [ 5, 204.56, 112.74, 273.5, 121.28 ], "formula_id": "formula_3", "formula_text": "F S 2 / W 2 U l p Z X V t f K 6 5 W N z a 3 t H X N 3 r y 2 j R G D S w h G L R N d H k j D K S U t R x U g 3 F g S F P i M d f 3 x V + J 1 H I i S N + L 1 K Y + K G a M h p Q D F S W v L M a j 9 E a o Q R y 6 5 z L + P 5 Q 3 a b e 2 b N r t s T W I v E m Z E a z N D 0 z K / + I M J J S L j C D E n Z c + x Y u R k S i m J G 8 k o / k S R G e I y G p K c p R y G R b j Y J n 1 u H W h l Y Q S T 0 4 8 q a q L 8 3 M h R K m Y a + n i y i y n m v E P / z e o k K L t y M 8 j h R h O P p o S B h l o q s o g l r Q A X B i q W a I C y o z m r h E R I I K 9 1 X R Z f g z H 9 5 k b S P 6 8 5 Z / f T u p N a 4 n N V R h n 0 4 g C N w 4 B w a c A N N a A G G F J 7 h F d 6 M J + P F e D c + p q M l Y 7 Z T h T 8 w P n 8 A d P O V T w = = < / l a t e x i t > G L n < l a t e x i t s h a 1 _ b a s e 6 4 = \" v l p B k E q u M k S p 4 6 f W 3 A 1 T 1 n i j 9 r E = \" > A A A B / H i c b V D L S s N A F L 2 p r 1 p f 0 S 7 d D B b B V U n E 1 7 L o Q h c u K t g H t D F M p p N 2 6 O T B z E Q I I f 6 K G x e K u P V D 3 P k 3 T t o u t P X A w O G c e 7 l n j h d z J p V l f R u l p e W V 1 b X y e m V j c 2 t 7 x 9 z d a 8 s o E Y S 2 S M Q j 0 f W w p J y F t K W Y 4 r Q b C 4 o D j 9 O O N 7 4 q / M 4 j F Z J F 4 b 1 K Y + o E e B g y n x G s t O S a 1 X 6 A 1 Y h g n l 3 n b m b n D 9 l t 7 p o 1 q 2 5 N g B a J P S M 1 m K H p m l / 9 Q U S S g I a K c C x l z 7 Z i 5 W R Y K E Y 4 z S v 9 R N I Y k z E e 0 p 6 m I Q 6 o d L J J + B w d a m W A / E j o F y o 0 U X 9 v Z D i Q M g 0 8 P V l E l f N e I f 7 n 9 R L l X z g Z C + N E 0 Z B M D / k J R y p C R R N o w A Q l i q e a Y C K Y z o r I C A t M l O 6 r o k u w 5 7 + 8 S N r H d f u s f n p 3 U m t c z u o o w z 4 c w B H Y c A 4 N u I E m t I B A C s / w C m / G k / F i v B s f 0 9 G S M d u p w h 8 Y n z 8 X T p U S < / l a t e x i t > G L 1 < l a t e x i t s h a 1 _ b a s e 6 4 = \" 4 X X S 6 L 1 v k 5 w p C 8 O p k P s H t C X N i L Y = \" > A A A B / H i c b V D L S s N A F L 3 x W e s r 2 q W b Y B F c l a T 4 W h Z d 6 M J F B f u A N o b J d N o O n U z C z E Q I I f 6 K G x e K u P V D 3 P k 3 T t o s t P X A w O G c e 7 l n j h 8 x K p V t f x t L y y u r a + u l j f L m 1 v b O r r m 3 3 5 Z h L D B p 4 Z C F o u s j S R j l p K W o Y q Q b C Y I C n 5 G O P 7 n K / c 4 j E Z K G / F 4 l E X E D N O J 0 S D F S W v L M S j 9 A a o w R S 6 8 z L 6 1 n D + l t 5 p l V u 2 Z P Y S 0 S p y B V K N D 0 z K / + I M R x Q L j C D E n Z c + x I u S k S i m J G s n I / l i R C e I J G p K c p R w G R b j o N n 1 l H W h l Y w 1 D o x 5 U 1 V X 9 v p C i Q M g l 8 P Z l H l f N e L v 7 n 9 W I 1 v H B T y q N Y E Y 5 n h 4 Y x s 1 R o 5 U 1 Y A y o I V i z R B G F B d V Y L j 5 F A W O m + y r o E Z / 7 L i 6 R d r z l n t d O 7 k 2 r j s q i j B A d w C M f g w D k 0 4 A a a 0 A I M C T z D K 7 w Z T 8 a L 8 W 5 8 z E a X j G K n A n 9 g f P 4 A G N e V E w = = < / l a t e x i t > G L 2 (d) Global-Local Gaussian Contrastive Learning < l a t e x i t s h a 1 _ b a s e 6 4 = \" Q b j / x T g T H N G 6 c R T S w p p Y y T 0 r U u g = \" > A A A B / H i c b V D L S s N A F L 2 p r 1 p f 0 S 7 d B I v g q i T i a 1 l 0 o Q s X F e w D 2 h g m 0 0 k 7 d D I J M x M h h P g r b l w o 4 t Y P c e f f O G m 7 0 N Y D A 4 d z 7 u W e O X 7 M 
q F S 2 / W 2 U l p Z X V t f K 6 5 W N z a 3 t H X N 3 r y 2 j R G D S w h G L R N d H k j D K S U t R x U g 3 F g S F P i M d f 3 x V + J 1 H I i S N + L 1 K Y + K G a M h p Q D F S W v L M a j 9 E a o Q R y 6 5 z L + P 5 Q 3 a b e 2 b N r t s T W I v E m Z E a z N D 0 z K / + I M J J S L j C D E n Z c + x Y u R k S i m J G 8 k o / k S R G e I y G p K c p R y G R b j Y J n 1 u H W h l Y Q S T 0 4 8 q a q L 8 3 M h R K m Y a + n i y i y n m v E P / z e o k K L t y M 8 j h R h O P p o S B h l o q s o g l r Q A X B i q W a I C y o z m r h E R I I K 9 1 X R Z f g z H 9 5 k b S P 6 8 5 Z / f T u p N a 4 n N V R h n 0 4 g C N w 4 B w a c A N N a A G G F J 7 h F d 6 M J + P F e D c + p q M l Y 7 Z T h T 8 w P n 8 A d P O V T w = = < / l a t e x i t > G L n < l a t e x i t s h a 1 _ b a s e 6 4 = \" Q b j / x T g T H N G 6 c R T S w p p Y y T 0 r U u g = \" > A A A B / H i c b V D L S s N A F L 2 p r 1 p f 0 S 7 d B I v g q i T i a 1 l 0 o Q s X F e w D 2 h g m 0 0 k 7 d D I J M x M h h P g r b l w o 4 t Y P c e f f O G m 7 0 N Y D A 4 d z 7 u W e O X 7 M q F S 2 / W 2 U l p Z X V t f K 6 5 W N z a 3 t H X N 3 r y 2 j R G D S w h G L R N d H k j D K S U t R x U g 3 F g S F P i M d f 3 x V + J 1 H I i S N + L 1 K Y + K G a M h p Q D F S W v L M a j 9 E a o Q R y 6 5 z L + P 5 Q 3 a b e 2 b N r t s T W I v E m Z E a z N D 0 z K / + I M J J S L j C D E n Z c + x Y u R k S i m J G 8 k o / k S R G e I y G p K c p R y G R b j Y J n 1 u H W h l Y Q S T 0 4 8 q a q L 8 3 M h R K m Y a + n i y i y n m v E P / z e o k K L t y M 8 j h R h O P p o S B h l o q s o g l r Q A X B i q W a I C y o z m r h E R I I K 9 1 X R Z f g z H 9 5 k b S P 6 8 5 Z / f T u p N a 4 n N V R h n 0 4 g C N w 4 B w a c A N N a A G G F J 7 h F d 6 M J + P F e D c + p q M l Y 7 Z T h T 8 w P n 8 A d P O V T w = = < / l a t e x i t > G L n < l a t e x i t s h a 1 _ b a s e 6 4 = \" M O m A I f 9 2 C G z J 0 S z G 9 4 T q O d c h D L 4 = \" > A A A B 9 X i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 B I v g q S T i 1 7 H o x Y O H C v 2 C N i 2 b 7 b R d u t m E 3 Y 1 S Q v 6 H F w + K e P W / e P P f u G 1 z 0 N Y H A 4 / 3 Z p i Z 5 0 e c K e 0 4 3 1 Z u Z X V t f S O / W d j a 3 t n d K + 4 f N F Q Y S 4 p 1 G v J Q t n y i k D O B d c 0 0 x 1 Y k k Q Q + x 6 Y / v p 3 6 z U e U i o W i p i c R e g E Z C j Z g l G g j d T u 1 E W r S T e 7 T X i L S X r H k l J 0 Z 7 G X i Z q Q E G a q 9 4 l e n H 9 I 4 Q K E p J 0 q 1 X S f S X k K k Z p R j W u j E C i N C x 2 S I b U M F C V B 5 y e z q 1 D 4 x S t 8 e h N K U 0 P Z M / T 2 R k E C p S e C b z o D o k V r 0 p u J / X j v W g 2 s v Y S K K N Q o 6 X z S I u a 1 D e x q B 3 W c S q e Y T Q w i V z N x q 0 x G R h G o T V M G E 4 C 6 + v E w a Z 2 X 3 s n z x c F 6 q 3 G R x 5 O E I j u E U X L i C C t x B F e p A Q c I z v M K b 9 W S 9 W O / W x 7 w 1 Z 2 U z h / A H 1 u c P 5 E q S y Q = = < / l a t e x i t > ⇥ L n Unlabeled Labeled < l a t e x i t s h a 1 _ b a s e 6 4 = \" n + J E Z N n P Y N D 3 F b h J C 3 5 I G 1 c t 2 X w = \" > A A A B / H i c b V D L S s N A F L 2 p r 1 p f 0 S 7 d B I v g q i T i a 1 n U h Q s X F e w D 2 h g m 0 0 k 7 d D I J M x M h h P g r b l w o 4 t Y P c e f f O G m 7 0 N Y D A 4 d z 7 u W e O X 7 M q F S 2 / W 2 U l p Z X V t f K 6 5 W N z a 3 t H X N 3 r y 2 j R G D S w h G L R N d H k j D K S U t R x U g 3 F g S F P i M d f 3 x V + J 1 H I i S N + L 1 K Y + K G a M h p Q D F S W v L M a j 9 E a o Q R y 6 5 z L + P 5 Q 3 a b e 2 b N r t s T W I v E m Z E a z N D 0 z K / + I M J J S L j C D E n Z c + x Y u R k S i m J G 8 k o / k S R G e I y G p 
K c p R y G R b j Y J n 1 u H W h l Y Q S T 0 4 8 q a q L 8 3 M h R K m Y a + n i y i y n m v E P / z e o k K L t y M 8 j h R h O P p o S B h l o q s o g l r Q A X B i q W a I C y o z m r h E R I I K 9 1 X R Z f g z" }, { "formula_coordinates": [ 5, 343.43, 198.01, 20.15, 20.97 ], "formula_id": "formula_4", "formula_text": "I = \" > A A A B + H i c b V D L S s N A F L 2 p r 1 o f j b p 0 M 1 g E V y U R q y 6 L L n R Z w T 6 g j W U y n b R D J 5 M w M x F q 6 J e 4 c a G I W z / F n X / j p M 1 C W w 9 c O J x z L 3 P m + D F n S j v O t 1 V Y W V 1 b 3 y h u l r a 2 d 3 b L 9 t 5 + S 0 W J J L R J I h 7 J j o 8 V 5 U z Q p m a a 0 0 4 s K Q 5 9 T t v + + D r z 2 4 9 U K h a J e z 2 J q R f i o W A B I 1 g b q W + X e y H W I 4 J 5 e j N 9 M N O 3 K 0 7 V m Q E t E z c n F c j R 6 N t f v U F E k p A K T T h W q u s 6 s f Z S L D U j n E 5 L v U T R G J M x H t K u o Q K H V H n p L P g U H R t l g I J I m h E a z d T f F y k O l Z q E v t n M Y q p F L x P / 8 7 q J D i 6 9 l I k 4 0 V S Q + U N B w p G O U N Y C G j B J i e Y T Q z C R z G R F Z I Q l J t p 0 V T I l u I t f X i a t 0 6 p 7 X q 3 d n V X q V 3 k d R T i E I z g B F y 6 g D r f Q g C Y Q S O A Z X u H N e r J e r H f r Y 7 5 a s P K b A / g D 6 / M H D q i T X Q = = < / l a t e x i t > G G" }, { "formula_coordinates": [ 5, 383.85, 106.79, 27.1, 6.22 ], "formula_id": "formula_5", "formula_text": "i C Q h F Z p w r F T H d W L t p V h q R j j N S t 1 E 0 R i T E R 7 Q j q E C h 1 R 5 6 S R 4 h g 6 N 0 k d B J M 0 T G k 3 U 3 x s p D p U a h 7 6 Z z G O q e S 8 X / / M 6 i Q 4 u v J S J O N F U k O m h I O F I R y h v A f W Z p E T z s S G Y S G a y I j L E E h N t u i q Z E" }, { "formula_coordinates": [ 5, 162.29, 197.82, 6.88, 7.74 ], "formula_id": "formula_6", "formula_text": "N G j Z X u O 7 U h G v q Q 3 k y 6 x Z J b d m c g f 4 m X k R J k q H a L n 5 1 e x J I Q p W G C a t 3 2 3 N j 4 K V W G M 4 G T Q i f R G F M 2 o g N s W y p p i N p P Z x d P y J F V e q Q f K V v S k" }, { "formula_coordinates": [ 5, 389.44, 120.47, 49.3, 90.76 ], "formula_id": "formula_7", "formula_text": "K q Q o b V i b Q B T Q 0 t t W C Q R 8 L k 0 = \" > A A A B 8 X i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e y K r 2 P Q g x 4 j 5 I X J G m Y n n W T I 7 O w y M y u E J X / h x Y M i X v 0 b b / 6 N k 2 Q P G i 1 o K K q 6 6 e 4 K Y s G 1 c d 0 v J 7 e 0 v L K 6 l l 8 v b G x u b e 8 U d / c a O k o U w z q L R K R a A d U o u M S 6 4 U Z g K 1 Z I w 0 B g M x h d T / 3 m I y r N I 1 k z 4 x j 9 k A 4 k 7 3 N G j Z X u O 7 U h G v q Q 3 k y 6 x Z J b d m c g f 4 m X k R J k q H a L n 5 1 e x J I Q p W G C a t 3 2 3 N j 4 K V W G M 4 G T Q i f R G F M 2 o g N s W y p p i N p P Z x d P y J F V e q Q f K V v S k J n 6 c y K l o d b j M L C d I T V D v e h N x f + 8 d m L 6 l 3 7 K Z Z w Y l G y + q J 8 I Y i I y f Z / 0 u E J m x N g S y h S 3 t x I 2 p I o y Y 0 M q 2 B C 8 x Z f / k s Z J 2 T s v n 9 2 d l i p X W R x 5 O I B D O A Y P L q A C t 1 C F O j C Q 8 A Q v 8 O p o 5 9 l 5 c 9 7 n r T k n m 9 m H X 3 A + v g G H u 5 D X < / l a t e x i t > ⇥ G < l a t e x i t s h a 1 _ b a s e 6 4 = \" j c W D K q Q o b V i b Q B T Q 0 t t W C Q R 8 L k 0 = \" > A A A B 8 X i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e y K r 2 P Q g x 4 j 5 I X J G m Y n n W T I 7 O w y M y u E J X / h x Y M i X v 0 b b / 6 N k 2 Q P G i 1 o K K q 6 6 e 4 K Y s G 1 c d 0 v J 7 e 0 v L K 6 l l 8 v b G x u b e 8 U d / c a O k o U w z q L R K R a A d U o u M S 6 4 U Z g K 1 Z I w 0 B g M x h d T / 3 m I y r N I 1 k z 4 x j 9 k A 4 k 7 3 N G j 
Z X u O 7 U h G v q Q 3 k y 6 x Z J b d m c g f 4 m X k R J k q H a L n 5 1 e x J I Q p W G C a t 3 2 3 N j 4 K V W G M 4 G T Q i f R G F M 2 o g N s W y p p i N p P Z x d P y J F V e q Q f K V v S k" }, { "formula_coordinates": [ 5, 72, 476.05, 99.36, 19.88 ], "formula_id": "formula_8", "formula_text": "L L i L L j ̸ = L L i orL L j )." }, { "formula_coordinates": [ 5, 219.26, 654.44, 321.47, 35.16 ], "formula_id": "formula_9", "formula_text": "Θ G t+1 = N L n=1 N L n N • Θ L n , N = N L i=1 N L i . (3.1)" }, { "formula_coordinates": [ 6, 195.95, 200.62, 344.78, 95.48 ], "formula_id": "formula_10", "formula_text": "L n ins = (λ -1) i∈B log S ins (v i , v i , τ S ) j∈B,j̸ =i S ins (v i , v j , τ S ) i∈B L -λ |P(i)| p∈P(i) log S ins (v i , v p , τ L ) j∈N (i) S ins (v i , v j , τ L ) , (3.2) S ins (v, v, τ ) = exp (h (v) • h ( v) /τ ) ,(3.3)" }, { "formula_coordinates": [ 7, 444.95, 376.12, 98.75, 23.12 ], "formula_id": "formula_11", "formula_text": "G L n = {N (µ i , σ i )} M L n i=1" }, { "formula_coordinates": [ 7, 173.51, 453.82, 367.22, 86.45 ], "formula_id": "formula_12", "formula_text": "L gmm (G L n , v i , y i ) = -log |σ i | -1 2 S gmm (v i , y i ) M L n j=1,j̸ =y i |σ j | -1 2 S gmm (v j , y j ) , (4.1) S gmm (v, y) = exp - 1 2 v -µ y σ y 2 • (1 + m) , (4.2)" }, { "formula_coordinates": [ 8, 231.77, 361.29, 308.96, 18.93 ], "formula_id": "formula_13", "formula_text": "S pcl (v i , y i ) = exp (v i • µ y i /ϕ y i ) ,(4.3)" }, { "formula_coordinates": [ 8, 189.95, 534.3, 350.78, 26.03 ], "formula_id": "formula_14", "formula_text": "L reg (G L n , v i , y i ) = -log(S gcl (v i , y i )) + 1 2 log |σ y i | . (4.4)" }, { "formula_coordinates": [ 8, 184.31, 586.12, 356.42, 36.19 ], "formula_id": "formula_15", "formula_text": "L n gcl (G L n ) = N L n i=1 L gmm (G L n , v i , y i ) + αL reg (G L n , v i , y i ),(4.5)" }, { "formula_coordinates": [ 9, 206.5, 563.25, 334.23, 20.42 ], "formula_id": "formula_16", "formula_text": "L n = L n ins + (1 -γ)L n gcl (G G ) + γL n gcl (G L n ), (4.6)" } ]
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b46", "b33", "b5" ], "table_ref": [], "text": "Motivated by the growing data demands of modern deep learning, dataset distillation [ZB21, NNXL21, ZNB22, WZTE18] aims to summarize large datasets into significantly smaller synthetic distilled datasets, which when trained on retain high predictive accuracy, comparable to the original dataset. These distilled datasets have applications in continual learning [ZNB22,SCCB22], architecture search [SRL + 19], and privacy preservation [CKF22]. Recent years have seen the development of numerous distillation algorithms, but despite this progress, the field has remained largely empirical. Specifically, there is little understanding of what makes one dataset \"easier to distill\" than another, or whether such small synthetic datasets even exist.\nThis work aims to fill this gap by providing the first bounds on the sufficient size and relative error associated with distilled datasets. Noting prior work relating neural network training to kernel ridge regression (KRR), we consider dataset distillation in the kernel ridge regression settings with shift-invariant kernels. By casting the problem into the Random Fourier Feature (RFF) space, we show that:\nThe size and relative error of distilled datasets is governed by the kernel's \"number of effective degrees of freedom\", d λ k . Specifically, in Section 4, we show that distilled sets of size Ω(d λ k log d λ k ), exist, with 12λ + 2L λ predictive error on the training dataset, and only 8λ error with respect to the optimal solution computed on the full dataset, where λ is the kernel ridge regression regularization parameter and L λ the KRR training error on the original dataset; see Theorem 3 and Remark 1 for full details. These bounds hold in practice for both real and synthetic datasets. In section 5, we validate our theorem by distilling synthetic and real datasets with varying sizes and values of d λ k , showing that in all scenarios our bounds accurately predict the error associated with distillation." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b3", "b43", "b45", "b44", "b16", "b9", "b15", "b34", "b39", "b4", "b26", "b31", "b9", "b28", "b29", "b14" ], "table_ref": [], "text": "Coresets. Coresets are weighted selections from a larger training dataset, which, when used for training, yield similar outcomes as if the whole dataset was used [MSSW18, MBL20, MJF19, JMF19, MEM + 22]. The key benefit of using coresets is that they significantly speed up the training process, unlike when the full data set is used. Current methods for picking out coresets incorporate clustering techniques [FL11, JTMF20, LBK16, BLHK16, MJTF21], bilevel optimization [BMK20], sensitivity analysis [MSSW18, HCB16, TMF20, MSF20, TBFR21, TWZ + 22, MTP + 22], and surrogate models for approximation [TZM + 23]. Newer strategies are specifically designed for neural networks, where before each training epoch, coresets are chosen such that their gradients align with the gradients of the entire dataset [MBL20, PDM22, TZM + 23], followed by training the model on the chosen coreset. Although coresets are usually theoretically supported, these methods fall short when the aim is to compute a coreset once for a full training procedure.\nDataset Distillation. 
To this end, dataset distillation algorithms construct synthetic datasets (not necessarily a subset from the original input) such that gradient descent training on the synthetic datapoints results in high predictive accuracy on the real dataset. Cast as a bilevel optimization problem, early methods involve unrolling training computation graph [WZTE18] for a few gradient descent steps and randomly sampled weight initializations. More sophisticated methods aim to approximate the unrolled computation using kernel methods [NCL21, NNXL21, ZNB22, LHAR22a, LHAR22b], surrogate objectives such gradient matching [ZMB21,ZB21], trajectory matching [CWT + 22] or implicit gradients [LHLR23]. The kernel-induced points (KIP) algorithm [NCL21, NNXL21] is a technique that employs Neural Tangent Kernel (NTK) theory [JGH18,LHAR22b] to formulate the ensuing loss:\nL KIP = 1 2 ∥y t -K TS K -1 SS y S ∥ 2 2 .\nThis loss signifies the predictive loss of training infinitely wide networks on distilled datapoints X S with corresponding labels y S , on the original training set and labels X T , y T , with K •,• being the NTK. Dataset distillation is closely related to the use of inducing points to accelerate Gaussian Processes [SG05,TRB16], for which convergence rates exist, but the existence of such inducing points is not unknown [BRVDW19].\nFrom dataset distillation to kernel ridge regression. Kernel ridge regression extends the linear machine learning ridge regression model by using a kernel function to map input data into higherdimensional feature spaces, allowing for more complex non-linear relationships between variables to be captured [Mur12]. Various methods have been proposed to improve and accelerate the training process of kernel ridge regression. Most notably, Random Fourier Features [RR07] approximates shift-invariant kernel functions by mapping the input data into a lower-dimensional feature space using a randomized cosine transformation. This has been shown to work effectively in practice due to regularizing effects [JSS + 20], as well as providing approximation bounds to the full kernel ridge regression [SS15, AKM + 17, LTOS19]. Training infinite-width neural networks can be cast as kernel ridge regression with the Neural Tangent Kernel (NTK) [JGH18], which allows a closed-form solution of the infinite-width neural network's predictions, enabling kernel-based dataset distillation algorithms such as [NCL21,NNXL21,LHAR22a].\nGoal. We thus provide the first provable guarantees on the existence and approximation error of a small distilled dataset in the kernel ridge regression settings." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "We first provide some notation that will be used throughout the paper.\nNotations. In this paper we let H be a Hilbert space with ∥•∥ H as its norm. For a vector a ∈ R n , we use ∥a∥ to denote its Euclidean norm, and a i to denote its ith entry for every i ∈ [n]. For any positive integer n, we use the convention [n] = {1, 2, • • • , n}. Let A ∈ R n×m be a matrix, then, for every i ∈ [n] and j ∈ [m], A i * denotes the ith row of A, A * j denotes the jth column of A, and A i,j is the jth entry of the ith row of A. Let B ∈ R n×n , then we denote the trace of B by Tr(B). We use I m ∈ R m×m to denote the identity matrix. Finally, vectors are addressed as column vectors unless stated otherwise." 
}, { "figure_ref": [], "heading": "Kernel ridge regression", "publication_ref": [ "b18", "b17", "b32", "b9", "b31", "b18", "b18" ], "table_ref": [ "tab_0" ], "text": "Let X ∈ R n×d be a matrix and let y ∈ R n be a vector. Let k : R d × R d → [0, ∞) be a kernel function, and let K ∈ R n×n be its corresponding kernel matrix with respect to the rows of X; i.e., K i,j = k X i * , X j * for every i, j ∈ [n]. Let λ > 0 be a regularization parameter. The goal of kernel ridge regression (KRR) involving X, y, k, and λ is to find\nα λ [X,y,k] ∈ arg min α∈R n 1 n ∥y -Kα∥ 2 + λα T Kα.(1)\nWe use the notation f λ [X,y,k] : R d → R to denote the in-sample prediction by applying the KRR solution obtained on X, y and λ using the kernel k, i.e., for every\nx ∈ R d , f λ [X,y,k] (x) = n ∑ i=1 α λ [X,y,k] i k (X i * , x) . (2\n)\nTo provide our theoretical guarantees on the size and approximation error for the distilled datasets, the following assumption will be used in our theorem and proofs.\nAssumption 1. We inherit the same theoretical assumptions used at [LTOS21] for handling the KRR problem:\n(I) Let F be the set of all functions mapping R d to R. Let f * ∈ F be the minimizer of n ∑ i=1 |y i -f (X i * )| 2 ,\nsubject to the constraint that for every x ∈ R d and y ∈ R, y = f * (x) + ϵ, where E(ϵ) = 0 and Var(ϵ) = σ 2 . Furthermore, we assume that y is bounded, i.e., |y| ≤ y 0 .\n(II) We assume that f λ [X,y,k] H ≤ 1. (III) For a kernel k, denote with λ 1 ≥ • • • ≥ λ n the eigenvalues of the kernel matrix K. We assume that the regularization parameter satisfies 0 ≤ nλ ≤ λ 1 .\nThe logic behind our assumptions. First, the idea behind Assumption (I) is that the pair (X, y) can be linked through some function that can be from either the same family of kernels that we support (i.e., shift-invariant) or any other kernel function. In the context of neural networks, the intuition behind Assumption (I) is that there exists a network from the desired architectures that gives a good approximation for the data. Assumption (II) aims to simplify the bounds used throughout the paper as it is a pretty standard assumption, characteristic to the analysis of random Fourier features [LTOS19,RR17]. Finally, Assumption (III) is to prevent underfitting. Specifically speaking, the largest eigenvalue of K (K + nλI n ) -1 is λ 1 (λ 1 +nλ) . Thus, in the case of nλ > λ 1 , the in-sample prediction is dominated by the term nλ. Throughout the following analysis, we will use the above assumptions. Hence, for the sake of clarity, we will not repeat them, unless problem-specific clarifications are required.\nConnection to Dataset distillation of neural networks. Since the neural network kernel in the case of infinite width networks describes a Gaussian distribution [JGH18], we aim at proving the existence of small sketches (distilled sets) for the input data with respect to the KRR problem with Gaussian kernel. However, the problem with this approach is that the feature space (in the Gaussian kernel corresponding mapping) is rather intangible or hard to map to, and sketch (distilled set) construction techniques require the representation of these points in the feature space.\nTo resolve this problem, we use a randomized approximated feature map, e.g., random Fourier features (RFF), and weighted random Fourier features (Weighted RFF). The dot product between every two mapped vectors in this approximated feature map aims to approximate their Gaussian kernel function [RR07]. 
We now restate a result connecting ridge regression in the RFF space (or alternatively weighted RFF), and KRR in the input space.\nTheorem 2 (A result of the proof of Theorem 1 and Corollary 2 of [LTOS21]). Let X ∈ R n×d be an input matrix, y ∈ R n be an input label vector, k : R d × R d → [0, ∞) be a shift-invariant kernel function, and K ∈ R n×n , where ∀i, j ∈ [n] : K i,j = k(X i * , X j * ). Let λ > 0, and let d λ\nK = Tr K (K + nλI n ) -1 . Let s ϕ ∈ Ω d λ K log d λ K\nbe a positive integer. Then, there exists a pair (ϕ, X) such that (i) ϕ is a mapping ϕ : R d → R s ϕ (which is based on either the weighted RFF function or the RFF function [LTOS21]), (ii) X is a matrix X ∈ R n×s ϕ where for every i ∈ [n], X i * := ϕ (X i * ), and (iii) (ϕ, X) satisfies\n1 n n ∑ i=1 y i -f λ [ X,y,ϕ] X i * 2 ≤ 1 n n ∑ i=1 y i -f λ [X,y,k] (X i * ) 2 + 4λ, where f λ [ X,y,ϕ] : R s ϕ → R such that for every row vector z ∈ R s ϕ , f λ [ X,y,ϕ] (z) = z X T X + λns ϕ λI s ϕ -1 X T y.\nNote that, Table 1 gives bounds on s ϕ when λ ∝ 1 √ n ." }, { "figure_ref": [], "heading": "Main result: on the existence of small distilled sets", "publication_ref": [], "table_ref": [], "text": "In what follows, we show that for any given matrix X ∈ R n×d and a label vector y ∈ R n , there exists a matrix S ∈ R (sϕ+1)×d and a label vector y S ∈ R s ϕ +1 such that the fitting solution in the RFF space mapping of S is identical to that of the fitted solution on the RFF space mapping of X. With such S and y S , we proceed to provide our main result showing that one can construct a solution for KRR in the original space of S which provably approximates the quality of the optimal KRR solution involving X and y. Thus, we obtain bounds on the minimal distilled set size required for computing a robust approximation, as well as bounds on the error for such a distilled set.\nTheorem 3 (On the existence of some distilled data). Let X ∈ R n×d be a matrix, y ∈ R n be a label vector, k : R d × R d → [0, ∞) be a kernel function, Υ = (0, 1) ∪ {2}, and let s ϕ be defined as in Theorem 2. Then, there exists a matrix S ∈ R (sϕ+1)×d and a label vector y S such that (i) the weighted RFF mapping S ∈ R (sϕ+1)×(sϕ) of S, satisfies that\nX T X + λns ϕ λI s ϕ -1 X T y = S T S + λns ϕ λI s ϕ -1 S T y S ," }, { "figure_ref": [], "heading": "and", "publication_ref": [], "table_ref": [], "text": "(ii) there exists an in-sample prediction f λ,X,y\n[S,y S ,k] (not necessarily the optimal on S and y s ) satisfying\n1 n n ∑ i=1 f λ [X,y,k] (X i * ) -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ min τ∈Υ 2 max τ, 4 τ 2 + 2 min 1 + τ, 4 (1 + τ) 3τ λ,(3)\nand\n1 n n ∑ i=1 y i -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ min τ∈Υ min 1 + τ, 4(1+τ) 3τ n n ∑ i=1 y i -f λ [X,y,k] (X i * ) 2 + 4 min 1 + τ, 4 (1 + τ) 3τ + 2 max τ, 4 τ 2 λ.\n(4)\nProof. Let S be any matrix in R (sϕ+1)×d and let S be the weighted RFF mapping of S." }, { "figure_ref": [], "heading": "Proof of (i).", "publication_ref": [], "table_ref": [], "text": "To ensure (i), we need to find a corresponding proper y S . We observe that\nS T S + λns ϕ λI s ϕ X T X + λns ϕ λI s ϕ -1 X T y = S T y S Let b = S T S + λns ϕ λI s ϕ X T X + λns ϕ λI s ϕ -1\nX T y, be the left-hand side term above. b is a vector of dimension s ϕ . Hence we need to solve b = S T y S for y S . Since it is a linear system with s ϕ + 1 variables and s ϕ equations, we get that the solution is y S = S T † b, where (•) † denotes the pseudo-inverse of the given matrix." 
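For illustration, the label construction in the proof of (i) can be written as a short numpy sketch, assuming the weighted-RFF mappings of X and S are given; the exact scaling of the ridge term follows the displayed equations above and may differ by constant factors in the extracted text.

```python
import numpy as np

def distilled_labels(X_feat, S_feat, y, lam):
    """Construct distilled labels y_S so that ridge regression on (S_feat, y_S)
    matches ridge regression on (X_feat, y) in the random-feature space.

    X_feat : (n, s_phi)        weighted-RFF mapping of the original data X
    S_feat : (s_phi+1, s_phi)  weighted-RFF mapping of the candidate distilled points S
    y      : (n,)              original labels
    lam    : float             KRR regularization parameter
    """
    n, s_phi = X_feat.shape
    ridge = lam * n * s_phi * np.eye(s_phi)   # ridge term; scaling assumed as in the displayed equations

    # ridge-regression weights fitted on the full mapped data
    w_full = np.linalg.solve(X_feat.T @ X_feat + ridge, X_feat.T @ y)

    # b = (S~^T S~ + ridge) w_full, then solve the underdetermined system b = S~^T y_S
    b = (S_feat.T @ S_feat + ridge) @ w_full
    y_S = np.linalg.pinv(S_feat.T) @ b        # s_phi equations, s_phi + 1 unknowns
    return y_S
```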
}, { "figure_ref": [], "heading": "Proof of (ii).", "publication_ref": [ "b14", "b27", "b18", "b18" ], "table_ref": [], "text": "Inspired by [LHAR22a] and [NCL20], the goal is to find a set of instances that their in-sample prediction with respect to the input data (X in our context) would lead to an approximation towards the solution that one would achieve if the KRR was used only with the input data. To that end, we introduce the following Lemm.\nLemma 1 (Restatement of Lemma 6 [LTOS21]). Under Assumption 1 and the definitions in Theorem 2, for every f ∈ H with ∥ f ∥ H ≤ 1, with constant probability, it holds that inf\n√ s ϕ ∥β∥≤ √ 2 β∈R s ϕ n ∑ i=1 1 n f (X i * ) -X i * β 2 ≤ 2λ.\nNote that Lemma 1 shows that for every in-sample prediction function with respect to X, there exists a query β ∈ R s ϕ in the RFF space of that input data such that the distance between the inprediction sample function in the input space and the in-sample prediction in the RFF space is at 2λ. Furthermore, at [LTOS21] it was shown that β is defined as\nβ = 1 s ϕ X T X X T + nλI s ϕ -1 f[X],\nwhere\nf[X] i = f (X i * ) for every i ∈ [n].\nWe thus set out to find an in-sample prediction function that is defined over S such that by its infimum by Lemma 1 would be the same solution β that the ridge regression on X attains with respect to the y. Specifically speaking, we want to find an in-sample prediction f λ,X,y\n[S,y S ,k] (•) such that β = 1 s ϕ X T 1 s ϕ X X T + nλI s ϕ -1 f S [X],(5)\nwhere\n(i) f S [X] ∈ R n such that for every i ∈ [n], f S [X] i = f λ,X,y\n[S,y S ,k] (X i * ), and\n(ii) f λ,X,y [S,y S ,k] (•) = s ϕ +1 ∑ i=1 α i k (S i * , •) such that α ∈ R s ϕ +1 .\nHence we need to find an in-sample prediction function f λ,X,y\n[S,y S ,k] satisfying 5. Now, notice that\nβ ∈ R s ϕ , f S [X] ∈ R n and X T 1 s ϕ X X T + nλI n -1\n∈ R s ϕ ×n . Due to the fact that we aim to find f λ,X,y\n[S,y S ,k] , such a task boils down to finding α ∈ R s ϕ +1 which defines f λ,X,y [S,y S ,k] as in (ii). The above problem can be reduced to a system of linear equations where the number of equalities is s ϕ , while the number of variables is s ϕ + 1.\nTo do so, we denote 1\ns ϕ X T 1 s ϕ X X T + nλI n -1\nby Â, and observe that we aim to solve\nβ = Â f λ S [X] = Â             s ϕ +1 ∑ i=1 α i k (S i * , X 1 * ) s ϕ +1 ∑ i=1 α i k (S i * , X 2 * ) . . . s ϕ +1 ∑ i=1 α i k (S i * , X n * )            \n.\nWe now show that every entry b j (j ∈ [s ϕ + 1]) in β can be rewritten as inner products between another pair of vectors in R s ϕ +1 instead of the inner product between two vectors in R n . Formally, for every j ∈ [s ϕ + 1], it holds that\nβ j = Âj *             s ϕ +1 ∑ i=1 α i k (S i * , X 1 * ) s ϕ +1 ∑ i=1 α i k (S i * , X 2 * ) . . . s ϕ +1 ∑ i=1 α i k (S i * , X n * )             = n ∑ t=1 Âj,t k (S 1 * , X t * ) , • • • , n ∑ t=1 Âj,t k S (sϕ+1) * , X t *    α 1 . . . α s ϕ +1    .\nThus, for every j ∈ [s ϕ + 1], define\nA j * = n ∑ t=1 Âj,t k (S 1 * , X t * ) , • • • , n ∑ t=1 Âj,t k S (sϕ+1) * , X t * ∈ R s ϕ +1 .\nThe right-hand side of (5) can reformulated as\n1 s ϕ X T 1 s ϕ X X T + nλI n -1 f S [X] = Aα,(6)\nwhere now we only need to solve β = Aα. Such a linear system of equations might have an infinite set of solutions due to the fact that we have s ϕ + 1 variables (the length of α) and exactly s ϕ equations.\nFor simplicity, a solution to the above equality would be α := (A) † β. 
To proceed in proving (ii) with all of the above ingredients, we utilize the following tool.\nLemma 2 (Special case of Definition 6.1 from [BFL + 16]). Let X be a set, and let X, ∥•∥ 2 be a 2-metric space i.e., for every x, y, z ∈ X, ∥x -y∥ 2 ≤ 2 ∥x -z∥ 2 + ∥y -z∥ 2 . Then, for every ε ∈ (0, 1), and x, y, z ∈ X,\n(1 -ε) ∥y -z∥ 2 - 4 ε 2 ∥x -z∥ 2 ≤ ∥x -y∥ 2 ≤ 4 ε 2 ∥x -z∥ 2 + (1 + ε) ∥y -z∥ 2 . (7\n)\nWe note that Lemma 2 implies that x, y, z\n∈ R d ∥x -y∥ 2 ≤ min τ∈Υ max τ, 4 τ 2 ∥x -z∥ 2 + min 1 + τ, 4 (1 + τ) 3τ ∥y -z∥ 2 . (8\n)\nwhere for τ = 2 we get the inequality associated with the property of 2-metric, and for any τ ∈ (0, 1), we obtain the inequality (7).\nOn the Size and Approximation Error of Distilled Sets\nWe thus observe that\n1 n n ∑ i=1 f λ [X,y,k] (X i * ) -f λ,X,y [S,y S ,k] (X i * ) 2 = 1 n n ∑ i=1 f λ [X,y,k] (X i * ) -f λ [ X,y,ϕ] X i * + f λ [ X,y,ϕ] X i * -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ min τ∈Υ max τ, 4 τ 2 n n ∑ i=1 f λ [X,y,k] (X i * ) -f λ [ X,y,ϕ] X i * 2 + min 1 + τ, 4(1+τ) 3τ n n ∑ i=1 f λ [ X,y,ϕ] X i * -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ min τ∈Υ 2 max τ, 4 τ 2 λ + 2 min 1 + τ, 4 (1 + τ) 3τ λ = min τ∈Υ 2 max τ, 4 τ 2 + 2 min 1 + τ, 4 (1 + τ) 3τ λ,\nwhere the first equality holds by adding and subtracting the same term, the first inequality holds by Lemma 2, and the second inequality holds by combining the way f λ,X,y\n[S,y S ,k] was defined and Theorem 2. Finally, to conclude the proof of Theorem 3, we derive 4\n1 n n ∑ i=1 y i -f λ,X,y [S,y S ,k] (X i * ) 2 = 1 n n ∑ i=1 y i -f λ [ X,y,ϕ] X i * + f λ [ X,y,ϕ] X i * -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ min τ∈Υ min 1 + τ, 4(1+τ) 3τ n n ∑ i=1 y i -f λ [ X,y,ϕ] X i * 2 + max τ, 4 τ 2 n n ∑ i=1 f λ [ X,y,ϕ] X i * -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ min τ∈Υ min 1 + τ, 4(1+τ) 3τ n n ∑ i=1 y i -f λ [ X,y,ϕ] X i * 2 + 2 max τ, 4 τ 2 λ ≤ min τ∈Υ min 1 + τ, 4(1+τ) 3τ n n ∑ i=1 y i -f λ [X,y,k] (X i * ) 2 + 4 min 1 + τ, 4 (1 + τ) 3τ + 2 max τ, 4 τ 2 λ,(9)\nwhere the equality holds by adding and subtracting the same term, the first inequality holds by (8), and the second inequality follows as a result of the way f λ S was constructed and the fact that β is its infimum based on Lemma 1, and the last inequality holds by Theorem 2.\nTo simplify the bounds stated at Theorem 3, we provide the following remark.\nRemark 1. By fixing τ := 2, the bounds in Theorem 3 become\n1 n n ∑ i=1 f λ [X,y,k] (X i * ) -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ 8λ,and\n1 n n ∑ i=1 y i -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ 2 n n ∑ i=1 y i -f λ [X,y,k] (X i * ) 2 + 12λ.\nAs for fixing τ := ε ∈ (0, 1), we obtain that\n1 n n ∑ i=1 y i -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ 1 + ε n n ∑ i=1 y i -f λ [X,y,k] (X i * ) 2 + 4 (1 + ε) + 8 ε 2 λ." }, { "figure_ref": [], "heading": "Experimental Study", "publication_ref": [], "table_ref": [], "text": "To validate our theoretical bounds, we performed distillation on three datasets: two synthetic datasets consisted of data generated from a Gaussian Random Field section 5.1, and classification of two clusters section 5.2, and one real dataset of MNIST binary 0 vs. 1 classification section 5.3. Full experimental details for all experiments are available in the appendix. We first test our bounds by distilling data generated from the Gaussian Process prior induced by a kernel, k on 2d data. 
We use a squared exponential kernel with lengthscale parameter l = 1.5:" }, { "figure_ref": [ "fig_1", "fig_0", "fig_0", "fig_0", "fig_2" ], "heading": "2d Gaussian Random Fields", "publication_ref": [], "table_ref": [], "text": "k(x, x ′ ) = e -||x-x ′ || 2 2 2l 2\n. For X, we sample n = 10 5 datapoints from N (0, σ 2\nx ), with σ x ∈ [0.25, 5.0]. We then sample y ∼ N (0, K XX + σ 2 y I n ), σ y = 0.01. We fix λ = 10 -5 and distill down to s = d λ k log d λ k . The resulting values of d λ k , s, and compression ratios are plotted in fig. 2. We additionally plot the predicted upper bound given by Remark 1 and the actual distillation loss. Our predicted upper bound accurately bounds the actual distillation loss. To better visualize how distillation affects the resulting KRR prediction, we show the KRR predictive function f λ X and the distilled predictive f λ S for σ x = 5.0 in fig. 1b and fig. 1c. Our second synthetic dataset is one consisting of two Gaussian clusters centered at (-2, 0) and (2, 0), with labels -1 and +1, respectively. Each cluster contains 5000 datapoints so that n = 10 5 . Each cluster as standard deviation σ x ∈ [0.25, 5.0]. Additionally, two allow the dataset to be easily classified, we clip the x coordinates of clusters 1 and clusters 2 to not exceed/drop below -0.4 and 0.4, for the two clusters, respectively. This results in a margin between the two classes. We visualize the dataset for n = 300 and σ = 1.5 in fig. 1a. We use the same squared exponential kernel as in section 5.1 with l = 1.5, fix λ = 10 -5 and distill with the same protocol as in section 5.1. We likewise plot d λ k , s, and compression ratios and distillation losses in fig. 3, again with our bound accurately containing the true distillation loss. " }, { "figure_ref": [], "heading": "Two Gaussian Clusters Classification", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3" ], "heading": "MNIST Binary Classification", "publication_ref": [], "table_ref": [], "text": "λ i ∝ A i s ϕ ∈ Ω(log n • log log n) λ i ∝ i -2t (t ≥ 1) s ϕ ∈ Ω(n 1/2t • log n) λ i ∝ i -1 s ϕ ∈ Ω( √ n • log n) PLAIN RFF finite rank s ϕ ∈ Ω( √ n) λ i ∝ A i s ϕ ∈ Ω( √ n • log log n) λ i ∝ i -2t (t ≥ 1) s ϕ ∈ Ω( √ n • log n) λ i ∝ i -1 s ϕ ∈ Ω( √ n • log n)\nFor our final dataset, we consider binary classification on MNIST 0 and 1 digits, with labels -1 and +1, respectively. We use the same squared-exponential kernel with l = 13.9, which was chosen to maximize the marginal-log-likelihood, treating the problem as Gaussian Process regression. We vary n ∈ [500, 10000], with an equal class split, and perform the same distillation protocol as in section 5.1. Here, we additionally scale λ ∝ 1 √ n such that λ = 10 -4 when n = 5000. Distilling yields fig. 4, showing that our bounds can accurately predict distillation losses for real-world datasets." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this study, we adopt a theoretical perspective to provide bounds on the (sufficient) size and approximation error of distilled datasets. By leveraging the concept of random Fourier features (RFF), we prove the existence of small distilled datasets and we bound their corresponding excess risk when using shift-invariant kernels. 
Our findings indicate that the size of the guaranteed distilled data is a function of the \"number of effective degrees of freedom,\" which relies on factors like the kernel, the number of points, and the chosen regularization parameter, λ, which also controls the excess risk.\nIn particular, we demonstrate the existence of a small subset of instances within the original input space, where the solution in the RFF space coincides with the solution found using the input data in the RFF space. Subsequently, we show that this distilled subset of instances can be utilized to generate a KRR solution that approximates the KRR solution obtained from the complete input data. To validate these findings, we conducted empirical examinations on both synthetic and real-world datasets supporting our claim.\nWhile this study provides a vital first step in understanding the theoretical limitations of dataset distillation, the proposed bounds are not tight, as seen by the gap between the theoretical upper bound and the empirical distillation loss in section 5. Future work could look at closing this gap, as well as better understanding the tradeoff between distillation size and relative error. " }, { "figure_ref": [], "heading": "A. Experiment Details", "publication_ref": [], "table_ref": [], "text": "All experiments unless otherwise stated present the average/standard deviations of n = 3 runs. Each run consists of a random subset of MNIST 0/1 digits for MNIST binary classification, or random positions of sampled datapoints for synthetic data, and different samples from the GP for the Gaussian Random Field experiment. Distilled datasets are initialized as subsets of the original training data. We distill for 20000 iterations with Adam optimizer with a learning rate of 0.002 optimizing both images/data positions and labels. We use full batch gradient descent for the synthetic datasets and a maximum batch size of 2000 for the MNIST experiment. For the MNIST experiment we found that particularly for larger values of n, with minibatch training, we could obtain lower distillation losses by optimizing for longer, so the closing of the gap between the upper bound and experiment values in fig. 4 may be misleading: longer optimization could bring the actual distillation loss lower.\nTo ensure that assumption (II) is fulfilled, we scale the labels such that f λ [X,y,k] H = 1. For example, if we are working with MNIST binary classification, with labels {+1, -1}, we first compute f λ [X,y,k] H = r using {+1, -1} labels, then rescale the labels by 1/r so that the labels are {+ 1 r , -1 r }. Suppose this results in some upper bound L U and some real distillation loss L R . For the corresponding plots in figs. 2 to 4, we plot r 2 L U and r 2 L R . We do this because the r values for different parameters (such as n or σ x ) could be different, and scaling for the plots allows the values to be comparable.\nIn the figures for the upper bounds on the distillation loss we plot the smallest value of the upper bounds in remark 1." } ]
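As a companion to the experimental protocol described above, the following numpy sketch shows how the number of effective degrees of freedom d_λ^k, the suggested distilled-set size, and the Remark 1 (τ = 2) upper bound can be computed for a given dataset. The label rescaling used to enforce Assumption (II) is omitted for brevity, and the distilled size is stated only up to the constants hidden in the Ω(•) notation.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def distillation_budget_and_bound(X, y, lam, lengthscale):
    """Compute d_lam = Tr(K (K + n*lam*I)^{-1}), the suggested distilled-set size
    s = d_lam * log(d_lam), and the Remark 1 (tau = 2) bound 2*L_lam + 12*lam."""
    n = X.shape[0]
    K = rbf_kernel(X, X, lengthscale)
    M = K @ np.linalg.inv(K + n * lam * np.eye(n))
    d_lam = np.trace(M)                                       # effective degrees of freedom
    s = int(np.ceil(d_lam * np.log(max(d_lam, 2.0))))         # distilled size, up to Omega(.) constants

    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)       # full-data KRR solution
    L_lam = np.mean((y - K @ alpha) ** 2)                     # KRR training error on the original data
    upper_bound = 2 * L_lam + 12 * lam                        # Remark 1 with tau = 2
    return d_lam, s, upper_bound
```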
Dataset distillation is the task of synthesizing small datasets from large ones while retaining predictive accuracy comparable to that of the original, uncompressed dataset. Despite significant empirical progress in recent years, there is little theoretical understanding of the limitations and guarantees of dataset distillation: specifically, what excess risk is incurred by distillation compared to training on the original dataset, and how small can distilled datasets be? In this work, we take a theoretical view of kernel ridge regression (KRR) based methods of dataset distillation such as Kernel Inducing Points. By casting ridge regression into random Fourier feature (RFF) space, we provide the first proof of the existence of small distilled datasets and bound their corresponding excess risk for shift-invariant kernels. We prove that a small set of instances exists in the original input space such that its solution in the RFF space coincides with the solution of the original data. We further show that a KRR solution can be generated from this distilled set of instances which approximates the KRR solution optimized on the full input data. The size of this set is linear in the dimension of the RFF space of the input set, or alternatively near-linear in the number of effective degrees of freedom, which is a function of the kernel, the number of datapoints, and the regularization parameter λ. The error bound of this distilled set is also a function of λ. We verify our bounds analytically and empirically.
On the Size and Approximation Error of Distilled Sets
[ { "figure_caption": "Fig. 1 :1Fig. 1: (a) visualizes the two clusters dataset in section 5.2 with n = 300 and σ x = 1.5. (b) and (c) visualizing the KRR predictive functions generated by the original dataset (b) and the distilled dataset (c) for the Gaussian Random Field experiment in section 5.1 for σ x = 5.0. The distilled dataset is able to capture all the nuances of the original dataset with a fraction of the datapoints.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: Distillation results for synthetic data generated by a Gaussian Random Field (n = 3)", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig. 3: Distillation results for synthetic data of two Gaussian clusters (n = 3)", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Distillation results for MNIST binary 0 vs. 1 classification (n = 3)", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "acknowledgementsThis research has been funded in part by the Office of Naval Research Grant Number Grant N00014-18-1-2830, DSTA Singapore, and the J. P. Morgan AI Research program.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Table1from[LTOS19]. The trade-off in the worst case for the squared error loss.", "figure_data": "SAMPLING SCHEMESPECTRUMNUMBER OF FEATURESfinite ranks ϕ ∈ Ω(1)WEIGHTED RFF", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Alaa Maalouf; Murad Tukan; Noel Loo; Ramin Hasani; Mathias Lechner; Daniela Rus
[ { "authors": "Michael Haim Avron; Cameron Kapralov; Christopher Musco; Ameya Musco; Amir Velingker; Zandieh", "journal": "PMLR", "ref_id": "b0", "title": "Random fourier features for kernel ridge regression: Approximation bounds and statistical guarantees", "year": "2017" }, { "authors": "Vladimir Braverman; Dan Feldman; Harry Lang; Adiel Statman; Samson Zhou", "journal": "", "ref_id": "b1", "title": "New frameworks for offline and streaming coreset constructions", "year": "2016" }, { "authors": "Olivier Bachem; Mario Lucic; S Hamed Hassani; Andreas Krause", "journal": "AAAI Press", "ref_id": "b2", "title": "Approximate k-means++ in sublinear time", "year": "2016" }, { "authors": "Zalán Borsos; Mojmir Mutny; Andreas Krause", "journal": "", "ref_id": "b3", "title": "Coresets via bilevel optimization for continual learning and streaming", "year": "2020" }, { "authors": "David Burt; Carl Edward Rasmussen; Mark Van; Der Wilk", "journal": "PMLR", "ref_id": "b4", "title": "Rates of convergence for sparse variational Gaussian process regression", "year": "2019-06-15" }, { "authors": "Dingfan Chen; Raouf Kerkouche; Mario Fritz", "journal": "", "ref_id": "b5", "title": "Private set generation with discriminative information", "year": "2022" }, { "authors": "George Cazenavette; Tongzhou Wang; Antonio Torralba; Alexei A Efros; Jun-Yan Zhu", "journal": "", "ref_id": "b6", "title": "Dataset distillation by matching training trajectories", "year": "2022" }, { "authors": "Dan Feldman; Michael Langberg", "journal": "", "ref_id": "b7", "title": "A unified framework for approximating and clustering data", "year": "2011" }, { "authors": "Jonathan H Huggins; Trevor Campbell; Tamara Broderick", "journal": "", "ref_id": "b8", "title": "Coresets for scalable bayesian logistic regression", "year": "2016" }, { "authors": "Arthur Jacot; Franck Gabriel; Clément Hongler", "journal": "Advances in neural information processing systems", "ref_id": "b9", "title": "Neural tangent kernel: Convergence and generalization in neural networks", "year": "2018" }, { "authors": "Ibrahim Jubran; Alaa Maalouf; Dan Feldman", "journal": "", "ref_id": "b10", "title": "Introduction to coresets: Accurate coresets", "year": "2019" }, { "authors": "Arthur Jacot; Berfin Simsek; Francesco Spadaro; Clément Hongler; Franck Gabriel", "journal": "PMLR", "ref_id": "b11", "title": "Implicit regularization of random feature models", "year": "2020" }, { "authors": "Ibrahim Jubran; Murad Tukan; Alaa Maalouf; Dan Feldman", "journal": "PMLR", "ref_id": "b12", "title": "Sets clustering", "year": "2020" }, { "authors": "Mario Lucic; Olivier Bachem; Andreas Krause", "journal": "PMLR", "ref_id": "b13", "title": "Strong coresets for hard and soft bregman clustering with applications to exponential family mixtures", "year": "2016" }, { "authors": "Noel Loo; Ramin Hasani; Alexander Amini; Daniela Rus", "journal": "", "ref_id": "b14", "title": "Efficient dataset distillation using random feature approximation", "year": "2022" }, { "authors": "Noel Loo; Ramin Hasani; Alexander Amini; Daniela Rus", "journal": "", "ref_id": "b15", "title": "Evolution of neural tangent kernels under benign and adversarial training", "year": "2022" }, { "authors": "Noel Loo; Ramin Hasani; Mathias Lechner; Daniela Rus", "journal": "", "ref_id": "b16", "title": "Dataset distillation with convexified implicit gradients", "year": "2023" }, { "authors": "Zhu Li; Jean-Francois Ton; Dino Oglic; Dino Sejdinovic", "journal": "PMLR", "ref_id": "b17", "title": "Towards a unified analysis of 
random fourier features", "year": "2019" }, { "authors": "Zhu Li; Jean-Francois Ton; Dino Oglic; Dino Sejdinovic", "journal": "The Journal of Machine Learning Research", "ref_id": "b18", "title": "Towards a unified analysis of random fourier features", "year": "2021" }, { "authors": "Baharan Mirzasoleiman; Jeff A Bilmes; Jure Leskovec", "journal": "PMLR", "ref_id": "b19", "title": "Coresets for data-efficient training of machine learning models", "year": "2020-07" }, { "authors": "Alaa Maalouf; Gilad Eini; Ben Mussay; Dan Feldman; Margarita Osadchy", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b20", "title": "A unified approach to coreset learning", "year": "2022" }, { "authors": "Alaa Maalouf; Ibrahim Jubran; Dan Feldman", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Fast and accurate least-mean-squares solvers", "year": "2019" }, { "authors": "Alaa Maalouf; Ibrahim Jubran; Murad Tukan; Dan Feldman", "journal": "Sensors", "ref_id": "b22", "title": "Coresets for the average case error for finite query sets", "year": "2021" }, { "authors": "Alaa Maalouf; Adiel Statman; Dan Feldman", "journal": "", "ref_id": "b23", "title": "Tight sensitivity bounds for smaller coresets", "year": "2020" }, { "authors": "Alexander Munteanu; Chris Schwiegelshohn; Christian Sohler; David Woodruff", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b24", "title": "On coresets for logistic regression", "year": "2018" }, { "authors": "Alaa Maalouf; Murad Tukan; Eric Price; Daniel M Kane; Dan Feldman", "journal": "PMLR", "ref_id": "b25", "title": "Coresets for data discretization and sine wave fitting", "year": "2022" }, { "authors": "Kevin P Murphy", "journal": "MIT press", "ref_id": "b26", "title": "Machine learning: a probabilistic perspective", "year": "2012" }, { "authors": "Timothy Nguyen; Zhourong Chen; Jaehoon Lee", "journal": "", "ref_id": "b27", "title": "Dataset meta-learning from kernel ridge-regression", "year": "2020" }, { "authors": "Timothy Nguyen; Zhourong Chen; Jaehoon Lee", "journal": "", "ref_id": "b28", "title": "Dataset meta-learning from kernel ridge-regression", "year": "2021" }, { "authors": "Timothy Nguyen; Roman Novak; Lechao Xiao; Jaehoon Lee", "journal": "", "ref_id": "b29", "title": "Dataset distillation with infinitely wide convolutional networks", "year": "2021" }, { "authors": "Omead Pooladzandi; David Davini; Baharan Mirzasoleiman", "journal": "PMLR", "ref_id": "b30", "title": "Adaptive second order coresets for data-efficient machine learning", "year": "2022-07-23" }, { "authors": "Ali Rahimi; Benjamin Recht", "journal": "Advances in neural information processing systems", "ref_id": "b31", "title": "Random features for large-scale kernel machines", "year": "2007" }, { "authors": "Alessandro Rudi; Lorenzo Rosasco", "journal": "Advances in neural information processing systems", "ref_id": "b32", "title": "Generalization properties of learning with random features", "year": "2017" }, { "authors": "Mattia Sangermano; Antonio Carta; Andrea Cossu; Davide Bacciu", "journal": "", "ref_id": "b33", "title": "Sample condensation in online continual learning", "year": "2022" }, { "authors": "Edward Snelson; Zoubin Ghahramani", "journal": "MIT Press", "ref_id": "b34", "title": "Sparse gaussian processes using pseudoinputs", "year": "2005" }, { "authors": "Felipe Petroski Such; Aditya Rawal; Joel Lehman; Kenneth O Stanley; Jeff Clune", "journal": "", "ref_id": "b35", "title": 
"Generative teaching networks: Accelerating neural architecture search by learning to generate synthetic training data", "year": "2019" }, { "authors": "J Danica; Jeff Sutherland; Schneider", "journal": "", "ref_id": "b36", "title": "On the error of random fourier features", "year": "2015" }, { "authors": "Murad Tukan; Cenk Baykal; Dan Feldman; Daniela Rus", "journal": "Theoretical Computer Science", "ref_id": "b37", "title": "On coresets for support vector machines", "year": "2021" }, { "authors": "Murad Tukan; Alaa Maalouf; Dan Feldman", "journal": "", "ref_id": "b38", "title": "Coresets for near-convex functions", "year": "2020" }, { "authors": "Dustin Tran; Rajesh Ranganath; David M Blei", "journal": "", "ref_id": "b39", "title": "The variational gaussian process", "year": "2016" }, { "authors": "", "journal": "", "ref_id": "b40", "title": "TWZ +", "year": "" }, { "authors": "Murad Tukan; Xuan Wu; Samson Zhou; Vladimir Braverman; Dan Feldman", "journal": "PMLR", "ref_id": "b41", "title": "New coresets for projective clustering and applications", "year": "2022" }, { "authors": "Murad Tukan; Samson Zhou; Alaa Maalouf; Daniela Rus; Vladimir Braverman; Dan Feldman", "journal": "", "ref_id": "b42", "title": "Provable data subset selection for efficient neural network training", "year": "2023" }, { "authors": "Tongzhou Wang; Jun-Yan Zhu; Antonio Torralba; Alexei A Efros", "journal": "", "ref_id": "b43", "title": "Dataset distillation", "year": "2018" }, { "authors": "Bo Zhao; Hakan Bilen", "journal": "", "ref_id": "b44", "title": "Dataset condensation with differentiable siamese augmentation", "year": "2021" }, { "authors": "Bo Zhao; Konda Reddy Mopuri; Hakan Bilen", "journal": "", "ref_id": "b45", "title": "Dataset condensation with gradient matching", "year": "2021" }, { "authors": "Yongchao Zhou; Ehsan Nezhadarya; Jimmy Ba", "journal": "", "ref_id": "b46", "title": "Dataset distillation using neural feature regression", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 66.47, 516.85, 133.7, 16.12 ], "formula_id": "formula_0", "formula_text": "L KIP = 1 2 ∥y t -K TS K -1 SS y S ∥ 2 2 ." }, { "formula_coordinates": [ 3, 210.61, 443.21, 338.7, 27.42 ], "formula_id": "formula_1", "formula_text": "α λ [X,y,k] ∈ arg min α∈R n 1 n ∥y -Kα∥ 2 + λα T Kα.(1)" }, { "formula_coordinates": [ 3, 233.04, 497.31, 312.03, 53.21 ], "formula_id": "formula_2", "formula_text": "x ∈ R d , f λ [X,y,k] (x) = n ∑ i=1 α λ [X,y,k] i k (X i * , x) . (2" }, { "formula_coordinates": [ 3, 545.08, 529.38, 4.24, 10.49 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 3, 73.98, 609.92, 475.61, 27.1 ], "formula_id": "formula_4", "formula_text": "(I) Let F be the set of all functions mapping R d to R. Let f * ∈ F be the minimizer of n ∑ i=1 |y i -f (X i * )| 2 ," }, { "formula_coordinates": [ 4, 66.03, 443.89, 483.56, 34.97 ], "formula_id": "formula_5", "formula_text": "K = Tr K (K + nλI n ) -1 . Let s ϕ ∈ Ω d λ K log d λ K" }, { "formula_coordinates": [ 4, 66.33, 515.95, 483.25, 65.35 ], "formula_id": "formula_6", "formula_text": "1 n n ∑ i=1 y i -f λ [ X,y,ϕ] X i * 2 ≤ 1 n n ∑ i=1 y i -f λ [X,y,k] (X i * ) 2 + 4λ, where f λ [ X,y,ϕ] : R s ϕ → R such that for every row vector z ∈ R s ϕ , f λ [ X,y,ϕ] (z) = z X T X + λns ϕ λI s ϕ -1 X T y." }, { "formula_coordinates": [ 5, 192.93, 203.59, 259.61, 19.62 ], "formula_id": "formula_7", "formula_text": "X T X + λns ϕ λI s ϕ -1 X T y = S T S + λns ϕ λI s ϕ -1 S T y S ," }, { "formula_coordinates": [ 5, 157.78, 280.91, 391.53, 60.47 ], "formula_id": "formula_8", "formula_text": "1 n n ∑ i=1 f λ [X,y,k] (X i * ) -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ min τ∈Υ 2 max τ, 4 τ 2 + 2 min 1 + τ, 4 (1 + τ) 3τ λ,(3)" }, { "formula_coordinates": [ 5, 137, 374.66, 365.88, 65.47 ], "formula_id": "formula_9", "formula_text": "1 n n ∑ i=1 y i -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ min τ∈Υ min 1 + τ, 4(1+τ) 3τ n n ∑ i=1 y i -f λ [X,y,k] (X i * ) 2 + 4 min 1 + τ, 4 (1 + τ) 3τ + 2 max τ, 4 τ 2 λ." }, { "formula_coordinates": [ 5, 66.33, 510.16, 366.68, 53.68 ], "formula_id": "formula_10", "formula_text": "S T S + λns ϕ λI s ϕ X T X + λns ϕ λI s ϕ -1 X T y = S T y S Let b = S T S + λns ϕ λI s ϕ X T X + λns ϕ λI s ϕ -1" }, { "formula_coordinates": [ 6, 214.39, 128.63, 185.87, 40.37 ], "formula_id": "formula_11", "formula_text": "√ s ϕ ∥β∥≤ √ 2 β∈R s ϕ n ∑ i=1 1 n f (X i * ) -X i * β 2 ≤ 2λ." }, { "formula_coordinates": [ 6, 359.4, 221.01, 155.28, 21.74 ], "formula_id": "formula_12", "formula_text": "β = 1 s ϕ X T X X T + nλI s ϕ -1 f[X]," }, { "formula_coordinates": [ 6, 66.47, 244.99, 151.25, 11.76 ], "formula_id": "formula_13", "formula_text": "f[X] i = f (X i * ) for every i ∈ [n]." }, { "formula_coordinates": [ 6, 218.57, 287.91, 330.74, 55.57 ], "formula_id": "formula_14", "formula_text": "[S,y S ,k] (•) such that β = 1 s ϕ X T 1 s ϕ X X T + nλI s ϕ -1 f S [X],(5)" }, { "formula_coordinates": [ 6, 74.44, 371.56, 267.3, 13.8 ], "formula_id": "formula_15", "formula_text": "(i) f S [X] ∈ R n such that for every i ∈ [n], f S [X] i = f λ,X,y" }, { "formula_coordinates": [ 6, 71.26, 393.67, 247.65, 29.12 ], "formula_id": "formula_16", "formula_text": "(ii) f λ,X,y [S,y S ,k] (•) = s ϕ +1 ∑ i=1 α i k (S i * , •) such that α ∈ R s ϕ +1 ." 
}, { "formula_coordinates": [ 6, 66.74, 446.21, 226.92, 21.75 ], "formula_id": "formula_17", "formula_text": "β ∈ R s ϕ , f S [X] ∈ R n and X T 1 s ϕ X X T + nλI n -1" }, { "formula_coordinates": [ 6, 183.07, 514.57, 115.03, 21.75 ], "formula_id": "formula_18", "formula_text": "s ϕ X T 1 s ϕ X X T + nλI n -1" }, { "formula_coordinates": [ 6, 216.25, 549.73, 177.79, 106.79 ], "formula_id": "formula_19", "formula_text": "β = Â f λ S [X] = Â             s ϕ +1 ∑ i=1 α i k (S i * , X 1 * ) s ϕ +1 ∑ i=1 α i k (S i * , X 2 * ) . . . s ϕ +1 ∑ i=1 α i k (S i * , X n * )            " }, { "formula_coordinates": [ 7, 89.32, 113.63, 436.33, 106.79 ], "formula_id": "formula_20", "formula_text": "β j = Âj *             s ϕ +1 ∑ i=1 α i k (S i * , X 1 * ) s ϕ +1 ∑ i=1 α i k (S i * , X 2 * ) . . . s ϕ +1 ∑ i=1 α i k (S i * , X n * )             = n ∑ t=1 Âj,t k (S 1 * , X t * ) , • • • , n ∑ t=1 Âj,t k S (sϕ+1) * , X t *    α 1 . . . α s ϕ +1    ." }, { "formula_coordinates": [ 7, 151.51, 255.54, 311.66, 26.99 ], "formula_id": "formula_21", "formula_text": "A j * = n ∑ t=1 Âj,t k (S 1 * , X t * ) , • • • , n ∑ t=1 Âj,t k S (sϕ+1) * , X t * ∈ R s ϕ +1 ." }, { "formula_coordinates": [ 7, 216.74, 316.72, 332.57, 29.73 ], "formula_id": "formula_22", "formula_text": "1 s ϕ X T 1 s ϕ X X T + nλI n -1 f S [X] = Aα,(6)" }, { "formula_coordinates": [ 7, 123.33, 473.89, 421.74, 25.3 ], "formula_id": "formula_23", "formula_text": "(1 -ε) ∥y -z∥ 2 - 4 ε 2 ∥x -z∥ 2 ≤ ∥x -y∥ 2 ≤ 4 ε 2 ∥x -z∥ 2 + (1 + ε) ∥y -z∥ 2 . (7" }, { "formula_coordinates": [ 7, 545.07, 481.27, 4.24, 10.49 ], "formula_id": "formula_24", "formula_text": ")" }, { "formula_coordinates": [ 7, 127.78, 508.51, 417.29, 50.89 ], "formula_id": "formula_25", "formula_text": "∈ R d ∥x -y∥ 2 ≤ min τ∈Υ max τ, 4 τ 2 ∥x -z∥ 2 + min 1 + τ, 4 (1 + τ) 3τ ∥y -z∥ 2 . (8" }, { "formula_coordinates": [ 7, 545.08, 541.42, 4.24, 10.49 ], "formula_id": "formula_26", "formula_text": ")" }, { "formula_coordinates": [ 8, 139.68, 109.59, 336.03, 201.56 ], "formula_id": "formula_27", "formula_text": "1 n n ∑ i=1 f λ [X,y,k] (X i * ) -f λ,X,y [S,y S ,k] (X i * ) 2 = 1 n n ∑ i=1 f λ [X,y,k] (X i * ) -f λ [ X,y,ϕ] X i * + f λ [ X,y,ϕ] X i * -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ min τ∈Υ max τ, 4 τ 2 n n ∑ i=1 f λ [X,y,k] (X i * ) -f λ [ X,y,ϕ] X i * 2 + min 1 + τ, 4(1+τ) 3τ n n ∑ i=1 f λ [ X,y,ϕ] X i * -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ min τ∈Υ 2 max τ, 4 τ 2 λ + 2 min 1 + τ, 4 (1 + τ) 3τ λ = min τ∈Υ 2 max τ, 4 τ 2 + 2 min 1 + τ, 4 (1 + τ) 3τ λ," }, { "formula_coordinates": [ 8, 109.03, 371.35, 440.28, 218.63 ], "formula_id": "formula_28", "formula_text": "1 n n ∑ i=1 y i -f λ,X,y [S,y S ,k] (X i * ) 2 = 1 n n ∑ i=1 y i -f λ [ X,y,ϕ] X i * + f λ [ X,y,ϕ] X i * -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ min τ∈Υ min 1 + τ, 4(1+τ) 3τ n n ∑ i=1 y i -f λ [ X,y,ϕ] X i * 2 + max τ, 4 τ 2 n n ∑ i=1 f λ [ X,y,ϕ] X i * -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ min τ∈Υ min 1 + τ, 4(1+τ) 3τ n n ∑ i=1 y i -f λ [ X,y,ϕ] X i * 2 + 2 max τ, 4 τ 2 λ ≤ min τ∈Υ min 1 + τ, 4(1+τ) 3τ n n ∑ i=1 y i -f λ [X,y,k] (X i * ) 2 + 4 min 1 + τ, 4 (1 + τ) 3τ + 2 max τ, 4 τ 2 λ,(9)" }, { "formula_coordinates": [ 8, 212.08, 688.72, 191.72, 31.02 ], "formula_id": "formula_29", "formula_text": "1 n n ∑ i=1 f λ [X,y,k] (X i * ) -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ 8λ,and" }, { "formula_coordinates": [ 9, 168.51, 101.22, 278.87, 31.02 ], "formula_id": "formula_30", "formula_text": "1 n n ∑ i=1 y i -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ 2 n n ∑ i=1 y i -f λ [X,y,k] (X i * ) 2 + 12λ." 
}, { "formula_coordinates": [ 9, 124.12, 158.2, 367.64, 31.02 ], "formula_id": "formula_31", "formula_text": "1 n n ∑ i=1 y i -f λ,X,y [S,y S ,k] (X i * ) 2 ≤ 1 + ε n n ∑ i=1 y i -f λ [X,y,k] (X i * ) 2 + 4 (1 + ε) + 8 ε 2 λ." }, { "formula_coordinates": [ 10, 66.47, 89.56, 88.04, 20.21 ], "formula_id": "formula_32", "formula_text": "k(x, x ′ ) = e -||x-x ′ || 2 2 2l 2" }, { "formula_coordinates": [ 11, 152.12, 139.69, 307.73, 96.08 ], "formula_id": "formula_33", "formula_text": "λ i ∝ A i s ϕ ∈ Ω(log n • log log n) λ i ∝ i -2t (t ≥ 1) s ϕ ∈ Ω(n 1/2t • log n) λ i ∝ i -1 s ϕ ∈ Ω( √ n • log n) PLAIN RFF finite rank s ϕ ∈ Ω( √ n) λ i ∝ A i s ϕ ∈ Ω( √ n • log log n) λ i ∝ i -2t (t ≥ 1) s ϕ ∈ Ω( √ n • log n) λ i ∝ i -1 s ϕ ∈ Ω( √ n • log n)" } ]
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Collecting large amounts of quality data is always desirable, if not essential for most successful use cases in the field of machine learning. However, sometimes this is not possible. Usually, we either have a large amount of data that we doubt whether it is of good quality, or we have little data that we know that have high quality but it becomes complicated to obtain more for the intended use case. That can be either because of the novelty of the use case to be treated or because of the difficulty of the acquisition of data. Additionally, there may be some cases where the cost of re-training a model needs to be optimized as some models are very expensive to train, so if you have a method of evaluating new data you can delegate that decision to when you obtain sufficient quality data.\nThe evaluation of data quality turns out to be a problem that concerns the entire field of supervised learning and needs to be addressed because of the potential benefits it can bring. However, it is necessary to clarify that a data point is sometimes not good or bad individually, but depends on the origin or quality of the rest of the available data, so it is desirable to bring a more contextual approach to it. Indeed, we can formulate our problem as an optimization problem where, given N samples, the objective is to select the best samples to the given task.\nFortunately, there is a field of machine learning that tries to optimize processes in order to maximize a metric (or reward) function. Reinforcement learning algorithms are essentially search algorithms that find a solution by trial and error. Therefore, using this kind of algorithms could be the key to solve the data valuation problem, knowing that it should be scalable to large amounts of data, as well as agnostic to the supervised learning model used. This is why this article proposes a method of data valuation based on reinforcement learning that takes into account the information that a data sample can provide, both individually and contextually.\nThis paper is organized as follows. In Section 2 we briefly describe the state of the art in data quality assessment for supervised models. We present the proposed method in Section 3, and the experimental results over several datasets are shown in Section 4. The paper ends with some conclusions in Section 5." }, { "figure_ref": [], "heading": "State of the art", "publication_ref": [ "b6", "b4", "b4", "b16", "b15" ], "table_ref": [], "text": "There are previous approaches in the field of data quality assessment for supervised learning models. First, and a fairly common method, is the Leave One Out (LOO) approach. This technique attempts to isolate the effect of each data point on the training sample by simply removing it from the set and looking at the results it achieves. The worse the result achieved by removing one of the records, the greater the value of that data point is considered to be. While simple, the computational cost of this approach is in the order of O(N 2 ) for a dataset with N training samples. Nevertheless, this method has been reformulated in a computationally less inefficient way [Koh and Liang, 2017].\nOn the other hand, and based on game theory, Shapley values can be defined [Lundberg and Lee, 2017]. Shapley values are a fairly widely used method in variable selection and explainability of supervised learning models. 
However, it has been seen in the literature that they can also be used for the particular use case of data quality assessment at Ghorbani and Zou [2019]. The original computational cost of Shapley values is O(N !) where N is the number of training samples, so it is necessary to perform a Monte Carlo sampling for cases where then number of items to be discriminated is high. In particular, in Ghorbani and Zou [2019] they reduce this cost to O(2 n ) by reducing the problem to a cooperative game and using a truncated Monte Carlo (TMC) sampling strategy.\nFinally, there is also prior work on methods that use reinforcement learning techniques for the problem of data quality assessment such as Data Valuation using Reinforcement Learning (DVRL) [Yoon et al., 2020]. This method proved to be experimentally quite powerful, while featuring a computational complexity well below those discussed above (see D). It approaches the objective as a multistep trajectory problem, so it is forced to generate a reward that takes into account a moving window of scores within each episode versus the score of the actual step, being the episode a set of batches of the whole training dataset. This technique uses the REINFORCE algorithm [Williams, 1992] and treats each record individually to produce a probability value to be selected or not by a multinomial distribution.\nTherefore, the current paper proposes a novel methodology based on a reinforcement learning approach improving over the one previously discussed, aimed at enhancing certain aspects of the aforementioned technique." }, { "figure_ref": [], "heading": "RLBoost method", "publication_ref": [ "b12" ], "table_ref": [], "text": "The problem of selecting and sorting a collection of training data for its application in a supervised learning model can be addressed as an optimization problem, making the field of reinforcement learning particularly relevant. Specifically, the task involves selecting a subset of records (actions) from a given set (state) to optimize the model score (reward). To accomplish this, the paper delves into the domain of reinforcement learning, with a focus on policy gradient and Actor-Critic strategies-based agents.\nReinforcement learning [Sutton and Barto, 2018] is the field that deals with the optimization of decision sequences along a trajectory. In this way, we would go on to define the following concepts.\n1. A trajectory T is composed of single steps t ∈ T .\n2. During the trajectory T , decisions a t are made based on certain situations or states s t for which an immediate reward r t is obtained.\n3. The agent must adjust the parameters θ of a policy π by which to decide what action to take in a given state, which will be denoted as π θ (a t |s t ).\n4. In the case of agents based on Actor-Critic strategies it will also be necessary to adjust the parameters ϕ of a complementary estimator of the value function V depending only on the current state. This estimator will be in charge of predicting the cumulative reward from the current state to the end of the trajectory, and such estimation will be noted in the form V ϕ (s t )." }, { "figure_ref": [], "heading": "Proximal Policy Optimization", "publication_ref": [ "b16", "b10" ], "table_ref": [], "text": "The reinforcement learning algorithm used in this study is Proximal Policy Optimization (PPO). At Yoon et al. [2020] they use a regular Policy Gradient method, but PPO [Schulman et al., 2017] usually outperforms these techniques by several reasons. 
First, we have the use of advantages A ϕ (s t , a t , π) = δ t + γδ t+1 + γ 2 δ t+2 + ... + γ T -2 δ t+T -1 where δ t = r t + γV ϕ (s t+1 ) -V ϕ (s t ) and γ is defined as a discount parameter over the steps in a trajectory. Those advantages are used to check whether some actions can obtain higher/lower rewards than expected, and therefore they are reinforced/disincentivized. This leads us to a generalization of the advantages calculation called Generalized Advantages Estimation (GAE) defined as\nA GAE(γ,λ) t = ∞ t=0 (λγ) t δ t+1 .\nHere, the use of an exponential weight discount λ is needed to control the bias variance trade-off.\nOn the other hand, PPO incorporates an intrinsic reward mechanism via an entropy term to promote exploration. Finally, the clipping component of this algorithm's policy loss acts as a regularizer of the model, ensuring that the policy does not suffer from excessive changes between each iteration and thereby enabling smoother improvements to be made.\nThe original PPO algorithm has this loss function definition\nL(θ) = L clip (θ) -c 1 L V F (ϕ) + c 2 S(θ),(1)\nwith the following entropy bonus loss to encourage exploration\nS(θ) = E t [-π θ (a t , s t ) log(π θ (a t , s t ))] ,\nthe value function (or critic) loss defined as\nL V F (ϕ) = E t r t + γV ϕ (s t+1 , π)) -V ϕ (s t , π)) 2 2 ,(2)\nwith an old copy of the value function estimator with parameters θ, and the policy (or actor) loss as\nt 1 = A ϕ (s t , a t , π)R t (θ), t 2 = A ϕ (s t , a t , π)clip(R t (θ), 1 -ϵ, 1 + ϵ), L clip (θ) = E t [min(t 1 , t 2 )],(3)\nwhere R t = π θ (a t , s t )/π θ (a t , s t ). Here, we can see that the clipping method is used as a kind of regularizer to avoid aggressive changes on the policy function as previously mentioned." }, { "figure_ref": [], "heading": "PPO as a bandit agent", "publication_ref": [], "table_ref": [], "text": "After reviewing the existing literature on data evaluation using reinforcement learning, it became apparent that a simplified approach could potentially establish a starting point for improvement. As outlined in earlier sections, the problem was formulated as a one-step optimization problem in which a reinforcement learning algorithm selects a subset of data from a training dataset to improve a chosen metric. The size of the data batch for the agent's states (N ) must be large enough to be representative of the original data, but not so large as to excessively increase the complexity of the problem. As part of this simplification process, a reward mechanism was defined based on calculating the difference between the validation score obtained by a supervised estimator trained with the full data batch and the validation score of the same estimator with the subset of data selected by the reinforcement learning agent.\nHaving specified the use case at hand, one could argue that a bandit method could better fit in this case, since the only actions available in the environment are the selection of data points in the batch being evaluated, for which an immediate reward is obtained. But in fact, to adapt PPO to a problem formulated in this way, we only need to check that it is mathematically possible and thus take advantage of the improvements of the algorithm mentioned above. This adaptation involves making two assumptions: γ = 0 and t = 0 as there is no trajectory.\nThese assumptions lead us to rewrite the previous formulation. 
First, the advantages calculated in the GAE are simplified in the form:\nA ϕ (s, a, π) = δ = r -V ϕ (s, π).\nTherefore, the new expression of the policy loss is as follows\nt 1 = δR(θ), t 2 = δclip(R(θ), 1 -ϵ, 1 + ϵ), L clip (θ) = min(t 1 , t 2 ),(4)\nand the loss function of the value function (the critic part) is given by:\nL V F (ϕ) = ∥r -V ϕ (s, π)∥ 2 2 .\n(5)\nIt is worth noting that, as we see in the differences between equation (2) and equation ( 5), what was previously a temporal difference error between the actual value functions estimates and the estimation of the value function of the next state has become only the estimate of the actual state. This tells us that the value function of this agent indicates whether a higher or lower difference at model score is expected exclusively for the actual data.\nFinally, it should also be noted that the entropy bonus, which is the value that allows the agent to be able to explore, would look like this\nS(θ) = -π θ (a, s) log(π θ (a, s)).\nAfter having tested the mathematical feasibility of making the proposed change, it has been proven that not only the change is feasible but it also provides us with some interesting advantages for the specific use case, such as the estimation of the profits of filtering non-quality data at each step through the agent's critic estimation or the clipping of the actor (equation ( 4)) as a method of regularization of the agent. It shows the application cases for both tabular data and image data, where a non-trainable data vectorizer will be needed for the latter. In this diagram it can be also seen that the reward calculation is the difference between the score of the episode using the proposed filtering versus the score of the model without filtering, both against the validation set." }, { "figure_ref": [ "fig_0" ], "heading": "Model architecture", "publication_ref": [ "b9", "b3" ], "table_ref": [], "text": "For a complete understanding of the architecture of the model used by this particular algorithm and presented in Figure 1, it is necessary to clarify certain aspects beforehand.\nFirst, depending on the use case for which the strategy presented in this paper is to be used, some kind of vectorization of each of the data samples to be evaluated is needed. In the case of tabular data this step is straightforward since each of the records is a vector in itself and the vectorizing part can be simply a series of fully connected layers or just one that functions as a projection to the latent state of the data. However, in the case of images, some kind of architecture would be needed to vectorize each of these samples, for example, a Contrastive Languaje-Image Pretrained model (CLIP) [Radford et al., 2021].\nOnce all the data has been vectorized, each of these vectors is used as input to a sequence of L Transformer Encoders [Vaswani et al., 2017]. An extra parameterizable vector, known as CLS Token, is also added to this process. The strategy of including a learnable vector called CLS is frequently employed in document classification within the field of natural language processing (NLP ) [Devlin et al., 2018]. The output of the L Transformer Encoders corresponding to the CLS Token is expected to capture the contextual information of the entire batch of vectors, allowing it to estimate the value function (5). In this case, the value function pertains to the complete batch of proposed data." 
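A minimal PyTorch sketch of the architecture described above is given below; the latent dimension, number of attention heads, and the use of a per-sample sigmoid selection head are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class RLBoostAgent(nn.Module):
    """Sketch of the actor-critic model: each vectorized sample in a batch of N
    candidates is projected to a latent space and passed, together with a learnable
    CLS token, through L stacked Transformer encoders.  Per-sample outputs feed the
    actor (selection probabilities); the CLS output feeds the critic (batch value)."""
    def __init__(self, input_dim, latent_dim=128, n_layers=4, n_heads=4):
        super().__init__()
        self.project = nn.Linear(input_dim, latent_dim)          # projection to the latent state
        self.cls_token = nn.Parameter(torch.zeros(1, 1, latent_dim))
        layer = nn.TransformerEncoderLayer(latent_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.actor = nn.Linear(latent_dim, 1)                    # per-sample logit: select / discard
        self.critic = nn.Linear(latent_dim, 1)                   # value estimate from the CLS output

    def forward(self, x):                                        # x: (batch, N, input_dim)
        h = self.project(x)
        cls = self.cls_token.expand(h.size(0), -1, -1)
        h = self.encoder(torch.cat([cls, h], dim=1))
        value = self.critic(h[:, 0]).squeeze(-1)                      # V_phi(s) for the whole batch
        probs = torch.sigmoid(self.actor(h[:, 1:])).squeeze(-1)       # pi_theta(a|s) per sample
        return probs, value
```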
}, { "figure_ref": [ "fig_0" ], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "As depicted in Figure 1, the proposed method is versatile and applicable to various use cases, as long as the input data can be vectorized to serve as input to the Transformer Encoder. Hence, this approach will be evaluated on both tabular and image data, where the image data will be vectorized beforehand.\nIn addition, in order to test the training data filtering behavior that the proposed method can perform, a noise rate has been introduced in each of the train datasets to be tested. This error rate consisted of a target class shift to the opposite class of a random percentage of the training data at tabular data and a complete circular shift at image data (MNIST dataset), but neither at validation nor test datasets.\nSince we are introducing noise into datasets that are expected to be sufficiently clean it has been decided to evaluate each of the proposed methods in two different ways. In all cases, we perform a test of the accuracy score of the supervised learning model using the filtering proposed by RLBoost against the same supervised learning model trained with all the available records, which we refer to as the \"baseline\".Additionally, as in some cases we are intentionally introducing noise (15% and 30%) we have also decided to evaluate the ability of the selection model to detect noisy records. Since in this case we also want to measure the ability of the model to detect data in which noise has been introduced, the evaluation of the data can be seen as a classification problem between noiseless and noisy records, such that the positive class is noiseless record and the negative class is noisy record. In this way, Receiver Operating Characteristic (ROC) curves can be calculated to measure the effectiveness of the method.\nThe methods compared in this study will be those presented in Section 2. The LOO, Data Shapley (SHAP), DVRL and RLBoost methods will be run for each of the use cases presented. It should be noted that the version of Data Shapley used needs to be the one that improves the computational cost of the algorithm from O(N !) to O(2 n ) in order to obtain results for large dataset sizes.\nIt is necessary to emphasize that, since the LOO and SHAP methods are not agents intended for the task of data selection, a subsequent process has been performed after obtaining the data values from these methods. This process involves a sweep with different thresholds for the values obtained so that, once the best cut-off threshold is selected against the validation set, this threshold is used to train the final model.\nThis sweep has been performed in order to compare the rest of the methods against the best possible execution of these methods. However, it generates a couple of side-effects that must be taken into account:\n• This sweep is very costly to perform in the face of new data arrivals as it forces several re-trainings , so it is not desirable to perform.\n• The score results of the models using sweeping are measured in a way that generates a lower bound for the final score with the value of the baseline model. This is because a threshold that doesn't filter any data will always result in the baseline score. Therefore, the worst-case scenario for this method can only be the same as the baseline accuracy. In other words, the baseline score will be achieved even if no method had been applied." 
}, { "figure_ref": [], "heading": "Tabular data", "publication_ref": [ "b1", "b16" ], "table_ref": [ "tab_0" ], "text": "Since each record in tabular datasets corresponds to a vector, the projection operation for this kind of problems is fairly straightforward. Specifically, it has been decided to use the Adult dataset and A5A, W5A and CIFAR10 datasets (from the LibSVM repository Chang and Lin [2011]) to test the performance with this kind of data.\nIt should be noted that in the case of the CIFAR10 dataset, which is not a binary classification problem, the problem was previously binarized by simply segregating class #1 from the rest. In all cases, the noise rates proposed for each of the tests are 0%, 15% and 30%. As can be seen in Table 1, in all cases priority has been given to having as large a dataset as possible under test to ensure that the evaluation of the algorithm performance is as realistic as possible. On the other hand, it should be noticed that in the case of the Adult dataset, the same training, validation and test cuts and the same preprocessing as in the original DVRL paper [Yoon et al., 2020] have been performed." }, { "figure_ref": [ "fig_1", "fig_2", "fig_1", "fig_3" ], "heading": "Dataset Train", "publication_ref": [ "b4" ], "table_ref": [ "tab_12" ], "text": "The parameters used in each of the runs are common to all experiments. In all cases, the internal generic estimator was a logistic regression (to speed up the execution of the tests), the batch sizes of records proposed to the agent were of size 200 samples, the agent has 4 stacked Transformer Encoders, trained during 1e5 steps and the agent training batch being 64 in all proposed datasets. The tables 2, 3, 4 and 5 show the results of each of the proposed methods for the indicated datasets. On the one hand we have the results of previous filtering methods such as LOO, SHAP and DVRL together with the model without any filtering (Baseline) and on the other hand we have the results of our RLBoost method (RLB) with different entropy bonus values (the hyperparameter that quantifies the exploration in the PPO algorithm).\nTo assess the stability of the aforementioned methods, 5 executions were performed for each method in order to calculate the standard deviation of the metric being evaluated. This allowed us to examine the variability and consistency of the results obtained. It is important to note that in the case of DVRL, several re-executions were required due to the need to make a final decision on whether or not to choose a datum, as it was not necessary to use the sweep of methods that did choose an action. However, in some cases, this decision was incompatible with the training of a final estimator, requiring a rerun. To obtain the 5 valid runs for all the cases proposed, 38 backup runs were necessary in addition to the 75 runs required (also considering the results of Table 11).\nSeveral things must be emphasized in this table. On the one hand it can be seen that the execution of the SHAP method on datasets where the training data size has 5k examples could not be performed since the computational efficiency of the algorithm does not make it feasible (see appendix D). 
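For reference, the shared run configuration described above can be summarized as in the following snippet. The dictionary keys are illustrative and do not correspond to the actual API of the paper's code.

```python
from sklearn.linear_model import LogisticRegression

# Common settings used for all runs, as described above (illustrative names only).
rlboost_config = dict(
    estimator=LogisticRegression(max_iter=1000),   # internal generic estimator
    episode_batch_size=200,    # records proposed to the agent at each step
    num_encoder_layers=4,      # stacked Transformer Encoders in the agent
    train_steps=100_000,       # 1e5 agent training steps
    agent_batch_size=64,       # PPO training batch size
    entropy_coefficients=(1e-1, 1e-2, 1e-3, 1e-4),  # exploration bonus values compared
)
```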
On the other hand, and as expected, the baseline models worsen their performance as noise is introduced into the training dataset, which corroborates that the noise generation is done correctly.\nAdditionally, it can also be observed that the RLBoost-based methods outperform the alternatives in cases where the noise is high and still work well in cases where there is no noise, so it appears to be a fairly robust method as far as improving the final metric of the model (accuracy in our case) is concerned. Figure 2 shows the performance against the test data for each of the models (0%, 15% and 30% error) at the end of the final selection for every tabular dataset tested.\nHowever, it can also be seen that the higher the error introduced, the more the generic estimator can take advantage of the data thanks to the selection made by our agent, which was the objective we were looking for.\n[Figure panels (a)-(d): ROC curves for the best and worst runs in terms of AUC at 15% and 30% error rates.]\nOn the other hand, in Figure 3 and Tables 6, 7, 8 and 9, we can see the ROC curves measuring whether the model has identified a particular record as a sample without noise (positive value) or as a sample with noise (negative value), based on the outputs of the agent before they are binarized into selected or not selected.\nThese curves show us that the capacity of the proposed agents is similar to the best of those available in the state of the art, but without the computational limitation of executing Data Shapley [Ghorbani and Zou, 2019], and with improved stability over DVRL.\nIt should be noted that in the case of CIFAR10 the results corroborate the quality of the agent and are quite promising, as can be seen in Figures 2 and 4.\n[Figure panels (a)-(d): ROC curves for the best and worst runs in terms of AUC at 15% and 30% error rates.]\nIn Appendices A and B we report the graphs for the rest of the ROC analysis on the proposed tabular datasets." }, { "figure_ref": [], "heading": "Image data", "publication_ref": [ "b2", "b9" ], "table_ref": [ "tab_11", "tab_13" ], "text": "As previously mentioned in Subsection 3.3, this algorithm is not restricted to tabular data. Any problem in which the data can be vectorized in advance can be used to evaluate the quality of the given samples. In this case it has been decided to test the robustness of the model by evaluating the quality of vectorized image data using the MNIST dataset [Deng, 2012]. These images have been vectorized with a CLIP model [Radford et al., 2021] with a pre-trained ResNet50. Table 10 shows the different splits and the specification of each one after vectorizing and splitting the data.\nSince this dataset contains handwritten digits from 0 to 9, it was also decided to make the error introduced into the data a little harder. Thus, each of the erroneous samples has been relabeled as the next digit in a circular fashion, so that 0 becomes 1, 1 becomes 2, etc., and 9 becomes 0.
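As a rough sketch of how such MNIST images can be turned into CLIP feature vectors, the snippet below assumes the openai/CLIP package and a small subset of MNIST purely for illustration; the paper does not spell out the exact extraction pipeline.

```python
import clip                      # https://github.com/openai/CLIP (assumed package)
import torch
from torchvision.datasets import MNIST

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)   # pre-trained ResNet50 image tower

mnist = MNIST(root="data", train=True, download=True)
with torch.no_grad():
    # Each image becomes a fixed-size feature vector that the agent's encoder consumes.
    feats = torch.stack([
        model.encode_image(preprocess(img).unsqueeze(0).to(device)).squeeze(0).cpu()
        for img, _ in list(mnist)[:1000]                # subset for illustration
    ])
print(feats.shape)   # e.g. (1000, 1024) for the RN50 image encoder
```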
[Figure panels (a)-(d): ROC curves for the best and worst runs in terms of AUC at 15% and 30% error rates.]\nIn view of the above results, we can conclude that the RLBoost method is effective for its purpose. In each of the tests performed on the proposed dataset, it achieved, if not the best accuracy after agent filtering, a fairly competitive one. One striking case in these results is the exceptionally good performance of the DVRL method at 30% noise for the detection of noisy data in Table 12. However, in view of the results of the other runs, we can see that DVRL suffers from instability when it comes to yielding good results, while the proposed method does not." }, { "figure_ref": [], "heading": "Conclusions and future work", "publication_ref": [ "b5", "b11", "b0" ], "table_ref": [], "text": "The problem of data evaluation can be framed as a search problem, and in particular it is possible to formulate it as a trajectory-free reinforcement learning problem. This approach makes it possible to understand that each data point is part of a context and that it will be more or less valuable depending on the supervised learning model used to model it.\nThis paper opens a new framework for the evaluation of data for a supervised learning model using reinforcement learning without any trajectory approach. In this sense, it opens up different branches of research. First, one of the next steps is to apply this strategy to text classification problems in order to extend a training dataset, since in this setting the manual evaluation of records is often very expensive and specialized. Therefore, a properly labeled validation set can be developed with the goal of collecting training data automatically and evaluating it afterwards. On the other hand, it also opens the possibility of improving the reinforcement learning algorithm (perhaps using SAC [Haarnoja et al., 2018] or V-MPO [Song et al., 2020]), taking into account that it must be of the Actor-Critic type. This improvement should place particular emphasis on the sample efficiency of the method, since this will allow faster convergence for more complex estimators. Finally, there could be some improvements to the model architecture, such as using Longformer [Beltagy et al., 2020] to try larger batches without increasing the computational cost of the algorithm, or using an Encoder/Decoder architecture for the policy network." }, { "figure_ref": [ "fig_3" ], "heading": "D Time cost comparison", "publication_ref": [], "table_ref": [], "text": "As this is a reinforcement learning problem, the computational cost grows with the number of interactions between the environment and the agent. Figure 14 illustrates several points. First, the high computational cost of Data Shapley and the low computational cost of LOO. Second, the stability of the computational cost of our agents regardless of the number of training samples used. And third, the cost of increasing the batch size of examples seen at each interaction of the agent with the environment." } ]
Data quality assessment, or data evaluation, is sometimes as important a task as collecting a large volume of data when it comes to building accurate artificial intelligence models. In fact, being able to evaluate the data can lead to a larger database that is better suited to a particular problem, because it gives us the ability to filter out automatically obtained data of dubious quality. In this paper we present RLBoost, an algorithm that uses deep reinforcement learning strategies to evaluate a particular dataset and obtain a model capable of estimating the quality of any new data, in order to improve the final predictive quality of a supervised learning model. This solution has the advantage of being agnostic with respect to the supervised model used and, through multi-attention strategies, it takes the data into account in its context and not only individually. The results of the article show that this model obtains better and more stable results than other state-of-the-art algorithms such as LOO, DataShapley or DVRL.
RLBoost: Boosting Supervised Models using Deep Reinforcement Learning
[ { "figure_caption": "Figure 1 :1Figure 1: Schematic diagram of the operation of the proposed RLBoost model.It shows the application cases for both tabular data and image data, where a non-trainable data vectorizer will be needed for the latter. In this diagram it can be also seen that the reward calculation is the difference between the score of the episode using the proposed filtering versus the score of the model without filtering, both against the validation set.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Final Scores for the agents trained over tabular datasets", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: ROC curves to measure the ability to detect noisy data in the Adult dataset, where the positive class is the data without noise and the negative class is the data with noise.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: ROC curves to measure the ability to detect noisy data in the CIFAR10 dataset, where the positive class is the data without noise and the negative class is the data with noise.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Final scores of the agents trained over the MNIST data", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: ROC curves to measure the ability to detect noisy data in the MNIST dataset, where the positive class is the data without noise and the negative class is the data with noise.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "ROC curve of the different models on the data set with 15% error rate. 
(b) ROC curve of the different models on the data set with 30% error rate.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: ROC curves to measure the ability to detect noisy data in the A5A dataset, where the positive class is the data without noise and the negative class is the data without noise.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: ROC curves to measure the ability to detect noisy data in the W5A dataset, where the positive class is the data without noise and the negative class is the data without noise.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Final Scores for the agents trained over the A5A dataset making the ablations.", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Final Scores for the agents trained over the W5A dataset making the ablations.", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Final Scores for the agents trained over the CIFAR10 dataset making the ablations", "figure_data": "", "figure_id": "fig_11", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Final Scores for the agents trained over the MNIST dataset making the ablations", "figure_data": "", "figure_id": "fig_12", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Details of the records, features and classes of the tabular datasets to be used in the experimentation.", "figure_data": "ValidationTest# Features # ClassesAdult1k40047,4421232A5A5k1k10k1232W5A5k1k10k3002CIFAR104k1k10k3,07210 →2", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Table of scores against test data using the final filtering proposed by each of the models at Adult dataset at 0%, 15% and 30% noise ratio.", "figure_data": "0% noise15% noise30% noiseBaseline0.8440.8380.807LOO0.844(±0.0)0.838(±0.0)0.807(±0.0)SHAP------DVRL0.828(±0.002) 0.829(±0.004) 0.809(±0.016)RLB (1e-1)0.834(±0.004) 0.831(±0.004) 0.826(±0.006)RLB (1e-2)0.833(±0.002) 0.830(±0.004) 0.831(±0.003)RLB (1e-3)0.827(±0.003) 0.826(±0.002) 0.823(±0.006)RLB (1e-4)0.826(±0.002) 0.825(±0.004) 0.826(±0.001)", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Table of scores against test data using the final filtering proposed by each of the models at A5A dataset at 0%, 15% and 30% noise ratio.", "figure_data": "0% noise15% noise30% noiseBaseline0.9820.9770.946LOO0.982(±0.0)0.977(±0.0)0.946(±0.0)SHAP------DVRL0.979(±0.004) 0.970(±0.004) 0.899(±0.096)RLB (1e-1)0.979(±0.001) 0.970(±0.007) 0.959(±0.005)RLB (1e-2)0.974(±0.006) 0.972(±0.005) 0.966(±0.005)RLB (1e-3)0.980(±0.001) 0.969(±0.004) 0.961(±0.002)RLB (1e-4)0.981(±0.001) 0.970(±0.005) 0.957(±0.005)", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Table of scores against test data using the final filtering proposed by each of the models at W5A dataset at 0%, 15% and 30% noise ratio.", "figure_data": "0% noise15% noise30% noiseBaseline0.8800.7810.683LOO0.890(±0.0)0.759(±0.0)0.648(±0.0)SHAP------DVRL0.866(±0.031)0.748(±0.017)0.648(±0.010)RLB (1e-1)0.896(±0.003)0.855(±0.007)0.797(±0.007)RLB 
(1e-2)0.902(±0.006)0.869(±0.016)0.860(±0.007)RLB (1e-3)0.902(±0.002) 0.882(±0.014) 0.879(±0.011)RLB (1e-4)0.898(±0.004)0.873(±0.014)0.866(±0.007)", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Table of scores against test data using the final filtering proposed by each of the models at binarized CIFAR10 dataset at 0%, 15% and 30% noise ratio.", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Tables of AUCs detecting noisy data by each of the models in the Adult dataset", "figure_data": "15% noise30% noiseLOO0.688(±0.0)0.535(±0.0)SHAP----DVRL0.791(±0.141)0.68(±0.167)RLB (1e-1)0.833(±0.016)0.847(±0.007)RLB (1e-2)0.845(±0.007)0.86(±0.005)RLB (1e-3)0.833(±0.008)0.845(±0.006)RLB (1e-4)0.819(±0.007)0.832(±0.018)", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Tables of AUCs detecting noisy data by each of the models in the A5A dataset", "figure_data": "15% noise30% noiseLOO0.628(±0.0)0.455(±0.0)SHAP----DVRL0.946(±0.008)0.859(±0.185)RLB (1e-1)0.957(±0.013)0.967(±0.003)RLB (1e-2)0.947(±0.008)0.967(±0.006)RLB (1e-3)0.916(±0.008)0.933(±0.006)RLB (1e-4)0.887(±0.007)0.892(±0.025)", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Tables of AUCs detecting noisy data by each of the models in the W5A dataset", "figure_data": "15% noise30% noiseLOO0.619(±0.0)0.592(±0.0)SHAP----DVRL0.548(±0.061)0.498(±0.003)RLB (1e-1)0.757(±0.014)0.767(±0.005)RLB (1e-2)0.779(±0.044)0.855(±0.005)RLB (1e-3)0.764(±0.019)0.82(±0.009)RLB (1e-4)0.657(±0.021)0.678(±0.021)", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Tables of AUCs detecting noisy data by each of the models in the binarized CIFAR10 dataset", "figure_data": "", "figure_id": "tab_10", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Details of the records, features and classes of the image dataset to be used in the experimentation.", "figure_data": "Dataset Train Validation Test # Features # ClassesMNIST5k1k5k102410", "figure_id": "tab_11", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Table of scores against test data using the final filtering proposed by each of the models in the MNIST data set", "figure_data": "0% noise15% noise30% noiseBaseline0.9680.7790.621LOO0.968(±0.000) 0.780(±0.000)0.620(±0.000)SHAP---DVRL0.908(±0.119) 0.923(±0.081)0.780(±0.379)RLB (1e-1)0.967(±0.001) 0.890(±0.013)0.782(±0.017)RLB (1e-2)0.968(±0.002) 0.910(±0.007)0.842(±0.016)RLB (1e-3)0.969(±0.002) 0.914(±0.009)0.867(±0.018)RLB (1e-4)0.968(±0.001) 0.876(±0.011)0.802(±0.030)", "figure_id": "tab_12", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Table of AUCs detecting noisy data by each of the models in image dataset", "figure_data": "15% noise30% noiseLOO0.440(±0.0)0.575(±0.0)SHAP--DVRL0.884(±0.192)0.977(±0.006)RLB (1e-1)0.816(±0.006)0.737(±0.03)RLB (1e-2)0.853(±0.014) 0.861(±0.012)RLB (1e-3)0.839(±0.014)0.843(±0.012)RLB (1e-4)0.679(±0.013)0.688(±0.016)", "figure_id": "tab_13", "figure_label": "12", "figure_type": "table" } ]
Eloy Anguiano Batanero; Ángela Fernández Pascual; Álvaro Barbero Jiménez
[ { "authors": "Iz Beltagy; Matthew E Peters; Arman Cohan", "journal": "", "ref_id": "b0", "title": "Longformer: The longdocument transformer", "year": "2020" }, { "authors": "Chih-Chung Chang; Chih-Jen Lin", "journal": "ACM transactions on intelligent systems and technology (TIST)", "ref_id": "b1", "title": "Libsvm: A library for support vector machines", "year": "2011" }, { "authors": "Li Deng", "journal": "IEEE Signal Processing Magazine", "ref_id": "b2", "title": "The mnist database of handwritten digit images for machine learning research", "year": "2012" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b3", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Amirata Ghorbani; James Zou", "journal": "", "ref_id": "b4", "title": "Data shapley: Equitable valuation of data for machine learning", "year": "2019" }, { "authors": "Tuomas Haarnoja; Aurick Zhou; Pieter Abbeel; Sergey Levine", "journal": "PMLR", "ref_id": "b5", "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "year": "2018-07-15" }, { "authors": "Pang Wei; Koh ; Percy Liang", "journal": "", "ref_id": "b6", "title": "Understanding black-box predictions via influence functions", "year": "2017" }, { "authors": "M Scott; Su-In Lundberg; Lee", "journal": "", "ref_id": "b7", "title": "A unified approach to interpreting model predictions", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b8", "title": "", "year": "2017" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "PMLR", "ref_id": "b9", "title": "Learning transferable visual models from natural language supervision", "year": "2021-07" }, { "authors": "John Schulman; Filip Wolski; Prafulla Dhariwal; Alec Radford; Oleg Klimov", "journal": "", "ref_id": "b10", "title": "Proximal policy optimization algorithms", "year": "2017" }, { "authors": "H Francis Song; Abbas Abdolmaleki; Jost Tobias Springenberg; Aidan Clark; Hubert Soyer; Jack W Rae; Seb Noury; Arun Ahuja; Siqi Liu; Dhruva Tirumala; Nicolas Heess; Dan Belov; Martin Riedmiller; Matthew M Botvinick", "journal": "", "ref_id": "b11", "title": "V-mpo: On-policy maximum a posteriori policy optimization for discrete and continuous control", "year": "2020" }, { "authors": "Richard S Sutton; Andrew G Barto", "journal": "A Bradford Book", "ref_id": "b12", "title": "Reinforcement Learning: An Introduction", "year": "2018" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; L Ukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b13", "title": "Attention is all you need", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b14", "title": "", "year": "2017" }, { "authors": "R J Williams", "journal": "Machine Learning", "ref_id": "b15", "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "year": "1992" }, { "authors": "Jinsung Yoon; Sercan Arik; Tomas Pfister", "journal": "PMLR", "ref_id": "b16", "title": "Data valuation using reinforcement learning", "year": "2020-07" } ]
[ { "formula_coordinates": [ 4, 267.77, 250.96, 116.94, 24.23 ], "formula_id": "formula_0", "formula_text": "A GAE(γ,λ) t = ∞ t=0 (λγ) t δ t+1 ." }, { "formula_coordinates": [ 4, 223.94, 366.85, 253.54, 11.72 ], "formula_id": "formula_1", "formula_text": "L(θ) = L clip (θ) -c 1 L V F (ϕ) + c 2 S(θ),(1)" }, { "formula_coordinates": [ 4, 224.08, 411.42, 163.1, 9.65 ], "formula_id": "formula_2", "formula_text": "S(θ) = E t [-π θ (a t , s t ) log(π θ (a t , s t ))] ," }, { "formula_coordinates": [ 4, 200.9, 451.99, 276.58, 22.08 ], "formula_id": "formula_3", "formula_text": "L V F (ϕ) = E t r t + γV ϕ (s t+1 , π)) -V ϕ (s t , π)) 2 2 ,(2)" }, { "formula_coordinates": [ 4, 218.04, 519.14, 259.44, 41.12 ], "formula_id": "formula_4", "formula_text": "t 1 = A ϕ (s t , a t , π)R t (θ), t 2 = A ϕ (s t , a t , π)clip(R t (θ), 1 -ϵ, 1 + ϵ), L clip (θ) = E t [min(t 1 , t 2 )],(3)" }, { "formula_coordinates": [ 5, 240.42, 376.71, 130.41, 9.65 ], "formula_id": "formula_5", "formula_text": "A ϕ (s, a, π) = δ = r -V ϕ (s, π)." }, { "formula_coordinates": [ 5, 243.52, 419.27, 233.96, 41.12 ], "formula_id": "formula_6", "formula_text": "t 1 = δR(θ), t 2 = δclip(R(θ), 1 -ϵ, 1 + ϵ), L clip (θ) = min(t 1 , t 2 ),(4)" }, { "formula_coordinates": [ 5, 247.33, 492.12, 116.58, 14.47 ], "formula_id": "formula_7", "formula_text": "L V F (ϕ) = ∥r -V ϕ (s, π)∥ 2 2 ." }, { "formula_coordinates": [ 5, 240.59, 624.29, 130.06, 9.65 ], "formula_id": "formula_8", "formula_text": "S(θ) = -π θ (a, s) log(π θ (a, s))." } ]
10.18653/v1/N19-1388
2024-03-30
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b0", "b2", "b41", "b43", "b3", "b33", "b50", "b42", "b52", "b44", "b28" ], "table_ref": [], "text": "The need for large supervised corpora remains a major bottleneck in neural machine translation (NMT) (Bapna et al., 2022). Sufficient bilingual data is scarce for most languages and limited to religious texts for the lowest-resource languages. To compensate for this lack of data, one effective approach is to leverage related parallel data from other languages via multilingual machine translation (MMT) that enables positive transfer from high-resource to low-resource languages (Aharoni et al., 2019;Arivazhagan et al., 2019). Additionally, we can use monolingual data, either through pretraining with denoising autoencoding(DAE; Conneau and Lample 2019; Liu et al. 2020a), or with backtranslation (BT; Sennrich et al., 2016). Driven by the success of these methods, recent works are converging toward a unified approach, that jointly trains MMT with monolingual data using auxiliary DAE objectives (Siddhant et al., 2022;Bapna et al., 2022;NLLB team et al., 2022) and/or BT.\nHowever, the literature contains contradictory results about the effectiveness of these methods, particularly DAE. Early studies indicated combining MMT with DAE led to improvements across all settings (Wang et al., 2020;Siddhant et al., 2020). These studies, however, were limited in scope, as they only considered moderately-sized models and used few languages (10 to 15), with training and test data drawn from similar domains. By contrast, NLLB team et al. (2022) found that DAE helped only in very low-resource directions in MMT experiments with 200+ languages, while Xu et al. (2023) reported that DAE produced mixed results in experiments with (mostly) African languages.\nTo resolve this conflict, we present a systematic analysis of different methods that integrate monolingual data into MMT, focusing on BT and two DAE objectives, MASS (Song et al., 2019) and BART (Lewis et al., 2020;Liu et al., 2020b). First, we carefully investigate the role of the domain. To align with prior work, we focus on the Englishcentric setting (i.e., concatenation of English→XX and XX→English). We use a realistic and diverse multilingual translation dataset with 100 directions and run controlled experiments using different monolingual splits with single-and mixed-domain data. Then, we evaluate models across four widecoverage multilingual test sets from Wikipedia, news, medical, and mixed domains. Our results with medium-sized models (370M) show that while BT outperforms both DAE objectives in most settings, the effectiveness of all methods varies signif-icantly, as they are surprisingly brittle to domain mismatches. BT is more sensitive to the domain than DAE, and can underperform the parallel-only baseline when the monolingual and test data are not similar. However, increasing the diversity of the monolingual data by mixing different sources improves domain robustness to some extent. We also discover that both DAE methods are less effective than previously reported, and they are mainly helpful in low-resource and xx→en directions. Of the two, MASS consistently outperforms BART, although by a narrow margin.\nNext, we study the role of model capacity and discover that it is crucial and can even change the ranking between methods. We hold all other factors constant and train models with sizes from 90M up to 1.6B parameters. 
When the scale is small, both BT and DAE yield poor results, especially in out-of-domain settings. However, as model capacity grows, all methods quickly improve compared to the parallel-only baseline, and also become more robust to domain mismatches. Scale affects DAE the most, which transitions from underperforming the parallel-only baseline at the 90M scale to becoming competitive with BT at 1.6B and even outperforming it in low-resource.\nOur contributions are: (i) We present a large-scale systematic analysis of how the domain and model scale affect the effectiveness of methods that incorporate monolingual data into MMT. (ii) We show that BT and DAE are sensitive to domain mismatches between the monolingual and test data, particularly on small scales. BT is best in most settings. Also, prior works have overestimated DAE, and when comparing the two methods, MASS outperforms BART. (iii) We discover that model capacity is key for the effectiveness of both methods, especially DAE. When the scale is small, DAE can even harm MMT, but it quickly improves with scale, and eventually becomes competitive with BT." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b42", "b50", "b42", "b3", "b52", "b8", "b9", "b47", "b51", "b55", "b49", "b17", "b57", "b21", "b51", "b7", "b25", "b18", "b19", "b22", "b30" ], "table_ref": [], "text": "Monolingual Data with Multi-Task Learning Early works on DAE+MMT report universal gains in all settings. Siddhant et al. (2020) use WMT parallel data from 15 languages and large monolingual corpora from many sources, like News Crawl, Wikipedia, and Common Crawl, with MASS. Wang et al. (2020) explore BART-like objectives with a subset of 10 languages from Siddhant et al. (2020) and News Crawl monolingual data.\nHowever, more recent works that use larger and/or less uniform datasets, report less favourable results. To extend MMT to very low-resource languages, Bapna et al. (2022) show that models learn to translate from/into languages with only monolingual data if there are sufficient parallel data in other languages to enable transfer from the DAE to the MT task. NLLB team et al. ( 2022) explore a similar idea, but report that, in supervised translation, DAE (BART) is effective only for very lowresource. Xu et al. (2023) compare all aforementioned DAE methods and find that they often fail to outperform the parallel-only baseline. Our study probes confounding factors in these prior works.\nLarge Language Models Large language models (LLMs) trained on massive datasets achieve impressive results in many tasks (Brown et al., 2020;Chowdhery et al., 2022;Zhang et al., 2022b;Tay et al., 2023). To adapt LLMs to downstream tasks including translation (Wei et al., 2022;Lin et al., 2022;Zhang et al., 2023;Vilar et al., 2022;Garcia et al., 2023;Zhu et al., 2023;Hendy et al., 2023), the dominant approach is to use prompting, an ability enabled by model scale (Wei et al., 2022). Our work, however, is orthogonal and presents an analysis of methods that integrate monolingual data into encoder-decoder MMT models trained from scratch. Also, it is questionable whether these models are unsupervised with respect to translation, as recent work suggests that they have consumed parallel data during pretraining (Briakou et al., 2023).\nModel Scale A growing literature investigates the scaling laws of different aspects of a model (Kaplan et al., 2020). In NMT, Ghorbani et al. (2021) explore scaling laws related to model capacity, Fernandes et al. 
(2023) consider MMT, andGordon et al. (2021) focus on data scaling. Zhang et al. (2022a) investigate the scaling laws across architectures, like decoder-only and encoder-decoder. Our work does not study scaling laws but analyzes how scale impacts using monolingual data in MMT.\nAnalysis Huang et al. (2021); Liu et al. (2021) analyze the complementarity of BT and monolingual pretraining when used in bilingual NMT. By contrast, we focus on multilingual NMT and systematically analyze the joint training with BT and DAE." }, { "figure_ref": [], "heading": "(Multi-task) Multilingual NMT", "publication_ref": [ "b24", "b48" ], "table_ref": [], "text": "We follow the universal MMT training method of Johnson et al. (2017) and train a single dense Transformer-based (Vaswani et al., 2017) model on the concatenation of parallel data from multiple language pairs. We prepend a special token ⟨2XX⟩ to the source sequences, that informs the model about the translation direction (e.g., ⟨2ES⟩ for Spanish)." }, { "figure_ref": [], "heading": "Denoising Autoencoding", "publication_ref": [ "b42", "b50", "b44", "b14", "b42", "b43", "b44" ], "table_ref": [], "text": "We follow the multi-task setting from prior works (Siddhant et al., 2020;Wang et al., 2020) and use the regular MT objective on batches with parallel data and a DAE objective on batches with monolingual data. The language token ⟨2XX⟩ informs the model about the DAE and MT tasks, as it instructs it to generate a semantically similar sentence in the XX language. We explore two DAE methods. Song et al. (2019) adapt the masked language modeling objective (Devlin et al., 2019) to encoder-decoder models. MASS masks a span in the input and trains the decoder to predict that span. However, the unmasked tokens are not included in the target prefix (Figure 1). Following Siddhant et al. (2020Siddhant et al. ( , 2022)), we do not use the architectural modifications of Song et al. (2019), such as extra language embeddings or custom initialization." }, { "figure_ref": [ "fig_0" ], "heading": "MASS", "publication_ref": [ "b28" ], "table_ref": [], "text": "BART Lewis et al. (2020) propose a DAE objective similar to MASS, but with two differences. First, BART uses a slightly different noising strategy that can corrupt more than one input span in each sentence. Second, and more importantly, while the decoder is also trained to reconstruct the source sentence, its input context contains the full prefix, including the masked tokens (Figure 2)." }, { "figure_ref": [], "heading": "Backtranslation", "publication_ref": [], "table_ref": [], "text": "For BT, to save resources, instead of training separate bilingual models, we re-use the baseline MMT model and generate the new synthetic parallel data using the monolingual data of each language." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b46", "b42", "b50", "b50" ], "table_ref": [], "text": "Parallel Data We use ML50 (Tang et al., 2021), a multilingual translation dataset between English and 50 other languages. ML50 is more representative of real-world multilingual datasets as it contains typologically diverse languages, including high, medium, and (extremely -less than 10k) low resource pairs, and with data from different domains. It is also more multilingual than the datasets from Siddhant et al. (2020) and Wang et al. (2020), that use 15 and 10 languages, respectively. To reduce training time, we cap the parallel data at 10M sentences per language, similar to Wang et al. 
2020, which affects only few high-resource languages." }, { "figure_ref": [], "heading": "Monolingual Data", "publication_ref": [ "b50", "b4", "b10", "b20", "b33", "b15", "b5", "b1", "b36", "b37", "b40", "b38", "b2", "b50", "b42", "b43", "b33", "b48", "b13" ], "table_ref": [ "tab_19" ], "text": "We run controlled experiments with single-and mixed-domain monolingual data. For the single-domain experiments, we use Wikipedia as it is the only publicly available source with available data for all languages in ML50, but exclude the xh and iu languages from the experiments as they lack sufficient monolingual data. We cap the monolingual data per language to 5M, similar to Wang et al. (2020), which is still much larger than the parallel data for most languages. For the mixed-domain experiments, we use the same number of sentences per language, but also include News Crawl1 (Barrault et al., 2020) and Web Crawl data from CC1002 (Conneau et al., 2020). See the Appendix for the full data statistics (Table 16).\nEvaluation Besides ML50 we also consider three domain-specific test sets. We use FLORES-200 (Goyal et al., 2022;NLLB team et al., 2022) with translations of Wikipedia articles, NTREX-1283 (Federmann et al., 2022) with translations in 128 languages from the English WMT19 News test set (Barrault et al., 2019), and TICO-19 with translations in the medical domain (Anastasopoulos et al., 2020). FLORES-200 and NTREX-128 cover all languages in ML50, while TICO-19 covers only 15, but equally distributed across high, medium, and low resources. At test time, use beam search with K=5. In the main paper, we report results using BLEU (Papineni et al., 2002) similar to most prior works. However, to make our evaluation more comprehensive, we include in the Appendix the re- sults from all experiments using ChrF (Popović, 2015) and COMET 4 (Rei et al., 2020), which is a neural metric. We find that overall, all metrics are very consistent with each other, with few small differences in en→xx (see Appendix). We use Sacre-BLEU 5 (Post, 2018) for ChrF and BLEU.\nData Sampling We use temperature-based data sampling (Arivazhagan et al., 2019) to balance the training data. Assuming that p D is the probability that a sentence belongs to dataset D, we sample sentences for D with a probability proportional to\np 1/T D ,\nwhere T is a temperature parameter. When using parallel data, D corresponds to the data of a given language pair. When including monolingual (i.e., for DAE) or synthetic parallel (i.e., for BT) data, we first concatenate all the separate datasets to the same list and then apply temperature sampling. That is, the real en→fr, synthetic (BT) en ′ →fr, and monolingual fr↔fr, are treated as separate datasets D. Larger values of T lead to more even sampling (i.e., upsampling small datasets). We set T = 5 following prior works (Wang et al., 2020;Siddhant et al., 2020), which also leads to a roughly 1:1 ratio when using both monolingual and parallel data.\nModels Our baseline is an MMT model trained only on the en→xx and xx→en parallel data. For both MASS and BART, we mask 50% of input tokens following the hyperparameters from Siddhant et al. (2022, 2020) and NLLB team et al. (2022), respectively. All models use the same Transformer architecture (Vaswani et al., 2017). We consider three different model sizes for our scaling experiments: 1) Transformer-Base with 90M parameters, 2) Transformer-Big with 370M parameters, and 3) Transformer-XL (not to be confused with Dai et al. 2019), with 1.6B parameters. 
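Returning to the temperature-based data sampling described above, the rule can be sketched as follows; the dataset sizes in the example are hypothetical and only illustrate how higher temperatures upsample smaller datasets.

```python
import numpy as np

def temperature_sampling_probs(dataset_sizes, T=5.0):
    # p_D is proportional to the dataset size; sampling is then proportional to
    # p_D ** (1/T), renormalized, so larger T flattens the distribution.
    p = np.asarray(dataset_sizes, dtype=float)
    p = p / p.sum()
    p = p ** (1.0 / T)
    return p / p.sum()

# Three hypothetical datasets of very different sizes:
print(temperature_sampling_probs([10_000_000, 500_000, 10_000], T=5.0))
```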
We include details about our models and training in Appendix A.\n4 We use v2.0.1 with the wmt22-comet-da model.\n5 BLEU+case.mixed+lang.S-T+numrefs.1+smooth.exp+tok.13a+v1.5.1" }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Single-Domain Monolingual Data (Wiki)", "publication_ref": [ "b42", "b50", "b50", "b42" ], "table_ref": [ "tab_19" ], "text": "We begin with a series of controlled experiments that measure the impact of the domain using the Transformer-Big model scale (370M). We compare, across different test sets, the parallel-only model with parallel+BT and parallel+DAE (MASS, BART) models that use the single-domain monolingual split (see statistics in Table 16). In Table 1 we report the BLEU scores of each model on the ML50 test set, averaged by group and translation direction.\nOn average, BT and both DAE models outperform the baseline by +1.4 and +0.7 BLEU points, respectively. BT consistently achieves the best results, with the largest gains in low-resource, with +1 BLEU points on en→xx and +3.6 BLEU points on xx→en. Both DAE models produce similar results, but MASS is marginally better. However, in the en→xx high- and medium-resource languages, both DAE models fail to outperform the baseline, although they use the same monolingual data as BT.\nNon-aggregated scores reveal mixed results. To get a more detailed picture of model performance, we plot the differences in the BLEU scores (∆-BLEU) between each model and the parallel-only baseline model across all pairs in Figure 3. For a simpler presentation, we omit BART, which is similar to MASS. Figure 3 reveals that the results are more mixed than the aggregated scores suggest (Table 1). In xx→en, both BT and MASS are generally better than the baseline and follow a similar trend. Their gains increase towards the low-resource languages, with few exceptions, and BT is better than MASS in most cases. However, in en→xx, we discover a different picture. BT shows a surprising behavior, as it outperforms the baseline in high-resource (usually from +2 to +4 BLEU) but harms BLEU in most medium- to low-resource languages and is also often worse than MASS. MASS fluctuates around the baseline and benefits only a few low-resource languages. These results contradict early works on MMT+DAE that report universal gains (Siddhant et al., 2020; Wang et al., 2020).\nWhat is the reason for the mixed results? In our experiments, we used the same model/training hyperparameters as in previous conflicting studies (Wang et al., 2020; Siddhant et al., 2020). The only difference lies in our training and test data. Those earlier works used 10/15 languages from WMT and news test sets. By contrast, the ML50 dataset is more challenging, as 1) it has more languages, 2) it contains truly low-resource languages (24/50 have less than 200K sentences, unlike prior works), and, more importantly, 3) it has data from diverse sources (Figure 5). High-resource languages contain WMT (news) data, whereas other languages have data from different sources, mainly from TED talks. Recall that BT is more effective in high-resource pairs but yields poor results in non-English non-WMT pairs. Considering this, we hypothesize that previous works reported universal gains because they considered more favourable experimental setups, with fewer languages and with parallel, monolingual, and test data in the same domain."
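The ∆-BLEU values plotted in Figure 3 can be computed, for a single language pair, along the lines of the following sketch. It assumes detokenized system outputs and references and is not the exact evaluation script used for the paper; the default BLEU settings roughly match the SacreBLEU signature in footnote 5.

```python
from sacrebleu.metrics import BLEU

bleu = BLEU()

def delta_bleu(system_hyps, baseline_hyps, references):
    # Difference in corpus BLEU between a model and the parallel-only baseline
    # for one translation direction; positive values mean the model helps.
    sys_score = bleu.corpus_score(system_hyps, [references]).score
    base_score = bleu.corpus_score(baseline_hyps, [references]).score
    return sys_score - base_score
```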
}, { "figure_ref": [], "heading": "How do results change on other test domains?", "publication_ref": [ "b50" ], "table_ref": [], "text": "To test this hypothesis, we evaluate models on uniform test sets, where all languages have data from the same source. Figure 4 shows the results on the FLORES (Wikipedia domain) and NTREX (news domain) test sets. The TICO-19 results follow similar trends and include them in the appendix.\nThe results in both FLORES and NTREX reveal a more favorable picture for both methods. We see similar trends as in the ML50 test sets, especially in xx→en, but the gains are overall larger. This can be explained by the greater domain similarity of the test sets with the monolingual data, particularly FLORES, which shows the biggest improvements. The switch to the in-domain test sets has a stronger effect on BT, especially in en→xx. Notice that in ML50, BT is harmful in en→xx low-resource with mostly out-of-domain data, whereas in NTREX and FLORES, it is consistently helpful. MASS also performs much better on the in-domain test data. However, we still fail to observe the universal gains reported in some works. For instance, in en→xx, it outperforms the baseline only in low-resource. We hypothesize that DAE requires more ideal conditions to be helpful in MMT. For instance, Siddhant et al. ( 2020) used much more monolingual relative to the parallel data, whereas Wang et al. (2020) used a similar ratio to this work but with parallel, monolingual and test data from the same domain. Overall, the performance gap between test sets shows that the domain of the monolingual data is crucial and that both methods are sensitive to mismatches with the test domain, particularly BT." }, { "figure_ref": [ "fig_2" ], "heading": "Mixed-domain Monolingual Data", "publication_ref": [ "b6" ], "table_ref": [ "tab_8" ], "text": "Previously, we examined single-domain monolingual data, removing confounding factors to isolate domain impact. We now turn to a real-world scenario and use multiple sources of monolingual data per language. The goal is to evaluate the significance of diversity in monolingual data. For each language, we hold the size of monolingual data constant ( §5.1), and only change the data mixture. We include data from News Crawl and CC100 (web domain), the only other publicly available data sources with wide enough coverage to support most languages in ML50. For languages that do not have data from all domains, we use only the available ones. We consider two mixed-domain splits:\n1. Unbalanced: This split emulates naively concatenating all the monolingual data of a given language without considering their relative sizes. The ratio between sources is proportional to the size of their uncapped data.\n2. Balanced: This split balances the number of sentences from each source using the same temperature-based sampling method applied to the parallel data, with T=5.\nIn Figure 6, each bar shows the average BLEU difference (∆-BLEU) compared to the singledomain split (Wiki). We include results on the TICO-19 and with ChrF scores in the appendix. Diversity largely favours BT with a minor impact on MASS. This further supports that BT is more sensitive to the domain. BT displays a contrast between translation directions. Note that 1) both BT and DAE use identical target-side monolingual data, and 2) the MMT model has been exposed to a large number of diverse (i.e., many domains) English target-side sentences through the ML50 parallel data. 
Thus, we hypothesize that source-side diversity causes the xx→en gains of BT.\nThe highest gains appear in NTREX test sets (up to +4 BLEU), as mixed splits incorporate monolingual data from the same domain, i.e., news. Interestingly, mixed-domain data proves beneficial for xx→en in FLORES. Closer examination reveals that these gains mainly affect low-resource languages (Table 5). Although the reason isn't clear, we speculate it may be due to reduced cross-domain interference between the parallel and monolingual data. The re-balancing of monolingual data has minimal impact, though it does slightly enhance or mitigate the drawbacks of using less in-domain data (e.g., FLORES). NTREX does not benefit because re-balancing leads to using less news data. in xx→en, their results are comparable. Both objectives use similar encoder noising methods but differ in the decoder. BART's decoder conditions on the full target prefix, unlike MASS, which excludes unmasked tokens. This potentially makes the MASS decoder rely more on its encoder. Next, BART computes loss over all tokens, even unmasked ones, consequently losing part of the useful signal by teaching the model to copy the input. MASS, however, calculates loss only on unmasked tokens, targeting the training signal to denoising. In related work, Baziotis et al. (2021) study NMT pretraining using BART variants with different input noising methods, such as word replacement or shuffling, and present evidence that input masking biases models towards copying the input. We speculate that the performance gap between MASS and BART stems from these decoder-side differences." }, { "figure_ref": [], "heading": "Denoising Autoencoding Objectives", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3", "fig_0", "fig_7", "fig_4", "fig_3" ], "heading": "Scale", "publication_ref": [], "table_ref": [], "text": "This section examines the role of model scale. We hold all other factors constant and test three model sizes that differ by a factor of 4: Transformer-Base (90M), Transformer-Big (370M), and Transformer-XL (1.6B). To conserve computational resources, we consider only one DAE method, MASS, as it outperformed BART in previous experiments. We use the (Wiki) single-domain monolingual split to test for in-domain (FLORES) and out-of-domain (ML50) effects. Figure 7 shows results and includes BLEU and COMET6 . How crucial is model capacity for BT and DAE? All models improve with scale. However, small models find monolingual methods less beneficial, especially in ML50 (top), which is out-of-domain with respect to the (Wikipedia) monolingual data. BT shows negligible gains, while MASS even proves detrimental. As scale increases, both MASS and BT become more effective, with MASS benefiting the most. Surprisingly, MASS transitions from underperforming the baseline to outperforming it and becomes competitive with BT at the 1.6B scale. We also discover that according to COMET (and chrF), the effects of scale on MASS are even stronger, as it outperforms BT by a small margin.\nIn FLORES (bottom), BT and MASS exhibit a similar trend, but are overall more effective, since the test and monolingual domains are the same. At small scale, MASS fails to yield any gains, whereas BT is more helpful. As scale increases, the gains of both methods relative to the baseline also increase. 
However, according to BLEU, the performance gap between MASS and BT remains relatively constant, unlike in ML50, whereas according to COMET, MASS achieves again comparable performance to BT. This suggests that DAE becomes more competitive with scale and bridges the gap with BT, in particular in out-of-domain settings (ML50).\nWe speculate that learning from monolingual data proves more challenging for smaller models for details; Figures 12,13). because they prioritize learning from parallel data. This also explains why BT outperforms DAE at small scales. Translating the synthetic parallel data, which is more similar to the supervised MT task, is an easier task compared to denoising. As model capacity increases, it \"unblocks\" DAE and progressively enables it to make better use of monolingual data. This suggests that there is a cross-task interference that is mitigated by scaling.\nHow direction and resource-level are affected? Next, we investigate the scaling patterns of MASS and BT. Figure 8 shows the relative difference between the BLEU score of each model and the corresponding parallel-only baseline in the same scale across translation directions. Both methods benefit from scale, with low-resource settings gaining the most. Notice that for each method, that gap between scales is small in high-resource (up to 2 BLEU) but large in low-resource directions (up to 3 and 5 BLEU in ML50 and FLORES, respectively). Scale also generally benefits more xx→en (right side) compared to en→xx (left side). The plots per test set also have the same y-axis, which enables us to directly compare BT with MASS. We discover that the reason MASS (on average) closes the gap with BT (see Figure 7) as scale grows is because of its low-resource performance. In particular, in ML50 at the 1.6B scale, the gap becomes negligible, and MASS even marginally outperforms BT in low-resource xx→en (two top-right plots)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This work presents a systematic analysis of widely used methods that include monolingual data in MMT, specifically BT, and two DAE objectives. It does not negate findings from prior works but rather highlights confounding factors that explain the mixed results found in the literature. These factors range from the characteristics of the experimental setup, like the data mixture, to the effective model capacity. The main takeaway is that one should not expect gains from DAE or BT in all settings but carefully consider all aspects of the system to reach optimal performance. We compare models across different data conditions and combinations of monolingual and test data, and discover that all methods are very sensitive to domain mismatches. BT overall yields the most gains, but it can fail in out-of-domain and lowresource settings. As for DAE, we conclude that it can be helpful, particularly in low-resource and xx→en, but the universal gains reported from early works can only be achieved in ideal conditions, where the parallel, monolingual, and test data are from the same domain. Another key finding is that model capacity can make or break a method. Larger models are better able to use monolingual data, with gains from both BT and DAE increasing as the model scale grows. We also discover a novel connection between domain robustness and model size. Scale is more important in out-of-domain settings, as all methods yield limited to no gains at small scales. 
In particular, MASS is harmful to MMT with the 90M models, but when using 1.6B models, it becomes comparable or even better to BT.\nBased on our findings, we provide some recommendations to practitioners:\n• For in-domain settings, prefer BT, as it yields the best results across scales and resource levels.\n• For out-of-domain settings, the choice depends on model size. At small scales, prefer BT but expect small gains. At large scales, both methods are more effective, and the gap between them diminishes. DAE is a viable and computationally cheaper alternative to BT, which needs to backtranslate monolingual data from many languages.\n• For MMT+DAE, prefer MASS instead of BART.\n• Aim to increase the diversity of the monolingual data by mixing different sources and re-balance them to ensure a more even distribution. " }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b57", "b35" ], "table_ref": [], "text": "We used only one dataset with roughly 200M sentences and 100 translation directions. The dataset is more diverse, with more languages than many prior works, however, it is unclear how the results will generalize to datasets with other characteristics, such as more languages or more/less typologically diverse languages. The same holds for the combinations of monolingual and test data. We consider three main sources of publicly available monolingual data that also have wide coverage across many languages. Using more domains for the monolingual and test data would be better, but we could not find other monolingual sources with wide coverage. This work focuses only on the English-centric setting (i.e., concatenation of English→XX and XX→English), which is the most commonly studied in MMT and is what the relevant prior works use. We considered this setting to make our study directly comparable to those earlier works and because it was easier to construct all the different data splits to run both controlled experiments and with wide language coverage. However, it is possible that our conclusions do not generalize to other settings, such as fully many-to-many MMT or pivotbased MMT.\nThis work presents results on three model sizes: 90M, 370M, and 1.6B. Our results reveal clear trends emerging across scales, but these trends can potentially change in much larger scales depending on the setting. One question that is left unanswered is whether DAE would outperform BT if we scaled models to over 1.6B parameters. We leave this to future work, as running those experiments would require significantly more resources than we had available. On a related note to scale, note that the scale of LLMs is not comparable to MMT models, and even models like GPT4 fail to outperform orders of magnitude smaller MMT models like NLLB ( with \"only\" 1.3B) in most languages, particularly medium-to low-resource (Zhu et al., 2023). Unlike others, we systematically train models with different methods from-scratch, and our larger variant even exceeds the size of models like NLLB.\nLastly, in this work, we considered the three most widely adopted methods for integrating monolingual data into MMT, namely BT and DAE with MASS/BART. However, there are other methods, such as those using contrastive losses (Pan et al., 2021). We leave these comparisons for future work." 
}, { "figure_ref": [], "heading": "A Experimental Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Training", "publication_ref": [ "b42", "b43", "b34", "b48", "b39", "b23", "b45", "b50", "b42", "b13" ], "table_ref": [], "text": "Our baseline is an MMT model trained only on the parallel data (en→xx and xx→en). For BT, we use the baseline model to generate the synthetic translations using beam search with beam size 4, following NLLB team et al. (2022). For MASS, we use the hyperparameters from Siddhant et al. (2020Siddhant et al. ( , 2022) ) and mask 50% of input tokens. For BART, we use the hyperparameters 7 from NLLB team et al. (2022), that also mask 50% of input tokens. We implement all our models using the fairseq toolkit (Ott et al., 2019), and for BART we use the original implementation in fairseq, whereas for MASS develop our own re-implementation.\nAll models use the same Transformer architecture (Vaswani et al., 2017) with shared encoderdecoder embeddings and decoder output projection layers (Press and Wolf, 2017;Inan et al., 2017) as in NLLB team et al. ( 2022). We optimize our models with Adam (Kingma and Ba, 2015) with β 1 = 0.9, β 2 = 0.98, and ϵ = 10 -6 , with a learning rate of 0.001 using a linear warm-up of 8k steps, followed by inverted squared decay. We also regularize the models with label smoothing (Szegedy et al., 2016) of 0.1 and weight decay of 0.01.\nWe consider three different model sizes: 1) Transformer-Base with 90M parameters configured as in the original paper, 2) Transformer-Big with 370M parameters, similar to the original but with an 8192-sized feed-forward layer as in Wang et al. (2020); Siddhant et al. (2020), and 3) Transformer-XL (not to be confused with Dai et al. (2019)), with 1.6B parameters, 12 encoder/decoder layers, feedforward layers of 8192, 2048-sized embeddings, and 32 attention heads. We train all models with mixed precision (FP16) and use gradient accumulation to reach the desired batch size for each model size. Specifically, we train the Transformer-Base on 4 A100 GPUs for 440K steps with an effective batch size of 280K token batches, the Transformer-Big on 8 A100 GPUs for 360K steps with 320K token batches, and the Transformer-XL on 12 A100 GPUs for 120K steps with 860K token batches. We evaluate models every 40K (10k for Transformer-XL) steps and select the checkpoint with the best average translation loss (i.e., negative log-likelihood) across all language pairs in the ML50 validation set.\n7 Fairseq arguments: \"-mask 0.5 -mask-random 0.1 -mask-length span-poisson -poisson-lambda 3.5\" " }, { "figure_ref": [], "heading": "B Additional Results", "publication_ref": [ "b27", "b27" ], "table_ref": [], "text": "In the main paper, for brevity, we discuss results using only BLEU and for selected experiments that highlight our most important findings. For completeness, we also re-evaluate the outputs from all of our experiments and across all test sets with two additional evaluation metrics, following the recommendations of Kocmi et al. (2021): chrF: this is another surface-level (i.e., stringbased) metric, like BLEU, but achieves better correlation with human judgment. It compares character n-grams that make it better for languages with rich morphology and is also tokenization independent.\nCOMET: this is a neural-based metric that uses a pretrained model to estimate the translation quality. Unlike BLEU and chrF, it also takes into account the source sentence. 
" }, { "figure_ref": [], "heading": "B Additional Results", "publication_ref": [ "b27", "b27" ], "table_ref": [], "text": "In the main paper, for brevity, we discuss results using only BLEU and for selected experiments that highlight our most important findings. For completeness, we also re-evaluate the outputs from all of our experiments and across all test sets with two additional evaluation metrics, following the recommendations of Kocmi et al. (2021):\nchrF: this is another surface-level (i.e., string-based) metric, like BLEU, but achieves better correlation with human judgment. It compares character n-grams, which makes it better suited for languages with rich morphology, and it is also tokenization independent.\nCOMET: this is a neural-based metric that uses a pretrained model to estimate the translation quality. Unlike BLEU and chrF, it also takes into account the source sentence. However, we point out that it is not clear how reliable (the current version of) COMET is for low-resource languages or test data across different domains, as Kocmi et al. (2021) in their analysis considered only high-resource languages and two test domains (news, discussions).\nWe find that, overall, the ranking of the models is very consistent across metrics. We observe only two instances where metrics do not fully agree with each other, mainly in en→xx and low-resource languages (see §B.1.1, §B.3). However, the main findings and patterns discussed in the main body of the paper still hold across metrics.
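The two surface-level metrics can be computed with the sacrebleu package; the snippet below is only an illustrative sketch, not the evaluation script used for these experiments. COMET additionally requires the source sentences and a pretrained checkpoint (e.g. via the unbabel-comet package), which we omit here.

import sacrebleu

def evaluate_direction(hypotheses, references):
    # hypotheses: list of system outputs for one translation direction;
    # references: list with one reference translation per hypothesis.
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    chrf = sacrebleu.corpus_chrf(hypotheses, [references])
    return bleu.score, chrf.score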
" }, { "figure_ref": [ "fig_4" ], "heading": "B.1 Main Experiments", "publication_ref": [], "table_ref": [ "tab_7", "tab_8", "tab_9", "tab_15" ], "text": "First, we report the results of the experiments that investigate the role of data. This includes the results from all models trained with the single-domain (Wikipedia) and mixed-domain (unbalanced-vs-balanced) monolingual data in Section 5.1 and Section 5.2, respectively. Recall that ML50 contains parallel data from many different sources, which are mostly out-of-domain with respect to the Wikipedia domain. The same holds for the ML50 test data.\nWe include the full results for all methods across all monolingual splits in Table 4 (ML50), Table 5 (FLORES), Table 6 (NTREX) and Table 7 (TICO-19). Next, we also include the line charts with the score differences of all models with all metrics in Figure 8, which are the counterparts of Figures 3 and 4 in the main body of the paper." }, { "figure_ref": [], "heading": "B.1.1 Mixed-Domain Monolingual Data", "publication_ref": [], "table_ref": [], "text": "Besides the table view of the results, which does include the scores per monolingual split, here we also report the corresponding bar plots, similar to those in Section 5.2, with all methods, test sets, and metrics. This is one of the few cases where we discover a small discrepancy between metrics. Specifically, we see that the chrF and COMET results suggest that using mixed-domain monolingual data is even more helpful for BT than the BLEU scores suggest. In particular, Figure 9 shows gains in BLEU (top) only in the xx→en direction, whereas the chrF (middle row) and COMET (bottom row) scores reveal consistent improvements even in the en→xx direction. We also see that further re-balancing the monolingual data (green bar) yields small gains in most settings. Besides these differences, the overall trends are the same across metrics (i.e., BT is more sensitive to diversity than MASS, with larger gains in xx→en). " }, { "figure_ref": [], "heading": "B.2 Denoising Autoencoding Objectives", "publication_ref": [], "table_ref": [ "tab_15", "tab_16" ], "text": "In this section, we extend the comparison of the two DAE objectives presented in Section 5.3 by including the results across all metrics and monolingual splits. Specifically, Table 9 shows the results with the balanced mixed-domain monolingual split, Table 10 with the unbalanced mixed-domain monolingual split, and Table 11 with the single-domain (Wikipedia) monolingual split." }, { "figure_ref": [ "fig_0", "fig_7", "fig_8", "fig_9", "fig_10" ], "heading": "B.3 Scaling", "publication_ref": [], "table_ref": [ "tab_1", "tab_5" ], "text": "In this section, we report all of our results for the model scale analysis (§5.4). Tables 12, 13, 14, and 15 show the results on the ML50, FLORES, NTREX and TICO-19 test sets, respectively. For each test set, we report side-by-side the results from each evaluation metric.\nModel Averages per Scale As it is not easy to extract meaningful patterns from the results in table format, we also plot the corresponding line plots with the average score of each method per model scale across metrics, in Figure 11 (BLEU), Figure 12 (chrF), and Figure 13 (COMET). We observe that the trends are overall the same across metrics. All metrics agree that at small scales, MASS fails to outperform the baseline but becomes much more effective, compared to the baseline, as the scale increases. This further supports the findings discussed in the main paper.\nHowever, we discover that metrics disagree with each other about the degree to which scale benefits DAE/MASS. Specifically, we see that according to BLEU, DAE at the 1.6B scale is competitive with BT only on the ML50 test set, whereas chrF (middle column) and COMET (right column) suggest that DAE becomes much stronger with scale. In particular, according to COMET, at the 1.6B scale, MASS matches or outperforms BT on most test sets.\nModel Averages per Resource-Level For completeness, we also include the plots with the scaling patterns of each model across resource levels and translation directions, in Figure 14 (BLEU; left column), Figure 15 (chrF; middle column), and Figure 16 (COMET; right column). Overall, the results are consistent across metrics and test sets, and the discussion in the main paper still holds.\nHowever, we do discover one interesting discrepancy, which potentially relates to the observations of the previous paragraph. Specifically, in the chrF plots we see that BT in en→xx low-resource settings (bottom-left plot per test set) tends to become less effective than the parallel baseline in all test sets except for ML50. Recall that ML50 is the most distant test set with respect to the (Wikipedia) monolingual data. We do not have a reliable explanation for this observation.
" }, { "figure_ref": [], "heading": "C Additional Tables and Figures", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Group", "publication_ref": [], "table_ref": [], "text": "Lang.\nParallel Parallel + cap (10M) wiki + cap (10M) cc100 + cap (10M) news + cap (10M) cs 51,517,074 10,000,000 5,000,000 5,000,000 5,000,000 de 45,992,835 10,000,000 5,000,000 5,000,000 5,000,000 fr 38,507,539 10,000,000 5,000,000 5,000,000 5,000,000 ja 17,203,227 10,000,000 5,000,000 5,000,000 5,000,000 ru 13,599,766 10,000,000 5,000,000 5,000,000 5,000,000 zh 11,173,646 10,000,000 5,000,000 5,000,000 5,000,000 es 10,531,168 10,000,000 5,000,000 5,000,000 " }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was funded by UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10039436]. The computations described in this research were performed using the Baskerville Tier 2 HPC service (https://www.baskerville. ac.uk/). Baskerville was funded by the EPSRC and UKRI through the World Class Labs scheme (EP/T022221/1) and the Digital Research Infrastructure programme (EP/W032244/1) and is operated by Advanced Research Computing at the University of Birmingham. We would also like to thank Shruti Bhosale for helpful discussions." } ]
Multilingual machine translation (MMT), trained on a mixture of parallel and monolingual data, is key for improving translation in low-resource language pairs. However, the literature offers conflicting results on the performance of different methods of including monolingual data. To resolve this, we examine how denoising autoencoding (DAE) and backtranslation (BT) impact MMT under different data conditions and model scales. Unlike prior studies, we use a realistic dataset of 100 translation directions and consider many domain combinations of monolingual and test data. We find that monolingual data generally helps MMT, but models are surprisingly brittle to domain mismatches, especially at smaller model scales. BT is beneficial when the parallel, monolingual, and test data sources are similar but can be detrimental otherwise, while DAE is less effective than previously reported. Next, we analyze the impact of scale (from 90M to 1.6B parameters) and find it is important for both methods, particularly DAE. As scale increases, DAE transitions from underperforming the parallel-only baseline at 90M to converging with BT performance at 1.6B, and even surpassing it in low-resource directions. These results offer new insights into how to best use monolingual data in MMT.
When Does Monolingual Data Help Multilingual Translation: The Role of Domain and Model Scale
[ { "figure_caption": "Figure 2 :2Figure 1: Illustration of the MASS objective.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Figure 4 :Figure 5 :345Figure 3: BLEU differences between each model and the parallel-only model (red dotted line) on the ML50 test data.", "figure_data": "", "figure_id": "fig_1", "figure_label": "345", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure6: BLEU differences (∆-BLEU) of the BT models trained with the mixed-domain split with respect to the single-domain monolingual data (dotted red line). To plot the bars, we use the mean ∆-BLEU and the standard error.", "figure_data": "", "figure_id": "fig_2", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Mean BLEU and COMET across model scales. The error bars show the standard error of the mean.", "figure_data": "", "figure_id": "fig_3", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Average BLEU differences (∆-BLEU) of each model with respect to the corresponding parallel-only baseline in the same scale (red dotted line). The error bars show the standard error of the mean.", "figure_data": "", "figure_id": "fig_4", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "cs de fr ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa vi hr uk th id sv pt af kk ur mk te sl my ka gl mr mn gu az bn ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa vi hr uk th id sv pt af kk ur mk te sl my ka gl mr mn gu az bn high ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa vi hr uk th id sv pt af kk ur mk te sl my ka gl mrmn gu az bn high ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa vi hr uk th id sv pt af kk ur mk te sl my ka gl mrmn gu az bn high ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa vi hr uk th id sv pt af kk ur mk te sl my ka gl mrmn gu az bn high ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa vi hr uk th id sv pt af kk ur mk te sl my ka gl mrmn gu az bn high Results on ML50 test sets. cs de fr ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa vi hr uk th id sv pt af kk ur mk te sl my ka gl mrmn gu az bn high ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa vi hr uk th id sv pt af kk ur mk te sl my ka gl mrmn gu az bn high ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa vi hr uk th id sv pt af kk ur mk te sl my ka gl mrmn gu az bn high ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa vi hr uk th id sv pt af kk ur mk te sl my ka gl mrmn gu az bn high ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa vi hr uk th id sv pt af kk ur mk te sl my ka gl mrmn gu az bn high ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa vi hr uk th id sv pt af kk ur mk te sl my ka gl mrmn gu az bn high Results on FLORES (wiki) test sets. 
cs de fr ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa hr uk th id sv pt af kk mk te sl my ka gl mr mn gu az bn high ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa hr uk th id sv pt af kk mk te sl my ka gl mr mn gu az bn high ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa hr uk th id sv pt af kk mk te sl my ka gl mr mn gu az bn high ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa hr uk th id sv pt af kk mk te sl my ka gl mr mn gu az bn high ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa hr uk th id sv pt af kk mk te sl my ka gl mr mn gu az bn high ja ru zh es pl lv fi hi lt et ta ro si ps ne ml nl it ar ko he tr km fa hr uk th id sv pt af kk mk te sl my ka gl mr mn gu az bn high Results on NTREX (news) test sets.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 9 :Figure 10 :910Figure9: Score differences (∆-X) of the BT models trained with the mixed-domain split with respect to the single-domain monolingual data (dotted red line). The top plot shows the ∆-BLEU scores, whereas the bottom shows the ∆-ChrF scores. To plot the bars, we use the mean ∆-X and the standard error.", "figure_data": "", "figure_id": "fig_6", "figure_label": "910", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 11: Average BLEU scores across model scales.", "figure_data": "", "figure_id": "fig_7", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: Mean BLEU differences (and standard error of the mean) per model with respect to the parallel-only baseline in the same scale (red dotted line).", "figure_data": "", "figure_id": "fig_8", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Mean chrF differences (and standard error of the mean) per model with respect to the parallel-only baseline in the same scale (red dotted line).", "figure_data": "", "figure_id": "fig_9", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16 :16Figure 16: Mean COMET differences (and standard error of the mean) per model with respect to the parallel-only baseline in the same scale (red dotted line).", "figure_data": "", "figure_id": "fig_10", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Table 2 compares MASS and BART across all test sets. We consider their variants trained with the balanced monolingual data ( §5.2), as they work marginally better (see Appendix §B.2 for more results). MASS consistently outperforms BART, with larger gains in xx→en (up to 2 BLEU). However, BLEU scores (↑) of BART and MASS trained with the balanced mixed-domain monolingual data.", "figure_data": "Modelen→xx High Med Low High Med Low xx→enMeanFLORESBART23.6 15.2 14.3 27.8 24.4 22.0 20.8MASS23.8 15.3 14.5 28.4 25.0 23.4 21.3NTREXBART21.8 13.1 13.7 25.0 23.2 21.0 19.4MASS21.9 13.3 13.7 25.7 23.8 22.1 19.8ML50BART22.1 16.8 21.3 26.8 26.1 28.1 23.5MASS22.1 16.8 21.5 27.2 26.6 28.8 23.8TICO-19BART31.2 14.0 15.1 31.8 26.0 24.1 23.7MASS31.5 14.4 15.2 32.6 27.2 26.4 24.5", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "• If in-domain or diverse monolingual data is not available, consider the trade-offs between collecting extra data or scaling up the model. 
If neither is possible, avoid using monolingual data with BT or DAE in en→xx low-resource directions.", "figure_data": "", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Hyperparameters used for the Transformer models of various sizes in the study.", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results of the Transformer-Big models evaluated on the ML50 (mixed-domain) test set and grouped by the monolingual split that has been used for training BT and DAE.", "figure_data": "Modelen→xx High Med Low High Med Low xx→enMeanModelen→xx High Med Low High Med Low xx→enMeanModelen→xx High Med Low High Med Low xx→enMeanparallel 24.8 15.3 13.2 28.6 22.4 16.0 19.4parallel 50.6 46.1 45.0 57.1 50.4 42.6 48.2parallel 83.0 80.5 71.4 84.0 79.3 69.9 77.4Wiki +BART 24.0 15.6 14.7 28.3 24.9 22.5 21.2 +MASS 24.3 15.5 14.9 28.6 25.2 23.0 21.5 +BT 26.0 18.8 17.7 30.8 28.4 24.9 24.1Wiki +BART 49.9 46.2 47.1 56.9 53.1 50.6 50.4 +MASS 50.2 46.1 47.3 57.0 53.4 51.3 50.6 +BT 52.1 49.3 47.6 59.2 57.1 52.2 52.7Wiki +BART 82.5 80.5 74.4 83.9 81.5 78.9 80.0 +MASS 82.8 80.3 74.6 83.9 81.7 79.6 80.2 +BT 84.1 82.7 77.5 84.8 82.9 77.2 81.2Mix +BART 23.4 15.0 14.2 27.9 24.7 22.6 20.9 +MASS 23.5 15.2 14.1 28.5 24.8 23.0 21.1 +BT 25.6 17.6 17.6 30.8 28.6 26.9 24.2Mix +BART 49.5 45.1 46.4 56.5 52.9 50.6 49.9 +MASS 49.4 45.7 46.4 56.7 53.0 51.2 50.1 +BT 51.6 49.2 49.3 59.1 57.0 55.1 53.4Mix +BART 81.8 79.1 73.5 83.4 81.1 78.6 79.3 +MASS 82.0 80.0 73.7 83.7 81.3 79.3 79.7 +BT 84.1 83.0 78.1 84.6 82.7 80.1 81.9Mix+bal +BART 23.6 15.2 14.3 27.8 24.4 22.0 20.8 +MASS 23.8 15.3 14.5 28.4 25.0 23.4 21.3 +BT 25.5 18.3 18.0 31.1 29.1 27.2 24.6Mix+bal +BART 49.4 45.3 46.5 56.4 52.9 50.4 49.9 +MASS 49.7 45.8 46.8 56.8 53.3 51.6 50.4 +BT 51.7 49.2 49.3 59.3 57.3 55.2 53.5Mix+bal +BART 81.9 79.5 73.4 83.3 81.0 78.5 79.3 +MASS 82.3 80.0 74.0 83.7 81.6 79.7 79.9 +BT 84.2 82.9 78.6 84.8 83.0 80.4 82.1(a) BLEU scores (↑)(b) chrF scores (↑)(c) COMET scores (↑)", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results of the Transformer-Big models on the FLORES (Wikipedia) test set and grouped by the monolingual split that has been used for training BT and DAE. 
Cells in red indicate worse scores than the baseline.", "figure_data": "Modelen→xx High Med Low High Med Low xx→enMeanModelen→xx High Med Low High Med Low xx→enMeanModelen→xx High Med Low High Med Low xx→enMeanparallel 22.4 13.2 12.4 25.1 21.1 15.1 17.8parallel 48.3 43.3 42.2 54.5 49.3 40.8 46.0parallel 79.0 76.9 69.4 82.1 78.7 67.6 75.2Wiki +BART 21.9 13.2 13.4 25.5 23.3 20.8 19.4 +MASS 22.1 13.2 13.8 25.5 23.3 21.2 19.5 +BT 23.3 15.5 16.0 27.4 25.1 21.8 21.2Wiki +BART 47.6 43.2 44.0 54.5 51.7 47.9 47.9 +MASS 47.9 43.0 44.2 54.6 51.9 48.5 48.1 +BT 49.2 45.7 44.2 56.7 54.6 48.3 49.5Wiki +BART 78.3 76.7 72.0 82.1 80.6 75.5 77.3 +MASS 78.8 76.4 72.3 82.3 80.8 76.3 77.6 +BT 79.7 78.7 74.4 83.0 81.6 73.4 78.2Mix +BART 21.6 13.0 13.6 25.4 23.4 21.5 19.5 +MASS 21.7 13.1 13.7 26.0 23.9 22.1 19.8 +BT 22.8 14.9 16.4 30.9 28.6 27.1 23.2Mix +BART 47.4 42.8 43.9 54.5 51.7 48.4 47.9 +MASS 47.5 43.2 44.1 54.7 52.0 49.1 48.2 +BT 48.8 46.2 46.2 58.7 56.6 53.1 51.4Mix +BART 78.0 76.0 72.0 81.9 80.5 76.2 77.2 +MASS 78.5 76.9 72.3 82.3 81.0 76.9 77.8 +BT 80.1 79.6 76.2 83.8 82.6 77.6 79.8Mix+bal +BART 21.8 13.1 13.7 25.0 23.2 21.0 19.4 +MASS 21.9 13.3 13.7 25.7 23.8 22.1 19.8 +BT 22.9 15.4 16.6 30.4 28.5 27.0 23.2Mix+bal +BART 47.4 42.9 44.1 54.4 51.8 48.2 47.9 +MASS 47.6 43.3 44.3 54.6 52.1 49.3 48.3 +BT 49.1 46.3 46.2 58.4 56.5 52.8 51.4Mix+bal +BART 78.2 76.3 71.9 81.8 80.5 75.9 77.2 +MASS 78.6 76.8 72.3 82.2 81.0 77.2 77.8 +BT 80.2 79.7 76.3 83.8 82.7 77.6 79.9(a) BLEU scores (↑)(b) chrF scores (↑)(c) COMET scores (↑)", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results of the Transformer-Big models on the NTREX (News) test set and grouped by the monolingual split that has been used for training BT and DAE. Cells in red indicate worse scores than the baseline.", "figure_data": "Modelen→xx High Med Low High Med Low xx→enMeanModelen→xx High Med Low High Med Low xx→enMeanModelen→xx High Med Low High Med Low xx→enMeanparallel 32.3 14.3 14.4 32.4 24.2 17.4 22.3parallel 53.3 45.5 46.8 61.0 52.4 45.4 50.6parallel 80.3 76.4 69.9 83.4 79.2 73.1 76.7Wiki +BART 31.9 14.9 15.1 32.9 26.3 24.2 24.2 +MASS 31.9 14.0 15.4 32.9 27.0 24.6 24.3 +BT 34.5 18.4 19.8 36.8 32.2 28.7 28.3Wiki +BART 52.8 46.2 47.9 61.0 54.6 53.3 52.6 +MASS 52.8 44.8 48.0 61.0 55.2 53.3 52.5 +BT 55.4 49.7 48.6 64.2 60.7 57.0 55.8Wiki +BART 79.8 76.6 70.9 83.6 81.2 80.2 78.5 +MASS 79.9 75.6 70.9 83.6 81.4 80.5 78.5 +BT 81.1 80.2 75.8 84.7 83.0 80.5 80.7Mix +BART 30.5 13.9 14.9 32.5 26.6 24.2 23.7 +MASS 31.1 14.3 15.2 33.0 26.9 25.6 24.3 +BT 33.2 16.5 19.2 36.9 32.5 30.6 28.2Mix +BART 51.7 44.5 47.6 60.8 54.8 52.8 52.1 +MASS 51.9 45.4 47.6 61.0 54.9 54.2 52.5 +BT 54.4 47.4 50.0 64.2 60.8 59.0 56.0Mix +BART 78.9 75.3 70.5 83.4 81.2 80.0 78.1 +MASS 79.2 76.3 70.8 83.6 81.3 81.0 78.5 +BT 81.3 79.3 76.9 84.9 83.5 82.2 81.2Mix+bal +BART 31.2 14.0 15.1 31.8 26.0 24.1 23.7 +MASS 31.5 14.4 15.2 32.6 27.2 26.4 24.5 +BT 34.3 17.7 20.2 37.4 33.0 30.9 28.9Mix+bal +BART 52.1 44.6 47.9 60.4 54.6 53.2 52.2 +MASS 52.5 45.3 47.9 60.6 55.6 54.9 52.9 +BT 55.2 48.8 50.5 64.5 61.0 58.9 56.5Mix+bal +BART 79.3 75.4 70.6 83.1 81.0 80.2 78.1 +MASS 79.5 76.3 70.7 83.4 81.8 81.5 78.7 +BT 81.6 80.1 77.3 85.1 83.6 82.4 81.6(a) BLEU scores (↑)(b) chrF scores (↑)(c) COMET scores (↑)", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Results of the Transformer-Big models on the TICO-19 (Medical) test set and grouped by the monolingual split that has been used for training BT and DAE. 
Cells in red indicate worse scores than the baseline.", "figure_data": "", "figure_id": "tab_10", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Score (BLEU, chrF, COMET) differences between each model and the parallel-only baseline (red dotted line) across test sets, for models with the Transformer-Big architecture (370M).", "figure_data": "", "figure_id": "tab_12", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "", "figure_data": "with the single-domain (Wikipedia) monolingual split. We observethat the differences are very small between models,but MASS outperforms BART by a small marginin most settings, similar to what is discussed in themain paper.", "figure_id": "tab_13", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Comparison of the DAE objectives with models trained on the balanced mixed-domain.", "figure_data": "en→xxxx→enen→xxxx→enen→xxxx→enModelHigh Med Low High Med LowMeanModelHigh Med Low High Med LowMeanModelHigh Med Low High Med LowMeanFLORESFLORESFLORES+BART 23.4 15.0 14.2 27.9 24.7 22.6 20.9 +MASS 23.5 15.2 14.1 28.5 24.8 23.0 21.1+BART 49.5 45.1 46.4 56.5 52.9 50.6 49.9 +MASS 49.4 45.7 46.4 56.7 53.0 51.2 50.1+BART 81.8 79.1 73.5 83.4 81.1 78.6 79.3 +MASS 82.0 80.0 73.7 83.7 81.3 79.3 79.7NTREXNTREXNTREX+BART 21.6 13.0 13.6 25.4 23.4 21.5 19.5 +MASS 21.7 13.1 13.7 26.0 23.9 22.1 19.8+BART 47.4 42.8 43.9 54.5 51.7 48.4 47.9 +MASS 47.5 43.2 44.1 54.7 52.0 49.1 48.2+BART 78.0 76.0 72.0 81.9 80.5 76.2 77.2 +MASS 78.5 76.9 72.3 82.3 81.0 76.9 77.8ML50ML50ML50+BART 21.8 16.7 21.4 27.1 26.3 28.4 23.6 +MASS 22.0 16.8 21.5 27.4 26.5 28.9 23.8+BART 47.7 44.0 47.4 55.3 50.4 50.8 49.1 +MASS 47.9 44.4 47.5 55.3 50.4 51.2 49.3+BART 80.0 79.2 78.7 80.9 79.0 79.3 79.4 +MASS 80.5 79.9 78.8 81.2 79.3 79.8 79.8TICO-19TICO-19TICO-19+BART 30.5 13.9 14.9 32.5 26.6 24.2 23.7 +MASS 31.1 14.3 15.2 33.0 26.9 25.6 24.3+BART 51.7 44.5 47.6 60.8 54.8 52.8 52.1 +MASS 51.9 45.4 47.6 61.0 54.9 54.2 52.5+BART 78.9 75.3 70.5 83.4 81.2 80.0 78.1 +MASS 79.2 76.3 70.8 83.6 81.3 81.0 78.5(a) BLEU scores (↑)(b) chrF scores (↑)(c) COMET scores (↑)", "figure_id": "tab_15", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Comparison of the DAE objectives with models trained on the unbalanced mixed-domain.", "figure_data": "en→xxxx→enen→xxxx→enen→xxxx→enModelHigh Med Low High Med LowMeanModelHigh Med Low High Med LowMeanModelHigh Med Low High Med LowMeanFLORESFLORESFLORES+BART 24.0 15.6 14.7 28.3 24.9 22.5 21.2 +MASS 24.3 15.5 14.9 28.6 25.2 23.0 21.5+BART 49.9 46.2 47.1 56.9 53.1 50.6 50.4 +MASS 50.2 46.1 47.3 57.0 53.4 51.3 50.6+BART 82.5 80.5 74.4 83.9 81.5 78.9 80.0 +MASS 82.8 80.3 74.6 83.9 81.7 79.6 80.2NTREXNTREXNTREX+BART 21.9 13.2 13.4 25.5 23.3 20.8 19.4 +MASS 22.1 13.2 13.8 25.5 23.3 21.2 19.5+BART 47.6 43.2 44.0 54.5 51.7 47.9 47.9 +MASS 47.9 43.0 44.2 54.6 51.9 48.5 48.1+BART 78.3 76.7 72.0 82.1 80.6 75.5 77.3 +MASS 78.8 76.4 72.3 82.3 80.8 76.3 77.6ML50ML50ML50+BART 22.0 16.9 21.3 27.0 26.6 27.9 23.6 +MASS 22.1 16.9 21.3 27.1 26.5 28.5 23.7+BART 47.9 44.6 47.4 55.1 50.5 50.4 49.1 +MASS 48.1 44.5 47.5 55.2 50.6 50.9 49.3+BART 80.4 80.2 78.4 80.9 79.1 78.9 79.5 +MASS 80.7 79.9 78.6 81.0 79.4 79.4 79.7TICO-19TICO-19TICO-19+BART 31.9 14.9 15.1 32.9 26.3 24.2 24.2 +MASS 31.9 14.0 15.4 32.9 27.0 24.6 24.3+BART 52.8 46.2 47.9 61.0 54.6 53.3 52.6 52.8 44.8 48.0 61.0 55.2 53.3 52.5+BART 79.8 76.6 70.9 83.6 81.2 80.2 78.5 +MASS 79.9 75.6 70.9 83.6 81.4 80.5 78.5(a) BLEU scores (↑)(b) chrF scores (↑)(c) COMET scores (↑)", 
"figure_id": "tab_16", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Comparison of the DAE objectives with models trained on the (Wikipedia) single-domain.", "figure_data": "", "figure_id": "tab_17", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "The statistics of the parallel and training data we use for each language. The red-highlighted rows show the languages that we remove from our experiments.", "figure_data": "", "figure_id": "tab_19", "figure_label": "16", "figure_type": "table" } ]
Christos Baziotis; Samaya AI; Biao Zhang; Google DeepMind; Alexandra Birch; Barry Haddow
[ { "authors": "Roee Aharoni; Melvin Johnson; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Massively multilingual neural machine translation", "year": "2019" }, { "authors": "Antonios Anastasopoulos; Alessandro Cattelan; Zi-Yi Dou; Marcello Federico; Christian Federmann; Dmitriy Genzel; Franscisco Guzmán; Junjie Hu; Macduff Hughes; Philipp Koehn; Rosie Lazar; Will Lewis; Graham Neubig; Mengmeng Niu; Alp Öktem; Eric Paquin; Grace Tang; Sylwia Tur", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "TICO-19: the translation initiative for COvid-19", "year": "2020" }, { "authors": "Naveen Arivazhagan; Ankur Bapna; Orhan Firat; Dmitry Lepikhin; Melvin Johnson; Maxim Krikun; Mia Xu Chen; Yuan Cao; George Foster; Colin Cherry", "journal": "", "ref_id": "b2", "title": "Massively multilingual neural machine translation in the wild: Findings and challenges", "year": "2019" }, { "authors": "Ankur Bapna; Isaac Caswell; Julia Kreutzer; Orhan Firat; Daan Van Esch; Aditya Siddhant; Mengmeng Niu; Pallavi Baljekar; Xavier Garcia; Wolfgang Macherey; Theresa Breiner; Vera Axelrod; Jason Riesa; Yuan Cao; Mia Xu Chen; Klaus Macherey; Maxim Krikun; Pidong Wang; Alexander Gutkin; Apurva Shah; Yanping Huang; Zhifeng Chen; Yonghui Wu; Macduff Hughes", "journal": "", "ref_id": "b3", "title": "Building machine translation systems for the next thousand languages", "year": "2022" }, { "authors": "Loïc Barrault; Magdalena Biesialska; Ondřej Bojar; Marta R Costa-Jussà; Christian Federmann; Yvette Graham; Roman Grundkiewicz; Barry Haddow; Matthias Huck; Eric Joanis; Tom Kocmi; Philipp Koehn; Chi-Kiu Lo; Nikola Ljubešić; Christof Monz; Makoto Morishita; Masaaki Nagata; Toshiaki Nakazawa; Santanu Pal; Matt Post; Marcos Zampieri", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Findings of the 2020 conference on machine translation (WMT20)", "year": "2020" }, { "authors": "Loïc Barrault; Ondřej Bojar; Marta R Costa-Jussà; Christian Federmann; Mark Fishel; Yvette Graham; Barry Haddow; Matthias Huck; Philipp Koehn; Shervin Malmasi; Christof Monz; Mathias Müller; Santanu Pal; Matt Post; Marcos Zampieri", "journal": "", "ref_id": "b5", "title": "Findings of the 2019 conference on machine translation (WMT19)", "year": "2019" }, { "authors": "Christos Baziotis; Ivan Titov; Alexandra Birch; Barry Haddow", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Exploring unsupervised pretraining objectives for machine translation", "year": "2021" }, { "authors": "Eleftheria Briakou; Colin Cherry; George Foster", "journal": "", "ref_id": "b7", "title": "Searching for needles in a haystack: On the role of incidental bilingualism in palm's translation capability", "year": "2023" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b8", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b9", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard 
Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Alexis Conneau; Guillaume Lample", "journal": "", "ref_id": "b11", "title": "Crosslingual language model pretraining", "year": "2019" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b12", "title": "", "year": "" }, { "authors": "Zihang Dai; Zhilin Yang; Yiming Yang; Jaime Carbonell; Quoc Le; Ruslan Salakhutdinov", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Transformer-XL: Attentive language models beyond a fixed-length context", "year": "2019" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Christian Federmann; Tom Kocmi; Ying Xin", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "NTREX-128 -news test references for MT evaluation of 128 languages", "year": "2022" }, { "authors": "Patrick Fernandes; Behrooz Ghorbani; Xavier Garcia; Markus Freitag; Orhan Firat", "journal": "", "ref_id": "b16", "title": "Scaling laws for multilingual neural machine translation", "year": "2023" }, { "authors": "Xavier Garcia; Yamini Bansal; Colin Cherry; George Foster; Maxim Krikun; Fangxiaoyu Feng; Melvin Johnson; Orhan Firat", "journal": "", "ref_id": "b17", "title": "The unreasonable effectiveness of few-shot learning for machine translation", "year": "2023" }, { "authors": "Behrooz Ghorbani; Orhan Firat; Markus Freitag; Ankur Bapna; Maxim Krikun; Xavier Garcia; Ciprian Chelba; Colin Cherry", "journal": "", "ref_id": "b18", "title": "Scaling laws for neural machine translation", "year": "2021" }, { "authors": "Kevin Mitchell A Gordon; Jared Duh; Kaplan", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Data and parameter scaling laws for neural machine translation", "year": "2021" }, { "authors": "Naman Goyal; Cynthia Gao; Vishrav Chaudhary; Peng-Jen Chen; Guillaume Wenzek; Da Ju; Sanjana Krishnan; Marc'aurelio Ranzato; Francisco Guzmán; Angela Fan", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b20", "title": "The Flores-101 evaluation benchmark for low-resource and multilingual machine translation", "year": "2022" }, { "authors": "Amr Hendy; Mohamed Abdelrehim; Amr Sharaf; Vikas Raunak; Mohamed Gabr; Hitokazu Matsushita; Young ; Jin Kim; Mohamed Afify; Hany Hassan Awadalla", "journal": "", "ref_id": "b21", "title": "How good are gpt models at machine translation? 
a comprehensive evaluation", "year": "2023" }, { "authors": "Dandan Huang; Kun Wang; Yue Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "A comparison between pre-training and large-scale back-translation for neural machine translation", "year": "2021" }, { "authors": "Hakan Inan; Khashayar Khosravi; Richard Socher", "journal": "", "ref_id": "b23", "title": "Tying word vectors and word classifiers: A loss framework for language modeling", "year": "2017" }, { "authors": "Melvin Johnson; Mike Schuster; Quoc V Le; Maxim Krikun; Yonghui Wu; Zhifeng Chen; Nikhil Thorat; Fernanda Viégas; Martin Wattenberg; Greg Corrado; Macduff Hughes; Jeffrey Dean", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b24", "title": "Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation", "year": "2017" }, { "authors": "Jared Kaplan; Sam Mccandlish; Tom Henighan; Tom B Brown; Benjamin Chess; Rewon Child; Scott Gray; Alec Radford; Jeffrey Wu; Dario Amodei", "journal": "", "ref_id": "b25", "title": "Scaling laws for neural language models", "year": "2020" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b26", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Tom Kocmi; Christian Federmann; Roman Grundkiewicz; Marcin Junczys-Dowmunt; Hitokazu Matsushita; Arul Menezes", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "To ship or not to ship: An extensive evaluation of automatic metrics for machine translation", "year": "2021" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Victoria Xi; Todor Lin; Mikel Mihaylov; Tianlu Artetxe; Shuohui Wang; Daniel Chen; Myle Simig; Naman Ott; Shruti Goyal; Jingfei Bhosale; Ramakanth Du; Sam Pasunuru; Punit Shleifer; Vishrav Singh Koura; Brian O' Chaudhary; Jeff Horo; Luke Wang; Zornitsa Zettlemoyer; Mona Kozareva; Veselin Diab; Xian Stoyanov; Li", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Few-shot learning with multilingual generative language models", "year": "2022" }, { "authors": "Xuebo Liu; Longyue Wang; Derek F Wong; Liang Ding; Lidia S Chao; Shuming Shi; Zhaopeng Tu", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "On the complementarity between pre-training and back-translation for neural machine translation", "year": "2021" }, { "authors": "Yinhan Liu; Jiatao Gu; Naman Goyal; Xian Li; Sergey Edunov; Marjan Ghazvininejad; Mike Lewis; Luke Zettlemoyer; ; ", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b31", "title": "Multilingual denoising pre-training for neural machine translation", "year": "2020" }, { "authors": "Yinhan Liu; Jiatao Gu; Naman Goyal; Xian Li; Sergey Edunov; Marjan Ghazvininejad; Mike Lewis; Luke Zettlemoyer", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b32", "title": "Multilingual denoising pre-training for neural machine translation", "year": "2020" }, { "authors": "Marta Nllb Team; James Costa-Jussa; Onur Cross; Maha Çelebi; Kenneth Elbayad; Kevin Heafield; Elahe Heffernan; 
Janicec Kalbassi; Daniel Lam; Jean Licht; Anna Maillard; Skyler Sun; Guillaume Wang; Al Wenzek; Bapi Youngblood; Loïc Akula; Gabriel Barrault; Prangthip Gonzalez; Jeff Hansanti; Wang", "journal": "", "ref_id": "b33", "title": "No language left behind: Scaling humancentered machine translation", "year": "2022" }, { "authors": "Myle Ott; Sergey Edunov; Alexei Baevski; Angela Fan; Sam Gross; Nathan Ng; David Grangier; Michael Auli", "journal": "", "ref_id": "b34", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "year": "2019" }, { "authors": "Xiao Pan; Mingxuan Wang; Liwei Wu; Lei Li", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Contrastive learning for many-to-many multilingual neural machine translation", "year": "2021" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Maja Popović", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "chrF: character n-gram F-score for automatic MT evaluation", "year": "2015" }, { "authors": "Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "A call for clarity in reporting BLEU scores", "year": "2018" }, { "authors": "Ofir Press; Lior Wolf", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Using the output embedding to improve language models", "year": "2017" }, { "authors": "Ricardo Rei; Craig Stewart; Ana C Farinha; Alon Lavie", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "COMET: A neural framework for MT evaluation", "year": "2020" }, { "authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Improving neural machine translation models with monolingual data", "year": "2016" }, { "authors": "Aditya Siddhant; Ankur Bapna; Yuan Cao; Orhan Firat; Mia Chen; Sneha Kudugunta; Naveen Arivazhagan; Yonghui Wu", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Leveraging monolingual data with self-supervision for multilingual neural machine translation", "year": "2020" }, { "authors": "Aditya Siddhant; Ankur Bapna; Orhan Firat; Yuan Cao; Mia Xu Chen; Isaac Caswell; Xavier Garcia", "journal": "", "ref_id": "b43", "title": "Towards the next 1000 languages in multilingual machine translation: Exploring the synergy between supervised and self-supervised learning", "year": "2022" }, { "authors": "Kaitao Song; Xu Tan; Tao Qin; Jianfeng Lu; Tie-Yan Liu", "journal": "PMLR", "ref_id": "b44", "title": "MASS: Masked sequence to sequence pretraining for language generation", "year": "2019" }, { "authors": "Christian Szegedy; Vincent Vanhoucke; Sergey Ioffe; Jon Shlens; Zbigniew Wojna", "journal": "", "ref_id": "b45", "title": "Rethinking the inception architecture for computer vision", "year": "2016" }, { "authors": "Yuqing Tang; Chau Tran; Xian Li; Peng-Jen Chen; Naman Goyal; Vishrav Chaudhary; Jiatao Gu; Angela Fan", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "Multilingual translation from denoising pre-training", "year": "2021" }, { "authors": "Yi Tay; Mostafa Dehghani; Q Vinh; Xavier Tran; Jason Garcia; Xuezhi Wei; Hyung Won Wang; Dara Chung; Tal Bahri; Steven Schuster; Denny Zheng; Neil Zhou; Donald Houlsby; 
Metzler", "journal": "", "ref_id": "b47", "title": "UL2: Unifying language learning paradigms", "year": "2023" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b48", "title": "Attention is all you need", "year": "2017" }, { "authors": "David Vilar; Markus Freitag; Colin Cherry; Jiaming Luo; Viresh Ratnakar; George Foster", "journal": "", "ref_id": "b49", "title": "Prompting palm for translation: Assessing strategies and performance", "year": "2022" }, { "authors": "Yiren Wang; Chengxiang Zhai; Hany Hassan", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "Multi-task learning for multilingual neural machine translation", "year": "2020" }, { "authors": "Jason Wei; Yi Tay; Rishi Bommasani; Colin Raffel; Barret Zoph; Sebastian Borgeaud; Dani Yogatama; Maarten Bosma; Denny Zhou; Donald Metzler", "journal": "Transactions on Machine Learning Research", "ref_id": "b51", "title": "Emergent abilities of large language models", "year": "2022" }, { "authors": "Haoran Xu; Jean Maillard; Vedanuj Goswami", "journal": "", "ref_id": "b52", "title": "Language-aware multilingual machine translation with self-supervised learning", "year": "2023" }, { "authors": "Biao Zhang; Behrooz Ghorbani; Ankur Bapna; Yong Cheng; Xavier Garcia; Jonathan Shen; Orhan Firat", "journal": "", "ref_id": "b53", "title": "Examining scaling and transfer of language model architectures for machine translation", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b54", "title": "", "year": "" }, { "authors": "Biao Zhang; Barry Haddow; Alexandra Birch", "journal": "", "ref_id": "b55", "title": "Prompting large language model for machine translation: A case study", "year": "2023" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin", "journal": "", "ref_id": "b56", "title": "Opt: Open pre-trained transformer language models", "year": "2022" }, { "authors": "Wenhao Zhu; Hongyi Liu; Qingxiu Dong; Jingjing Xu; Lingpeng Kong; Jiajun Chen; Lei Li; Shujian Huang", "journal": "", "ref_id": "b57", "title": "Multilingual machine translation with large language models: Empirical results and analysis", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 70.87, 399.61, 23.34, 15.07 ], "formula_id": "formula_0", "formula_text": "p 1/T D ," } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b41", "b3", "b7", "b43", "b7" ], "table_ref": [], "text": "Real-world environments are diverse and unpredictable; we often cannot control correlations between environment features or even know they exist. As such, real-world Reinforcement Learning (RL) training environments can contain spurious correlations between features that are unknown or unintended by the data collector, e.g. an object correlated with colour. Furthermore, an RL agent influences the data collection through its actions, which may shift the data distribution to contain feature correlations as the agent learns, e.g. the agent position is correlated with a goal position as it learns an optimal policy. For RL agents to be resilient in the real world, it is beneficial to learn robust representations of high-dimensional observations (e.g. images). However, an agent trained with correlated data may learn a representation that encodes the spurious correlation and, therefore, cannot generalise when the correlation no longer holds (Träuble et al., 2021). For example, an autonomous driving agent trained in an environment where aggressive drivers often have green cars can encode this correlation that does not hold in real world.\nMethods for disentanglement aim to separate the ground truth factors of variation that generated a highdimensional observation, such as an image, into meaningful subspaces in the learned representation (Bengio et al., 2013). Both Higgins et al. (2017b) and Dunion et al. (2023) show that disentanglement improves generalisation to visual changes in RL environments that were not seen during training. A disentangled representation can be robust to environment changes because a change in one image factor causes only a subset of features in the representation to change, while the remaining features can still be relied upon for better generalisation. The basic principle of most disentanglement techniques is to enforce independence between groups of features in the latent representation. Therefore, common approaches to disentanglement, such as VAEs, require independence between factors of variation, i.e. uncorrelated factors. RL agents trained to learn independent features in the representation will fail to separate correlated factors because they cannot be separated into distinct independent features, and thus suffer from a failure to generalise under correlation shifts. For example, consider a scenario where the colour of an object is correlated with its size during training, and size impacts the optimal policy. Colour contains information predictive of size and vice versa. Hence, size and colour cannot be separated into distinct independent features in the representation, so will be encoded into the same feature. When the agent is presented with a colour that is rarely or never previously seen with the object size, the feature representing both colour and size will change, preventing the agent from performing optimally even if it has already learned an optimal policy for the object size.\nIn this work, we relax the assumption of independence between factors of variation to conditional independence, to learn a disentangled representation with correlated features. We propose Conditional Mutual Information for Disentanglement (CMID) as an auxiliary task that can be applied to online RL algorithms to learn a disentangled representation by minimising the conditional mutual information between dimensions in the latent representation. 
We use the causal graph of a Markov Decision Process (MDP) to determine a general conditioning set that can render the features in a representation conditionally independent given the conditioning set. The resulting disentangled representation avoids the reliance on spurious correlations, separating correlated factors of variation in a high-dimensional observation into distinct features in the representation, allowing for better generalisation under correlation shifts. To the best of our knowledge, this is the first approach to learn disentangled representations specifically for RL based on conditional independence.\nWe evaluate our approach on continuous control tasks with image observations from the DeepMind Control Suite, where we add correlations between object colour and properties impacting dynamics (e.g. joint positions). Our results show that CMID improves the training performance as well as the generalisation performance of the base RL algorithm, SVEA (Hansen et al., 2021), under correlation shifts, with a 77% increase in zero-shot generalisation returns on average across all tasks in our experiments. We also demonstrate improved generalisation performance compared to state-of-the-art baselines: DrQ (Yarats et al., 2021), CURL (Laskin et al., 2020b) and TED (Dunion et al., 2023)." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b22", "b4", "b21", "b28", "b38", "b29", "b10", "b20", "b18", "b19", "b41", "b8", "b8" ], "table_ref": [], "text": "2.1 Disentangled representations\nVAE approaches. Disentanglement has been studied widely in the unsupervised learning literature. Many approaches are based on the Variational Autoencoder (VAE) (Kingma and Welling, 2014). The β-VAE (Higgins et al., 2017a; Burgess et al., 2017) aims to improve disentanglement by scaling up the independence constraint in the VAE loss; the Factor-VAE (Kim and Mnih, 2018) encourages disentanglement through a factorial distribution of features. More recent approaches add supervision during training to bypass the impossibility of learning disentangled representations from independent and identically distributed (i.i.d.) data (Locatello et al., 2019). Shu et al. (2020) use labels of image groupings, and Locatello et al. (2020) use pairs of images. However, VAE-based approaches assume independence between factors of variation and therefore cannot disentangle correlated factors.\nICA approaches. Disentanglement is also the focus of Independent Component Analysis (ICA) (Hälvä et al., 2021; Hyvärinen et al., 2023), where it is referred to as 'blind source separation'. Hyvärinen and Morioka (2016) and Hyvärinen and Morioka (2017) both learn disentangled representations from time-series data under different assumptions. However, similarly to VAE approaches, the independence assumption is central to ICA and does not hold when there are correlations in the data.\nDisentanglement with correlated features. Both ICA and VAE approaches to disentanglement assume independence between factors of variation (or 'sources' in ICA terminology). Träuble et al. (2021) conduct an analysis of VAE-based disentanglement techniques on correlated data and show that the correlations are being encoded in the representations. They propose weak supervision in training to learn disentangled representations on correlated data and an adaptation technique to 'correct' the latent entanglement using labelled data. Recently, Funke et al. 
(2022) propose a Conditional Mutual Information (CMI) approach to learn disentangled representations to improve generalisation under correlation shifts in a supervised learning setting. We use an adversarial approach to CMI similar to that of Funke et al. (2022), but leverage the structure of an MDP to determine an appropriate conditioning set that does not require labelled data or any prior knowledge of the ground truth factors of variation." }, { "figure_ref": [], "heading": "Representation learning in RL", "publication_ref": [ "b43", "b44", "b45", "b27", "b2", "b5", "b30", "b0", "b7" ], "table_ref": [], "text": "Visual invariances. Image augmentations are commonly used to improve robustness of representations in RL (Laskin et al., 2020a; Yarats et al., 2021; Hansen and Wang, 2021; Hansen et al., 2021). Several approaches have also been proposed to learn representations that are invariant to distractors in the image, such as background colour (Zhang et al., 2020, 2021; Li et al., 2021; Allen et al., 2021). These methods do not account for correlations between features and do not prevent the encoding of spurious correlations. Mutual information has recently been used for invariant representation learning outside of RL by Cerrato et al. (2023) to ensure model decisions are independent of specific input features.\nMutual information. Mutual information (MI)-based approaches are commonly used in RL for representation learning. Laskin et al. (2020b) maximise similarity between different augmentations of the same observation; Mazoure et al. (2020) maximise similarity between successive observations; and Agarwal et al. (2021) use policy similarity metrics. However, these approaches all maximise MI in some way, whereas disentanglement aims to minimise MI between features in the representation.\nDisentangled representations. To learn a disentangled representation for RL, Higgins et al. (2017b) train a β-VAE offline using i.i.d. data from a pre-trained agent. Dunion et al. (2023) propose an auxiliary task to learn disentangled representations online using the non-i.i.d. temporal structure of RL training data. Both of these approaches to disentanglement in RL assume independent factors of variation, so they are unable to disentangle correlated factors." }, { "figure_ref": [], "heading": "Conditional mutual information estimators", "publication_ref": [ "b36", "b34", "b33", "b32" ], "table_ref": [], "text": "Our approach is the first to use CMI for disentangled representations in RL. However, many approaches have been proposed to estimate CMI outside of RL. Initial approaches were extensions of MI estimators, such as Runge (2018). Recent approaches make use of advances in neural networks. Mukherjee et al. (2020) propose CCMI, using the difference between two MI terms for CMI estimation: I(X; Y | Z) = I(X; Y, Z) - I(X; Z). They propose an estimator for the KL-divergence by training a classifier to distinguish the observed joint distribution from the product distribution. Mondal et al. (2020) estimate CMI by re-formulating it as a min-max optimisation problem and using a training procedure similar to generative adversarial networks. Molavipour et al. (2021) extend the classifier approach of CCMI, applying it directly to the estimation of CMI, rather than the difference of two MI terms. They use k nearest neighbours (kNN) to sample from the product of marginals distribution, and train the classifier to distinguish between the original distribution and the product of marginals. We also use kNN permutations to sample from the product of marginals distribution.
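As an illustration of the classifier-based estimation idea described above (not code from any of the cited papers; the function name and inputs are assumed), a classifier trained to separate samples of a distribution p from samples of a distribution q yields a density-ratio estimate of their KL-divergence:

import numpy as np

def kl_from_classifier_probs(probs_on_p_samples):
    # probs_on_p_samples: classifier probabilities sigma(x) that samples drawn
    # from p belong to the "p" class, assuming the classifier was trained on
    # balanced p vs. q samples. Density-ratio trick:
    #   p(x)/q(x) ≈ sigma(x) / (1 - sigma(x)),  so
    #   D_KL(p || q) ≈ E_p[log(sigma / (1 - sigma))].
    s = np.clip(np.asarray(probs_on_p_samples), 1e-6, 1 - 1e-6)
    return float(np.mean(np.log(s / (1.0 - s))))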
" }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b6" ], "table_ref": [], "text": "Reinforcement learning. We assume the agent is acting in a Markov Decision Process (MDP), which is defined by the tuple M = (S, A, P, R, γ), where S is the state space, A is the action space, P(s_{t+1} | s_t, a_t) is the probability of the next state s_{t+1} given that action a_t ∈ A is taken in state s_t ∈ S at time t, R(s_t, a_t) is the reward function, and γ ∈ [0, 1) is the discount factor. The goal is to learn a policy π that maximises the discounted return, $\max_\pi \mathbb{E}_{P,\pi}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)\right]$.\nIn RL from pixels, the agent receives an observation of image pixels o_t ∈ O ⊂ R^{i×j} at time t, a high-dimensional representation of s_t. The agent learns a latent representation z_t = f_θ(o_t) of size N ≪ dim(O), where f_θ : O → Z is an encoder parameterised by θ. The policy π is a function of the latent representation, such that a_t ∼ π(z_t). We denote the n-th component of the vector z as z^n, and all components of z except z^n as z^{-n}. We use z_{t′:t′′} to refer to the representations for all consecutive timesteps from t′ to t′′ inclusive: z_{t′}, z_{t′+1}, ..., z_{t′′-1}, z_{t′′}.\nConditional mutual information. The Conditional Mutual Information (CMI) of continuous random variables X and Y given a third variable Z measures the amount of information Y contains about X given that Z is already known (Cover and Thomas, 2006), defined as:\n$I(X; Y \mid Z) := \int p(x, y, z) \log \frac{p(x, y, z)}{p(x, z)\, p(y \mid z)} \, dx\, dy\, dz$  (1)\nwhere lower-case letters denote instances of the random variables (e.g. x is an instance of X). By definition, CMI is given by the KL-divergence:\n$I(X; Y \mid Z) = D_{KL}\left[ p(x, y, z) \,\|\, p(x, z)\, p(y \mid z) \right]$.  (2)\nIf the CMI between X and Y given Z is 0, then X and Y are conditionally independent given Z:\n$I(X; Y \mid Z) = 0 \iff X \perp\!\!\!\perp Y \mid Z$.  (3)
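As a purely illustrative aside (not part of the CMID method itself, which works with continuous latents), definitions (1)–(3) can be checked numerically for discrete variables by enumerating a joint probability table; the CMI is zero exactly when X and Y are conditionally independent given Z:

import numpy as np

def conditional_mutual_information(p_xyz):
    # p_xyz[x, y, z]: joint probability table of three discrete variables.
    # I(X;Y|Z) = sum_{x,y,z} p(x,y,z) * log( p(x,y,z) p(z) / (p(x,z) p(y,z)) ).
    p_xz = p_xyz.sum(axis=1)        # marginal over y -> p(x, z)
    p_yz = p_xyz.sum(axis=0)        # marginal over x -> p(y, z)
    p_z = p_xyz.sum(axis=(0, 1))    # marginal -> p(z)
    cmi = 0.0
    for x in range(p_xyz.shape[0]):
        for y in range(p_xyz.shape[1]):
            for z in range(p_xyz.shape[2]):
                pj = p_xyz[x, y, z]
                if pj > 0:
                    cmi += pj * np.log(pj * p_z[z] / (p_xz[x, z] * p_yz[y, z]))
    return cmi

# Build a joint where X and Y are independent given Z: the CMI is ~0.
p_x_given_z = np.array([[0.9, 0.2], [0.1, 0.8]])   # columns are p(x | z)
p_y_given_z = np.array([[0.3, 0.7], [0.7, 0.3]])   # columns are p(y | z)
p_z = np.array([0.5, 0.5])
p = np.einsum("xz,yz,z->xyz", p_x_given_z, p_y_given_z, p_z)
print(conditional_mutual_information(p))  # ~0.0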
" }, { "figure_ref": [], "heading": "Conditional mutual information for disentanglement in RL", "publication_ref": [], "table_ref": [], "text": "We propose Conditional Mutual Information for Disentanglement (CMID) as an auxiliary task that can be applied to existing RL algorithms to learn a disentangled representation with correlated data. The goal is to learn a representation with features that are conditionally independent, to improve representation robustness in the presence of unintended correlations during training. We discuss the conditioning set for an MDP in Section 4.1 and describe the CMID auxiliary task in Section 4.2." }, { "figure_ref": [ "fig_0" ], "heading": "MDP conditioning set", "publication_ref": [ "b35", "b35" ], "table_ref": [], "text": "The causal graph for three timesteps of an MDP is shown in Figure 1. [Figure 1 caption, fragment: ... z^1_t and z^2_t are conditionally independent when conditioned on z^1_{0:t-1} and a_{0:t-1}, where z^n_t denotes the nth dimension of z_t.] For readability, the graph shows only two state features s^1 and s^2, and two representation features z^1 and z^2; however, the graph and subsequent discussion can be extended to an arbitrary number of features. The graph shows the desired causal relationships for the learned representation, such that each feature in the representation is caused by a single state feature. An overview of the relevant concepts from causality that we will use in this section is provided in Appendix A.2.\nThe goal is to learn a representation z_t where features z^1_t and z^2_t are conditionally independent, by blocking all backdoor paths between z^1_t and z^2_t in the causal graph (Pearl, 2009). One option is to condition on the true underlying state feature s^1_t, which is the approach taken by Funke et al. (2022), but we do not usually know the true state features. Given the temporal structure of an MDP, another suitable conditioning set is the history of z^1_t, denoted z^1_{0:t-1}, and the history of actions a_{0:t-1}, giving:\n$z^1_t \perp\!\!\!\perp z^2_t \mid z^1_{0:t-1}, a_{0:t-1}$.  (4)\nIn other words, z^2_t does not contain any additional information about z^1_t given that z^1_{0:t-1} and a_{0:t-1} are known. Similarly, the history of z^2_t would also make a suitable conditioning set, since the backdoor path can be blocked by a_{0:t-1} and either z^1_{0:t-1} or z^2_{0:t-1}. Conditioning on the history of actions a_{0:t-1} alone is not sufficient because the action is a collider in the causal graph, so the conditioning set must also contain a parent of this collider to avoid opening up new backdoor paths (Pearl, 2009). This conditioning set means that we do not need to know the true state features s_t to learn conditionally independent representation features z^1_t and z^2_t. To guarantee conditional independence, the conditioning set must include the full history z^1_{0:t-1} and a_{0:t-1} from the beginning of the episode to the previous timestep t-1. However, conditioning on the full history makes a very large conditioning set, of size t · (N + dim(a)) when using one-hot encoding of features, which can be difficult to learn. In practice, we condition on only the most recent timestep z^1_{t-1} and a_{t-1} to adjust for the most recent correlations while keeping the auxiliary task tractable during training. We show experimentally in Section 6 that conditioning on only the most recent timestep achieves good generalisation performance while converging to an optimal policy faster in training than larger conditioning sets that use more timesteps from the episode history." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Conditional Mutual Information for Disentanglement", "publication_ref": [ "b43", "b32", "b13", "b42" ], "table_ref": [], "text": "CMID is an auxiliary task to learn disentangled representations. The architecture for CMID is shown in Figure 2, and the pseudocode is provided in Algorithm 1. [Figure 2 caption, fragment: ... p(z_t | z^n_{t-1}, a_{t-1}) and p(z^n_t | z^n_{t-1}, a_{t-1}) p(z^{-n}_t | z^n_{t-1}, a_{t-1}). The encoder is trained adversarially to make the two distributions similar to minimise CMI. The double slash '//' on the encoder outputs indicates where gradient flow is stopped.] CMID uses the same image inputs as the base RL algorithm. Where the base algorithm uses image augmentations (Hansen et al., 2021; Yarats et al., 2021; Laskin et al., 2020a), these are also used for CMID. However, CMID does not use frame stacking because it would introduce causal relationships between features in the frame stack. For example, object velocity and positions are often extracted from the frame stack, but velocity is also a direct cause of position. CMID processes each frame individually, and representations can be stacked if required to allow velocity information to be extracted by the RL networks for policy learning.\nTo learn a conditionally independent representation, CMID minimises the CMI between features in the representation.
For each feature z n t in the representation z t , it follows from Equation 2 and the conditioning set c n t = (z n t-1 , a t-1 ) discussed in Section 4.1 that:\nI(z n t ; z -n t | c n t ) = D KL p(z t , c n t ) || p(z n t , c n t )p(z -n t | c n t ) .(5)\nAs such, we minimise the KL-divergence between the joint probability distribution p(z t , c n t ) and the product of marginals p(z n t , c n t )p(z -n t | c n t ). The agent has access to samples from the joint distribution p(z t , c n t ) collected during training, e.g. from the replay buffer. To sample from the product of marginals, we use the isolated k nearest neighbours (kNN) permutation approach (Molavipour et al., 2021). For each sample {z t , c n t } ∼ p(z t , c n t ), we find the kNN of c n t by Euclidean distance, then permute the sample with the kNN to get a sample {z n t , z -n t ′ , c n t } where t ′ ̸ = t and c n t ′ (the conditioning set of z t ′ that is used for the permutation) is a kNN of c n t . We will use z perm,n t to denote the permutations {z n t , z -n t ′ }. The permuted sample {z perm,n\nt , c n t } is from the distribution p(z n t , c n t )p(z -n t | c n t ).\nThe permutation process is also depicted in Figure 2. To minimise the KL-divergence in Equation 5, we train a discriminator D ϕ adversarially to distinguish between samples {z t , c n t } ∼ p(z t , c n t ) and {z perm,n t , c n t } ∼ p(z n t , c n t )p(z -n t | c n t ). The adversarial training encourages the encoder f θ to ensure the two distributions are as similar as possible by minimising the cross entropy. This objective is equivalent to minimising the KL-divergence since for any two distributions p and q, D KL (p || q) = H(p, q) -H(p) where H(p, q) is the cross entropy and H(p) is the entropy of p which does not depend on the learned parameters. The discriminator Algorithm 1 CMID update step Input: batch of transitions B = {..., (ot-1, at-1, ot), ...} ∼ D Input: parameters for the encoder θ and the discriminator ϕ Calculate RL loss LRL and update RL networks (including encoder) following base RL algorithm Initialise LD ← 0 and LA ← 0 Forward pass though encoder zt = f θ (ot) and momentum encoder zt-1 = f θ ′ (ot-1) for n ∈ (1, ..., N ) do Create conditioning set c n t = (z n t-1 , at-1) D ϕ is trained to discriminate between true and permuted samples for each feature z n t using a binary cross entropy loss:\nL D = 1 N N i=0 log σ(D ϕ (z t , c n t )) + log(1 -σ(D ϕ (z perm,n t , c n t )))(6)\nwhere σ is the sigmoid function. The encoder f θ is updated using only true samples z t to learn a representation such that the discriminator cannot determine whether the sample is true or permuted:\nL A = α N N i=0 log(1 -σ(D ϕ (z t , c n t ))) . (7\n)\nThe loss coefficient α is a hyperparameter to be tuned to the task to determine the scale of the adversarial loss compared to the RL loss that also updates the encoder f θ . For training stability, we use a momentum encoder (He et al., 2020;Laskin et al., 2020b) \nf θ ′ : O → Z for the conditioning set representation z t-1 = f θ ′ (o t-1 )\n, where θ ′ = τ θ ′ + (1 -τ )θ. Our experiments evaluate generalisation performance under correlation shifts. We evaluate zero-shot generalisation as well as adaptation with continued learning on test correlations (which differ from the training correlations). We test our approach on continuous control tasks from the DeepMind Control Suite (DMC) (Tunyasuvunakool et al., 2020) where we have created strong correlations between task-relevant features and irrelevant colours. 
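Returning briefly to the permutation step of Section 4.2, a minimal sketch of the isolated-kNN permutation for one feature n might look as follows (our own illustration; the use of torch.cdist and the variable names are assumptions, not the released implementation):

```python
import torch

def knn_permute(z_t, c_n, n, k=5):
    """Sample {z^n_t, z^{-n}_{t'}} from p(z^n_t, c^n_t) p(z^{-n}_t | c^n_t) by permuting
    within the k nearest neighbours of the conditioning set c^n_t (Euclidean distance).

    z_t: (B, N) representations, c_n: (B, C) conditioning sets for feature n.
    """
    B = z_t.shape[0]
    dist = torch.cdist(c_n, c_n)                                  # (B, B) pairwise distances
    dist.fill_diagonal_(float("inf"))                             # a sample is not its own neighbour
    knn_idx = dist.topk(k, largest=False).indices                 # (B, k) nearest neighbours
    pick = knn_idx[torch.arange(B), torch.randint(0, k, (B,))]    # one neighbour per sample
    z_perm = z_t[pick].clone()                                    # take z^{-n}_{t'} from the neighbour
    z_perm[:, n] = z_t[:, n]                                      # keep z^n_t fixed
    return z_perm
```

The discriminator then receives both the true pair {z_t, c^n_t} and the permuted pair {z_perm, c^n_t}, as in Equation 6.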
We use a training environment with correlated variables and evaluate generalisation on a test environment under correlation shift. Our results show that the CMID auxiliary task consistently improves the generalisation of the base RL algorithm in all tasks, as well as outperforming other baselines." }, { "figure_ref": [], "heading": "Experimental results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Blue Green", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "Experimental setup", "publication_ref": [ "b43", "b7" ], "table_ref": [], "text": "Adding strong correlations to DMC. To demonstrate generalisation under correlation shifts, we add correlations between object colour and dynamics. We use two variations of the object controlled by the agent (A and B), each of which has a slightly different morphology (e.g. lengths, joint positions), which affects both the object appearance and the dynamics. This means each variation of the control object requires a different optimal policy. A description of the morphology variations for each task along with images are provided in Appendix D. At the start of an episode, object A or B is chosen at random with equal probability. Object A appears blue with probability 0.95 and green with probability 0.05, conversely object B is green with probability 0.95 and blue otherwise. After training, the correlation is changed (at the vertical dotted line in the graphs) and the agent continues training to assess adaptation. We test two different correlation shifts: reversed correlation, and no correlation (i.e. each object is equally likely to be blue or green). An example of the correlated setup with testing on reversed correlation for cartpole is shown in Figure 3, and these correlation probabilities are used across all DMC tasks unless stated otherwise.\nBase RL algorithm. We use SVEA (Hansen et al., 2021) as the base RL algorithm which we augment with the CMID auxiliary task, called SVEA-CMID in the results. SVEA is used because it is a state-of-the-art RL algorithm that already uses image augmentations to improve robustness to an extent. An overview of SVEA is provided in Appendix A.1 and implementation details in Appendix B.\nBaselines. We compare with SVEA, as the base RL algorithm for CMID, to demonstrate the extent to which the CMID auxiliary task improves performance. We also compare with DrQ (Yarats et al., 2021) as an alternative image augmentation baseline. We compare with CURL (Laskin et al., 2020b) as a state-of-the-art auxiliary task based on maximising mutual information. As a disentanglement baseline that assumes independent features, we use TED (Dunion et al., 2023) to demonstrate that disentanglement techniques requiring fully independent features fail with strong correlations. " }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "Generalisation results", "publication_ref": [], "table_ref": [], "text": "Our results in Figure 4 show the generalisation to reversed correlation and Figure 5 shows generalisation to uncorrelated features at the vertical dotted line. In both cases, the results show that CMID improves the generalisation performance of SVEA in all tasks, as well as outperforming the other baselines. CMID achieves good zero-shot generalisation performance on the vertical dotted line, while all baselines have some failure to generalise (except where they cannot learn a reasonable policy in the hopper task). 
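As an illustration of the correlated setup in Section 5.1, episode initialisation could be sketched as follows (the probabilities match those stated above; the function and variable names are ours):

```python
import random

def sample_episode_config(correlation=0.95, reverse=False):
    """Pick a control object (A or B) and its colour at episode start.

    During training, object A is blue with probability `correlation` and green otherwise,
    and vice versa for object B. Setting reverse=True flips the object-colour pairing,
    and correlation=0.5 gives the uncorrelated test setting.
    """
    obj = random.choice(["A", "B"])                       # objects are equally likely
    likely = "blue" if obj == "A" else "green"
    unlikely = "green" if obj == "A" else "blue"
    if reverse:
        likely, unlikely = unlikely, likely
    colour = likely if random.random() < correlation else unlikely
    return obj, colour

print(sample_episode_config())                 # training correlation
print(sample_episode_config(reverse=True))     # reversed-correlation test
print(sample_episode_config(correlation=0.5))  # no-correlation test
```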
Tables showing the numerical values of the zero-shot generalisation performance are also provided in Appendix C.1. Some baselines are able to to adapt with continued learning on the test environment to eventually achieve optimal performance in line with CMID, but others are unable to recover an optimal policy after overfitting to the training correlations. CMID also improves the training performance of SVEA in all tasks, achieving higher training returns even before the switch to the test environment. Many baselines also suffer from this inability to achieve optimal performance in training because the strong correlation makes it harder to learn as an optimal agent needs to learn a different policy for each control object without relying on the colour to distinguish between control objects. Appendix C.2 shows the evaluation performance on each scenario for cartpole swingup to further demonstrate why the failure to learn an optimal policy and generalise occurs." }, { "figure_ref": [], "heading": "Discussion and analysis", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct more detailed analysis of CMID on the reverse correlation testing scenario for the cartpole swingup task from DMC. We also provide some further analysis on correlation strength and greyscale images in Appendix C.3 and C.4 respectively." }, { "figure_ref": [ "fig_6", "fig_6", "fig_6", "fig_6", "fig_6", "fig_7" ], "heading": "Mutual information.", "publication_ref": [ "b8", "b13", "b41", "b40" ], "table_ref": [], "text": "To validate that it is necessary to minimise the conditional MI in our approach, we compare with an (unconditional) MI variant with no conditioning set. The results in Figure 6a show that SVEA-CMID outperforms SVEA-MI in training performance and generalisation. To validate our conditioning set, we also compare to the CMI approach of Funke et al. (2022) modified to an RL setting. This approach, which we call SVEA-CMI-labels, assumes access to the ground truth state features. The representation is split into subspaces corresponding to each of the state features, and a classifier is trained for each subspace to predict the state feature. The true state features can then be used as the conditioning set. Figure 6a also shows that SVEA-CMID outperforms SVEA-CMI-labels because of the added complexity of using the true state features and training the classifiers.\nHistory length. We discussed in Section 4.1 that achieving full conditional independence requires conditioning on the history of representations z 0:t-1 and actions a 0:t-1 . However, in practice (Section 5) we found that conditioning only on the most recent representation z t-1 and action a t-1 breaks the strongest correlation to achieve zero-shot generalisation. Figure 6b compares CMID with variations that condition on the previous two (CMID-history2) and three (CMID-history3) timesteps. The results show that it is harder to learn with the larger conditioning sets, which fail to achieve optimal performance as quickly, while CMID with one previous timestep works well in practice. CMID loss coefficient. The CMID loss coefficient α in Equation 7is a hyperparameter to be tuned to the task. We found α = 0.5 performs well in the cartpole task. The results in Figure 6c show that decreasing α decreases generalisation performance because the agent is not prioritising disentanglement, as well as reducing training performance because it is harder to learn the rare cases with this lower priority on disentanglement. 
Increasing α makes it harder to learn the optimal policy, decreasing the training performance.\nMomentum encoder. The momentum encoder is not a strictly necessary component for minimising CMI, but we found empirically that it improves stability as it does for some other RL algorithms (He et al., 2020;Laskin et al., 2020b). All algorithms used in our experiments are implemented with a momentum encoder, so we make use of the momentum encoder that is already available for many algorithms. Figure 6d shows the results for SVEA-CMID on the cartpole swingup task with and without a momentum encoder. CMID without a momentum encoder still improves the performance of SVEA but does not perform as well as the momentum encoder version and has higher variance.\nVisualising the learned representation. Existing disentanglement metrics assume independent factors of variation so are not suitable to measure disentanglement on correlated data (Träuble et al., 2021). Instead we conducted qualitative analysis of the learned representation to visualise the disentanglement. We use integrated gradients (Sundararajan et al., 2017) to attribute the encoder output value of each feature in the representation to the input image pixels. We overlay the attributions on the original image to create saliency maps showing the parts of the image that each representation feature focuses on. We show illustrative saliency maps in Figure 7. Implementation details and saliency maps for all representation features are provided in Appendix E. The saliency maps show that SVEA encodes many image features in one representation feature, while SVEA-CMID has designated features in the representation to focus on individual features in the image, such as pole length, which is necessary to distinguish between cartpole A and B. Robustness analysis. We analyse the robustness of the trained RL agents to unseen colours on the cartpole swingup task. Using the model at the end of training (before changing the correlation), we test the model on the same control objects but with unseen colours. We test on 216 different colours, using equally spaced RGB values. The results in Table 1 show that CMID achieves improved zero-shot generalisation performance on the unseen colours, in terms of the worst performing colour, the best performing colour and the average." }, { "figure_ref": [], "heading": "Limitations and future work", "publication_ref": [ "b1", "b14", "b39", "b37", "b46", "b31" ], "table_ref": [], "text": "Instead of the common practice of frame stacking, we stack representations when using CMID to avoid introducing causal relationship between variables in the stack of frames as discussed in Section 4.2. Future work could consider how to adapt CMID to allow for these more complex causal relationships. The kNN permutations approach to minimise CMI also adds computational complexity to update the encoder for each kNN, adding a 67% increase in run time on average for our experiments compared to the base RL algorithm. The number of kNN and the size of the representation could be reduced in scenarios where computation time is of high importance, and future work could consider more efficient ways to sample from the product of marginals distribution.\nWe evaluated our approach on tasks with correlation between object properties (that impact dynamics) and colour. 
This scenario already shows that state-of-the-art baselines suffer from a significant deterioration in performance under correlation shifts as well as being unable to learn an optimal policy in training for some tasks. As such, the colour correlations are sufficient to demonstrate the effectiveness of our approach in improving generalisation. However, future work could evaluate our approach on correlations with more complex distractors, such as background videos and camera angles.\nIn particular, we use a conditioning set containing only the most recent timestep in our experiments, but more complex environments can have strong correlations over multiple timesteps (e.g. background videos) which may require more history in the conditioning set (Albrecht and Ramamoorthy, 2016). Future work could consider using more representations and actions from history in the conditioning set efficiently to apply CMID to more complex environments and correlations.\nFinally, CMID learns a disentangled representation while exploring using the same strategy as used by the base RL algorithm. However, it is possible that the learning agent could discover a disentangled representation faster through a new exploration strategy that actively probes the environment to determine state structure. In the future, we plan to investigate the combination of CMID with recent advances in exploration (Henaff, 2019;Sontakke et al., 2021;Schäfer et al., 2022;Zhong et al., 2022;McInroe et al., 2023) to see whether these advances allow the CMID agent to more quickly discover disentangled representations." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we explored the problem of training with strong correlations and generalisation under correlation shifts in RL. Existing techniques for learning disentangled representations in RL are insufficient for real-world problems because they assume the ground truth features are independent, which is unlikely to hold in practice. We proposed Conditional Mutual Information for Disentanglement (CMID) to learn disentangled representations with correlated image features that requires only conditional independence between features. CMID is an auxiliary task that can be used with existing RL algorithms. We showed experimentally that CMID improves the training and generalisation performance of SVEA as the base RL algorithm as well as DrQ, CURL and TED baselines. CMID allows the RL agent to generalise under correlation shifts and continue learning without performance reduction as a step towards training on real-world correlated data.\nA Extended background" }, { "figure_ref": [], "heading": "A.1 Reinforcement Learning", "publication_ref": [ "b9" ], "table_ref": [], "text": "We use SVEA (Hansen et al., 2021) as the base RL algorithm for the CMID auxiliary task, which is an extension of the Soft Actor-Critic (SAC) algorithm (Haarnoja et al., 2018).\nSAC is an off-policy RL algorithm for continuous control. SAC learns a stochastic policy π that maximises the expected sum of rewards and the entropy of the policy. The critic Q is learned by minimising the loss:\nL Q = E (ot,at,ot+1,rt)∼D Q(o t , a t ) -r t -γ V (o t+1 )) 2 (8)\nwhere o t is the image observation and a t is the action at time t as defined in Section 3. SAC uses the minimum two Q networks, Q 1 and Q 2 , for the training updates to reduce overestimation of Q values. 
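To make this concrete, a sketch of the bootstrapped target appearing in Equation 8, using the clipped double-Q trick and the standard SAC soft value (our own rendering; `policy.sample` returning an action and its log-probability is an assumption):

```python
import torch

def soft_q_target(reward, next_obs, policy, q1_target, q2_target, alpha, gamma=0.99):
    """Bootstrapped target r_t + gamma * V(o_{t+1}) used in the critic loss of Eq. 8,
    with V given by the standard SAC soft value and the minimum over two Q networks."""
    with torch.no_grad():
        next_action, log_prob = policy.sample(next_obs)             # a' ~ pi(. | o_{t+1})
        q_min = torch.min(q1_target(next_obs, next_action),
                          q2_target(next_obs, next_action))         # clipped double-Q
        v_next = q_min - alpha * log_prob                           # soft value of o_{t+1}
        return reward + gamma * v_next

# critic loss: ((q1(o, a) - target) ** 2 + (q2(o, a) - target) ** 2).mean()
```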
The actor π is trained by minimising the loss:\nL π = E ot∼D E at∼π α SAC log(π(a t | o t )) -min i=1,2 Qi (o t , a t )(9)\nwhere Q is exponential moving average of the Q network parameters.\nSVEA aims to stabilise SAC training using a combination of both augmented and unaugmented images for Q learning with an modified loss:\nL SVEA Q = α SVEA L Q (o t , a t , o t+1 ) + β SVEA L Q (o aug t , a t , o t+1 )(10)\nHowever, the actor π is optimised on unaugmented images only, using the SAC policy loss in Equation 9." }, { "figure_ref": [], "heading": "A.2 Causality", "publication_ref": [ "b35", "b35" ], "table_ref": [], "text": "We will provide a brief overview of the relevant concepts from causal inference used in Section 4.1, and we refer the interested reader to the book by Pearl (2009) for details.\nA causal graph is a directed acyclic graph. The nodes in the graph correspond to random variables and the directed edges represent a causal relationship between two variables. The causal graph defines the (in)dependence between the variables. Two variables X and Y can be considered separated by Z in a causal graph if X is independent of Y given the conditioning set Z. In other words, once the value of Z is known, knowing the value of X will no longer influence the belief about Y . This condition is called separation in the graph and forms the link between blocking paths in the causal graph and (in)dependencies in the data. A path in the graph is a sequence of consecutive edges, and a backdoor path is the non-causal path between X and Y containing no descendants of X, i.e. the paths that flow \"backwards\" from X. A path between two variables X and Y is blocked by a set of nodes Z (the conditioning set) if the following conditions hold (Pearl, 2009):\n1. if the path contains a chain X → M → Y , then a node in the mediator set M is in Z 2. if the path contains a fork X ← U → Y , then a node in the confounder set U is in Z 3. if path contains a collider X → C ← Y then the collider node C is not in Z and no descendant of C is in Z.\nA collider node C naturally blocks a path that traces it, so conditioning on a collider (or a descendant of a collider), opens the path. As such, where conditioning on a collider is necessary, then the conditioning set should also include variables that block the newly opened path. If all paths between X and Y are blocked by Z then X is independent of Y given Z." }, { "figure_ref": [], "heading": "B Implementation details", "publication_ref": [ "b43" ], "table_ref": [ "tab_2" ], "text": "In this section, we provide the implementation details for CMID. Our codebase is built on top of the publicly released DrQ PyTorch implementation by Yarats et al. (2021) as well as the official implementation of SVEA by Hansen et al. (2021) Encoder. The encoder consists of 4 convolutional layers, each with a 3 × 3 kernel size and 32 channels. The first layer has a stride of 2, all other layers have a stride of 1. There is a ReLU activation between each of the convolutional layers. The convolutional layers are followed by a linear layer, normalisation, then a tanh activation. The encoder weights are shared between the actor π and critic Q.\nActor and critic. Both the actor π and critic Q networks are MLPs consisting of two layers and a hidden dimension of 1024. There is a ReLU activation after each layer except the last layer.\nCMID discriminator. The CMID discriminator is implemented as an MLP consisting of two layers and a hidden dimension of 1024. 
There is a ReLU activation after each layer except the last layer.\nThe same conditional discriminator is used for all features in the representation so the inputs are one-hot encoded. This means the input size is: 56 (representation or permuted representation) + 56 (one-hot encoding of previous representation) + action size.\nHyperparameters. We tuned learning rate and CMID hyperparameters by grid search; other hyperparameters follow the original SVEA implementation. Table 2 shows the hyperparameters for all tasks.\nHardware. For each experiment run we use a single NVIDIA Volta V100 GPU with 32GB memory and a single CPU." }, { "figure_ref": [], "heading": "C Additional results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "C.1 Zero-shot generalisation", "publication_ref": [], "table_ref": [], "text": "The zero-shot generalisation performance under correlation shift can be seen at the vertical dotted line in the graphs of Figure 4 and Figure 5. For completeness and to avoid loss of information caused by smoothing in the graphs, the numerical values of the zero-shot generalisation performance are provided in Table 3 andTable 4. " }, { "figure_ref": [ "fig_9", "fig_7" ], "heading": "E Saliency maps", "publication_ref": [ "b23", "b40" ], "table_ref": [], "text": "The full set of saliency maps, as described in Section 6, for each representation feature is provided in Figure 11 for a trained SVEA encoder and a trained SVEA-CMID encoder. The features are sorted in order of most active to least active based on the sum of attributions for each feature.\nTo create the saliency maps, we use the Captum open-source interpretability library for PyTorch (Kokhlikyan et al., 2020) to calculate the integrated gradients (Sundararajan et al., 2017) pixel attributions for each feature in the representation output of the encoder. We use an all black image as the baseline image for integrated gradients which is compared to the input image in Figure 7a. The absolute value of the attributions are overlayed onto the input image to create the saliency maps. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by the United Kingdom Research and Innovation (grant EP/L016834/1), EPSRC Centre for Doctoral Training in Robotics and Autonomous Systems (RAS) in Edinburgh. This work was also supported by the Academy of Finland Flagship programme: Finnish Center for Artificial Intelligence FCAI. The authors wish to acknowledge the generous computational resources provided by the Aalto Science-IT project and the CSC -IT Center for Science, Finland." }, { "figure_ref": [], "heading": "C.2 Evaluation on each scenario", "publication_ref": [], "table_ref": [], "text": "The results in Section 5 show the average returns over 10 evaluation episode for each seed, where a given scenario is selected based on the train/test probabilities depicted in Figure 3. To further assess performance, Figure 8 shows the average evaluation returns on 10 episodes for each object/colour combination on the cartpole swingup task with generalisation to reversed correlation. These results show that the correlation makes it difficult for SVEA to learn an optimal policy for any scenario, but with lower returns on the unlikely training scenarios in particular (cartpole A in green and cartpole B in blue). 
This explains the failure to generalise in Figure 4 when the correlation reverses, making the scenarios that were rare in training become frequent in testing at the vertical dotted line." }, { "figure_ref": [], "heading": "C.3 Correlation strength.", "publication_ref": [], "table_ref": [], "text": "The generalisation results in Section 5 show training with a 0.95 correlation (0.95 probability of being on the leading diagonal in Figure 3, and only 0.05 probability of being in the anti-diagonal scenarios).\nWe conducted further analysis of different correlation strengths, denoting the sum of probabilities on the leading diagonal as the correlation strength. The results for generalisation to the reversed correlation are shown in Figure 9. While the generalisation performance of SVEA decreases as the correlation gets stronger, SVEA-CMID consistently generalises well up to a very strong correlation of 0.99 at which point the performance deteriorates but still significantly improves the performance of SVEA in this setting." }, { "figure_ref": [], "heading": "C.4 Greyscale images.", "publication_ref": [], "table_ref": [], "text": "Our experiments use colour correlations to demonstrate the failure to generalise under correlation shifts. So we also demonstrate that the results still hold in greyscale images in Figure 10." }, { "figure_ref": [], "heading": "D Environment variations", "publication_ref": [], "table_ref": [], "text": "In Table 5, we provide a description of the differences between the two object variations (A and B) in each task, along with images of example observations for each object and colour combination. The exact specification of the world model for each task is available in our code. " } ]
Reinforcement Learning (RL) environments can produce training data with spurious correlations between features due to the limited amount of training data or its limited feature coverage. This can lead to RL agents encoding these misleading correlations in their latent representation, preventing the agent from generalising if the correlation changes within the environment or when deployed in the real world. Disentangled representations can improve robustness, but existing disentanglement techniques that minimise mutual information between features require independent features, and thus cannot disentangle correlated features. We propose an auxiliary task for RL algorithms that learns a disentangled representation of high-dimensional observations with correlated features by minimising the conditional mutual information between features in the representation. We demonstrate experimentally, using continuous control tasks, that our approach improves generalisation under correlation shifts, as well as improving the training performance of RL algorithms in the presence of correlated features.
Conditional Mutual Information for Disentangled Representations in Reinforcement Learning
[ { "figure_caption": "Figure 1 :1Figure1: The conditioning set for an MDP is highlighted in grey. Representation features z 1 t and z 2 t are conditionally independent when conditioned on z 1 0:t-1 and a0:t-1, where z n t denotes the nth dimension of zt.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: CMID architecture: The discriminator learns to discriminate between samples from p(zt | z n t-1 , at-1) and p(z n t | z n t-1 , at-1)p(z -n", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Illustration of correlations with testing on reversed correlation in the cartpole environment, ρ indicates the probability of an object/colour combination.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Generalisation to reversed correlation at the vertical dotted line. Returns are the average of 10 evaluation episodes, averaged over 5 seeds; the shaded region is the standard error.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Generalisation to uncorrelated features at the vertical dotted line. Returns are the average of 10 evaluation episodes, averaged over 5 seeds; the shaded region is the standard error.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Ablation experiments on cartpole with generalisation to reversed correlation at the vertical dotted line: (a) comparison with MI and labelled features, (b) varying history lengths in the conditioning set, (c) different values of the CMID coefficient α, and (d) CMID with and and without the momentum encoder.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Saliency maps: (a) raw image used to calculate attributions, (b) SVEA and (c) SVEA-CMID saliency maps showing two representation features. Brighter pixels correspond to higher attributions. SVEA-CMID has designated features focusing on the pole length which it has disentangled from other features.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Saliency maps for each representation feature of a trained (a) SVEA and (b) SVEA-CMID encoder on the cartpole swingup task, sorted in order of highest total attributions to lowest.", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Find k nearest neighbours of c n t in the batch, measured by the Euclidean distance: Calculate adversarial loss LA ← LA + log(1 -σ(D ϕ (zt, c n t ))) end for Update encoder parameters to minimise LA Output: Losses LD, LA, LRL and updated parameters ϕ, θ", "figure_data": "i ((c n t ) i -(c n t ′ ) i ) 2 t )) + log(1 -σ(D ϕ (z perm,n by shuffling k nearest neighbours, keeping z n t fixed Calculate discriminator loss LD ← LD + log σ(D ϕ (zt, c n Create permuted samples z perm,n t t , c n t )))end forUpdate discriminator parameters to minimise LDfor n ∈ (1, ..., N ) doCreate conditioning set c n t = (z n t-1 , at-1)", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Zero-shot generalisation to unseen colours on the cartpole task. 
Mean returns on 10 evaluation episodes over 5 seeds.", "figure_data": "
|              | SVEA          | SVEA-CMID      |
| Worst colour | 165.8 ± 13.8  | 379.1 ± 70.1   |
| Best colour  | 588.7 ± 87.2  | 834.1 ± 105.8  |
| Average      | 220.2 ± 27.4  | 692.2 ± 166.3  |
", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Hyperparameter values for both SVEA and SVEA-CMID.", "figure_data": "
| Hyperparameter                                     | Value                                                  |
| Replay buffer capacity                             | 100000                                                 |
| Initial steps before training begins               | 1000                                                   |
| Stacked frames (stacked representations for CMID)  | 3                                                      |
| Action repeat                                      | 2 for finger_spin, 8 for cartpole_swingup, 4 otherwise |
| Batch size                                         | 128                                                    |
| Discount factor                                    | 0.99                                                   |
| Optimizer                                          | Adam                                                   |
| Learning rate (actor, critic and encoder)          | 1e-3                                                   |
| SAC learning rate for α_SAC                        | 1e-4                                                   |
| Discriminator learning rate (CMID only)            | 1e-2                                                   |
| SVEA coefficients                                  | α_SVEA = 0.5, β_SVEA = 0.5                             |
| Target soft-update rate τ                          | critic 0.01, actor 0.05                                |
| Actor update frequency                             | 2                                                      |
| Actor log stddev bounds                            | [-10, 2]                                               |
| Latent representation dimension                    | 56                                                     |
| Image size                                         | (84, 84, 3)                                            |
| Image pad                                          | 4                                                      |
| Initial temperature                                | 0.1                                                    |
| CMID loss coef α                                   | 0.5 for cartpole_swingup, 0.1 otherwise                |
| k nearest neighbours                               | 5                                                      |
", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Environment images", "figure_data": "", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" } ]
Mhairi Dunion; Trevor Mcinroe; Kevin Sebastian Luck; Josiah P Hanna; Stefano V Albrecht
[ { "authors": "Rishabh Agarwal; Marlos C Machado; Pablo Samuel Castro; Marc G Bellemare", "journal": "", "ref_id": "b0", "title": "Contrastive behavioral similarity embeddings for generalization in reinforcement learning", "year": "2021" }, { "authors": "Stefano V Albrecht; Subramanian Ramamoorthy", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b1", "title": "Exploiting causality for selective belief filtering in dynamic Bayesian networks", "year": "2016" }, { "authors": "Cameron Allen; Neev Parikh; Omer Gottesman; George Konidaris", "journal": "", "ref_id": "b2", "title": "Learning markov state abstractions for deep reinforcement learning", "year": "2021" }, { "authors": "Y Bengio; Aaron Courville; Pascal Vincent", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b3", "title": "Representation learning: A review and new perspectives", "year": "2013" }, { "authors": "Christopher P Burgess; Irina Higgins; Arka Pal; Loic Matthey; Nick Watters; Guillaume Desjardins; Alexander Lerchner", "journal": "", "ref_id": "b4", "title": "Understanding disentangling in β-vae", "year": "2017" }, { "authors": "Mattia Cerrato; Marius Köppel; Roberto Esposito; Stefan Kramer", "journal": "", "ref_id": "b5", "title": "Invariant representations with stochastically quantized neural networks", "year": "2023" }, { "authors": "Thomas M Cover; Joy A Thomas", "journal": "Wiley-Interscience", "ref_id": "b6", "title": "Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing", "year": "2006" }, { "authors": "Mhairi Dunion; Trevor Mcinroe; Kevin Sebastian Luck; Josiah P Hanna; Stefano V Albrecht", "journal": "", "ref_id": "b7", "title": "Temporal disentanglement of representations for improved generalisation in reinforcement learning", "year": "2023" }, { "authors": "Christina M Funke; Paul Vicol; Kuan-Chieh Wang; Matthias Kümmerer; Richard S Zemel; Matthias Bethge", "journal": "", "ref_id": "b8", "title": "Disentanglement and generalization under correlation shifts", "year": "2022" }, { "authors": "Tuomas Haarnoja; Aurick Zhou; Pieter Abbeel; Sergey Levine", "journal": "PMLR", "ref_id": "b9", "title": "Soft actor-critic: Offpolicy maximum entropy deep reinforcement learning with a stochastic actor", "year": "2018" }, { "authors": "Hermanni Hälvä; Le Sylvain; Luc Corff; Jonathan Lehéricy; Yongjie So; Elisabeth Zhu; Aapo Gassiat; Hyvarinen", "journal": "", "ref_id": "b10", "title": "Disentangling identifiable features from noisy data with structured nonlinear ica", "year": "2021" }, { "authors": "Nicklas Hansen; Xiaolong Wang", "journal": "", "ref_id": "b11", "title": "Generalization in reinforcement learning by soft data augmentation", "year": "2021" }, { "authors": "Nicklas Hansen; Hao Su; Xiaolong Wang", "journal": "", "ref_id": "b12", "title": "Stabilizing deep q-learning with convnets and vision transformers under data augmentation", "year": "2021" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b13", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Mikael Henaff", "journal": "", "ref_id": "b14", "title": "Explicit explore-exploit algorithms in continuous state spaces", "year": "2019" }, { "authors": "Irina Higgins; Loïc Matthey; Arka Pal; Christopher P Burgess; Xavier Glorot; Matthew M Botvinick; Shakir Mohamed; Alexander Lerchner", "journal": "", "ref_id": "b15", "title": "β-vae: Learning basic visual 
concepts with a constrained variational framework", "year": "2017" }, { "authors": "Irina Higgins; Arka Pal; Andrei Rusu; Loic Matthey; Christopher Burgess; Alexander Pritzel; Matthew Botvinick; Charles Blundell; Alexander Lerchner", "journal": "", "ref_id": "b16", "title": "DARLA: Improving zero-shot transfer in reinforcement learning", "year": "" }, { "authors": " Pmlr", "journal": "", "ref_id": "b17", "title": "", "year": "2017" }, { "authors": "Aapo Hyvärinen; Hiroshi Morioka", "journal": "", "ref_id": "b18", "title": "Unsupervised feature extraction by time-contrastive learning and nonlinear ica", "year": "2016" }, { "authors": "Aapo Hyvärinen; Hiroshi Morioka", "journal": "", "ref_id": "b19", "title": "Nonlinear ica of temporally dependent stationary sources", "year": "2017" }, { "authors": "Aapo Hyvärinen; Ilyes Khemakhem; Hiroshi Morioka", "journal": "", "ref_id": "b20", "title": "Nonlinear independent component analysis for principled disentanglement in unsupervised deep learning", "year": "2023" }, { "authors": "Hyunjik Kim; Andriy Mnih", "journal": "PMLR", "ref_id": "b21", "title": "Disentangling by factorising", "year": "2018" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "", "ref_id": "b22", "title": "Auto-encoding variational bayes", "year": "2014" }, { "authors": "Narine Kokhlikyan; Vivek Miglani; Miguel Martin; Edward Wang; Bilal Alsallakh; Jonathan Reynolds; Alexander Melnikov; Natalia Kliushkina; Carlos Araya; Siqi Yan; Orion Reblitz-Richardson", "journal": "", "ref_id": "b23", "title": "Captum: A unified and generic model interpretability library for pytorch", "year": "2020" }, { "authors": "Michael Laskin; Kimin Lee; Adam Stooke; Lerrel Pinto; Pieter Abbeel; Aravind Srinivas", "journal": "", "ref_id": "b24", "title": "Reinforcement learning with augmented data", "year": "2020" }, { "authors": "Michael Laskin; Aravind Srinivas; Pieter Abbeel", "journal": "", "ref_id": "b25", "title": "CURL: Contrastive unsupervised representations for reinforcement learning", "year": "" }, { "authors": " Pmlr", "journal": "", "ref_id": "b26", "title": "", "year": "2020" }, { "authors": "Bonnie Li; Vincent François-Lavet; Thang Doan; Joelle Pineau", "journal": "", "ref_id": "b27", "title": "Domain adversarial reinforcement learning", "year": "2021" }, { "authors": "Francesco Locatello; Stefan Bauer; Mario Lucic; Gunnar Rätsch; Sylvain Gelly; Bernhard Schölkopf; Olivier Bachem", "journal": "PMLR", "ref_id": "b28", "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "year": "2019" }, { "authors": "Francesco Locatello; Ben Poole; Gunnar Rätsch; Bernhard Schölkopf; Olivier Bachem; Michaël Tschannen", "journal": "PMLR", "ref_id": "b29", "title": "Weakly-supervised disentanglement without compromises", "year": "2020" }, { "authors": "Bogdan Mazoure; Remi Tachet Des Combes; Thang Long Doan; Philip Bachman; Devon Hjelm", "journal": "", "ref_id": "b30", "title": "Deep reinforcement and infomax learning", "year": "2020" }, { "authors": "Trevor Mcinroe; Stefano V Albrecht; Amos Storkey", "journal": "", "ref_id": "b31", "title": "Planning to go out-of-distribution in offline-to-online reinforcement learning", "year": "2023" }, { "authors": "Sina Molavipour; Germán Bassi; Mikael Skoglund", "journal": "IEEE Transactions on Signal Processing", "ref_id": "b32", "title": "Neural estimators for conditional mutual information using nearest neighbors sampling", "year": "2021" }, { "authors": "Arnab Mondal; Arnab Bhattacharjee; Sudipto 
Mukherjee; Himanshu Asnani; Sreeram Kannan; A P Prathosh", "journal": "PMLR", "ref_id": "b33", "title": "C-mi-gan : Estimation of conditional mutual information using minmax formulation", "year": "2020" }, { "authors": "Sudipto Mukherjee; Himanshu Asnani; Sreeram Kannan", "journal": "PMLR", "ref_id": "b34", "title": "Ccmi : Classifier based conditional mutual information estimation", "year": "2020" }, { "authors": "Judea Pearl", "journal": "Cambridge University Press", "ref_id": "b35", "title": "Causality", "year": "2009" }, { "authors": "Jakob Runge", "journal": "PMLR", "ref_id": "b36", "title": "Conditional independence testing based on a nearest-neighbor estimator of conditional mutual information", "year": "2018" }, { "authors": "Lukas Schäfer; Josiah P Hanna; Filippos Christiano; Stefano V Albrecht", "journal": "", "ref_id": "b37", "title": "Decoupled reinforcement learning to stabilise intrinsically-motivated exploration", "year": "2022" }, { "authors": "Rui Shu; Yining Chen; Abhishek Kumar; Stefano Ermon; Ben Poole", "journal": "", "ref_id": "b38", "title": "Weakly supervised disentanglement with guarantees", "year": "2020" }, { "authors": "Arash Sumedh A Sontakke; Laurent Mehrjou; Bernhard Itti; Schölkopf", "journal": "PMLR", "ref_id": "b39", "title": "Causal curiosity: Rl agents discovering self-supervised experiments for causal representation learning", "year": "2021" }, { "authors": "Mukund Sundararajan; Ankur Taly; Qiqi Yan", "journal": "PMLR", "ref_id": "b40", "title": "Axiomatic attribution for deep networks", "year": "2017" }, { "authors": "Frederik Träuble; Elliot Creager; Niki Kilbertus; Francesco Locatello; Andrea Dittadi; Anirudh Goyal; Bernhard Schölkopf; Stefan Bauer", "journal": "PMLR", "ref_id": "b41", "title": "On disentangled representations learned from correlated data", "year": "2021" }, { "authors": "Saran Tunyasuvunakool; Alistair Muldal; Yotam Doron; Siqi Liu; Steven Bohez; Josh Merel; Tom Erez; Timothy Lillicrap; Nicolas Heess; Yuval Tassa", "journal": "Software Impacts", "ref_id": "b42", "title": "dm_control: Software and tasks for continuous control", "year": "2020" }, { "authors": "Denis Yarats; Ilya Kostrikov; Rob Fergus", "journal": "", "ref_id": "b43", "title": "Image augmentation is all you need: Regularizing deep reinforcement learning from pixels", "year": "2021" }, { "authors": "Amy Zhang; Clare Lyle; Shagun Sodhani; Angelos Filos; Marta Kwiatkowska; Joelle Pineau; Yarin Gal; Doina Precup", "journal": "PMLR", "ref_id": "b44", "title": "Invariant causal prediction for block mdps", "year": "2020" }, { "authors": "Amy Zhang; Rowan Mcallister; Roberto Calandra; Yarin Gal; Sergey Levine", "journal": "", "ref_id": "b45", "title": "Learning invariant representations for reinforcement learning without reconstruction", "year": "2021" }, { "authors": "Rujie Zhong; Duohan Zhang; Lukas Schäfer; Stefano V Albrecht; Josiah P Hanna", "journal": "", "ref_id": "b46", "title": "Robust on-policy sampling for data-efficient policy evaluation in reinforcement learning", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 259.38, 608.68, 128.97, 14.11 ], "formula_id": "formula_0", "formula_text": "max π E P,π [ ∞ t=0 [γ t R(s t , a t )]]." }, { "formula_coordinates": [ 3, 108, 678.38, 98.36, 9.68 ], "formula_id": "formula_1", "formula_text": "z t ′ , z t ′ +1 , ..., z t ′′ -1 , z t ′′ ." }, { "formula_coordinates": [ 4, 189.65, 220.94, 315.02, 22.31 ], "formula_id": "formula_2", "formula_text": "I(X; Y | Z) := p(x, y, z) log p(x, y, z) p(x, z)p(y|z) dxdydz(1)" }, { "formula_coordinates": [ 4, 209.85, 280.05, 294.82, 9.81 ], "formula_id": "formula_3", "formula_text": "I(X; Y | Z) = D KL [p(x, y, z) || p(x, z)p(y|z)].(2)" }, { "formula_coordinates": [ 4, 230.44, 315.3, 151.11, 8.74 ], "formula_id": "formula_4", "formula_text": "I(X; Y | Z) = 0 ⇐⇒ X ⊥ ⊥ Y | Z ." }, { "formula_coordinates": [ 4, 254.59, 586.85, 250.08, 12.69 ], "formula_id": "formula_5", "formula_text": "z 1 t ⊥ ⊥ z 2 t | z 1 0:t-1 , a 0:t-1 .(4)" }, { "formula_coordinates": [ 5, 108, 251.03, 397.08, 22.3 ], "formula_id": "formula_6", "formula_text": "| z n t-1 , at-1) and p(z n t | z n t-1 , at-1)p(z -n t | z n t-1 , at-1)." }, { "formula_coordinates": [ 5, 184.5, 526.22, 320.17, 12.84 ], "formula_id": "formula_7", "formula_text": "I(z n t ; z -n t | c n t ) = D KL p(z t , c n t ) || p(z n t , c n t )p(z -n t | c n t ) .(5)" }, { "formula_coordinates": [ 5, 108, 626.67, 396, 24.69 ], "formula_id": "formula_8", "formula_text": "t , c n t } is from the distribution p(z n t , c n t )p(z -n t | c n t )." }, { "formula_coordinates": [ 6, 172.99, 323.5, 331.68, 30.32 ], "formula_id": "formula_9", "formula_text": "L D = 1 N N i=0 log σ(D ϕ (z t , c n t )) + log(1 -σ(D ϕ (z perm,n t , c n t )))(6)" }, { "formula_coordinates": [ 6, 228.14, 386.22, 272.65, 30.32 ], "formula_id": "formula_10", "formula_text": "L A = α N N i=0 log(1 -σ(D ϕ (z t , c n t ))) . (7" }, { "formula_coordinates": [ 6, 500.8, 396.95, 3.87, 8.64 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 6, 108, 443.84, 396, 20.56 ], "formula_id": "formula_12", "formula_text": "f θ ′ : O → Z for the conditioning set representation z t-1 = f θ ′ (o t-1 )" }, { "formula_coordinates": [ 14, 186.57, 183.15, 318.1, 13.92 ], "formula_id": "formula_13", "formula_text": "L Q = E (ot,at,ot+1,rt)∼D Q(o t , a t ) -r t -γ V (o t+1 )) 2 (8)" }, { "formula_coordinates": [ 14, 175.5, 249.07, 329.17, 16.95 ], "formula_id": "formula_14", "formula_text": "L π = E ot∼D E at∼π α SAC log(π(a t | o t )) -min i=1,2 Qi (o t , a t )(9)" }, { "formula_coordinates": [ 14, 186.59, 317.67, 318.07, 13.3 ], "formula_id": "formula_15", "formula_text": "L SVEA Q = α SVEA L Q (o t , a t , o t+1 ) + β SVEA L Q (o aug t , a t , o t+1 )(10)" }, { "formula_coordinates": [ 14, 131.41, 536.97, 372.59, 49.03 ], "formula_id": "formula_16", "formula_text": "1. if the path contains a chain X → M → Y , then a node in the mediator set M is in Z 2. if the path contains a fork X ← U → Y , then a node in the confounder set U is in Z 3. if path contains a collider X → C ← Y then the collider node C is not in Z and no descendant of C is in Z." } ]
2023-12-14
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b6", "b7", "b8", "b9", "b10", "b15", "b20" ], "table_ref": [], "text": "R EMOTE Sensing Object detection (RSOD) plays an important role in many fields, such as national defense and security, resource managing and emergency rescuing. With the development of deep learning, many deep-netural-network (DNN) based detection methods [1]- [7] were proposed and achieved promising performance. Besides, a number of Remote Sensing (RS) datasets (e.g., HRSC2016 [8], NWPU VHR-10 [9] and DOTA series [10]) containing accurate and rich annotations were proposed to develop and benchmark RSOD methods. In these datasets, accurate location, scale, category and quantity information of objects are provided and greatly facilitate the development of RSOD. However, such rich annotation formats will lead to expensive labor costs when RSOD methods are transferred to the new RS data (e.g., images captured by new satellites).\nTo reduce the labor costs of annotating new RS data, researchers explored image-level annotations where only category information of objects are provided, and introduced image-level supervised detection methods [11]- [16]. These methods generally detect objects in a \"find-and-refine\" in varied scales, aspect ratios and shaking degrees for each point label, and used multi-instance learning (MIL) to select and refine the most suitable proposals as the final results.\nA straightforward way to achieve pointly supervised RSOD is to directly apply existing PSOD methods to RS images. These PSOD methods mainly follow the MIL pipeline, in which many proposals are preset for each point label, and then the optimal one is selected as the pseudo box label. However, this framework is unsuitable for the RSOD task due to the low recall of proposal bags caused by the extremely huge variation of scales and aspect ratios of RS objects. In this paper, we make the first attempt to achieve RSOD with single point supervision, and propose a point label upgrader (PLUG) to generate high quality pseudo box labels from single points. Specifically, the semantic response map is first learned under point-level supervision, and then pseudo boxes can be generated in shortest path paradigm. Due to the discard of proposal generation, our PLUG is less susceptible to the interference from varied scales and aspect ratios. Moreover, the dense and cluttered objects in RS images hamper the extraction of discriminative features, and thus degrade the qualities of generated pseudo boxes. Considering this issue, we propose a sparse feature guided semantic prediction (SemPred) module to extract general representations of sparse objects and utilize them to improve the quality of the pseudo boxes of dense objects. In this way, our PLUG can obtain more discriminative feature representations and improve the downstream detection performance.\nBy utilizing PLUG to transform single point labels into boxlevel ones, we can develop a PLUG-Det method to achieve PSOD tailored for RS images. The training pipeline of our PLUG-Det consists of three stages (as shown in Fig. 1). Firstly, our PLUG is trained under the single point supervision. Then, pseudo boxes are generated by performing inference using the well-trained PLUG. 
Finally, existing fully supervised detectors (e.g., Faster-RCNN) are trained using the pseudo boxes to achieve PSOD.\nIn summary, our main contributions are as follows.\n• We present the first study on single pointly supervised RSOD, and propose a simple yet effective method called PLUG to generate pseudo box annotations from single point ones. • To handle the challenge of dense and clustered objects in RS images, we propose a sparse feature guided semantic prediction approach to enhance the discriminative feature representation capability of our PLUG. • By using the generated pseudo boxes to train existing detectors (Faster-RCNN [21] in this paper), our method (i.e., PLUG-Det) achieves promising detection performance, and outperforms many existing weakly supervised detectors.\nThe remainder of this paper is organized as follows. In Section II, we briefly review the related works. Section III presents the details of the proposed method. Comprehensive experimental results are provided in Section IV, and Section V concludes this paper." }, { "figure_ref": [], "heading": "II. RELATED WORKS A. Object Detection in Remote Sensing Images", "publication_ref": [ "b21", "b22", "b23", "b26", "b27", "b28", "b29", "b31", "b32", "b33", "b6", "b34", "b35", "b22", "b36", "b37", "b38", "b39", "b22", "b40", "b5", "b41", "b42", "b43", "b44", "b45", "b46", "b47", "b48", "b50", "b51", "b52" ], "table_ref": [], "text": "RSOD has been extensively investigated in the past decades. Since convolutional neural network (CNN) was proposed, deep learning based RSOD methods have achieved significant improvements [22]. Compared to objects in natural images, RS objects have some special characteristics [23], including varied orientation, dynamic scales, densely packed arrangements, significant intra-class difference, etc. Therefore, RSOD methods generally focus on the solutions to the above unique issues.\nSpecifically, regarding the varied orientation issue, many researchers proposed new representation approaches, e.g., rotated bounding boxes [24]- [27], intersecting lines [28], [29], key-points [30]- [32] and rotated Gaussian distribution [33], [34]. Besides, some researchers proposed improved feature extraction modules [7], [35], [36], novel loss functions [23], [37] and new angle regression mechanisms [38], [39] to improve the detection performance on multi-oriented objects. Regarding the dynamic scales issue, Hu et al. [40] proposed a feature enhancement method that can extract more discriminative features containing structure, deep semantic and relation information simultaneously. In [23], [41], multi-scale features were used to extract the scale-invariant representation of objects. Besides, Li et al. [6] proposed a ground sample distance (GSD) identification sub-network and combined GSD information with the sizes of Regions of Interest (RoIs) to determine the physical size of objects. Regarding the densely packed arrangement issue, Yang et al. [42] proposed ClusDet, in which clustering regions were first extracted by a cluster proposal sub-network, and then fed to a specific detection sub-network for final prediction. Li et al. 
[43] proposed a density map guided detection method, where the density map can represent whether a region contains objects or not, and thus provide guidance for cropping images statistically.\nApart from the above studies, there are still many works trying to tackle other issues (e.g., excessive feature coupling [44], [45], unbalanced label assignment [46], various aspect ratios [47], [48]) in RSOD. Recently, Transformerbased object detection methods [49]- [51] have attracted much attention due to their strong modeling capability. Therefore, some Transformer-based RSOD methods [52], [53] have been proposed and achieved remarkable detection performance.\nThe aforementioned methods improve the detection performance under box-level supervision. In this paper, we aim at relieving the labor cost of annotating RS images, and propose a single pointly supervised RSOD method." }, { "figure_ref": [], "heading": "B. Image-level Supervised Object Detection", "publication_ref": [ "b10", "b15", "b53", "b55", "b14", "b15", "b56", "b57", "b10", "b13", "b58", "b59", "b57", "b10", "b11", "b12" ], "table_ref": [], "text": "To relieve the burden of box-level labeling, numerous image-level supervised detection methods [11]- [16], [54]- [56] were proposed, which can be categorized into class activation map (CAM) based and MIL-based methods.\nCAM-based methods [15], [16] detect objects based on the class activation maps. Li et al. [57] proposed a CAMbased detection framework, in which the mutual information between images was exploited, and the class-specific activation weights were learnt to better distinguish multi-class objects.\nSince CAM-based methods can only generate few proposals for each class [58], it is not suitable for RS images with multiple instances. [11]- [14] generally utilize off-the-shelf proposal generators (e.g., selective search [59], edge boxes [60] and sliding windows) to produce initial proposals, and then consider the proposal refinement process as an MIL problem to make final predictions [58]. For example, WSDDN [11] first generates proposals using edge boxes, then feeds the extracted features of proposals to two parallel branches for classification and detection scoring, respectively. The two obtained scores are used to classify positive proposals. Based on WSDDN, OICR [12] uses selective search to generate proposals, and adds an instance classification refinement process to enhance the discriminatory capability of the instance classifier. PCL [13] improves the original proposal bags to proposal clusters, so that spatially adjacent proposals with the same label can be assigned to the same category cluster." }, { "figure_ref": [], "heading": "MIL-based methods", "publication_ref": [ "b60", "b61", "b62", "b55", "b63", "b64", "b65", "b66", "b67", "b68" ], "table_ref": [], "text": "In 2014, Zhang et al. [61] firstly transferred the imagelevel supervised detection methods into the RSOD field. Specifically, they first performed saliency-based segmentation and negative sample mining to generate initial training samples, and then proposed an iterative training approach to refine the samples and the detector gradually. On this basis, Han et al. [62] proposed a Bayesian framework to generate training samples, in which a deep Boltzmann machine was employed to extract the high-level features. In image-level supervised RSOD field, the key challenging issues are the local discrimination, multi-instances and the imbalance between easy and difficult samples. 
Recent methods put efforts on the improvement against these issues. For example, regarding the local discrimination issue, Feng et al. [63] proposed a novel triple context-aware network, named TCANet, to learn complementary and discriminative visual features. Feng et al. [56] subsequently proposed a progressive contextual instance refinement method. Qian et al. [64] proposed a semantic segmentation guided pseudo label mining module to mine high-quality pseudo ground truth instances. Regarding the multi-instances issue, Wang et al. [65] proposed a unique multiple instance graph learning framework. Feng et al. [66] proposed to utilize the rotation-consistency to pursue all possible instances. Wang et al. [67] developed a novel multi-view noisy learning framework, named MOL, which uses reliable object discovery and progressive object mining to reduce the background interference and tackle the multi-instance issue. For the imbalanced easy and difficult samples, Yao et al. [68] performed dynamic curriculum learning to progressively learn the object detectors in an easy-to-hard manner. Qian et al. [69] incorporated a difficulty evaluation score into training loss to alleviate the imbalance between easy and difficult samples.\nThe aforementioned studies improve the detection performance of image-level supervised RSOD methods. However, since image-level annotations cannot provide enough location and quantity information, these methods cannot achieve reasonable performance when applying to RSOD task (see Sec. IV). In this paper, we sacrifice little labor cost and focus on single pointly supervised RSOD." }, { "figure_ref": [], "heading": "C. Point Supervision in Vision Tasks", "publication_ref": [ "b16", "b18", "b69", "b71", "b72", "b74", "b75", "b19", "b76", "b77", "b78", "b79", "b71", "b73", "b75", "b19", "b16", "b17", "b18" ], "table_ref": [], "text": "Recently, point-level labels gradually attract research attention due to its similar labeling time and richer labeling information. Point-level supervision have been extensively investigated in many vision tasks, including object detection [17]- [19], semantic segmentation [70]- [72], instance segmentation [73]- [75], panoptic segmentation [76], localization [20], [77], [78], infrared small target segmentation [79], [80], and so on.\nWu et al. [72] proposed a deep bilateral filtering network (DBFNet) for single pointly supervised semantic segmentation, in which bilateral filter was introduced to enhance the consistency of features in smooth regions and enlarge the distance of features on different sides of edges. Cheng et al. [74] proposed a multi-pointly supervised instance segmentation method, named Implicit PointRend, that can generate parameters of the mask prediction function for each object. Fan et al. [76] considered panoptic pseudo-mask generation as a shortest path searching puzzle, and used semantic similarity, low-level texture cues, and high-level manifold knowledge as traversing costs between adjacent pixels. Yu et al. [20] proposed a coarse point refine (CPR) method for single pointly supervised object localization, and the CPR method can select semantic-correlated points around point labels and find semantic center points through MIL learning.\nIn object detection field, Papadopoulos et al. [17] firstly introduced center-click annotation, in which the error distribution between two clicks is utilized to estimate object scales. Hence, two repetitive and independent center annotations are needed in their method. 
Different from that, our method try to generate pseudo boxes from single arbitrary point on the object mask. Ren et al. [18] proposed a unified object detection framework (i.e., UFO 2 ) that can handle different forms of supervision (e.g., tags, points, scribbles and boxes) simultaneously. Different from handling different forms of supervision, the emphasis of our method is better generating pseudo boxes from single points based on the characteristics of RS objects. Chen et al. [19] proposed an MIL based single pointly supervised detection framework that can adaptively generate and refine proposals via multi-stage cascaded networks. In their method, proposal bags are generated through some fixed parameters that control the proposal scales, aspect ratios, shaking degrees and quantities. However, due to the challenges in RS field (as mentioned in Introduction), their method suffers a performance degradation when applying to RS images. In this paper, we focus on the special challenges of RSOD and explore single pointly supervised detection methods tailored for RS images." }, { "figure_ref": [], "heading": "III. METHOD", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce the details of our method. We first introduce the architecture of the proposed point label upgrader (PLUG), which consists of the feature extraction module, the sparse feature guided semantic prediction (Sem-Pred) module and the instance label generation (ILG) module (see Fig. 2). Afterwards, we introduce the training losses of our PLUG. FC + ReLU FC + ReLU FC + ReLU Fig. 2. An overview of the proposed point label upgrader (PLUG), which is designed to transform point labels into pseudo boxes. Specifically, the feature extraction module extracts discriminative features from input images. Then, the sparse feature guided semantic prediction (SemPred) module takes the extracted features as its input and is responsible for the semantic response prediction. Finally, the instance label generation (ILG) module takes both the input images and the predicted response as its input to generate pseudo boxes." }, { "figure_ref": [], "heading": "A. Feature Extraction", "publication_ref": [ "b80", "b81", "b19" ], "table_ref": [], "text": "In our method, ResNet [81] with FPN [82] is used as the feature extraction module. The ResNet backbone extracts features of images of different scales, and FPN fuses the multiscale features to balance the contents of semantic and structure information. Following [20], the P2 layer (with 8× downsampling ratio) of FPN is used for subsequent processing." }, { "figure_ref": [], "heading": "B. Sparse Feature Guided Semantic Prediction", "publication_ref": [ "b82" ], "table_ref": [], "text": "Taking the extracted features as input, the sparse feature guided semantic prediction (SemPred) module is responsible for obtaining the semantic response of objects, in which object regions are activated in the specific category layers. Besides, the SemPred module can reduce the difficulty of discriminative feature extraction on dense objects. Specifically, we observe that the pseudo boxes generated on sparse objects are of higher quality than those generated on dense objects (see Sec. IV-F for details). Consequently, in our SemPred module, the general representation of sparse objects is used to enhance the extracted features, and thus improves the discriminative feature representation capability of our PLUG. The detailed architecture of the SemPred module is shown in Fig. 
2(a), which consists of three stages: meta feature encoding, feature aggregation and semantic prediction.\n1) Meta feature encoding. In this stage, the general representation (i.e., meta feature) of sparse objects is encoded from the extracted features. As shown in Fig. 2(b), meta feature encoding takes the extracted features as input, and obtains sparse features by selecting the features of images with single object. Then, the sparse features are fed to a predictor and the ILG module to generate the pseudo labels of sparse objects. With the sparse features and the pseudo labels, masked average pooling is performed to obtain the feature representation of each sparse object. In order to obtain more representative and stable meta features, all the sparse representations in the dataset are averaged according to their categories. Finally, C (the number of categories) meta features are obtained, each of which can represent the general information of objects in a specific category.\n2) Feature aggregation. After obtaining C meta features, C aggregated features are generated in this stage by using meta features to enhance the extracted features. The architecture of our aggregator is shown in Fig. 2(c). Specifically, for each meta feature, element-wise subtraction and multiplication are first performed. Then, the processed features are concatenated with the original feature to obtain the aggregated features. Note that, a fully-connected layer and a ReLU layer are used after each operation (i.e., subtraction, multiplication and concatenation).\n3) Semantic prediction. For each aggregated feature, a predictor (composed of a Linear layer and a Sigmoid function) is used for semantic response prediction. Since the representations in meta features are category-aware, different aggregated features are expert in predicting objects in corresponding categories. Hence, the specific layer of the semantic response from different aggregated features are selected and concatenated to generate the final semantic response. It is worth noting that the predictor in different branches and in the meta feature encoding module share the same architecture and parameters.\nNote that, in the SemPred module, meta feature encoding is performed in the training phase only. During inference, the meta features have been optimized and stored in advance, and thus the extracted features can be directly aggregated. In fact, the guidance of sparse objects can be considered as a self-distillation process [83], where the sparse features are the teacher and can transfer knowledge (high-quality features) to the student. With the guidance of sparse objects, the semantic response can be enhanced, and benefits the pseudo box generation in the following ILG module." }, { "figure_ref": [], "heading": "C. Instance Label Generation", "publication_ref": [ "b75", "b83" ], "table_ref": [], "text": "After obtaining the semantic response, the ILG module is designed to generate pseudo box annotations. The core of this module is to assign each pixel to its most likely object or background. Based on the assignment results, we can obtain the bounding box of each object by finding the circumscribed rectangle of the corresponding pixels.\nSpecifically, let L = {l 0 , l 1 , l 2 , ..., l L } denote the set of instances, where l 0 denotes background and {l 1 , l 2 , ..., l L } denote L objects. 
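As a concrete illustration of the box-extraction step mentioned above, the following minimal NumPy sketch (our own illustrative code under simple assumptions, not the authors' released implementation) derives the circumscribed rectangle of the pixels assigned to one instance; the pixel-to-instance assignment rule itself is formalized next.

```python
import numpy as np

def circumscribed_box(assignment: np.ndarray, instance_id: int):
    """Return the (x_min, y_min, x_max, y_max) box enclosing all pixels
    assigned to `instance_id` in an H x W assignment map (0 = background)."""
    ys, xs = np.where(assignment == instance_id)
    if ys.size == 0:
        return None  # no pixel was assigned to this instance
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy usage: a 5x5 assignment map with one instance labelled "1".
assignment = np.zeros((5, 5), dtype=int)
assignment[1:4, 2:5] = 1
print(circumscribed_box(assignment, 1))  # (2, 1, 4, 3)
```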
Each pixel $p$ on the image will be assigned to an instance according to\n$$\mathrm{Ins}(p) = \arg\min_{l \in L} \left\{ \mathrm{Cost}(p, p_l) \right\}, \quad (1)$$\nwhere $p_l$ represents the point label of instance $l$ that contains both location and instance information. $\mathrm{Cost}(p, p_l)$ denotes the cost between pixel $p$ and point label $p_l$. The core of the label assignment process in Eq. 1 is to find an instance with minimum cost for each pixel.\nThe cost calculation between pixel $p$ and point label $p_l$ is formulated as a shortest path problem. Specifically, we formulate the cost between $p$ and $p_l$ as a curvilinear integral of the second kind along a given path $\Gamma \in \{\Gamma_1, ..., \Gamma_n\}$. That is,\n$$\mathrm{Cost}(p, p_l) = \min_{\Gamma \in \{\Gamma_1, ..., \Gamma_n\}} \int_{\Gamma} \left[ C_{sem}(\vec{z}) + \lambda C_{edge}(\vec{z}) \right] \mathrm{d}\vec{z}, \quad (2)$$\nwhere $C_{sem}(\cdot)$ and $C_{edge}(\cdot)$ represent the semantic-aware neighbor cost and the edge-aware neighbor cost, respectively, and $\lambda$ is a hyper-parameter to balance these two terms [76]. Specifically, $C_{sem}(\cdot)$ is the $L_2$ distance of the semantic response between two adjacent pixels. $C_{edge}(\cdot)$ is the $L_1$ distance of the edge map (generated by the Sobel operator [84]) between two adjacent pixels, which can help better distinguish densely packed objects (see Sec. IV-C3). Note that $\mathrm{Cost}(p, p_0)$ is manually set to a fixed threshold $\tau$ ($\tau = 0.5$ in our method) to assign pixels that are "far from" all the instances to the background. Besides, since there is no analytical solution to the integral in Eq. 2, we use Dijkstra's algorithm to obtain its numerical solution." }, { "figure_ref": [], "heading": "D. Losses", "publication_ref": [ "b84", "b84", "b75" ], "table_ref": [], "text": "In the proposed PLUG, the ILG module is parameter-free, and the training process is only performed on the SemPred module. The losses used to train the SemPred module have three parts: the positive loss, the negative loss and the color prior loss.\n1) Positive loss. Since point labels provide accurate supervision at the annotated locations, we set these labeled pixels as positive samples, and design a positive loss to optimize the SemPred module to generate correct predictions at these positions. The positive loss is designed based on the standard focal loss [85]:\n$$\mathcal{L}_{pos} = -\frac{1}{N_{pos}} \sum_{j=1}^{N_{pos}} \sum_{i=1}^{C} \left[ y_{ji}\left(1 - y'_{ji}\right)^{\gamma} \log\left(y'_{ji}\right) + \left(1 - y_{ji}\right) {y'}_{ji}^{\gamma} \log\left(1 - y'_{ji}\right) \right], \quad (3)$$\nwhere $N_{pos}$ and $C$ denote the total number of positive samples and categories, respectively. $y$ and $y'$ represent the groundtruth category label and the prediction scores, respectively. We follow the general settings in [85] to set $\gamma$ to 2.\n2) Negative loss. In PSOD, only objects are labeled by single points, while the background regions are not annotated. Consequently, single point annotations cannot provide sufficient supervision on the background. In our method, we follow this basic setting of PSOD and propose an approach to provide supervision on the background regions. Specifically, we suppose that background pixels are dominant in amount in the unlabeled region, and then coarsely set all the unlabeled pixels as negative samples. Based on the coarse negative samples, we design a negative loss to enforce our model to better distinguish objects and background, i.e.,\n$$\mathcal{L}_{neg} = -\frac{1}{N_{neg}} \sum_{j=1}^{N_{neg}} \sum_{i=1}^{C} \left(1 - y_{ji}\right) {y'}_{ji}^{\gamma} \log\left(1 - y'_{ji}\right), \quad (4)$$\nwhere $N_{neg}$ is the number of negative samples.\n3) Color prior loss. We follow [76] to introduce a color prior loss, which encourages adjacent pixels with similar colors to be classified into the same category, and enhances the prediction stability of our SemPred module.
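Before formulating this color prior term, we give a minimal sketch of how the positive and negative terms in Eqs. (3) and (4) could be implemented; this is an illustrative PyTorch snippet under our own assumptions about tensor shapes, not the released training code.

```python
import torch

def plug_point_losses(pred, target, pos_mask, gamma=2.0, eps=1e-6):
    """pred:     (N, C) sigmoid scores y' for N sampled pixels.
    target:   (N, C) one-hot labels y (all-zero rows for unlabeled pixels).
    pos_mask: (N,) bool mask, True for point-annotated (positive) pixels."""
    pred = pred.clamp(eps, 1.0 - eps)
    # Per-pixel, per-category focal-style terms of Eq. (3).
    pos_term = target * (1.0 - pred) ** gamma * torch.log(pred)
    neg_term = (1.0 - target) * pred ** gamma * torch.log(1.0 - pred)
    n_pos = pos_mask.sum().clamp(min=1)
    l_pos = -(pos_term + neg_term)[pos_mask].sum() / n_pos
    # Eq. (4): all unlabeled pixels are coarsely treated as background,
    # so only the "not this category" term is applied to them.
    n_neg = (~pos_mask).sum().clamp(min=1)
    l_neg = -neg_term[~pos_mask].sum() / n_neg
    return l_pos, l_neg
```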
The color prior loss is formulated as\n$$\mathcal{L}_{col} = -\frac{1}{Z} \sum_{i=1}^{HW} \sum_{j \in N(i)} A_{i,j} \log\left( {y'_i}^{T} y'_j \right), \quad (5)$$\nwhere $y'_i$, $y'_j$ denote the category prediction scores of the $i$-th and $j$-th pixels, respectively. $A_{i,j}$ is the color prior affinity, and is obtained by thresholding the pixel similarity computed in the LAB color space (with a threshold of 0.3). $N(i)$ is the set of neighbor pixel indices of $i$. $Z = \sum_{i=1}^{HW} \sum_{j \in N(i)} A_{i,j}$ is the normalization factor.\nIn summary, the overall loss is the weighted summation of the above three losses, i.e.,\n$$\mathcal{L}_{all} = \mathcal{L}_{pos} + \alpha_1 \mathcal{L}_{neg} + \alpha_2 \mathcal{L}_{col}, \quad (6)$$\nwhere $\alpha_1$, $\alpha_2$ are two hyper-parameters to balance the different terms. In this paper, $\alpha_1$ and $\alpha_2$ are set to $N_{neg}/N_{pos}$ and 1, respectively. With the well-designed loss function, our PLUG can be well optimized and generate pseudo bounding boxes in an effective manner." }, { "figure_ref": [], "heading": "IV. EXPERIMENTS", "publication_ref": [ "b20", "b86", "b9", "b87", "b88", "b89", "b88" ], "table_ref": [], "text": "In this section, we first introduce the datasets and implementation details, and then combine the proposed PLUG with Faster-RCNN [21] to develop a PSOD method (i.e., PLUG-Det). Afterwards, we compare PLUG-Det with image-level, point-level and box-level supervised object detection methods. Moreover, we conduct ablation studies and make deep analyses to validate the effectiveness of our method. Finally, we develop a PLUG-Seg network by combining PLUG with Mask-RCNN [87], and conduct experiments to show the potential of our method in single pointly supervised instance segmentation (PSIS).\nA. Datasets and Implementation Details\n1) Datasets: To verify the effectiveness of our method, we conduct extensive experiments on the DOTA-v1.0 dataset [10], which contains 2806 large-scale RS images with 15 object categories, including plane (PL), baseball diamond (BD), bridge (BR), ground track field (GTF), small vehicle (SV), large vehicle (LV), ship (SH), tennis court (TC), basketball court (BC), storage tank (ST), soccer ball field (SBF), roundabout (RA), harbor (HB), swimming pool (SP) and helicopter (HC). Objects in the DOTA dataset are labeled with box annotations. Since the iSAID dataset [88] contains the corresponding mask labels of objects in the DOTA dataset, we randomly selected a point on the mask of each object as the groundtruth point label. We used the training set and validation set for model development and performance evaluation, respectively. Due to hardware memory limitations, we cropped the original images into 512×512 patches with 128 overlapped pixels, and used the cropped patches for training and inference. In the training phase, random flip was used for data augmentation.\n2) Implementation Details: We implemented our method based on the MMDetection [89] toolbox with an NVIDIA RTX 3090Ti GPU. The training of our PLUG-Det method consists of three stages: the training of PLUG, the inference of PLUG and the training of an existing detector (e.g., Faster-RCNN). In the first stage, the learning rate was initially set to 0.001 and decreased by a factor of 0.1 at the 8th and 11th epochs, respectively. We trained our PLUG for a total of 12 epochs with a batch size of 8. Besides, we used the stochastic gradient descent (SGD) algorithm [90] for optimization. In the second stage, pseudo boxes of the training set were obtained by performing inference using the trained PLUG. In this stage, the batch size was set to 1. In the third stage, we adopted the existing detector by default without modifying its hyper-parameters.
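Recalling the data preparation in Sec. IV-A1, the overlapped cropping of the large DOTA images can be sketched as follows; this is an illustrative NumPy snippet under our own assumption about border handling, not the preprocessing code actually released.

```python
import numpy as np

def crop_patches(image: np.ndarray, patch: int = 512, overlap: int = 128):
    """Split an H x W x C image into overlapping patches, as done for the
    DOTA images (512x512 patches with 128 overlapped pixels)."""
    stride = patch - overlap  # 384-pixel step between patch origins
    h, w = image.shape[:2]
    patches = []
    for y in range(0, max(h - overlap, 1), stride):
        for x in range(0, max(w - overlap, 1), stride):
            # Shift the last patch back so it stays inside the image.
            y0, x0 = min(y, max(h - patch, 0)), min(x, max(w - patch, 0))
            patches.append(((x0, y0), image[y0:y0 + patch, x0:x0 + patch]))
    return patches  # list of ((x_offset, y_offset), patch) pairs

# Toy usage on a blank 1024x1024 "image".
tiles = crop_patches(np.zeros((1024, 1024, 3), dtype=np.uint8))
print(len(tiles))  # 9 patches for a 1024x1024 input
```

With a 128-pixel overlap, an object cut by one patch border is likely to appear intact in a neighboring patch, which is the usual motivation for overlapped tiling.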
Taking Faster-RCNN with ResNet50 as an example, the learning rate was initially set to 0.005, and the optimizer was SGD with 1× training schedule. Other training settings were kept as the default values in MMDetection [89].\nThe training time of the three stages are 4.8, 6 and 3.1 hours, respectively. The total training time is the summation of the time spent in each stage, and is about 14 hours.\n3) Evaluation Metrics: We used mIoU between generated pseudo boxes and groundtruth boxes to evaluate the performance of PLUG. Besides, mIoU s , mIoU m and mIoU l were used as the indicators to evaluate the quality of pseudo boxes on small, medium and large objects, respectively. Moreover, we evaluated the performance of PLUG-Det and its variants by reporting the mAP 50 (averaged over IoU values with the threshold being set to 0.5) for all categories and the AP 50 for each category. Similarly, mAP s , mAP m and mAP l were used to evaluate the detection performance on small, medium and large objects, respectively." }, { "figure_ref": [ "fig_1" ], "heading": "B. Comparison to the State-of-the-art Methods", "publication_ref": [ "b10", "b11", "b11", "b85", "b49" ], "table_ref": [ "tab_0", "tab_0" ], "text": "In this subsection, we use the pseudo boxes generated by different methods to train a Faster-RCNN detector, and compare the detection performance of our PLUG-Det with existing image-level supervised and single pointly supervised detection methods. Moreover, Faster-RCNN with groundtruth box-level supervision is also included to provide upper bound results for reference.\nTable I shows the AP 50 values achieved by different detection methods. It can be observed that image-level supervised detectors (i.e., WSDDN [11], OICR [12], OICR-FR [12]) achieve very low detection accuracy. Compared to those detectors, PSOD methods achieve better detection performance due to the extra coarse position and quantity information introduced by point annotations. Specifically, P2BNet-FR achieves an mAP 50 score of 0.156, and can further achieve a 0.029 Besides, our method can generalize to different downstream detectors. We additionally use the one stage detector FCOS [86] and the Transformer based detector Deformable DETR [50] to validate the generalization capability of our method. As shown in Table I, PLUG-FCOS and PLUG-Deformable DETR can achieve 0.360 and 0.322 in terms of mAP 50 , and are 66.2% and 55.8% of the performance of each fully supervised detectors, respectively. The consistent performance ratios compared to respective fully supervised detectors demonstrate the generality of our method.\nFigure 3 shows the qualitative results on eight typical scenes achieved by different detection methods. It can be observed that our PLUG-Det can achieve better detection performance than other state-of-the-art image-level supervised and single pointly supervised detectors, especially on challenging scenes. Specifically, image-level supervised detectors (e.g., OICR-FR) may bring false alarms (e.g., Scene C) and miss detection (e.g., Scene F) due to its insufficient supervision. Besides, single pointly supervised detector P2BNet-FR has worse scale and aspect ratio adaptability compared with our method. For example, the vehicles in Scene A with large aspect ratios cannot be correctly detected by P2BNet-FR, but can be better detected by our method." }, { "figure_ref": [ "fig_3", "fig_4", "fig_3" ], "heading": "C. 
Ablation Study", "publication_ref": [ "b80", "b81" ], "table_ref": [ "tab_2", "tab_2", "tab_3", "tab_3", "tab_4" ], "text": "In this section, we conduct ablation studies to validate the effectiveness of our method.\n1) Investigation of the Feature Extraction Module: We use ResNet [81] with FPN [82] as the feature extraction module of our PLUG. Here, we compare the performance of our feature extraction module with different backbones (i.e., ResNet18, ResNet50 and ResNet101). We first evaluate the quality of generated pseudo boxes on the training set. As shown in Table II, our PLUG can achieve mIoU scores of 0.531, 0.549 and 0.558 with ResNet18, ResNet50 and ResNet101 backbones, respectively. We also evaluate the downstream detection performance on the validation set. As shown in Table II predicted semantic response maps. As shown in Fig. 4, objects of different categories can be well distinguished from background, and the response regions basically fit object shapes. Second, we validate the category discrimination capability of the SemPred module by visualizing the variation of masked mean response1 on different category layers. As shown in Fig. 5, each object is only strongly activated on a single category layer. These results clearly demonstrate the effectiveness of the SemPred module on recognizing and classifying objects from backgrounds. Sparse feature guidance. In the SemPred module, the general representations of sparse objects are used to aggregate the extracted features from backbones. To validate the sparse feature guidance scheme, we replaced the SemPred module with a vanilla predictor (a Linear layer followed by a Sigmoid function), and developed a variant (i.e., \"vanilla\" in Table III ) of PLUG without the guidance of sparse objects. As shown in Table III, the mIoU score is improved from 0.497 to 0.549 when sparse feature guidance is performed, and the mAP 50 value of our PLUG-Det is also improved from 0.356 to 0.423 correspondingly. It demonstrates that the proposed sparse feature guidance scheme can improve the quality of generated pseudo boxes, and thus benefits to the downstream detection performance. Moreover, we compare the semantic response maps produced by our PLUG and its variant (vanilla and SemPred). We can draw the following conclusions from Fig. 6: • The sparse feature guidance scheme can improve the recognition capability of our PLUG on confusing background. As shown in Scene A, the plane (PL) and the boarding bridges are similar in color space. With the guidance of sparse features, our PLUG can better distinguish objects from background.\n• The sparse feature guidance scheme can improve the recognition capability of our PLUG on dense objects. For densely packed objects of the same category (e.g., Scenes B and C), some objects are weakly activated when sparse feature guidance is not performed. In contrast, by performing sparse feature guidance, the features of each object can be enhanced, and the intra-class instance recognition performance is improved. Besides, sparse feature guidance can also improve the recognition capability of our PLUG on densely packed objects of different categories (e.g., the ships (SH) and harbor (HB) in Scene G).\n• The sparse feature guidance scheme can enhance the capability of our PLUG to distinguish objects in different categories but with similar appearance. 
As shown in Scene E, the tennis court (TC) and basketball court (BC) have similar appearance, and our PLUG without sparse feature guidance cannot distinguish them and produces falsely mixed response. Since category-aware meta features are used to aggregate the extracted features, the enhanced features have stronger category characteristics. Consequently, our PLUG with sparse feature guidance can effectively handle this mixed response issue and can well distinguish similar objects.\nCross-category correlation of meta features. Meta features are the general representation of objects in different categories. Here, we visualize the cosine similarity map between each pair of meta features to investigate their correlation. As shown in Fig. 7, apart from the elements on the diagonal, there are still some pairs of meta features (e.g., large vehicle (LV) vs. small vehicle (SV), plane (PL) vs. helicopter (HC), basketball court (BC) vs. tennis court (TC)) highly correlated due to the similar appearance of the objects. This observation is consistent with the visualization results in Fig. 6, and can demonstrate the effectiveness of the usage of meta features.\n3) Effectiveness of the Edge-aware Neighbor Cost: In this subsection, we validate the effectiveness of the edgeaware neighbor cost in the ILG module. Figure 8 shows the likelihood maps P map = 1 -C map with and without using edge-aware neighbor cost on an example scene, where the values represent the likelihood of a pixel belonging to a specific instance. It can be observed that densely packed adjacent instance can not be well distinguished without using edge-aware neighbor cost. That is because, the semantic-aware neighbor cost encourages the labeled points diffusing to the adjacent semantic-similar areas, and tends to consider the densely packed objects as a single instance. When the edgeaware neighbor cost is introduced, the diffusion of labeled points can stop at the boundaries, and these densely packed objects can be better distinguished.\nNote that, the value of λ in Eq. 2 should be properly set to ensure preferable growth from point labels. We compare the quality of pseudo boxes and the detection performance with respect to different λ values. As shown in Table IV, when λ is set to 0.5, our PLUG can generate pseudo boxes of the highest quality, and our PLUG-Det can achieve the best detection performance. Consequently, we set λ to 0.5 to balance the semantic-aware and edge-aware neighbor cost.\n4) Effectiveness of Losses: In this subsection, we conduct ablation studies to validate the effectiveness of the proposed losses. As shown in Table V, our PLUG can only achieve an mIoU of 0.318 when the positive loss is used only. That is because, the background can not be considered in the training process, and thus degrades the recognition capability of our PLUG to distinguish objects and background. When the " }, { "figure_ref": [], "heading": "D. Analyses of the Selecting Strategy of Point Labels", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "In the preceding experiments, point labels were randomly selected from object masks. How will the locations of the selected points affect the performance? In this subsection, we implement three kinds of point labels, and conduct experiments to analyze their impacts on the quality of generated pseudo boxes and downstream detection performance. 
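The three strategies examined here can be derived from a binary instance mask as in the following sketch (a simple NumPy illustration written by us, not the annotation tool actually used; in particular, the 'center' point is approximated by the mask pixel closest to the centroid); each strategy is described in detail next.

```python
import numpy as np

def sample_point_label(mask: np.ndarray, strategy: str = "random", seed: int = 0):
    """Pick a (row, col) point label from a binary instance mask (H x W)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1)            # all foreground pixels
    center = pts.mean(axis=0)                   # mask centroid (may be off-mask)
    dists = np.linalg.norm(pts - center, axis=1)
    if strategy == "center":                    # mask pixel closest to the centroid
        return tuple(pts[dists.argmin()])
    if strategy == "corner":                    # mask pixel farthest from the center
        return tuple(pts[dists.argmax()])
    rng = np.random.default_rng(seed)           # "random" strategy
    return tuple(pts[rng.integers(len(pts))])
```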
Specifically, we adopt three different labeling strategies, i.e., selecting the point in the center, selecting the point on the corner, and randomly selecting a point on the mask. Note that, since there is no clear definition about the corners of objects, we just selected the point (on the mask) that is farthest from the center point as its \"corner\" label. Objects with different point labels are shown in Fig. 9.\nTable VI shows the quality of pseudo boxes and the detection performance of our method with different point labels. It can be observed that our PLUG with center point labels can achieve the most superior results, which are 0.553 in terms of mIoU and 0.438 in terms of mAP 50 . Besides, when the randomly selected points are used, the performance is slightly decreased (0.549 and 0.427 in terms of mIoU and mAP 50 , respectively). Moreover, the corner labels result in a larger degree of performance degradation, in which mIoU and mAP 50 are decreased to 0.518 and 0.406, respectively.\nIt is worth noting that the performance of our method with corner point labels is inferior than that with center and random point labels. That is because, the edge-aware neighbor cost used in the ILG module hinders the pixel diffusion of corner points. Specifically, the edge-aware neighbor cost is utilized to help stopping the diffusion of labeled points at boundaries, and thus prevent the labeled points from spreading towards the background areas (see Sec. IV-C3). However, since the corner points are located on the boundaries of objects, the edge-aware cost may hinder the diffusion of the labeled points to the internal area of the object, as their paths pass through the edges. For example, as shown in the P map of the instance 6 in Fig. 8, the ILG module can recognize its correct regions with the semantic-aware cost only. However, when the edge-aware cost is introduced, the labeled points can only be diffused to background areas." }, { "figure_ref": [], "heading": "E. Extension to Rotated Object Detection", "publication_ref": [ "b25" ], "table_ref": [ "tab_7" ], "text": "In our method, the ILG module utilize semantic and edge information to assign pixels to its most likely object or background, and use the circumscribed rectangle of assigned pixels as pseudo boxes. Therefore, by further transforming the circumscribed rectangle to the one with the minimum area, our method can be easily extended to the task of rotated object detection. we conduct experiments to validate the effectiveness of our method on rotated object detection. Specifically, we use the modified PLUG to generate rotated pseudo boxes, and use ROITrans [26] as the downstream rotated detector to develop PLUG-ROITrans. The experimental results of our PLUG-ROITrans (under single point supervision) and the original ROITrans (under ground-truth rotated box supervision) are shown in Table VII. It can be observed that our PLUG-ROITrans can achieve 0.351 in terms of mAP 50 , which is 51.6% of the performance of fully supervised ROITrans. The results demonstrate the preliminary effectiveness of our method in pointly supervised rotated object detection in RS images." }, { "figure_ref": [ "fig_7", "fig_7", "fig_7", "fig_7" ], "heading": "F. Further Analyses on Dense Objects", "publication_ref": [], "table_ref": [], "text": "As mentioned in Introduction, dense objects introduce challenges to discriminative feature extraction, and thus affect the quality of generated pseudo boxes. 
In this subsection, we conduct a series of experiments to analyze the influence of dense objects.\nFirst, we coarsely suppose that the density of objects are positively related to their numbers in an image patch (with same area). Then, we split the DOTA dataset into several subsets containing different number of objects, and quantitatively evaluate the quality of generated pseudo boxes with respect to the object density. Note that, we do not perform sparse feature guidance in our PLUG to better demonstrate the Second, considering that the number, adjacent distance and appearance of objects are the three key factors that influence the quality of pseudo boxes, we design specific experiments to quantitatively investigate the impact of the first two factors by keeping the object appearance unchanged. Specifically, we use the \"copy-and-paste\" strategy (see the sub-figures with blue boxes in Fig. 11(a)) to generate multiple identical objects with controllable density. As shown in Fig. 11(b) and 11(c), the quality of generated boxes degrades as the object density increases.\nFinally, we keep the density of the semantic response maps unchanged and investigate the influence of densely packed objects to the discriminative feature extraction. Specifically, we shift and fuse the single-object response to synthesize a pseudo dense-object response map. In this way, we build a control group with identical object density in the response maps but different feature representations in the feature extraction module. As shown in Fig. 11(b) and Fig. 11(c), the mIoU scores of the pseudo boxes generated from the control group are significantly higher than those obtained from the images with dense objects. The experimental results clearly validate that densely packed objects in RS images can hinder the discriminative feature extraction and thus degrade the quality of pseudo boxes. With our sparse feature guidance scheme, the mIoUs of generated pseudo labels in different density intervals are increased. The qualitative results demonstrate the effectiveness of our method in handling the densely packed objects." }, { "figure_ref": [ "fig_8", "fig_8" ], "heading": "G. Extension to Instance Segmentation", "publication_ref": [ "b87", "b90", "b86", "b86", "b90", "b90", "b86", "b90" ], "table_ref": [], "text": "Since the ILG module in our PLUG produces instance label for each object from their point annotation, our PLUG can be easily extended to pointly supervised instance segmentation (PSIS). Specifically, we concatenate our PLUG with Mask-RCNN, and developed a PLUG-Seg network to achieve PSIS in RS images. Besides, we used the groundtruth mask labels in the iSAID dataset [88], and adopted the mask-level mAP 50 , mAP s , mAP m and mAP l as quantitative metrics for performance evaluation. We compare our PLUG-Seg with BoxInst [91] and Mask-RCNN [87], which use box-level and mask-level supervision for instance segmentation, respectively. We also followed these two methods [87], [91] to evaluate the performance of object detection and instance segmentation simultaneously. The experimental results are shown in Table VIII and Fig. 12.\nIt can be observed from Table VIII that our PLUG-Seg can achieve an mAP 50 of 0.435 for object detection and an mAP 50 of 0.406 for instance segmentation. 
With single point annotation for each instance, our PLUG-Seg can achieve 68%/81% and 66%/65% accuracy in object detection/instance segmentation as compared to box-level (i.e., BoxInst [91]) and masklevel (i.e., Mask-RCNN [87]) supervised methods, respectively. The qualitative results in Fig. 12 also demonstrate the promising performance of our PLUG-Seg. It is worth noting that our PLUG-Seg can achieve better performance than BoxInst [91] on scenes with complex backgrounds (e.g., the roundabout and small vehicles in Scene E and the bridge in Scene F). These experimental results demonstrate that single point annotation can provide sufficient supervision for instance segmentation." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we proposed a method to learn remote sensing object detection with single point supervision. In our method, a point label upgrader (PLUG) is designed to generate pseudo boxes from point labels. We also handle the dense object issue in remote sensing images by designing a sparse feature guided semantic prediction module. Experimental results validate the effectiveness and superiority of our method. In the future, we will further extend our method to generate rotated pseudo boxes from single point labels, and investigate more stable and efficient pseudo label generation schemes. We hope our study " } ]
Pointly Supervised Object Detection (PSOD) has attracted considerable interest due to its lower labeling cost as compared to box-level supervised object detection. However, the complex scenes, densely packed and dynamic-scale objects in Remote Sensing (RS) images hinder the development of PSOD methods in the RS field. In this paper, we make the first attempt to achieve RS object detection with single point supervision, and propose a PSOD method tailored for RS images. Specifically, we design a point label upgrader (PLUG) to generate pseudo box labels from single point labels, and then use the pseudo boxes to supervise the optimization of existing detectors. Moreover, to handle the challenge of the densely packed objects in RS images, we propose a sparse feature guided semantic prediction module which can generate high-quality semantic maps by fully exploiting informative cues from sparse objects. Extensive ablation studies on the DOTA dataset have validated the effectiveness of our method. Our method can achieve significantly better performance as compared to state-of-the-art image-level and point-level supervised detection methods, and reduce the performance gap between PSOD and box-level supervised object detection. Code is available at https://github.com/heshitian/PLUG.
Learning Remote Sensing Object Detection with Single Point Supervision
[ { "figure_caption": "Fig. 3 .3Fig. 3. Qualitative results obtained by different object detection methods on the DOTA validation set. The correctly detected results are marked by yellow boxes, and the falsely detected results are marked by red boxes. Gradually darker colors represent stronger supervision.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 . 2 )Fig. 5 .425Fig. 4. The semantic response of images predicted by the SemPred module.Here, the layer of the corresponding category is visualized.", "figure_data": "", "figure_id": "fig_2", "figure_label": "425", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. The heatmaps of specific response layers produced by our SemPred module with and without performing sparse feature guidance (SFG).", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. The cosine similarities between different pairs of representations in meta features. Here, darker colors indicate larger values (i.e., higher similarity).", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .Fig. 9 .89Fig. 8. The likelihood maps generated by the ILG module with and without using the edge-aware neighbor cost. Note that, we visualize Pmap = 1 -Cmap for better visual analyses, where Cmap is the cost map for each labeled point, and the values on the cost map can represent the costs from each pixel to the labeled point. Consequently, Cmap ∈ H × W × N , where H and W are the height and width of images and N is the number of objects in the image. Based on Cmap, Pmap can represent the likelihood of each pixel belonging to a specific instance, and thus can more intuitively show the diffusion of labeled points.", "figure_data": "", "figure_id": "fig_5", "figure_label": "89", "figure_type": "figure" }, { "figure_caption": "Fig. 10 .10Fig. 10. The IoU of generated pseudo boxes in images with different numbers. Here, four exampled scenes are shown for visualization. Note that, the star indicates that the mean IoU of pseudo boxes is 0.520 in images with single object.", "figure_data": "", "figure_id": "fig_6", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Fig. 11 .11Fig. 11. Illustrations and analyses of the influence of densely packed objects to the quality of generated pseudo boxes.", "figure_data": "", "figure_id": "fig_7", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 12 .12Fig. 12. Qualitative results achieved by different instance segmentation methods on the DOTA dataset. The correctly detected results are marked by yellow boxes, and the falsely detected results are marked by red boxes. Gradually darker colors represent stronger supervision. The predicted instance masks are randomly colored.", "figure_data": "", "figure_id": "fig_8", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "PRECISION SCORES ACHIEVED BY DIFFERENT DETECTION METHODS ON THE DOTA DATASET. 
HERE, DEF-DETR REPRESENTS DEFORMABLE DETR.", "figure_data": "Method Supervision BackbonePLBDBRGTFSVLVSHCategories TCBCSTSBFRAHBSPHCmAP50WSDDN [11]ImageVGG16 0.003 0.009 0.000 0.005 0.000 0.001 0.000 0.003 0.000 0.000 0.013 0.000 0.010 0.010 0.000 0.004WSDDN [11]ImageResNet50 0.014 0.064 0.001 0.013 0.021 0.030 0.016 0.034 0.004 0.025 0.019 0.053 0.011 0.044 0.004 0.023OICR [12]ImageVGG16 0.007 0.100 0.000 0.116 0.037 0.101 0.023 0.089 0.000 0.056 0.145 0.000 0.042 0.036 0.000 0.050OICR [12]ImageResNet50 0.047 0.104 0.007 0.042 0.022 0.061 0.022 0.068 0.031 0.044 0.096 0.102 0.061 0.047 0.016 0.051OICR-FR [12]ImageResNet50 0.042 0.038 0.000 0.002 0.075 0.301 0.037 0.077 0.011 0.132 0.033 0.159 0.050 0.120 0.001 0.072FCOS [86]BoxResNet50 0.800 0.504 0.296 0.212 0.603 0.796 0.821 0.914 0.452 0.612 0.407 0.460 0.751 0.213 0.313 0.544def-DETR [50]BoxResNet50 0.799 0.576 0.377 0.491 0.600 0.772 0.843 0.924 0.414 0.624 0.457 0.396 0.721 0.455 0.324 0.577Faster-RCNN [21]BoxResNet50 0.850 0.665 0.435 0.587 0.588 0.831 0.833 0.933 0.493 0.634 0.590 0.589 0.791 0.534 0.373 0.648P2BNet-FR [19]PointResNet50 0.061 0.063 0.111 0.260 0.266 0.066 0.368 0.016 0.051 0.270 0.049 0.272 0.105 0.386 0.001 0.156P2BNet-FR* [19]PointResNet50 0.016 0.002 0.118 0.168 0.397 0.073 0.246 0.017 0.190 0.465 0.009 0.518 0.060 0.358 0.140 0.185PLUG-FCOS(ours)PointResNet50 0.353 0.340 0.226 0.111 0.296 0.685 0.603 0.874 0.246 0.455 0.192 0.468 0.349 0.171 0.039 0.360PLUG-def-DETR (ours)PointResNet50 0.250 0.398 0.241 0.166 0.288 0.614 0.547 0.795 0.090 0.383 0.160 0.345 0.227 0.353 0.047 0.322PLUG-FR (ours)PointResNet50 0.509 0.543 0.291 0.284 0.248 0.672 0.436 0.874 0.214 0.462 0.360 0.543 0.438 0.446 0.086 0.427* means that P2BNet is optimized in a two-stage cascaded manner.", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "OF THE PSEUDO BOX QUALITY AND DETECTION PERFORMANCE ACHIEVED BY DIFFERENT BACKBONES. HERE, #PARAM REPRESENTS THE NUMBER OF PARAMETERS, AND FLOPS IS CALCULATED WITH A 512×512 INPUT IMAGE.", "figure_data": "backboneFLOPs#ParammIoUPseudo box quality mIoUs mIoUmmIoU lmAP 50Detection performance mAPs mAPmmAP lResNet1812.17 G13.37 M0.5310.5240.5510.5080.4120.3300.4570.262ResNet5024.88 G26.32 M0.5490.5390.5760.5330.4270.3290.4740.338ResNet10144.36 G45.31 M0.5580.5480.5840.5640.4360.3380.4900.384SceneResponsePLBDBRGTFSVScene Response Scene ResponseLVSHTCBCSTSBFRAHBSPHC", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "OF THE PSEUDO BOX QUALITY AND DETECTION PERFORMANCE ACHIEVED BY OUR PLUG WITH DIFFERENT SEMANTIC PREDICTION MODULES. 
NOTE THAT, THE VANILLA AND SEMPRED MODULE REPRESENT THE METHOD WITHOUT AND WITH PERFORMING SPARSE FEATURE GUIDANCE, RESPECTIVELY.", "figure_data": "semantic prediction modulePseudo box quality mIoU mIoUs mIoUmmIoU lDetection performance mAP 50 mAPs mAPmmAP lvanilla0.4970.4940.5120.4570.3560.2920.4010.215SemPred0.5490.5390.5760.5330.4270.3290.4740.338InputMaskvanillaSemPredInputMaskvanillaSemPredScene A Scene BScene EScene CScene FScene DScene G", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "OF THE PSEUDO BOX QUALITY AND DETECTION PERFORMANCE ACHIEVED BY OUR PLUG WITH DIFFERENT λ VALUES.BEST RESULTS ARE IN BOLD FACES.", "figure_data": "λPseudo box quality mIoU mIoUs mIoUm mIoU l mAP 50 mAPs mAPm mAP l Detection performance0 0.497 0.4730.5520.5480.405 0.305 0.461 0.3400.5 0.549 0.5390.5760.5330.427 0.329 0.474 0.3381.0 0.547 0.5410.5670.5280.425 0.327 0.467 0.3191.5 0.542 0.5360.5590.5230.426 0.322 0.475 0.3302.0 0.517 0.3890.5470.5520.422 0.328 0.473 0.335", "figure_id": "tab_4", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "OF THE PSEUDO BOX QUALITY AND DETECTION PERFORMANCE ACHIEVED BY OUR PLUG WITH DIFFERENT LOSSES.", "figure_data": "positiveLoss negativecolor priormIoUmAP 50✓0.3180.175✓✓0.4980.421✓✓✓0.5490.427negative loss is introduced, both the quality of pseudo boxesand the detection performance are significantly improved.Moreover, applying the color prior loss can further introducea 0.051 improvement of mIoU and a 0.006 improvement ofmAP 50 . The experimental results demonstrate the effectivenessof the proposed losses.", "figure_id": "tab_5", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "OF THE PSEUDO BOX QUALITY AND DETECTION PERFORMANCE ACHIEVED BY OUR PLUG WITH DIFFERENT POINT LABEL", "figure_data": "SELECTION STRATEGIES.SelectionPseudo box qualityDetection performanceStrategy mIoU mIoUs mIoUm mIoU l mAP 50 mAPs mAPm mAP lcorner0.518 0.488 0.586 0.589 0.406 0.306 0.464 0.427center0.553 0.520 0.629 0.606 0.438 0.316 0.493 0.504random 0.549 0.539 0.576 0.533 0.427 0.329 0.474 0.338", "figure_id": "tab_6", "figure_label": "VI", "figure_type": "table" }, { "figure_caption": "OF THE DETECTION PERFORMANCE ACHIEVED BY ROITRANS AND PLUG-ROITRANS. .088 0.503 0.302 0.292 0.248 0.661 0.368 0.806 0.285 0.502 0.400 0.365 0.259 0.176 0.009 0.351 ROITrans[26] Rotated Box ResNet50 0.798 0.671 0.500 0.736 0.713 0.851 0.885 0.906 0.551 0.693 0.620 0.651 0.676 0.578 0.366 0.680", "figure_data": "Method Supervision BackbonePLBDBRGTFSVLVSHCategories TCBCSTSBFRAHBSPHCmAP50PLUG-ROITrans ResNet50 0Scene A Point Scene BScene CScene D(object num: 1; mean IoU: 0.88)(object num: 20; mean IoU: 0.77)(object num: 70; mean IoU: 0.66)(object num: 171; mean IoU: 0.40)", "figure_id": "tab_7", "figure_label": "VII", "figure_type": "table" }, { "figure_caption": "RESULTS ACHIEVED BY DIFFERENT INSTANCE SEGMENTATION METHODS ON THE DOTA DATASET.", "figure_data": "modelsupervisionmAP 50object detection mAPs mAPmmAP lmAP 50instance segmentation mAPs mAPmmAP lMask-RCNN [87]Mask0.6590.5350.6700.6970.6230.4800.6620.682BoxInst [91]Box0.6430.5350.6330.6470.5030.3710.5430.582PLUG-Seg (ours)Point0.4350.3350.4810.3480.4060.2780.4910.340can draw attention to the research of single pointly supervisedremote sensing object detection.", "figure_id": "tab_8", "figure_label": "VIII", "figure_type": "table" } ]
Shitian He; Huanxin Zou; Yingqian Wang; Boyang Li; Xu Cao; Ning Jing
[ { "authors": "L Hou; K Lu; J Xue", "journal": "IEEE TIP", "ref_id": "b0", "title": "Refined one-stage oriented object detection method for remote sensing images", "year": "2022" }, { "authors": "G Cheng; J Han; P Zhou; D Xu", "journal": "IEEE TIP", "ref_id": "b1", "title": "Learning rotation-invariant and fisher discriminative convolutional neural networks for object detection", "year": "2018" }, { "authors": "Z Huang; W Li; X.-G Xia; R Tao", "journal": "IEEE TIP", "ref_id": "b2", "title": "A general gaussian heatmap label assignment for arbitrary-oriented object detection", "year": "2022" }, { "authors": "T Kong; F Sun; H Liu; Y Jiang; L Li; J Shi", "journal": "IEEE TIP", "ref_id": "b3", "title": "Foveabox: Beyound anchor-based object detection", "year": "2020" }, { "authors": "B Liu; C Xu; Z Cui; J Yang", "journal": "IEEE TIP", "ref_id": "b4", "title": "Progressive context-dependent inference for object detection in remote sensing imagery", "year": "2022" }, { "authors": "W Li; W Wei; L Zhang", "journal": "IEEE TIP", "ref_id": "b5", "title": "Gsdet: Object detection in aerial images based on scale reasoning", "year": "2021" }, { "authors": "J Han; J Ding; J Li; G.-S Xia", "journal": "IEEE TGRS", "ref_id": "b6", "title": "Align deep features for oriented object detection", "year": "2021" }, { "authors": "Z Liu; L Yuan; L Weng", "journal": "ICPR", "ref_id": "b7", "title": "A high resolution optical satellite image dataset for ship recognition and some new baselines", "year": "2017" }, { "authors": "G Cheng; J Han; P Zhou", "journal": "ISPRS", "ref_id": "b8", "title": "Multi-class geospatial object detection and geographic image classification based on collection of part detectors", "year": "2014" }, { "authors": "G Xia; X Bai; J Ding", "journal": "", "ref_id": "b9", "title": "Dota: A large-scale dataset for object detection in aerial images", "year": "2018" }, { "authors": "H Bilen; A Vedaldi", "journal": "", "ref_id": "b10", "title": "Weakly supervised deep detection networks", "year": "2016" }, { "authors": "P Tang; X Wang; X Bai; W Liu", "journal": "", "ref_id": "b11", "title": "Multiple instance detection network with online instance classifier refinement", "year": "2017" }, { "authors": "P Tang; X Wang; S Bai; W Shen; X Bai; W Liu; A Yuille", "journal": "IEEE TPAMI", "ref_id": "b12", "title": "Pcl: Proposal cluster learning for weakly supervised object detection", "year": "2018" }, { "authors": "Z Zeng; B Liu; J Fu; H Chao; L Zhang", "journal": "", "ref_id": "b13", "title": "Wsod2: Learning bottom-up and top-down objectness distillation for weakly-supervised object detection", "year": "2019" }, { "authors": "X Zhang; Y Wei; J Feng; Y Yang; T S Huang", "journal": "", "ref_id": "b14", "title": "Adversarial complementary learning for weakly supervised object localization", "year": "2018" }, { "authors": "J Xie; C Luo; X Zhu; Z Jin; W Lu; L Shen", "journal": "", "ref_id": "b15", "title": "Online refinement of low-level feature based activation map for weakly supervised object localization", "year": "2021" }, { "authors": "D P Papadopoulos; J R Uijlings; F Keller; V Ferrari", "journal": "", "ref_id": "b16", "title": "Training object class detectors with click supervision", "year": "2017" }, { "authors": "Z Ren; Z Yu; X Yang; M.-Y Liu; A G Schwing; J Kautz", "journal": "Springer", "ref_id": "b17", "title": "Ufo 2: A unified framework towards omni-supervised object detection", "year": "2020" }, { "authors": "P Chen; X Yu; X Han; N Hassan; K Wang; J Li; J Zhao; H Shi; Z Han; Q Ye", 
"journal": "Springer", "ref_id": "b18", "title": "Point-to-box network for accurate object detection via single point supervision", "year": "2022" }, { "authors": "X Yu; P Chen; D Wu; N Hassan; G Li; J Yan; H Shi; Q Ye; Z Han", "journal": "", "ref_id": "b19", "title": "Object localization under single coarse point supervision", "year": "2022" }, { "authors": "S Ren; K He; R Girshick", "journal": "IEEE TPAMI", "ref_id": "b20", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2016" }, { "authors": "K Li; G Wan; G Cheng; L Meng; J Han", "journal": "ISPRS", "ref_id": "b21", "title": "Object detection in optical remote sensing images: A survey and a new benchmark", "year": "2020" }, { "authors": "X Yang; J Yang; J Yan; Y Zhang; T Zhang; Z Guo; X Sun; K Fu", "journal": "", "ref_id": "b22", "title": "Scrdet: Towards more robust detection for small, cluttered and rotated objects", "year": "2019" }, { "authors": "J Ma; W Shao; H Ye; L Wang; H Wang; Y Zheng; X Xue", "journal": "IEEE TMM", "ref_id": "b23", "title": "Arbitrary-oriented scene text detection via rotation proposals", "year": "2018" }, { "authors": "Y Jiang; X Zhu; X Wang; S Yang; W Li; H Wang; P Fu; Z Luo", "journal": "", "ref_id": "b24", "title": "R 2 cnn: Rotational region cnn for arbitrarily-oriented scene text detection", "year": "2018" }, { "authors": "J Ding; N Xue; Y Long; G.-S Xia; Q Lu", "journal": "", "ref_id": "b25", "title": "Learning roi transformer for oriented object detection in aerial images", "year": "2019" }, { "authors": "X Xie; G Cheng; J Wang; X Yao; J Han", "journal": "", "ref_id": "b26", "title": "Oriented r-cnn for object detection", "year": "2021" }, { "authors": "H Wei; Y Zhang; Z Chang; H Li; H Wang; X Sun", "journal": "ISPRS", "ref_id": "b27", "title": "Oriented objects as pairs of middle lines", "year": "2020" }, { "authors": "H Wei; Y Zhang; B Wang; Y Yang; H Li; H Wang", "journal": "IEEE TGRS", "ref_id": "b28", "title": "X-linenet: Detecting aircraft in remote sensing images by a pair of intersecting line segments", "year": "2020" }, { "authors": "J Yi; P Wu; B Liu; Q Huang; H Qu; D Metaxas", "journal": "", "ref_id": "b29", "title": "Oriented object detection in aerial images with box boundary-aware vectors", "year": "2021" }, { "authors": "P Zhao; Z Qu; Y Bu; W Tan; Q Guan", "journal": "IJRS", "ref_id": "b30", "title": "Polardet: A fast, more precise detector for rotated target in aerial images", "year": "2021" }, { "authors": "P Dai; S Yao; Z Li; S Zhang; X Cao", "journal": "IEEE TIP", "ref_id": "b31", "title": "Ace: Anchor-free corner evolution for real-time arbitrarily-oriented object detection", "year": "2022" }, { "authors": "X Yang; J Yan; Q Ming; W Wang; X Zhang; Q Tian", "journal": "", "ref_id": "b32", "title": "Rethinking rotated object detection with gaussian wasserstein distance loss", "year": "2021" }, { "authors": "X Yang; Y Zhou; G Zhang; J Yang; W Wang; J Yan; X Zhang; Q Tian", "journal": "", "ref_id": "b33", "title": "The kfiou loss for rotated object detection", "year": "2022" }, { "authors": "J Han; J Ding; N Xue; G.-S Xia", "journal": "", "ref_id": "b34", "title": "Redet: A rotation-equivariant detector for aerial object detection", "year": "2021" }, { "authors": "X Yang; J Yan; Z Feng; T He", "journal": "AAAI", "ref_id": "b35", "title": "R3det: Refined single-stage detector with feature refinement for rotating object", "year": "2021" }, { "authors": "W Qian; X Yang; S Peng; J Yan; Y Guo", "journal": "AAAI", "ref_id": "b36", "title": 
"Learning modulated loss for rotated object detection", "year": "2021" }, { "authors": "X Yang; J Yan", "journal": "Springer", "ref_id": "b37", "title": "Arbitrary-oriented object detection with circular smooth label", "year": "2020" }, { "authors": "X Yang; L Hou; Y Zhou; W Wang; J Yan", "journal": "", "ref_id": "b38", "title": "Dense label encoding for boundary discontinuity free rotation detection", "year": "2021" }, { "authors": "H Hu; J Gu; Z Zhang; J Dai; Y Wei", "journal": "", "ref_id": "b39", "title": "Relation networks for object detection", "year": "2018" }, { "authors": "C Deng; M Wang; L Liu; Y Liu; Y Jiang", "journal": "IEEE TMM", "ref_id": "b40", "title": "Extended feature pyramid network for small object detection", "year": "2021" }, { "authors": "F Yang; H Fan; P Chu; E Blasch; H Ling", "journal": "", "ref_id": "b41", "title": "Clustered object detection in aerial images", "year": "2019" }, { "authors": "C Li; T Yang; S Zhu; C Chen; S Guan", "journal": "CVPRW", "ref_id": "b42", "title": "Density map guided object detection in aerial images", "year": "2020" }, { "authors": "K Fu; Z Chang; Y Zhang; X Sun", "journal": "IEEE TGRS", "ref_id": "b43", "title": "Point-based estimator for arbitrary-oriented object detection in aerial images", "year": "2020" }, { "authors": "E Liu; Y Zheng; B Pan; X Xu; Z Shi", "journal": "IEEE TGRS", "ref_id": "b44", "title": "Dcl-net: Augmenting the capability of classification and localization for remote sensing object detection", "year": "2021" }, { "authors": "C Xu; J Wang; W Yang; H Yu; L Yu; G.-S Xia", "journal": "Springer", "ref_id": "b45", "title": "Rfla: Gaussian receptive field based label assignment for tiny object detection", "year": "2022" }, { "authors": "Y Zhu; J Du; X Wu", "journal": "IEEE TGRS", "ref_id": "b46", "title": "Adaptive period embedding for representing oriented objects in aerial images", "year": "2020" }, { "authors": "S Liu; L Zhang; H Lu; Y He", "journal": "IEEE TGRS", "ref_id": "b47", "title": "Center-boundary dual attention for oriented object detection in remote sensing images", "year": "2021" }, { "authors": "N Carion; F Massa; G Synnaeve; N Usunier; A Kirillov; S Zagoruyko", "journal": "", "ref_id": "b48", "title": "End-to-end object detection with transformers", "year": "2020" }, { "authors": "X Zhu; W Su; L Lu; B Li; X Wang; J Dai", "journal": "", "ref_id": "b49", "title": "Deformable detr: Deformable transformers for end-to-end object detection", "year": "2020" }, { "authors": "F Li; H Zhang; S Liu; J Guo; L M Ni; L Zhang", "journal": "", "ref_id": "b50", "title": "Dn-detr: Accelerate detr training by introducing query denoising", "year": "2022" }, { "authors": "L Dai; H Liu; H Tang; Z Wu; P Song", "journal": "IEEE TCSVT", "ref_id": "b51", "title": "Ao2-detr: Arbitraryoriented object detection transformer", "year": "2022" }, { "authors": "D Wang; Q Zhang; Y Xu; J Zhang; B Du; D Tao; L Zhang", "journal": "IEEE TGRS", "ref_id": "b52", "title": "Advancing plain vision transformer towards remote sensing foundation model", "year": "2022" }, { "authors": "C Fasana; S Pasini; F Milani; P Fraternali", "journal": "Remote Sensing", "ref_id": "b53", "title": "Weakly supervised object detection for remote sensing images: A survey", "year": "2022" }, { "authors": "G Cheng; J Yang; D Gao; L Guo; J Han", "journal": "IEEE TIP", "ref_id": "b54", "title": "High-quality proposals for weakly supervised object detection", "year": "2020" }, { "authors": "X Feng; J Han; X Yao; G Cheng", "journal": "IEEE Transactions on Geoscience and 
Remote Sensing", "ref_id": "b55", "title": "Progressive contextual instance refinement for weakly supervised object detection in remote sensing images", "year": "2020" }, { "authors": "Y Li; Y Zhang; X Huang; A L Yuille", "journal": "ISPRS journal of photogrammetry and remote sensing", "ref_id": "b56", "title": "Deep networks under scene-level supervision for multi-class geospatial object detection from remote sensing images", "year": "2018" }, { "authors": "F Shao; L Chen; J Shao; W Ji; S Xiao; L Ye; Y Zhuang; J Xiao", "journal": "Neurocomputing", "ref_id": "b57", "title": "Deep learning for weakly-supervised object detection and localization: A survey", "year": "2022" }, { "authors": "J R Uijlings; K E Van De Sande; T Gevers; A W Smeulders", "journal": "IJCV", "ref_id": "b58", "title": "Selective search for object recognition", "year": "2013" }, { "authors": "C L Zitnick; P Dollár", "journal": "Springer", "ref_id": "b59", "title": "Edge boxes: Locating object proposals from edges", "year": "2014" }, { "authors": "D Zhang; J Han; G Cheng; Z Liu; S Bu; L Guo", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b60", "title": "Weakly supervised learning for target detection in remote sensing images", "year": "2014" }, { "authors": "J Han; D Zhang; G Cheng; L Guo; J Ren", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b61", "title": "Object detection in optical remote sensing images based on weakly supervised learning and high-level feature learning", "year": "2014" }, { "authors": "X Feng; J Han; X Yao; G Cheng", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b62", "title": "Tcanet: Triple contextaware network for weakly supervised object detection in remote sensing images", "year": "2020" }, { "authors": "X Qian; C Li; W Wang; X Yao; G Cheng", "journal": "International Journal of Applied Earth Observation and Geoinformation", "ref_id": "b63", "title": "Semantic segmentation guided pseudo label mining and instance re-detection for weakly supervised object detection in remote sensing images", "year": "2023" }, { "authors": "B Wang; Y Zhao; X Li", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b64", "title": "Multiple instance graph learning for weakly supervised remote sensing object detection", "year": "2021" }, { "authors": "X Feng; X Yao; G Cheng; J Han", "journal": "", "ref_id": "b65", "title": "Weakly supervised rotationinvariant aerial object detection network", "year": "2022" }, { "authors": "G Wang; X Zhang; Z Peng; X Jia; X Tang; L Jiao", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b66", "title": "Mol: Towards accurate weakly supervised remote sensing object detection via multiview noisy learning", "year": "2023" }, { "authors": "X Yao; X Feng; J Han; G Cheng; L Guo", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b67", "title": "Automatic weakly supervised object detection from high spatial resolution remote sensing images via dynamic curriculum learning", "year": "2020" }, { "authors": "X Qian; Y Huo; G Cheng; X Yao; K Li; H Ren; W Wang", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b68", "title": "Incorporating the completeness and difficulty of proposals into weakly supervised object detection in remote sensing images", "year": "2022" }, { "authors": "A Bearman; O Russakovsky; V Ferrari; L Fei-Fei", "journal": "Springer", "ref_id": "b69", "title": "What's 
the point: Semantic segmentation with point supervision", "year": "2016" }, { "authors": "R Qian; Y Wei; H Shi; J Li; J Liu; T Huang", "journal": "AAAI", "ref_id": "b70", "title": "Weakly supervised scene parsing with point-based distance metric learning", "year": "2019" }, { "authors": "L Wu; L Fang; J Yue; B Zhang; P Ghamisi; M He", "journal": "IEEE TIP", "ref_id": "b71", "title": "Deep bilateral filtering network for point-supervised semantic segmentation in remote sensing images", "year": "2022" }, { "authors": "I H Laradji; N Rostamzadeh; P O Pinheiro; D Vazquez; M Schmidt", "journal": "", "ref_id": "b72", "title": "Proposal-based instance segmentation with point supervision", "year": "2020" }, { "authors": "B Cheng; O Parkhi; A Kirillov", "journal": "", "ref_id": "b73", "title": "Pointly-supervised instance segmentation", "year": "2022" }, { "authors": "M Liao; Z Guo; Y Wang; P Yuan; B Feng; F Wan", "journal": "", "ref_id": "b74", "title": "Attentionshift: Iteratively estimated part-based attention map for pointly supervised instance segmentation", "year": "2023" }, { "authors": "J Fan; Z Zhang; T Tan", "journal": "Springer", "ref_id": "b75", "title": "Pointly-supervised panoptic segmentation", "year": "2022" }, { "authors": "J Ribera; D Guera; Y Chen; E J Delp", "journal": "", "ref_id": "b76", "title": "Locating objects without bounding boxes", "year": "2019" }, { "authors": "Q Song; C Wang; Z Jiang; Y Wang; Y Tai; C Wang; J Li; F Huang; Y Wu", "journal": "", "ref_id": "b77", "title": "Rethinking counting and localization in crowds: A purely point-based framework", "year": "2021" }, { "authors": "X Ying; L Liu; Y Wang; R Li; N Chen; Z Lin; W Sheng; S Zhou", "journal": "", "ref_id": "b78", "title": "Mapping degeneration meets label evolution: Learning infrared small target detection with single point supervision", "year": "2023" }, { "authors": "B Li; Y Wang; L Wang; F Zhang; T Liu; Z Lin; W An; Y Guo", "journal": "", "ref_id": "b79", "title": "Monte carlo linear clustering with single-point supervision is enough for infrared small target detection", "year": "2023" }, { "authors": "K He; X Zhang; S Ren", "journal": "", "ref_id": "b80", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "T.-Y Lin; P Dollár; R Girshick; K He; B Hariharan; S Belongie", "journal": "", "ref_id": "b81", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "J Gou; B Yu; S J Maybank", "journal": "IJCV", "ref_id": "b82", "title": "Knowledge distillation: A survey", "year": "2021" }, { "authors": "R O Duda; P E Hart", "journal": "Wiley", "ref_id": "b83", "title": "Pattern classification and scene analysis", "year": "1973" }, { "authors": "T Lin; P Goyal; R B Girshick; K He; P Dollár", "journal": "CoRR", "ref_id": "b84", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "Z Tian; C Shen; H Chen", "journal": "", "ref_id": "b85", "title": "FCOS: fully convolutional onestage object detection", "year": "2019" }, { "authors": "K He; G Gkioxari; P Dollár", "journal": "", "ref_id": "b86", "title": "Mask r-cnn", "year": "2017" }, { "authors": "S Waqas Zamir; A Arora; A Gupta; S Khan; G Sun; F Shahbaz Khan; F Zhu; L Shao; G.-S Xia; X Bai", "journal": "CVPRW", "ref_id": "b87", "title": "isaid: A large-scale dataset for instance segmentation in aerial images", "year": "2019" }, { "authors": "K Chen; J Wang; J Pang; Y Cao; Y Xiong; X Li; S Sun; W Feng; Z Liu; J Xu; Z Zhang; D Cheng; C Zhu; T Cheng; Q Zhao; B Li; X Lu; R 
Zhu; Y Wu; J Dai; J Wang; J Shi; W Ouyang; C C Loy; D Lin", "journal": "", "ref_id": "b88", "title": "MMDetection: Open mmlab detection toolbox and benchmark", "year": "2019" }, { "authors": "H Robbins; S Monro", "journal": "AMS", "ref_id": "b89", "title": "A stochastic approximation method", "year": "1951" }, { "authors": "Z Tian; C Shen; X Wang; H Chen", "journal": "", "ref_id": "b90", "title": "Boxinst: High-performance instance segmentation with box annotations", "year": "2021" }, { "authors": "Shitian He; B E ", "journal": "", "ref_id": "b91", "title": "degree in electronic and information engineering from", "year": "1973" }, { "authors": "Yingqian Wang Received The; B E ", "journal": "College of Electronic Science and Technology, NUDT", "ref_id": "b92", "title": "degree in electrical engineering from Shandong University", "year": "2016" }, { "authors": "Boyang Li Received The; B E ", "journal": "National University of Defense Technology (NUDT)", "ref_id": "b93", "title": "degree in Mechanical Design manufacture and Automation from the Tianjin University", "year": "2017" }, { "authors": "Xu Cao; B E ", "journal": "National University of Defense Technology", "ref_id": "b94", "title": "degree in information engineering in", "year": "1990" } ]
[ { "formula_coordinates": [ 5, 107.14, 310.51, 192.88, 16.6 ], "formula_id": "formula_0", "formula_text": "Ins(p) = \arg\min_{l \in L} \{ \mathrm{Cost}(p, p_l) \} \quad (1)" }, { "formula_coordinates": [ 5, 57.21, 450.56, 242.81, 27.45 ], "formula_id": "formula_1", "formula_text": "\mathrm{Cost}(p, p_l) = \min_{\Gamma \in \{\Gamma_1, \dots, \Gamma_n\}} \int_{\Gamma} C_{sem}(\vec{z}) + \lambda C_{edge}(\vec{z}) \, d\vec{z} \quad (2)" }, { "formula_coordinates": [ 5, 340.46, 97.56, 222.58, 50.08 ], "formula_id": "formula_2", "formula_text": "L_{pos} = -\frac{1}{N_{pos}} \sum_{j=1}^{N_{pos}} \sum_{i=1}^{C} \left[ y_{ji} (1 - y'_{ji})^{\gamma} \log(y'_{ji}) + (1 - y_{ji}) (y'_{ji})^{\gamma} \log(1 - y'_{ji}) \right] \quad (3)" }, { "formula_coordinates": [ 5, 335.77, 338.55, 227.26, 31.4 ], "formula_id": "formula_3", "formula_text": "L_{neg} = -\frac{1}{N_{neg}} \sum_{j=1}^{N_{neg}} \sum_{i=1}^{C} (1 - y_{ji}) (y'_{ji})^{\gamma} \log(1 - y'_{ji}) \quad (4)" }, { "formula_coordinates": [ 5, 361.39, 451.95, 201.64, 30.94 ], "formula_id": "formula_4", "formula_text": "L_{col} = -\frac{1}{Z} \sum_{i=1}^{HW} \sum_{j \in N(i)} A_{i,j} \log \left( {y'_i}^{T} y'_j \right) \quad (5)" }, { "formula_coordinates": [ 5, 374.15, 596.45, 188.89, 9.81 ], "formula_id": "formula_5", "formula_text": "L_{all} = L_{pos} + \alpha_1 L_{neg} + \alpha_2 L_{col} \quad (6)" } ]
10.18653/v1/D19-1165
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b13", "b6", "b28", "b3", "b36", "b21", "b34", "b25", "b36", "b10", "b5" ], "table_ref": [], "text": "Language models have become an essential tool in dealing with various tasks in the natural language processing (NLP) domain (Howard and Ruder, 2018;Devlin et al., 2019;Brown et al., 2020, inter alia). Notably, such models are extremely sample efficient, and can be employed to solve downstream tasks with a small set of labeled data. The conventional pipeline to build classifiers is to finetune language models to solve the task at hand. However, as the language models grow in number of parameters, finetuning them becomes more computationally expensive. Moreover, finetuning changes a model's parameters through gradient updates. Therefore, for each downstream task, a new * Work done during internship at Google DeepMind. model must be trained and stored. Fortunately, it has been shown that as models grow in size, they can learn \"in context\" (Radford et al., 2019;Brown et al., 2020). In-Context Learning (ICL) refers to prompting a language model with a few demonstrative examples. Such prompting bundles up a small set of input-label pairs into: a set of instructions, a few solved examples, and a single unlabeled example. The model is then asked to predict the label for that example.\nNotably, since ICL does not require any weight updates on the language model, a single model can be used to perform a variety of tasks, as long as they can be specified in natural language. While effective, there are some caveats to ICL that we elaborate next.\nFirst, zero-shot, or few-shot, performance of pretrained models on downstream tasks depends to a large extent on the way in which the prompt is formulated (Zhao et al., 2021;Liu et al., 2023).\nSecond, ICL may not always benefit from the demonstrative examples the way that it is expected to (Webson and Pavlick, 2022), e.g., flipping the labels in demonstrations rarely hurts the performance (Min et al., 2022). Furthermore, the number of demonstrative examples that ICL can benefit from is bounded by the language model constraint on the maximum length of input sequences.\nLastly, decoding the predicted label is neither accurate nor efficient. The decoder might be miscalibrated (Zhao et al., 2021) or suffer from surface form competition (Holtzman et al., 2021). Moreover, autoregressive decoding, with an attentionbased decoder takes O(N 2 ) time for generating a sequence of length N , which can be inefficient when decoding long sequences.\nTo address these limitations, we investigate an alternative workflow to ICL for building accurate and robust classifiers. We postulate that a primary cause of the aforementioned limitations, such as the sensitivity to the exact phrasing of the instruction and miscalibration, is that the model is forced to verbalize the prediction label. Therefore, we suggest to bypass the decoding step and directly probe the extracted representations from pretrained models. We hypothesize that given reasonable in-structions, the information that is needed to reliably perform the downstream task is encoded by the model in contextualized representations of the tokens. Notably, we contextualize the input by providing instructions. We therefore name our approach in-context probing (ICP). We demonstrate our proposed workflow and contrast it with ICL through an example in Fig. 
2.\nThrough an extensive set of experiments on a diverse set of sentence classification tasks and different model sizes, we aim to answer the following research questions:\n• Q1: Is in-context probing more robust with respect to variations in instructions compared to ICL? (Section §6.1)\n• Q2: Can in-context probing perform classification tasks as accurately as ICL or even finetuning? (Sections §6.2, §6.4)\n• Q3: Is in-context probing sample efficient? (Section §6.3)\nWe find that in-context probing is significantly less sensitive to subtle changes in instructions compared to in-context learning. We further compare in-context probing with in-context learning on different sizes of FLAN-T5 models (Chung et al., 2022) in §6.2. Our results suggest that for larger models, ICP is on par or better than both ICL and calibrated ICL with less variance to instruction changes. For smaller model sizes, ICP significantly outperforms ICL. Furthermore, we empirically show that ICP is sample efficient, as it can generate competitive results to ICL (with significantly less variance to instructions) after training only on 40 annotated examples. Finally, comparing in-context probing with finetuning suggests that probing classifiers can be as accurate and robust as finetuned models, while using 4 to 6 orders of magnitude less trainable parameters." }, { "figure_ref": [], "heading": "Finetuning", "publication_ref": [ "b6" ], "table_ref": [], "text": "The typical workflow for adapting a language model to a new downstream task is finetuning. Specifically, a model is first pretrained on large text corpora in a self supervised manner. It is then finetuned on a downstream task by initializing with the pretrained weights, and adding a task-specific layer to be trained from scratch (Devlin et al., 2019)." }, { "figure_ref": [], "heading": "Parameter-Efficient Finetuning (PEFT)", "publication_ref": [ "b30", "b11", "b1", "b23", "b14", "b18", "b9" ], "table_ref": [], "text": "As language models grow in size, finetuning them for any given downstream task becomes increasingly expensive, if not infeasible. Hence, a growing body of research has focused on how to enable models to perform downstream tasks with reduced weight updates. Early works on PEFT, includes developing complementary network components such as adapters; trainable feed-forward networks inserted between the layers of pretrained models (Rebuffi et al., 2017;Houlsby et al., 2019;Bapna and Firat, 2019). Following works aim to reduce the number of trainable parameters (Mahabadi et al., 2021;Hu et al., 2022). One can also view prompt tuning (Lester et al., 2021) and prefix tuning (Li and Liang, 2021) through the lens of PEFT (He et al., 2022), where learnable parameters are added to the model's inputs or activations, and are trained on a downstream task. While PEFT methods reduce the memory and computation cost, compared to finetuning, they still do require training 0.1-15% of the models' parameters and assume access to models' weights." }, { "figure_ref": [], "heading": "Prompt-Based Finetuning", "publication_ref": [ "b29", "b32", "b8", "b26" ], "table_ref": [], "text": "An alternative approach to the finetuning workflow is to adapt the pretrained language model directly by autoregressively decoding the target output. For instance, in a classification task, one could prompt the language model by asking for the classification outcome, and tune the model's parameters with the goal of decoding the gold label. 
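As a concrete illustration, here is a minimal sketch of such a prompt-based finetuning step with a Hugging Face text-to-text checkpoint; the model name, prompt template, and learning rate below are illustrative assumptions rather than any particular paper's exact setup:

```python
# Sketch: finetune a seq2seq LM so that it decodes the verbalized gold label.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def prompt_based_step(instruction: str, text: str, gold_label: str) -> float:
    # Prompt = task instruction + input; target = the label verbalized as text.
    enc = tokenizer(f"{instruction}\n{text}", return_tensors="pt", truncation=True)
    labels = tokenizer(gold_label, return_tensors="pt").input_ids
    loss = model(**enc, labels=labels).loss  # cross-entropy over the label tokens
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

At inference time the tuned model decodes a label string for each new input, which is then matched against the task's label set.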
Notably, it has been shown that prompt-based finetuning of text-totext transformers, e.g., (Raffel et al., 2020), leads to competitive results in classification benchmarks such as SuperGlue (Wang et al., 2019).\nMoreover, prompt-based finetuning is extremely effective in low-to medium-resourced data regimes (Gao et al., 2021;Le Scao and Rush, 2021) and offers a decent out-of-distribution performance (Mosbach et al., 2023)." }, { "figure_ref": [ "fig_1" ], "heading": "In-Context Learning", "publication_ref": [ "b35", "b5" ], "table_ref": [], "text": "While effective, finetuning and storing large language models with billions of parameters is often impractical. In-context learning offers an alternative approach to learn new downstream tasks without any weight updates. The general strategy is to prompt the language model not only with the input, but also with the instructions required to solve the task and a few demonstrative examples of input-target pairs, all written in natural language.\nConsider the example in Fig. 2. The prompt consists of an instruction, two demonstrations, and an input. The language model is expected to learn the task from the provided context. The output label is obtained by autoregressively decoding the model prediction.\nInstruction Finetuning. To boost the performance of ICL, especially in zero-shot settings, finetuning the models with instruction is helpful. The goal of the instruction finetuning step is to teach the model to benefit the most from in-context learning. Importantly and as opposed to pattern-based finetuning, instruction finetuning is done only once over a diverse set of tasks with instructions, with the aim to teach the model how to follow instructions and transfer this learning to unseen tasks. Wei et al. (2022) show that after finetuning models with instructions, the zero-shot performance on unseen tasks improves significantly. Chung et al. (2022) further scale this approach and introduce and release FLAN-T5 checkpoints that achieve strong fewshot performance compared to larger models." }, { "figure_ref": [], "heading": "Contextual Calibration", "publication_ref": [ "b36", "b22", "b25", "b34" ], "table_ref": [], "text": "A major limitation of ICL is that its performance can depend crucially on the way instructions are formulated, the order of the demonstration examples, and the unbalanced distribution of labels. To reduce this dependency, Zhao et al. (2021) proposed calibrated ICL.\nCalibration works as follows: First, a contentfree prompt is created. This prompt includes the same instruction and demonstration examples as the original prompt, but the input data is removed. 1The main idea is that the model's bias toward predicting certain labels is revealed by its output for the content-free prompt.\nNext, the predicted probabilities of labels are calibrated by the predicted probabilities for the content-free prompt. Concretely, suppose the model predicts p(c|x), which shows the probability of the input being classified with label c. To calibrate this probability, we divide it by the probability of predicting the same label c for the content-free prompt: p(c|x) p(c|x null ) , where x null is obtained by removing the input data from the prompt x. The probabilities are then normalized via Softmax and the class with the maximum probability is predicted as the label.\nLimitations. Although ICL is an efficient alternative to finetuning, with competitive results, it has its own limitations. 
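Before turning to those limitations, the calibration step described above can be sketched as follows; the per-label probabilities are assumed to come from the model's decoder scores, and the softmax over the ratios simply follows the description given here:

```python
import numpy as np

def calibrated_prediction(p_label_given_x: dict, p_label_given_null: dict) -> str:
    # Divide each label probability p(c|x) by the probability assigned to the
    # same label for the content-free prompt, p(c|x_null), then renormalize.
    labels = list(p_label_given_x)
    ratios = np.array([p_label_given_x[c] / max(p_label_given_null[c], 1e-9)
                       for c in labels])
    probs = np.exp(ratios - ratios.max())
    probs /= probs.sum()                      # softmax over the calibrated scores
    return labels[int(np.argmax(probs))]

# The raw prediction would be "yes"; calibration flips it because the model is
# biased toward "yes" even on the content-free prompt.
print(calibrated_prediction({"yes": 0.7, "no": 0.3}, {"yes": 0.8, "no": 0.2}))  # -> "no"
```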
As mentioned earlier, and even after calibration, ICL is notoriously unstable (Lu et al., 2022), minor changes in the prompt can drastically affect the prediction results. Furthermore, as pointed out by (Min et al., 2022), ICL might be limited to learning the domain of input and the label, and not the exact mapping between the input and the label. Similarly, Webson and Pavlick (2022) questions the degree to which the effectiveness of ICL is derived from models understanding task instructions in ways analogous to humans.\nOur proposal departs from these two lines of work, i.e., finetuning and in-context learning. We provide a more efficient approach to finetuning which, as opposed to finetuning methods, does not require any knowledge about a models' architecture, can be trained with as few as a hundred examples, and only requires access to the last layer output of a model.\nThe main observation is that, similar to incontext learning, we can benefit from instructions to extract useful contextualized representations from large language models to perform a downstream task. However, and different to ICL, we make use of direct training signals when training our light-weight probes. We empirically show that in-context probing significantly reduces the performance dependency to the formulation of instructions. We elaborate on our probing approach in the next section." }, { "figure_ref": [], "heading": "Probing", "publication_ref": [ "b0", "b15" ], "table_ref": [], "text": "Probes, or diagnostic classifiers, are light-weight classifiers trained to predict a property of interest from representations extracted from a model (Alain and Bengio, 2017;Hupkes et al., 2018). Originally, probes were used as a diagnostic tool, i.e., to assess whether or not information about a property is encoded in the representations. In this section, we explain how to re-purpose probes from diagnostic tools to accurate and robust classifiers.\nGiven an input prompt sequence x = [x 1 , x 2 , . . . , x N ], which consists of an instruction and an input, a transformer-based encoder generates a sequence of representations R = [r 1 , r 2 , . . . , r N ] ∈ R N ×d , where d is the dimensionality of the encoder model.2 A probe is a function f θ which receives input representations R and predicts the index of the label m ∈ {1, 2, . . . , M }, where M is the number of classification labels. Next, we introduce two architectures that we evaluate for building probing classifiers." }, { "figure_ref": [], "heading": "Logistic Regression Probe.", "publication_ref": [ "b7", "b2", "b25" ], "table_ref": [], "text": "A logistic regression probe receives representations R for an input sequence x, and predicts the classification label as\nf θ (R) = argmax m {W T m ( N i=1 r i ) + b m } (1)\nWhere θ = (W ∈ R d×M , b ∈ R M ) are learned parameters of the probing model.\nAttentional Probe. Our attentional probe is a single attention layer (with one attention head). It recieves the sequence of contextualized representations and predicts the classification label as\nf θ (R) = argmax m {W T m ( i α i r i ) + b m } (2)\nWhere α i s are attention weights:\nα i = (Kr i ) T (Qr 0 )(3)\nThe learned parameters are θ = (K, Q, W, b), where K and Q are key and query matrices. r 0 is the fixed contextualized representation of the \"instruction\" token. 3 To train both probing models, we minimize the cross entropy loss. 5 Experimental Setup\nDataset. 
We experiment with 3 sentence classification tasks: (i) paraphrase detection (MQP; Mc-Creery et al., 2020) where the goal is to predict whether two medical questions are semantically equivalent or not, (ii) natural language inference (CLIMATE; Diggelmann et al., 2021) where the goal is to predict whether the evidence entails the statement about climate change, refutes it, or is neutral to it, (iii) hate speech detection (TWEETEVAL; Barbieri et al., 2020), where the goal is to detect whether hate speech is expressed in a given tweet or not. See Tab. 4 for datasets statistics.\nThe datasets are chosen based on multiple reasons. First, none of them are included in the instruction finetuning of the FLAN-T5 models. Second, all of them are quite low-resourced, with ≈ 10k training examples. Lastly, these datasets cover different types of classification tasks.\nModel. We experiment with different model sizes of the FLAN-T5 family, namely: FLAN-\nSMALL(80M), FLAN-BASE(250M), FLAN- LARGE(780M), FLAN-XL(3B),\nand FLAN-XXL(11B).\nProbing Model. For training probes and finetuning models, we use 30% of training examples for validation. We stop the training (or finetuning) process early, if the validation performance does not improve after 5 consequent epochs, and report the test-set performance of the model with the best performance on the validation set. See App. C for more details. Throughout the paper, when the architecture of the probe is not explicitly mentioned, the least complex probe, i.e., the logistic regression probe is used. We compare the performance of the two probes in §6.5. Following prior works (Min et al., 2022), we report macro F1 score for all the tasks." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "In the following subsections we discuss the experiments and our findings. In §6.1 we evaluate the robustness of in-context probing. In §6.2 and §6.3 we assess the performance and sample efficiency of probes, respectively. In §6.4 we compare incontext probing with finetuning. Finally, in §6.5 we ablate the probing model's architecture." }, { "figure_ref": [ "fig_0", "fig_2", "fig_5" ], "heading": "Robustness", "publication_ref": [], "table_ref": [], "text": "To test the robustness of ICP and ICL with the change of instructions, we manually write 5 instructions per task (see Tab. 3). Notably, the instructions are not designed to be adversarial, but only to explain the task in different terms. We sample examples from the training set with 5 random seeds, and compare ICL with ICP on the 3 classification tasks. To implement ICL, we use these training examples as demonstrations in the prompt, and decode the classification label.\nFor ICP, we do not use any demonstrations, and only provide the model with instructions and the input. To reduce the clutter in the plots, we only visualize the performance of the methods using the number of training examples that lead to the best performance. That is an average of 3 demonstrations for the in-context learning method, 2 for calibrated ICL, and 170 training examples for the ICP approach (we will discuss the sample efficiency in §6.3).\nFirst, we compare the performance of the three methods (ICL, Calibrated ICL, and ICP) on the best-performing model in the mix, i.e., FLAN-XXL model. We visualize the variation of the performance with respect to the instructions in Fig. 1. 
We observe that calibration is only helpful in one task, i.e., MQP, where it reduces the standard deviation of performance with respect to instruction change. However, in CLIMATE and TWEETEVAL, calibration is not helpful and the performance of ICL changes significantly with the change of instructions.\nNext, we experiment with the effect of model size on the robustness of different methods in Fig. 3. In this plot we focus on the TWEETEVAL task. 4 Generally, we observe a similar trend across all model sizes: ICP performs consistently well, irre-4 Similar trends can be observed in other tasks and the plots for all tasks and model sizes can be found in Fig. 7. spective of the choice of instruction, but the performance of ICL and calibrated ICL fluctuates quite significantly. The exception to this trend is the smallest model (FLAN-SMALL), where we don't see a high standard deviation in the performance, when employing ICL and calibrated ICL methods. However, this is possibly because ICL consistently performs poorly compared to ICP. We will expand on this result in the next section by offering a broader view on the performance of the methods across models and tasks." }, { "figure_ref": [], "heading": "Overall Performance", "publication_ref": [], "table_ref": [], "text": "We now look more holistically at the performance of in-context probing compared to in-context learning, across different tasks and model sizes. For each model size and task, we look at the performance of the 3 methods using 5 different seeds of sampling the training examples. We further show the standard deviation of the performance when varying the instructions. Same as the previous experiment, for all of the methods we pick the number of training examples to be the one that results in the highest performance. As results in Fig. 4 suggest, for larger models, in-context probing achieves competitive results to ICL with significantly less variance.\nImportantly, for smaller models, the performance of the in-context learning approach is usually close to random predictions. in-context probing, however, is performing significantly better than random, and is even on par with the ICL scores of larger models. This highlights the usefulness of in-context probing when resources are limited and only smaller models can be employed. " }, { "figure_ref": [ "fig_3", "fig_6", "fig_4" ], "heading": "Sample Efficiency", "publication_ref": [], "table_ref": [], "text": "Part of the effectiveness of in-context probing can be attributed to using more training signals compared to ICL. In this experiment, we look into probes' sample efficiency. Specifically, we define the sample efficiency of ICP as the number of samples needed for in-context probing to achieve a performance on par, or better than ICL. We vary the sample size between 0 to 200. 5 The number of used training samples is limited to 200 for two reasons: first to evaluate and contrast these two methods in a realistic, low-resourced setup, where the budget for annotation is extremely limited, and 5 In-context learning method, however, can only roughly benefit from less than 10 examples, and longer inputs are truncated due to the limited context window of the models. second to reduce the training time and assess the practicality of our in-context probing approach.\nAs previous experiments suggest, the performance of ICL is highly dependent on the instruction. For the purpose of this experiment, we plot the average performance of different methods on the 5 instructions. 
As in previous experiments, we sample training data (or demonstration examples) with 5 different seeds, and visualize the standard deviation in the performance in Fig. 5.\nWe observe that for TWEETEVAL, probing larger models (e.g., FLAN-XXL) with only 40 training examples performs competitive to ICL. As discussed earlier, ICL is not very effective on smaller models, therefore, with as few as 40 training examples, ICP outperforms ICL significantly (see FLAN-BASE and FLAN-SMALL results). The same conclusion holds for other tasks, see Fig. 8 for further experiments.\nICP vs. Probing. We investigate the role of contextualization in probe's performance by comparing the sample efficiency of in-context probing to probing. Different from ICP, where we contextualize the input with the instruction, with probing we only probe the input's representation. We compare probing and ICP on FLAN-XXL across the three different tasks in Fig. 6. 6 In TWEETEVAL and CLIMATE tasks, both ICP and probing reach the decoding baselines with using only 40 training samples. Interestingly, in the MQP task, ICP is significantly more sample efficient than probing; it matches the performance of decoding baselines with 40 training samples, while probing does the same with 80 samples." }, { "figure_ref": [], "heading": "In-Context Probing vs. Finetuning", "publication_ref": [], "table_ref": [], "text": "We compare in-context probing with prompt-based finetuning. To ensure a fair comparison, we set the training budget to 200 examples and provide a head-to-head comparison between finetuning and ICP in Tab. 1. We use the same set of 5 instructions to finetune models with prompts. Therefore, for each task and model size, we finetune the model 5 times, each time with a unique instruction. We measure the average performance and the standard deviation of these 5 training runs on the test set and compare that to measurements from probing models obtained in the same experimental setup.\nWe observe that finetuning and ICP are comparably robust to the change in instructions they are trained with. This is reflected in low standard deviation across the 5 runs. Importantly, ICP is competitive with finetuning across different model sizes and tasks. This is particularly promising, given that the probes have up to 6 orders of magnitude less trainable parameters compared to their corresponding finetuned models, and therefore are faster to train and need less memory. Table 2: Comparing F1 Macro (%) of attentional probe (ATT.) to logistic regression probe (REG.) suggests that adding complexity to the probe does not improve the performance on the classification tasks." }, { "figure_ref": [], "heading": "Logistic Regression vs. Attentional Probe", "publication_ref": [], "table_ref": [], "text": "In this experiment we investigate the effect of the probes' complexity on the performance. To this end, we compare the performance of the logistic regression (REG.) probe with the attentional probe (ATT.) that has more trainable parameters. Tab. 2 shows the results on FLAN-XXL model and Tab. 5\n6 To reduce the clutter in the figure, we only visualize the maximum F1 score of decoding and calibrated decoding methods.\nincludes the results on all FLAN-T5 models. We find that logistic regression probes are as effective as attentional probes, while using less compute." 
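For concreteness, below is a minimal sketch of the two probe heads compared above (Eqs. 1-3), operating on last-layer states of a frozen FLAN-T5 encoder; the checkpoint name, the prompt format, and the added softmax over attention scores are illustrative assumptions rather than the exact implementation:

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
encoder = T5EncoderModel.from_pretrained("google/flan-t5-base").eval()

@torch.no_grad()
def contextualized_states(instruction: str, text: str) -> torch.Tensor:
    # Contextualize the input with the instruction; keep the encoder frozen.
    batch = tokenizer(f"{instruction}\n{text}", return_tensors="pt", truncation=True)
    return encoder(**batch).last_hidden_state[0]            # R with shape (N, d)

class RegressionProbe(nn.Module):
    """Eq. (1): a linear layer over the summed token representations."""
    def __init__(self, d: int, num_labels: int):
        super().__init__()
        self.out = nn.Linear(d, num_labels)
    def forward(self, R: torch.Tensor) -> torch.Tensor:
        return self.out(R.sum(dim=0))                        # logits over M labels

class AttentionalProbe(nn.Module):
    """Eqs. (2)-(3): a single attention head keyed on the instruction token r_0."""
    def __init__(self, d: int, num_labels: int):
        super().__init__()
        self.K = nn.Linear(d, d, bias=False)
        self.Q = nn.Linear(d, d, bias=False)
        self.out = nn.Linear(d, num_labels)
    def forward(self, R: torch.Tensor) -> torch.Tensor:
        # alpha_i = (K r_i)^T (Q r_0); a softmax over the scores is assumed here.
        alpha = torch.softmax(self.K(R) @ self.Q(R[0]), dim=0)
        return self.out((alpha.unsqueeze(1) * R).sum(dim=0))
```

Both heads are trained with cross-entropy on the gold label index, with the encoder parameters left untouched.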
}, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "With the current progress, and growing accessibility, LLMs are increasingly used to solve a wide range of classification tasks. Often in impromptu and low-data-budget scenarios. While in-context learning offers an efficient option to deploy LLMs on new tasks, it is unstable: Our results confirm that instructions formulation can significantly affect results. We propose in-context probing as a systematic approach to reduce instability. Our probing method is extremely simple and only requires access to the output representations of LLM's last layer.\nReal-world classification tasks often rely only on a few hundred annotated examples between training and validation. Our experiments suggest that even in such settings, in-context probing produces better classifiers on top of smaller models. Furthermore, probing achieves competitive results to ICL in larger models, while being significantly more robust to variations in the instructions.\nWe hypothesize that probing may be useful also in the development of some of the larger models, to estimate headroom and sensitivity to instruction phrasings in classification tasks. We leave this for future work." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose in-context probing as an alternative to in-context learning. We train light-weight classifiers on top of contextualized representations extracted from FLAN-T5 encoders on 3 sentence classification tasks. Our experiments suggest that incontext probing is significantly more stable to variations of instructions compared to ICL and calibrated ICL. With as few as a hundred examples, probing achieves competitive results to ICL on large models, while outperforming ICL on smaller ones." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Decoder-Only Models. All experiments are done on FLAN-T5 family of encoder-decoder models. In-context probing can be applied to decoderonly models by probing the representations extracted from the last decoder layer. Further ex-periments are needed to analyse the effectiveness of in-context probing of decoder-only models.\nProbing Larger Language Models. It is not obvious how the results in this paper transfer to models with trillions of parameters. Since access to such models are limited through APIs, we do not provide probing experiments on larger models." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "We do not believe the approach presented here further amplifies biases that are already present in the datasets and large language models. Therefore, we foresee no ethical concerns in this work. " }, { "figure_ref": [], "heading": "B Dataset Statistics", "publication_ref": [ "b25", "b19" ], "table_ref": [], "text": "We follow the choice of Min et al. (2022) and select 3 diverse datasets. We use the Huggingface version of the data (Lhoest et al., 2021). We use these datasets to train and evaluate the performance of models on classification tasks, which is in line with the intended use of the data. See Tab. 4 for dataset statistics." }, { "figure_ref": [], "heading": "C Experimental Details", "publication_ref": [ "b27", "b16" ], "table_ref": [], "text": "Logistic Regression Probe. We use scikit-learn (Pedregosa et al., 2011) implementation of logistic regression. The tolerance for stopping criteria is set to 1e -4 . 
We further adjust the class weights by their frequency in the training data, i.e., we set the weight parameter to \"balanced\" in scikit-learn.\nAttentional Probe. We implement a one layer attention with one attention head. We apply a grid search to find the best learning rate, which is set to 5e -6 for FLAN-XXL, FLAN-XL, and FLAN-LARGE models, and 5e -5 for FLAN-BASE and FLAN-SMALL models. We use Adam optimizer (Kingma and Ba, 2015). " }, { "figure_ref": [], "heading": "D Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Sascha Rothe, Jannis Bulian, Christian Buck, and Nicola De Cao for technical support and fruitful discussions throughout the project. We further thank Fernando Pereira, Wojciech Gajewski, Michelle Chen Huebscher and Lierni Sestorain Saralegui for providing feedback on a preliminary draft of the project." }, { "figure_ref": [], "heading": "A Task Instructions", "publication_ref": [], "table_ref": [], "text": "In Tab. 3, we document the instruction templates that are used throughout the paper. The ordering of the instructions are arbitrary. " } ]
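A minimal sketch of the logistic-regression probe configuration from the experimental details above (scikit-learn, tolerance 1e-4, balanced class weights); the feature files and the max_iter value are placeholders for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical files: pooled encoder states (one row per example) and label indices.
train_X, train_y = np.load("train_features.npy"), np.load("train_labels.npy")
test_X = np.load("test_features.npy")

probe = LogisticRegression(tol=1e-4, class_weight="balanced", max_iter=1000)
probe.fit(train_X, train_y)
predictions = probe.predict(test_X)
```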
Large language models are able to learn new tasks in context, where they are provided with instructions and a few annotated examples. However, the effectiveness of in-context learning is dependent on the provided context, and the performance on a downstream task can vary considerably, depending on the instruction. Importantly, such dependency on the context can surface in unpredictable ways, e.g., a seemingly more informative instruction might lead to a worse performance. In this paper, we propose an alternative approach, which we term In-Context Probing (ICP). Similar to in-context learning, we contextualize the representation of the input with an instruction, but instead of decoding the output prediction, we probe the contextualized representation to predict the label. Through a series of experiments on a diverse set of classification tasks, we show that in-context probing is significantly more robust to changes in instructions. We further show that ICP performs competitive or superior to finetuning and can be particularly helpful to build classifiers on top of smaller models, with less than a hundred training examples.
In-Context Probing: Toward Building Robust Classifiers via Probing Large Language Models
[ { "figure_caption": "Figure 1 :1Figure 1: Comparing the robustness of in-context learning and in-context probing to the change in instructions. The x-axis shows different instructions formulations, and the y-axis shows F1 Macro score. Traces visualize the mean and standard deviation of the performance with 5 different seeds for sampling the training examples (or demonstrations for ICL). ICP performance is significantly more robust to variations in the instructions compared to ICL and Calibrated ICL in CLIMATE and TWEETEVAL tasks.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The figure demonstrates and compares in-context probing and in-context learning workflows. The example is from the medical question pairs dataset (McCreery et al., 2020). In-context probing receives contextualized representations of instruction and the input, and directly predicts the classification label.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Comparing the robustness of the three approaches on TWEETEVAL task across different model sizes. We observe that the performance of ICP is close or better than ICL, while being significantly more robust to the change of instructions.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Comparing the performance of in-context learning and in-context probing with respect to number of training examples. Traces show the mean and standard deviation of the performance with 5 different seeds for the TWEETEVAL task. In all model sizes, training probes on only 40 examples performs on par or better than ICL and calibrated ICL.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Comparing the sample efficiency of ICP to probing FLAN-XXL model. While both are generally efficient, ICP is significantly more sample efficient in MQP task.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Comparing the robustness of different approaches to the change of instructions across models and tasks. Across all of the tasks and model sizes, in-context probing offers competitive performance to ICL and is more robust to instruction change.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Measuring samples efficiency of ICP in different tasks and models. In all models and tasks, ICP matches the performance of ICL and calibrated ICL after only training on a hundred training examples.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Comparing the performance of ICL, Calibrated ICL, and In-Context Probing (ICP). Error bars show the variance when using different instructions. ICP consistently performs better or on par with ICL and with lower variance across different model sizes and tasks. 
For smaller-sized models, ICP significantly outperforms both ICL and Calibrated ICL in CLIMATE and TWEETEVAL tasks.", "figure_data": "ICLICL-CalibratedICPRandommodel=flan-xxlmodel=flan-xlmodel=flan-largemodel=flan-basemodel=flan-small80F1 Macro (%)40 60200mqpclimate tweetevalmqpclimate tweetevalmqpclimate tweetevalmqpclimate tweetevalmqpclimate tweetevalTaskFigure 4: In this figure, traces show the mean performanceacross 5 different samples of training data (ordemonstration examples) and shades demonstratethe standard deviation of performance with re-spect to the selected samples. In general, we seethat probes' performance is significantly more ro-bust to the change of the instructions. Interest-ingly, ICL performance decreases to a close tochance accuracy with minor changes in the in-struction on CLIMATE task. For example, one ofthe instructions that leads to a low accuracy is:Based on the evidence, can we concludethat the claim is definitely supported?Answer with only one of the followingoptions: yes, supported | no, refuted| not enough info, which is not substantiallydifferent to the instruction that leads to the bestresult: Given the evidence, is the claimdefinitely supported, refuted, or thereis not enough info? Answer with onlyone of the following options: supports| refutes | not enough info.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Instruction used for each task. All instructions aim to describe the task. However, one can see different levels of expressiveness used in the instructions.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Dataset Statistics", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparing the performance of in-context probing (ICP) to finetuning (FT) with 200 training examples. The standard deviation (std) shows the performance change with respect to change of instructions used for finetuning or training the probe. While probes have significantly less trainable parameters, they perform competitive (or superior) to finetuning, with comparable standard deviation across different instructions.", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" } ]
Afra Amini; Massimiliano Ciaramita
[ { "authors": "Guillaume Alain; Yoshua Bengio", "journal": "", "ref_id": "b0", "title": "Understanding intermediate layers using linear classifier probes", "year": "2017" }, { "authors": "Ankur Bapna; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Simple, scalable adaptation for neural machine translation", "year": "2019" }, { "authors": "Francesco Barbieri; Jose Camacho-Collados; Luis Espinosa-Anke; Leonardo Neves", "journal": "", "ref_id": "b2", "title": "TweetEval:Unified Benchmark and Comparative Evaluation for Tweet Classification", "year": "2020" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b4", "title": "", "year": "" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Yunxuan Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Alex Chowdhery; Marie Castro-Ros; Kevin Pellat; Dasha Robinson; Sharan Valter; Gaurav Narang; Adams Mishra; Vincent Yu; Yanping Zhao; Andrew Huang; Hongkun Dai; Slav Yu; Ed H Petrov; Jeff Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b5", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Thomas Diggelmann; Jordan Boyd-Graber; Jannis Bulian; Massimiliano Ciaramita; Markus Leippold", "journal": "", "ref_id": "b7", "title": "CLIMATE-FEVER: A dataset for verification of real-world climate claims", "year": "2021" }, { "authors": "Tianyu Gao; Adam Fisch; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Making pre-trained language models better few-shot learners", "year": "2021" }, { "authors": "Junxian He; Chunting Zhou; Xuezhe Ma; Taylor Berg-Kirkpatrick; Graham Neubig", "journal": "", "ref_id": "b9", "title": "Towards a unified view of parameter-efficient transfer learning", "year": "2022" }, { "authors": "Ari Holtzman; Peter West; Vered Shwartz; Yejin Choi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Surface form competition: Why the highest probability answer isn't always right", "year": "2021" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "", "ref_id": "b11", "title": "Parameter-efficient transfer learning for NLP", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b12", "title": "", "year": "" }, { "authors": "Jeremy Howard; Sebastian Ruder", "journal": "Association for Computational Linguistics", 
"ref_id": "b13", "title": "Universal language model fine-tuning for text classification", "year": "2018" }, { "authors": "J Edward; Phillip Hu; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b14", "title": "LoRA: Low-rank adaptation of large language models", "year": "2022" }, { "authors": "Dieuwke Hupkes; Sara Veldhoen; Willem Zuidema", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b15", "title": "Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure", "year": "2018" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b16", "title": "Adam: A method for stochastic optimization", "year": "2015-05-07" }, { "authors": "Le Teven; Alexander Scao; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "How many data points is a prompt worth", "year": "2021" }, { "authors": "Brian Lester; Rami Al-Rfou; Noah Constant", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "The power of scale for parameter-efficient prompt tuning", "year": "2021" }, { "authors": "Quentin Lhoest; Albert Villanova Del Moral; Yacine Jernite; Abhishek Thakur; Suraj Patrick Von Platen; Julien Patil; Mariama Chaumond; Julien Drame; Lewis Plu; Joe Tunstall; Mario Davison; Gunjan Šaško; Bhavitvya Chhablani; Simon Malik; Brandeis; Le Teven; Victor Scao; Canwen Sanh; Nicolas Xu; Angelina Patry; Philipp Mcmillan-Major; Sylvain Schmid; Clément Gugger; Théo Delangue; Lysandre Matussière; Stas Debut; Pierric Bekman; Thibault Cistac; Victor Goehringer; François Mustar; Alexander Lagunas; Thomas Rush; Wolf", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Datasets: A community library for natural language processing", "year": "2021" }, { "authors": "Lisa Xiang; Percy Li; Liang", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Prefix-tuning: Optimizing continuous prompts for generation", "year": "2021" }, { "authors": "Pengfei Liu; Weizhe Yuan; Jinlan Fu; Zhengbao Jiang; Hiroaki Hayashi; Graham Neubig", "journal": "ACM Comput. 
Surv", "ref_id": "b21", "title": "Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing", "year": "2023" }, { "authors": "Yao Lu; Max Bartolo; Alastair Moore; Sebastian Riedel; Pontus Stenetorp", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Fantastically ordered prompts and where to find them: Overcoming fewshot prompt order sensitivity", "year": "2022" }, { "authors": "Rabeeh Karimi Mahabadi; James Henderson; Sebastian Ruder", "journal": "", "ref_id": "b23", "title": "Compacter: Efficient low-rank hypercomplex adapter layers", "year": "2021" }, { "authors": "Clara H Mccreery; Namit Katariya; Anitha Kannan; Manish Chablani; Xavier Amatriain", "journal": "Association for Computing Machinery", "ref_id": "b24", "title": "Effective transfer learning for identifying similar questions: Matching user questions to covid-19 faqs", "year": "2020" }, { "authors": "Sewon Min; Xinxi Lyu; Ari Holtzman; Mikel Artetxe; Mike Lewis; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Rethinking the role of demonstrations: What makes in-context learning work?", "year": "2022" }, { "authors": "Marius Mosbach; Tiago Pimentel; Shauli Ravfogel; Dietrich Klakow; Yanai Elazar", "journal": "", "ref_id": "b26", "title": "Few-shot fine-tuning vs. in-context learning: A fair comparison and evaluation", "year": "2023" }, { "authors": "Fabian Pedregosa; Gaël Varoquaux; Alexandre Gramfort; Vincent Michel; Bertrand Thirion; Olivier Grisel; Mathieu Blondel; Peter Prettenhofer; Ron Weiss; Vincent Dubourg; Jake Vanderplas; Alexandre Passos; David Cournapeau; Matthieu Brucher; Matthieu Perrot; Édouard Duchesnay", "journal": "Journal of Machine Learning Research", "ref_id": "b27", "title": "Scikit-learn: Machine learning in python", "year": "2011" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b28", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "", "ref_id": "b29", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Hakan Sylvestre-Alvise Rebuffi; Andrea Bilen; Vedaldi", "journal": "", "ref_id": "b30", "title": "Learning multiple visual domains with residual adapters", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b31", "title": "", "year": "" }, { "authors": "Alex Wang; Yada Pruksachatkun; Nikita Nangia; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel Bowman", "journal": "", "ref_id": "b32", "title": "Superglue: A stickier benchmark for general-purpose language understanding systems", "year": "2019" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b33", "title": "", "year": "" }, { "authors": "Albert Webson; Ellie Pavlick", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Do promptbased models really understand the meaning of their prompts", "year": "2022" }, { "authors": "Jason Wei; Maarten Bosma; Vincent Zhao; Kelvin Guu; Adams Wei Yu; Brian Lester; Nan Du; Andrew M Dai; Quoc V Le", "journal": "", "ref_id": "b35", "title": "Finetuned language models are zero-shot learners", "year": "2022" }, { "authors": "Zihao Zhao; Eric Wallace; Shi 
Feng; Dan Klein; Sameer Singh", "journal": "PMLR", "ref_id": "b36", "title": "Calibrate use: Improving few-shot performance of language models", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 321.79, 418.34, 203.35, 33.71 ], "formula_id": "formula_0", "formula_text": "f_{\theta}(R) = \operatorname{argmax}_{m} \{ W_m^{T} ( \sum_{i=1}^{N} r_i ) + b_m \} \quad (1)" }, { "formula_coordinates": [ 4, 316.61, 555.43, 208.53, 24.58 ], "formula_id": "formula_1", "formula_text": "f_{\theta}(R) = \operatorname{argmax}_{m} \{ W_m^{T} ( \sum_{i} \alpha_i r_i ) + b_m \} \quad (2)" }, { "formula_coordinates": [ 4, 372.18, 612.1, 152.96, 13.13 ], "formula_id": "formula_2", "formula_text": "\alpha_i = (K r_i)^{T} (Q r_0) \quad (3)" }, { "formula_coordinates": [ 5, 71.14, 566.41, 219.79, 21.12 ], "formula_id": "formula_3", "formula_text": "FLAN-SMALL(80M), FLAN-BASE(250M), FLAN-LARGE(780M), FLAN-XL(3B)," } ]
10.18653/v1/2022.naacl-industry.24
2023-11-05
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b13", "b45", "b12", "b23", "b37", "b36", "b49", "b21", "b52", "b6", "b35", "b14", "b48", "b33", "b15" ], "table_ref": [], "text": "Large language models (LLMs) such as GPT-3 can answer open-domain questions without access to external knowledge or any task-specific training examples. However, LLMs are prone to hallucinate (Bang et al., 2023), while using a convincing and confident tone. This may cause significant harm as people increasingly accept LLMs as a knowledge source (Goddard, 2023;Weiser, 2023). An entity linker is used to link entities in the user query to their unique ID in Wikidata; e.g. \"A Bronx Tale\" is linked to entity ID \"Q1130705\". The query and entity linker outputs are fed to the WikiSP semantic parser to produce a modified version of SPARQL, where property IDs (e.g. \"P915\") are replaced by their unique string identifiers (e.g. \"film-ing_location\"). If applying the query to Wikidata fails to return a result, we default to GPT-3, labeling the result as a GPT-3 guess. Returned answers are presented in the context of the query, so the user can tell if the answer is acceptable; if not, we also show the guess from GPT-3. Here WikiSP mistakenly uses \"filming_location\" instead of \"narrative_location\"; the user detects the mistake, thumbs down the answer, and the GPT-3 answer is provided. In contrast, traditional knowledge base question answering (KBQA) is grounded with a given knowledge base. Semantic parsing (SP) has been widely used to tackle this challenging task, where the questions are first parsed into a logical form and then executed to retrieve answers from the knowledge base. It has better interpretability than GPT-3 and other information-retrieval-based approaches (Dong et al., 2015;Miller et al., 2016;Sun et al., 2018Sun et al., , 2019) ) where answers are predicted directly.\nTo handle large knowledge bases, previous SPbased approaches tend to use a multi-stage pipeline of sub-tasks, starting with extracting the relevant subgraph based on entities detected in the questions (Yih et al., 2015;Luo et al., 2018). Such an approach struggles with questions that have a large search space and fails to understand questions that refer to information missing in the knowledge graph. Having to retrieve the relevant subgraphs to create the logical form conflates query resolution with semantic parsing, rendering classical query optimization inapplicable.\nEnd-to-end seq2seq translation, on the other hand, has mainly been used on schemas of relatively small relational databases (Yu et al., 2018;Xu et al., 2020a,b) and web APIs (Campagna et al., 2017;Su et al., 2017). To handle large knowledge graphs, recent work proposed retrieving (1) information on linked entities, (2) exemplary logical forms relevant to the query (Gu et al., 2021;Ye et al., 2022), and (3) schemas as context to semantic parsing (Shu et al., 2022). Others use induction or iterative methods to generate complex logical forms (Cao et al., 2022b;Gu and Su, 2022)." 
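To make the pipeline described above concrete (entity linking, then a modified SPARQL with readable property names, executed against Wikidata), one simple way to run such a query is to map the property names back to Wikidata PIDs before sending it to the public endpoint; the name-to-PID table below is a tiny illustrative subset, and the exact query shape is an assumption, not the parser's actual output:

```python
import requests

WDQS_ENDPOINT = "https://query.wikidata.org/sparql"
PID_BY_NAME = {"narrative_location": "P840", "filming_location": "P915"}  # illustrative subset

def execute_modified_sparql(query_with_names: str):
    # Replace readable property names with PIDs, then query the Wikidata endpoint.
    query = query_with_names
    for name, pid in PID_BY_NAME.items():
        query = query.replace(f"wdt:{name}", f"wdt:{pid}")
    response = requests.get(WDQS_ENDPOINT,
                            params={"query": query, "format": "json"}, timeout=60)
    bindings = response.json()["results"]["bindings"]
    return bindings or None   # an empty result lets the caller fall back to GPT-3

# "where does A Bronx Tale take place?", with the entity linked to Q1130705:
answer = execute_modified_sparql(
    "SELECT ?placeLabel WHERE { wd:Q1130705 wdt:narrative_location ?place . "
    "?place rdfs:label ?placeLabel . FILTER(LANG(?placeLabel) = 'en') }"
)
```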
}, { "figure_ref": [], "heading": "Few-Shot Seq2Seq Semantic Parsing", "publication_ref": [ "b16", "b27", "b20", "b0", "b25", "b1", "b40", "b39" ], "table_ref": [], "text": "This paper investigates how we can leverage large language models (LLMs) to create seq2seq neural semantic parsers for large knowledge bases such as Wikidata.\nPretrained with the internet corpora, LLMs are already familiar with the syntax of formal query languages such as SQL (Hu et al., 2022;Poesia et al., 2022;Li et al., 2023;An et al., 2023;Nan et al., 2023;Arora et al., 2023). When given simple SQL schemas, they can perform zero-shot semantic parsing of simple natural language queries into formal queries. Unlike Freebase, the KB used in most of the KBQA semantic parsing research, Wikidata does not have a pre-defined schema, making it a much harder problem. It has 150K domains, 3K applicable properties, and 107M entities, each of the properties and entities are uniquely identified with PIDs and QIDs, respectively. While zero-shot LLMs can generate SPARQL queries for the easiest and most common questions, they do not know all the PIDs and QIDs, and nor is it possible to include them in a prompt. This paper presents WikiSP, a few-shot sequence-to-sequence semantic parser for Wikidata that translates a user query, along with results from an entity linker, directly into SPARQL queries. To handle the 100M+ entities in Wikidata, we train the parser to use either the entity linker results or a mention in the query; to handle the 150K domains and 3K applicable properties, we modify SPARQL to use domain and property names instead of their unique QIDs and PIDs, respectively. We fine-tune a LLaMA (Touvron et al., 2023) with a few-shot training set along with the instructions used to finetune Alpaca (Taori et al., 2023)." }, { "figure_ref": [], "heading": "A New Dataset: WikiWebQuestions", "publication_ref": [ "b5", "b41", "b30", "b50" ], "table_ref": [], "text": "Most of the widely-used high-quality benchmarks for KBQA are based on Freebase (Bollacker et al., 2008) which has been shut down since 2015. With outdated knowledge, it is hard to compare the results with modern LLMs such as GPT-3, since answers have changed over time for most of the questions. Wikidata, despite being the largest and most popular knowledge base nowadays, has very few datasets annotated with SPARQL queries; they are either extremely small (Usbeck et al., 2017) or synthetic (Saha et al., 2018).\nWe migrated the popular WebQuestionsSP (Yih et al., 2016) benchmark from Freebase to Wikidata, with updated SPARQL and up-to-date answers from the much larger Wikidata." }, { "figure_ref": [ "fig_0" ], "heading": "Complementing Large Language Models", "publication_ref": [], "table_ref": [], "text": "Trained on Wikipedia and all of the internet, LLMs can answer many questions directly. Unfortunately, the user cannot tell if the answers are correct, thus requiring them to fact-check every answer.\nUnlike humans, GPT-3 always sounds definitive even when they are wrong by providing specific and plausible facts. For example, on the question \"what is the biggest country in Europe by population?\", GPT-3 answers \"Germany\", when the answer is \"Russia\". Or, on the question, \"where does the name Melbourne come from?\" GPT-3 answers \"Melbourne comes from the Latin word 'melburnum' meaning 'blackburn' or 'blackbird'.\", but in reality, Melbourne is named after William Lamb, 2nd Viscount Melbourne. 
It is not possible to tell when GPT-3's answers are wrong, and every answer needs to be fact-checked.\nSemantic parsers can be used to complement LLMs as they are interpretable; their results are grounded in Wikidata, which we assume to be correct. It is possible for semantic parsers to misunderstand a query, but by providing the answer in the context of the query, the user can spot the error.\nWe propose getting the best of both worlds by answering the question with WikiSP if possible. Otherwise, we report GPT-3's guesses by prefacing it with: \"GPT-3 guesses that\" (Figure 1). In this way, the user can have full confidence with the answers from the former, while also benefiting from the latter. It is easier for users to fact-check an answer than trying to find the answer." }, { "figure_ref": [ "fig_1" ], "heading": "Contributions", "publication_ref": [], "table_ref": [], "text": "WikiWebQuestions, a high-quality semantic parsing dataset for Wikidata, migrated from the popular WebQuestions dataset for Freebase.\nWikiSP, a few-shot Seq2Seq semantic parser by fine-tuning LLaMA with a few shot training set. We improve the learnability of SPARQL queries by replacing the IDs of properties and domains with their unique names; we tolerate errors in entity linking by accepting mentions in the queries as entities. We establish a first, strong baseline of 76% and 65% answer accuracy for the dev set and test set of our new WikiWebQuestions benchmark, respectively. We also demonstrate that our method surpasses the state of the art for QALD-7 wikidata set by 3.6% in F1 score.\nWe improve GPT-3's trustworthiness by first returning interpretable results from semantic parser and backing it up with GPT-3 guesses. WikiSP can provide verifiable results for WikiWebQuestions 76% of the time and improves the guesses by GPT-3, resulting in errors only 4% of the time (Figure 2)." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "KBQA", "publication_ref": [ "b49", "b50", "b21", "b17", "b10", "b48", "b15", "b33", "b12", "b23", "b37", "b36", "b22", "b31", "b43", "b42", "b51", "b50" ], "table_ref": [], "text": "The KBQA task aims to make large knowledge bases accessible by natural language. One common approach is semantic parsing where a natural language query is translated into a formal logical form, which is then executed to retrieve an answer from the knowledge base. To handle large KBs, one method is to formulate SP as a multi-staged search problem by retrieving entities and expanding the graphs according to the relationships between their properties and the query (Yih et al., 2015(Yih et al., , 2016;;Luo et al., 2018). Lan and Jiang (2020) add constraints to the staged query graph generation method. Another popular method is to use seq2seq models obtained by fine-tuning pretrained language models. Das et al. (2021) first find other queries that contain semantically similar subparts, and construct a new logical form by combining the similar subparts of the found queries. Ye et al. (2022) search over the KB based on predefined rules to derive a set of candidate logical forms, rank them, and generate the final logical form. Cao et al. (2022b) first generate a \"sketch\" program and then fill in its arguments. Gu and Su (2022) use dynamic program induction to generate query structures. Based on a user query, Shu et al. (2022) retrieve entities, example logical forms, and related schema. 
Unlike FreeBase, Wikidata does not have a fixed schema.\nAnother approach to KBQA is based on graph retrieval (Dong et al., 2015;Miller et al., 2016;Sun et al., 2018Sun et al., , 2019;;Mavromatis and Karypis, 2022;Sen et al., 2021;Vivona and Hassani, 2019;Verga et al., 2021). It predicts the answers directly within the subgraph extracted based on the topic entity in the question. Yu et al. (2023) combine semantic parsing with retrieval and achieve the state-ofthe-art on the WebQuestionsSP dataset (Yih et al., 2016). However, retrieval-based methods cannot handle entire categories of questions, such as questions with no available answer and questions like \"the tallest mountain\" where no entities are mentioned by name. They have poor interpretability and do not support query optimization." }, { "figure_ref": [], "heading": "KBQA Benchmarks", "publication_ref": [ "b4", "b50", "b38", "b29", "b29", "b7", "b50", "b18", "b28", "b26", "b16", "b27", "b20", "b0", "b25", "b1" ], "table_ref": [], "text": "Most of the early KBQA benchmarks are based on Freebase (Berant et al., 2013;Yih et al., 2016;Talmor and Berant, 2018). Recently, new benchmarks have been created for Wikidata (Cao et al., 2022a;Saha et al., 2019). However, these benchmarks are created using rule-based synthesis or paraphrases, which are easier for semantic parsers. CSQA collects human-written questions for single triples and constructs complex questions using fixed rules with very limited natural language variety (Saha et al., 2019). KQA Pro first synthesizes queries with canonical natural language and then crowdsources human paraphrases (Cao et al., 2022a). Campagna et al. (2019) show that a model can achieve significantly higher accuracy over paraphrased data compared to real-world data even for untrained queries. Thus, we base our WikiWebQuestions dataset on WebQuestionsSP (Yih et al., 2016), where data are collected from real-world users using the Google Suggest API. et al. (2021) show the promise of few-shot prompting LLMs for semantic parsing. They use constrained decoding to enforce the syntax of the formal language, and achieve comparable results with a smaller fine-tuned BART model (Lewis et al., 2020) on datasets with small database schemas. Rubin et al. (2022) fine-tune a small retriever to obtain the most relevant few-shot examples to use for each input. Niu et al. (2023) use a few-shot prompted Codex model to break down the natural language input to make the task easier for a smaller semantic parser. LLMs have also been applied to semantic parsing on relational databases (Hu et al., 2022;Poesia et al., 2022;Li et al., 2023;An et al., 2023;Nan et al., 2023;Arora et al., 2023). The schemas used in these projects are very small when compared to Wikidata." }, { "figure_ref": [], "heading": "LLMs for Semantic Parsing", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Shin", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Entity Linking", "publication_ref": [ "b2", "b19" ], "table_ref": [], "text": "Entity linking involves finding the named entities in a query, and linking them to the corresponding entities in the knowledge graph so that the query can be executed using the proper entities as reference points. The current state-of-the-art entity linking model on the WebQuestionsSP dataset is ReFinED (Ayoola et al., 2022). 
They use a bidirectional transformer on the query to predict the most likely mentions of named entities within a query, and then combine that information with embeddings computed over every entity in the knowledge base to predict which entity the mention is most likely to be referring to. Prior to ReFinED, the state-of-the-art was ELQ (Li et al., 2020). They similarly generate embeddings for each entity in the knowledge base, and then use the predicted mentions of entities combined with these predicted embeddings to generate likely entities." }, { "figure_ref": [], "heading": "Semantic Parsing for Wikidata", "publication_ref": [], "table_ref": [], "text": "Wikidata is the largest public knowledge base with over 12 billion facts represented by subjectpredicate-object triples using 100+ million entities and 10,000 properties. 3,000 of the properties are useful for answering natural language questions, whereas the rest are used to link data in Wikidata with external library catalogs and database IDs.\nEntities and properties are given unique identifiers, QIDs and PIDs, respectively. For example, the fact that Joe Biden is the president of the US can be represented as a triple (Q6279, P39, Q11696), where P39 is the PID for property position held, Q6279 and Q11696 are QIDs for Joe Biden and the president of the United States, respectively." }, { "figure_ref": [], "heading": "Query Format", "publication_ref": [], "table_ref": [], "text": "Unlike relational databases and Freebase, Wikidata has no predefined domains or types. Any entity can have an arbitrary set of properties. However, even though Wikidata is property-based, all named entities have one or more instance of properties to some domain entity; domain entities are organized into a hierarchy with the subclass of property.\nNote that the names of domain entities and properties are unique. Non-domain entities, on the other hand, can be ambiguous. For example, \"Lincoln\" can refer to the president, a car brand, a sparrow, an aircraft, and many different cities.\nWe posit that it is impossible for LLMs to memorize the QIDs and PIDs for domains and properties. We modify the format of SPARQL queries to use the more mnemonic property name, instead of its PID. Similarly, we use entity names for domains. For example, the original SPARQL for the query \"What car models does GM make?\" is SELECT DISTINCT ?x WHERE { ?x wdt:P31/wdt:P279* wd:Q3231690. ?x wdt:P176 wd:Q81965. } This says that we are seeking x, where x is transitively either an instance of (wdt:P31) or a subclass of (wdt:P279) of an automobile model (wd:Q3231690), and x has General Motors (wd:Q81965) as the manufacturer (wdt:P176). Note wdt is the prefix for Wikidata property, and wd is for Wikidata entity.\nWith our modification, the query becomes:\nSELECT DISTINCT ?x WHERE { ?x wdt:instance_of/wdt:subclass_of* wd:automobile_model. ?x wdt:manufacturer wd:Q81965. }\nFor non-domain entity QIDs, we also accept a string in lieu of a QID in case of entity linking errors. At inference time, we use simple heuristics to resolve the string to a QID before applying the query. For example, \"wd:Q81965\" in the query may be replaced with \"wd:GM\". See Section 3.2.2 for more details.\nNormally, we refrain from changing standard query notations since LLMs have been pretrained on them. However, we posit that learning this new syntax is much easier than learning the PIDs and QIDs. Our experimentation with few-shot prompting suggests that LLMs can easily adjust to this format." 
}, { "figure_ref": [], "heading": "Entity Linking", "publication_ref": [], "table_ref": [], "text": "Linking entities for WikiWebQuestions is particularly difficult. First, since the dataset is collected from real-world questions without prompting the users for more information, users tend to refer to their entities of interest without using their full names. Second, the questions are generally short with very limited context, making it harder to disambiguate among entities with similar names. Lastly, many QIDs in Wikidata are used to represent terms not generally known as \"named entities\". For example, domain entities are often ignored by entity linker models, as in \"What is the biggest country in Europe by population?\", both \"country\" (Q6256) and \"Europe\" (Q46) are required to construct the correct SPARQL, but entity linkers only provide \"Europe\" and ignore \"country\"." }, { "figure_ref": [], "heading": "Semantic Parsing with Entity Linking", "publication_ref": [ "b2" ], "table_ref": [], "text": "To handle ambiguous entities, we use an entity linker to first find the domain names and QIDs of the entities mentioned in the text. We train a semantic parser that accepts users' input along with the results produced by the entity linker.\nFormally, given a user input T , and a set of entity linker results ⟨e, q⟩, where e is the name (default label) Wikidata gives to an entity and q is its QID, the semantic parser produces the semantic parse of T in our modified SPARQL format.\nFor the example above, the SOTA ReFinED entity linker (Ayoola et al., 2022) returns {⟨General Motors, Q81965⟩}. Unfortunately, it misses the entity automobile model (Q3231690), a term not usually considered to be an entity." }, { "figure_ref": [], "heading": "Recovering from Entity Linker Errors", "publication_ref": [], "table_ref": [], "text": "We want our semantic parser to be able to recover from mistakes by an entity linker. That is, the semantic parser should use entity linking when it is helpful, but it should still try to predict the right logical form when the linker fails.\nThe semantic parser is trained to accept, along with the user query, an optional set of potentially useful QIDs from the entity linker. We include samples where some of the supplied linked entities are not used in the gold answer, as well as samples where there are missing linked entities. For the latter, we use mentions in the original query in lieu of the QIDs. At inference time, we use the mentions to look up the QIDs in Wikidata. If multiple matches exist, the most popular entity is returned. An example is shown in Appendix A.\nWith the above example where the entity linker misses \"automobile model\", the semantic parser is likely to predict \"car model\" by copying from the user query. We search \"automobile model\" among aliases in domains to find the correct QID. This design allows the model to potentially recover from entity-linking failures." }, { "figure_ref": [], "heading": "WikiWebQuestions (WWQ) Dataset", "publication_ref": [ "b49", "b50" ], "table_ref": [], "text": "Despite being the most popular large knowledge base for a long time, existing benchmarks on Wikidata with labeled SPARQL queries are unfortunately either small or of low quality. On the other hand, benchmarks over the deprecated Freebase still dominate the KBQA research with betterquality data. For example, the WebQuestions (Yih et al., 2015) dataset was collected by using Google Search API instead of human paraphrasing or synthesis. 
As a result, it is much more natural and truly reflects the real-world questions users may ask. This dataset is later annotated with SPARQL over Freebase, named WebQuestionsSP (Yih et al., 2016). Examples with no legitimate SPARQL to retrieve answers from Freebase are dropped. In total, WebQuestionsSP consists of 3098 examples in the training set and 1639 in the test set.\nWe migrated WebQuestionsSP, the best collection of natural language questions over a general knowledge graph, from Freebase to Wikidata, with the help of an automatic tool we developed, based on Google's entity mapping2 and Wikidata's relation mapping3 . About 60% of the dataset was automatically converted. One of the authors of this paper, who did not participate in model tuning, manually converted those instances that failed to convert automatically." }, { "figure_ref": [], "heading": "Migrating WebQuestionsSP to Wikidata", "publication_ref": [], "table_ref": [], "text": "Here are the major decisions we made in migrating WebQuestionsSP dataset to Wikidata. While much bigger, Wikidata does not necessarily contain all the information available in Freebase. For example, it lacks countries' trade partners, hence we drop all such questions from the WebQuestionsSP dataset.\nIf multiple paths can lead to the correct answer, we choose the path that provides the most complete answers and has the best availability among entities in the same domain. For example, when asking for books written by an author X, we can either search for books whose author is X or find notable works of X that are books. While the latter is more efficient, the property notable works is not always available for all authors and it often does not provide a complete list. Thus, we annotate such examples using the former representation.\nWe also cleaned up the original dataset. The dataset contained questions like \"who does Ronaldinho play for now in 2011?\". We drop the appended year as it conflicts with \"now\" in the utterance, and it would refer to the live information in Wikidata.\nIn total, we dropped 9% of the examples from WebQuestionsSP and created a training, dev, and test set of 2431, 454, and 1431 samples, respectively. Given that Wikidata has 100 million entities and 3,000 useful properties for answering questions, the training data set is woefully inadequate and can be considered as a \"fewshot\" training set at best." }, { "figure_ref": [], "heading": "Implementation", "publication_ref": [], "table_ref": [], "text": "This section discusses the implementation details of the entity linker and the WikiSP semantic parser." }, { "figure_ref": [], "heading": "Entity Linking", "publication_ref": [ "b2", "b2" ], "table_ref": [], "text": "We use ReFinED (Ayoola et al., 2022) for entity linking, which is the current state of the art for WebQuestionsSP. As discussed before, Wikidata treats many common terms such as \"country\" as named entities and assigns them QIDs. To fine-tune ReFinED to learn such terms, we add the question and entity pairs from the training set of WikiWeb-Questions to the data used to train ReFinED's questions model.\nWe run 10 epochs of finetuning using the default hyperparameters suggested by Ayoola et al. (2022). For each identified entity, we provide the mention in the original utterance, the QID, as well as its domain in plain text. The information is appended to the utterance before being fed into the neural semantic parsing model." 
}, { "figure_ref": [], "heading": "The WikiSP Semantic Parser", "publication_ref": [ "b40", "b39", "b44" ], "table_ref": [], "text": "We prepare the training data with entities provided by fine-tuned ReFinED. Comparing with the gold entities, ReFinED provides extra entities in 215 cases, while missing at least one entity in 137 cases. When ReFinED failed to produce the correct entities, we replace the missing QIDs in the logical form with the corresponding mention of the entity in the question. During evaluation, if a mention of an entity is predicted by the model, we look up the QID using the Wikidata \"wbsearchentities\" API4 .\nWe fine-tune LLaMA with 7B parameters because it has been shown to perform well despite its relatively small size (Touvron et al., 2023). We include the Alpaca (Taori et al., 2023) instruction following data, which was derived using the selfinstruct (Wang et al., 2023) method, in our training data. The training data samples in WikiWebQuestion start with the following instruction: \"Given a Wikidata query with resolved entities, generate the corresponding SPARQL. Use property names instead of PIDs.\". We concatenate the resolved entities and the user utterance together as input. We up-sample the WikiWebQuestion fewshot set 5 times and train for 3 epochs using 2e-5 learning rate and 0.03 warmup ratio." }, { "figure_ref": [], "heading": "Executing Queries on Wikidata", "publication_ref": [], "table_ref": [], "text": "SPARQL queries are used to retrieve answers from the Wikidata SPARQL endpoint 5 . Since Wikidata EM F1 WikiSP (ours) 65.5 71.9 is actively being updated, the gold SPARQL can be easily re-executed to acquire up-to-date answers, allowing the benchmark to compare with forthcoming iterations of large language models." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we evaluate WikiSP on WikiWeb-Questions and demonstrate how it can be used to complement large language models such as GPT-3." }, { "figure_ref": [], "heading": "Semantic Parser Results", "publication_ref": [ "b51" ], "table_ref": [ "tab_0" ], "text": "We evaluate our model with two different answer accuracy metrics: (1) exact match (EM): the percentage of examples where the answers of the predicted SPARQL exactly match the gold answers, and (2) Macro F1 score (F1): the average F1 score for answers of each example. The evaluation results are shown in Table 1. Our approach achieves a 65.5% exact match accuracy and a 71.9% F1 score on the WWQ dataset.\nAs a reference, the current state-of-the-art result on the original WebQuestionsSP dataset for Freebase is 78.8% F1 (Yu et al., 2023). The result was obtained with a combination of semantic parsing and retrieval. The WikiWebQuestions dataset is slightly different, as discussed above. More significantly, unlike Freebase, Wikidata does not have a fixed schema and ours is an end-to-end, seq2seq semantic parser." }, { "figure_ref": [], "heading": "Ablation Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Entity Linking", "publication_ref": [], "table_ref": [], "text": "Our first ablation study evaluates the need for entity linking with ReFinED, by replacing it with simply using the LLM to detect entities as mentions. 
In this experiment, all entity IDs in the training data are replaced by their mentions; during inference, we map the predicted entities to their actual QIDs according to Section 3.2.2.\nThe results show that replacing the neural entity linker with just using mentions reduces the exact match by 9.1% and the F1 score by 9.3%. This suggests that entity linking is important." }, { "figure_ref": [], "heading": "Allowing Mentions as Entities", "publication_ref": [], "table_ref": [ "tab_1", "tab_1" ], "text": "Our logical form is designed to recover from entity linking errors by allowing entities be specified by a mention, as an alternative to a QID. Our ablation study on this feature tested two training strategies: ReFinED. The entity linker tuples are produced by fine-tuned ReFinED, which may be missing entities in the gold target. The data show that generating unseen QIDs is needed for missing entities.\nOracle. The entity linker tuples are exactly all the entities used in the gold. The model would only encounter missing QIDs at test time when ReFinED fails to generate all the necessary QIDs.\nThe answer accuracy of the model using entity linked tuples from ReFinED (\"No mentions, trained with ReFinED\" in Table 2) lags by 2.3% when compared against our best model. The model using Oracle (\"No mentions, trained with Oracle entities\" in Table 2) lags by 3.4%. These results indicate that allowing mentions is useful for recovering from entity linking errors." }, { "figure_ref": [], "heading": "Names vs. IDs for Properties & Domains", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Our logical form replaces PIDs with property names, and domain-entity QIDs with the domain names. Here we evaluate the effectiveness of this query format. We compare our approach with the original SPARQL where all properties and entities are represented with PIDs and QIDs. Our ablation study shows that our representation with property names and domain names improves the answer accuracy by 2.0% (Table 2). This shows that LLMs can adapt to changes in query notation with finetuning, and it is easier to learn names than remembering random IDs. If we did not allow mentions in the predicted logical form, the replacement of QIDs with their names is likely to be more significant." }, { "figure_ref": [ "fig_1" ], "heading": "Complementing GPT-3", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "LLMs like GPT-3 can answer many questions on general knowledge correctly; however, they may also hallucinate. WWQ is representative of popular questions, so we expect GPT-3 to perform well. We use text-davinci-002 with the temperature set to 0 to evaluate GPT-3's performance on WWQ.\nOn the dev set of WWQ, GPT-3 answers 66.4% of the questions correctly and provides incomplete answers to 26.5% of the questions. For example, when asked \"What does Obama have a degree in?\", GPT-3 correctly identifies President Obama's political science degree, but fails to mention his law degree. In total, GPT-3 gives wrong answers to 7.1% of the questions.\nFor this dev set, we can give definitive answers to 75.6% of the questions with WikiSP (Table 2). 
For the rest of the questions (24.4%), accounting for the overlap between the GPT-3 and our semantic parser's results, the percentages of guessing correctly, incompletely, and incorrectly are at 15.2%, 5.5%, and 3.7%, respectively (Figure 2).\nIn summary, the combination of GPT-3 and WikiSP makes it possible to give a definitive, correct, and complete answer three quarters of the time for the dev set. Users can also benefit from GPT-3's guesses the rest of the time at a 3.7% error rate, which is about half of the original error rate." }, { "figure_ref": [], "heading": "Error Analysis", "publication_ref": [], "table_ref": [], "text": "We analyzed the 111 examples in the WWQ dev set where the model failed." }, { "figure_ref": [], "heading": "Acceptable Alternative Results (18.0%)", "publication_ref": [], "table_ref": [], "text": "Our analysis shows that 18.0% of the \"errors\" can actually be deemed to be correct.\nReasonable alternate answers (11.7%). In 11.7% of the cases, the model predicts an alternative interpretation to the question and returns a reasonable answer that is different from the gold. For example, the gold for question \"what did Boudicca do?\" uses the position held property, while the model predicts occupation property. Both are considered valid answers to the question.\nReasonable alternative SPARQL but no answer was retrieved (6.3%). In another 6.3% of cases, the model predicts a reasonable alternative SPARQL, but the SPARQL returns no answer. Sometimes, since the information for the \"correct\" property is missing, the question is represented with a similar property. For example, since residence property is missing for Patrick Henry, the gold SPARQL for \"where did Patrick Henry live?\" uses place of birth instead, while our model predicts residence." }, { "figure_ref": [], "heading": "Errors in Entity Linking (35.1%)", "publication_ref": [], "table_ref": [], "text": "The biggest source of errors is entity linking. Entity linker failed to provide the correct entities in 35.1% of the failed examples. While WikiSP can potentially recover from missing entities, it cannot recover from incorrect entities. This is especially common for character roles, as some character roles have different entities for books and movies or even different series of movies. Sometimes WikiSP located the correct mention from the question, but the lookup failed. For example, the model located the mention of the event \"allied invasion of France\" in question \"where did the allied invasion of France take place?\", but failed to find the corresponding entity from Wikidata by the name." }, { "figure_ref": [], "heading": "Errors Beyond Entity Linking", "publication_ref": [ "b41", "b30" ], "table_ref": [], "text": "Semantic parsing in Wikidata is challenging as there are no predefined schemas, and there are 150K domains and 3K applicable properties. Some representative mistakes include the following:\nWrong property (17.1%). 17.1% of the errors are caused by predicting the wrong property. Some of the examples require background knowledge to parse. For example the answer of the question \"what did martin luther king jr do in his life?\" should return the value of movement, while the model predicts occupation. Properties are a challenge in Wikidata because as illustrated here which property to predict depends on the entity itself.\nMissing domain constraint (5.4%). Another common problem is missing the domain constraint. 
For example, the model correctly identifies that property shares border with should be used for question \"what countries are around Egypt?\". However, it does not limit the answer to countries only, thus extra entities are returned.\n7 Experiment with QALD-7\nFor another evaluation of WikiSP, we apply our model on Task 4 from QALD-7 (Usbeck et al., 2017) dataset. QALD-7 is part of the QALD (Question Answering over Linked Data) which is a series of challenges started in 2011 known for their complex, manually created questions. It mainly focuses on DBpedia, but QALD-7's Task 4 is engineered for Wikidata. The task includes 100 train examples, which we use to fine-tune our model and 50 test examples. There is no dev set.\nWe choose QALD-7 as it is a manually crafted dataset with complex questions. We avoid datasets built on synthetic or human-paraphrased data, such as CSQA (Saha et al., 2018) and KQA-Pro (Cao et al., 2022a). As they have limited natural language variety between the training and evaluation data, models can get artificially high accuracy. For example, a simple BART based model can achieve over 90% accuracy on KQA-Pro even without an entity linking module (Cao et al., 2022a).\nThe QALD-7 test set provides both the SPARQL queries as well as the answers. To double-check the correctness of the QALD-7 dataset, we applied the 50 gold queries of the test set to Wikidata and found that 4 did not return an answer. We hypothesize that the discrepancy is caused by the change in Wikidata structure/quantity of information. We evaluate WikiSP by comparing the answers where possible, and by comparing the generated SPARQL syntactically otherwise.\nFor this experiment, we use the same hyperparameters and data format as described in Section 5.3. In addition to the training data for WikiSP, we also include the QALD-7 train samples, upsampled 20 times." }, { "figure_ref": [], "heading": "QALD-7 Results", "publication_ref": [ "b11", "b41", "b11" ], "table_ref": [ "tab_2" ], "text": "Our model achieves 38% accuracy on the QALD-7 dataset and outperforms the F1 score of the stateof-the-art WDAqua (Diefenbach et al., 2017) by 3.6%, as shown in Table 3. Note that WDAqua is based on retrieval, whereas WikiSP is based on sequence-to-sequence semantic parsing. QALD-7 (Usbeck et al., 2017) reports WDAqua as the winner of the leaderboard with 55.2 F1, however the authors of WDAqua reported 40.0 F1 in their papers (Diefenbach et al., 2017)." }, { "figure_ref": [], "heading": "Complementing GPT-3 on QALD-7", "publication_ref": [], "table_ref": [], "text": "Similar to WWQ, we also assess the combination of GPT with WikiSP on QALD-7 as shown in Fig- ure 3. The GPT model used was \"text-davinci-002\". Since there is no validation set and the test set is already very small, one of the authors who was not involved in training or finetuning the model evaluated GPT-3 on the test set.\nGPT-3 is fully accurate on 62% of the questions, 20% incomplete, and 18% wrong. With our approach, we can provide 38% verifiably good answers from WikiSP; the guesses of GPT-3 get an additional 34% correct, 16% incomplete, and only 12% wrong." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "We did not conduct error analysis on the performance of QALD-7 as it has no dev set. The author evaluating GPT-3 noted that the test set of QALD-7 is much more complicated than the training data (of just 100 samples), with most of the queries containing multiple properties. 
This explains the lower accuracy of WikiSP on QALD-7 when compared to WikiWebQuestions, which has a few-shot training data set with a similar distribution as the test set. This result suggests that the performance of WikiSP depends heavily on a good few-shot training data for fine-tuning the LLMs. We hypothesize that we can increase the performance of WikiSP in handling less popular questions with a better, possibly synthesized, training dataset." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We have created a new high-quality benchmark, WikiWebQuestions, for large knowledge-base question answering. The dataset is based on the popular WebQuestionsSP dataset with natural questions, annotated with SPARQL for Wikidata.\nWe establish a first, strong baseline of 65% answer accuracy and 72% F1 score for WikiWeb-Questions. This is achieved by fine-tuning LLaMA with a few-shot training data set using a SPARQL query format modified for semantic parsing.\nWe show that we can reduce the hallucination of large language models like GPT-3 by grounding it with a semantic parser. For the dev set of Wiki-WebQuestions, this combination approach provides useful information for 96% of the questions in the dev set of the benchmark. More importantly, it generates verifiable answers for 76% of the questions." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b24" ], "table_ref": [], "text": "While applications of large language models seem to expand every day, this paper mainly focuses on factoid question answering. Long-form text generation, for example, is outside the scope of the experiments of this paper, but the methodology described here may be extended to this setting in the future. Even though knowledge bases are an important source of facts, a large portion of the knowledge available in digital form (e.g. Wikipedia, news articles, etc.), is not organized into knowledge bases. As such, the results of this paper can be considered complementary to the larger body of fact-checking research based on free text.\nOur semantic parser can be used to verify answers from LLMs. However, this additional round of running the semantic parser and querying Wikidata increase the response latency, which may be noticeable by end-users of such systems.\nAll of our datasets and experiments are conducted for English. Expanding to other languages, while possible (Moradshahi et al., 2020) are outside the scope of this work.\nOur experiments were performed using GPT-3 (davinci-002) as that was what we had access to when we started the project. Undoubtedly, the later LLMs will produce better results. Nonetheless, the need to have verifiable results based on live database accesses will remain." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "LLMs are used by millions of people everyday. We hope that this line of work will help make them more reliable for everyone, mitigating some of their potential downsides, and giving users access to more accurate information. Our use of Wikidata will enable future researchers and developers to connect their systems with a large, diverse and live knowledge graph that is updated every day. 
We do not anticipate any harm resulting from the methods introduced in this work.\nWe did not crowdsource any datasets for this paper, as the questions are converted from a previous dataset and all the re-annotation and analysis is done by the authors.\nTo conduct experiments in this paper, we used an estimated total of 60 NC96ads-A100 GPU hours on Microsoft Azure. Each finetuning experiment takes roughly 3 hours, and we conducted roughly 20 experiments to arrive at the results in this paper." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work is supported in part by the National Science Foundation, the Alfred P. Sloan Foundation, the Verdant Foundation, Microsoft Azure AI credit, KDDI, JPMorgan Chase, and the Stanford Human-Centered Artificial Intelligence (HAI) Institute. We also thank the reviewers for their valuable comments and suggestions." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "data, and model are available at https://github.com/" }, { "figure_ref": [], "heading": "A Examples of Recovering from Entity", "publication_ref": [], "table_ref": [], "text": "Linking Errors\nHere, we illustrate our proposal of using entity mentions to recover from entity linking errors. In the training set, we have the following example:\n• Query: What year did giants win the world series? • Original Gold SPARQL:\nSELECT DISTINCT ?x WHERE { ?y wdt:sports_season_of_league_or_competition wd:Q265538; wdt:winner wd:Q308966; wdt:point_in_time ?x. }\n• Gold Entity linker result:\nWorld Series (QID Q265538), San Francisco Giants (QID Q308966);\n• ReFinED result: San Francisco Giants (QID Q308966);\nHere, the ReFinED entity linker model fails to identify the \"World Series\" entity. Our proposal of mentions gives the semantic parser a chance to recover from entity linker failures. To train the parser to generate mentions, our training includes samples like this:\n• Query: what year did giants win the world series? • ReFinED result: San Francisco Giants (QID Q308966);\n• Gold target: SELECT DISTINCT ?x WHERE { ?y wdt:sports_season_of_league_or_competition; wd:world_series; wdt:winner wd:Q308966; wdt:point_in_time ?x. }\nThe gold query mentions \"world_series\". At inference time, our heuristics use the predicted mention to look up the actual Wikidata entity. For example, if wd:world_series is predicted at inference time, our heuristics maps it back to wd:Q265538." } ]
While large language models (LLMs) can answer many questions correctly, they can also hallucinate and give wrong answers. Wikidata, with its over 12 billion facts, can be used to ground LLMs to improve their factuality. This paper presents WikiWebQuestions, a highquality question answering benchmark for Wikidata. Ported over from WebQuestions for Freebase, it consists of real-world data with SPARQL annotation. This paper presents a few-shot sequence-tosequence semantic parser for Wikidata. We modify SPARQL to use the unique domain and property names instead of their IDs. We train the parser to use either the results from an entity linker or mentions in the query. We fine-tune LLaMA by adding the few-shot training data to that used to fine-tune Alpaca. Our experimental results demonstrate the effectiveness of this methodology, establishing a strong baseline of 76% and 65% answer accuracy in the dev and test sets of WikiWeb-Questions, respectively. By pairing our semantic parser with GPT-3, we combine verifiable results with qualified GPT-3 guesses to provide useful answers to 96% of the questions in dev. We also show that our method outperforms the state-of-the-art for the QALD-7 Wikidata dataset by 3.6% in F1 score. 1
Fine-tuned LLMs Know More, Hallucinate Less with Few-Shot Sequence-to-Sequence Semantic Parsing over Wikidata
[ { "figure_caption": "Figure 1 :1Figure1: An Overview of WikiSP. An entity linker is used to link entities in the user query to their unique ID in Wikidata; e.g. \"A Bronx Tale\" is linked to entity ID \"Q1130705\". The query and entity linker outputs are fed to the WikiSP semantic parser to produce a modified version of SPARQL, where property IDs (e.g. \"P915\") are replaced by their unique string identifiers (e.g. \"film-ing_location\"). If applying the query to Wikidata fails to return a result, we default to GPT-3, labeling the result as a GPT-3 guess. Returned answers are presented in the context of the query, so the user can tell if the answer is acceptable; if not, we also show the guess from GPT-3. Here WikiSP mistakenly uses \"filming_location\" instead of \"narrative_location\"; the user detects the mistake, thumbs down the answer, and the GPT-3 answer is provided.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Distribution of correct, incomplete, and incorrect answers for the WikiWebQuestions dev set, when GPT-3 is used alone and when combined with WikiSP.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Distribution of correct, incomplete, and incorrect answers for the QALD-7 test set, when GPT-3 is used alone and when combined with WikiSP.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Results of WikiSP on the WWQ test set.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation results of WikiSP on the WWQ dev set.", "figure_data": "EM F1", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Evaluation results of WikiSP on QALD-7 Task 4 and comparison with prior work.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
Silei Xu; Shicheng Liu; Theo Culhane; Elizaveta Pertseva; Meng-Hsi Wu; Sina J Semnani; Monica S Lam
[ { "authors": "Shengnan An; Bo Zhou; Zeqi Lin; Qiang Fu; Bei Chen; Nanning Zheng; Weizhu Chen; Jian-Guang Lou", "journal": "", "ref_id": "b0", "title": "Skill-based few-shot selection for in-context learning", "year": "2023" }, { "authors": "Aseem Arora; Shabbirhussain Bhaisaheb; Harshit Nigam; Manasi Patwardhan; Lovekesh Vig; Gautam Shroff", "journal": "", "ref_id": "b1", "title": "Adapt and decompose: Efficient generalization of text-to-sql via domain adapted leastto-most prompting", "year": "2023" }, { "authors": "Tom Ayoola; Shubhi Tyagi; Joseph Fisher; Christos Christodoulopoulos; Andrea Pierleoni", "journal": "", "ref_id": "b2", "title": "ReFinED: An efficient zero-shot-capable approach to end-to-end entity linking", "year": "2022" }, { "authors": "Yejin Bang; Samuel Cahyawijaya; Nayeon Lee; Wenliang Dai; Dan Su; Bryan Wilie; Holy Lovenia; Ziwei Ji; Tiezheng Yu; Willy Chung; Quyet V Do; Yan Xu; Pascale Fung", "journal": "", "ref_id": "b3", "title": "A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity", "year": "2023" }, { "authors": "Jonathan Berant; Andrew Chou; Roy Frostig; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Semantic parsing on Freebase from question-answer pairs", "year": "2013" }, { "authors": "Kurt Bollacker; Colin Evans; Praveen Paritosh; Tim Sturge; Jamie Taylor", "journal": "Association for Computing Machinery", "ref_id": "b5", "title": "Freebase: A collaboratively created graph database for structuring human knowledge", "year": "2008" }, { "authors": "Giovanni Campagna; Rakesh Ramesh; Silei Xu; Michael Fischer; Monica S Lam", "journal": "ACM Press", "ref_id": "b6", "title": "Almond: The architecture of an open, crowdsourced, privacy-preserving, programmable virtual assistant", "year": "2017" }, { "authors": "Giovanni Campagna; Silei Xu; Mehrad Moradshahi; Richard Socher; Monica S Lam", "journal": "Association for Computing Machinery", "ref_id": "b7", "title": "Genie: A generator of natural language semantic parsers for virtual assistant commands", "year": "2019" }, { "authors": "Shulin Cao; Jiaxin Shi; Liangming Pan; Lunyiu Nie; Yutong Xiang; Lei Hou; Juanzi Li; Bin He; Hanwang Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "a. 
KQA pro: A dataset with explicit compositional programs for complex question answering over knowledge base", "year": "2022" }, { "authors": "Shulin Cao; Jiaxin Shi; Zijun Yao; Xin Lv; Jifan Yu; Lei Hou; Juanzi Li; Zhiyuan Liu; Jinghui Xiao", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Program transfer for answering complex questions over knowledge bases", "year": "2022" }, { "authors": "Rajarshi Das; Manzil Zaheer; Dung Thai; Ameya Godbole; Ethan Perez; Jay Yoon Lee; Lizhen Tan; Lazaros Polymenakos; Andrew Mccallum", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Casebased reasoning for natural language queries over knowledge bases", "year": "2021" }, { "authors": "Dennis Diefenbach; Kamal Singh; Pierre Maret", "journal": "Springer", "ref_id": "b11", "title": "Wdaqua-core0: A question answering component for the research community", "year": "2017" }, { "authors": "Li Dong; Furu Wei; Ming Zhou; Ke Xu", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Question answering over Freebase with multi-column convolutional neural networks", "year": "2015" }, { "authors": "Jerome Goddard", "journal": "The American Journal of Medicine", "ref_id": "b13", "title": "Hallucinations in chatgpt: A cautionary tale for biomedical researchers", "year": "2023" }, { "authors": "Yu Gu; Sue Kase; Michelle Vanni; Brian Sadler; Percy Liang; Xifeng Yan; Yu Su", "journal": "ACM", "ref_id": "b14", "title": "Beyond i.i.d.: Three levels of generalization for question answering on knowledge bases", "year": "2021" }, { "authors": "Yu Gu; Yu Su", "journal": "International Committee on Computational Linguistics", "ref_id": "b15", "title": "ArcaneQA: Dynamic program induction and contextualized encoding for knowledge base question answering", "year": "2022" }, { "authors": "Yushi Hu; Chia-Hsuan Lee; Tianbao Xie; Tao Yu; Noah A Smith; Mari Ostendorf", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Incontext learning for few-shot dialogue state tracking", "year": "2022" }, { "authors": "Yunshi Lan; Jing Jiang", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Query graph generation for answering multi-hop complex questions from knowledge bases", "year": "2020" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Belinda Z Li; Sewon Min; Srinivasan Iyer; Yashar Mehdad; Wen-Tau Yih", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Efficient one-pass end-to-end entity linking for questions", "year": "2020" }, { "authors": "Jinyang Li; Binyuan Hui; Ge Qu; Binhua Li; Jiaxi Yang; Bowen Li; Bailin Wang; Bowen Qin; Rongyu Cao; Ruiying Geng; Nan Huo; Xuanhe Zhou; Chenhao Ma; Guoliang Li; Kevin C C Chang; Fei Huang; Reynold Cheng; Yongbin Li", "journal": "", "ref_id": "b20", "title": "Can llm already serve as a database interface? 
a big bench for large-scale database grounded text-to-sqls", "year": "2023" }, { "authors": "Kangqi Luo; Fengli Lin; Xusheng Luo; Kenny Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Knowledge base question answering via encoding of complex query graphs", "year": "2018" }, { "authors": "Costas Mavromatis; George Karypis", "journal": "", "ref_id": "b22", "title": "ReaRev: Adaptive reasoning for question answering over knowledge graphs", "year": "2022" }, { "authors": "Alexander Miller; Adam Fisch; Jesse Dodge; Amir-Hossein; Antoine Karimi; Jason Bordes; Weston", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Key-value memory networks for directly reading documents", "year": "2016" }, { "authors": "Mehrad Moradshahi; Giovanni Campagna; Sina Semnani; Silei Xu; Monica Lam", "journal": "", "ref_id": "b24", "title": "Localizing open-ontology QA semantic parsers in a day using machine translation", "year": "2020" }, { "authors": "Linyong Nan; Yilun Zhao; Weijin Zou; Narutatsu Ri; Jaesung Tae; Ellen Zhang; Arman Cohan; Dragomir Radev", "journal": "", "ref_id": "b25", "title": "Enhancing few-shot text-tosql capabilities of large language models: A study on prompt design strategies", "year": "2023" }, { "authors": "Yilin Niu; Fei Huang; Wei Liu; Jianwei Cui; Bin Wang; Minlie Huang", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b26", "title": "Bridging the Gap between Synthetic and Natural Questions via Sentence Decomposition for Semantic Parsing", "year": "2023" }, { "authors": "Gabriel Poesia; Alex Polozov; Vu Le; Ashish Tiwari; Gustavo Soares; Christopher Meek; Sumit Gulwani", "journal": "", "ref_id": "b27", "title": "Synchromesh: Reliable code generation from pre-trained language models", "year": "2022-04-25" }, { "authors": "Ohad Rubin; Jonathan Herzig; Jonathan Berant", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Learning to retrieve prompts for in-context learning", "year": "2022" }, { "authors": "Amrita Saha; Ghulam Ahmed Ansari; Abhishek Laddha; Karthik Sankaranarayanan; Soumen Chakrabarti", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b29", "title": "Complex program induction for querying knowledge bases in the absence of gold programs", "year": "2019" }, { "authors": "Amrita Saha; Vardaan Pahuja; Mitesh Khapra; Karthik Sankaranarayanan; Sarath Chandar", "journal": "", "ref_id": "b30", "title": "Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph", "year": "2018" }, { "authors": "Priyanka Sen; Armin Oliya; Amir Saffari", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Expanding end-to-end question answering on differentiable knowledge graphs with intersection", "year": "2021" }, { "authors": "Richard Shin; Christopher Lin; Sam Thomson; Charles Chen; Subhro Roy; Emmanouil Antonios Platanios; Adam Pauls; Dan Klein; Jason Eisner; Benjamin Van Durme", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Constrained language models yield few-shot semantic parsers", "year": "2021" }, { "authors": "Yiheng Shu; Zhiwei Yu; Yuhan Li; Börje Karlsson; Tingting Ma; Yuzhong Qu; Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "TIARA: Multi-grained retrieval for robust question answering over large knowledge base", "year": 
"2022" }, { "authors": "Daniil Sorokin; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Modeling semantics with gated graph neural networks for knowledge base question answering", "year": "2018" }, { "authors": "Yu Su; Ahmed Hassan Awadallah; Madian Khabsa; Patrick Pantel; Michael Gamon; Mark Encarnacion", "journal": "", "ref_id": "b35", "title": "Building natural language interfaces to web apis", "year": "2017" }, { "authors": "Haitian Sun; Tania Bedrax-Weiss; William Cohen", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "PullNet: Open domain question answering with iterative retrieval on knowledge bases and text", "year": "2019" }, { "authors": "Haitian Sun; Bhuwan Dhingra; Manzil Zaheer; Kathryn Mazaitis; Ruslan Salakhutdinov; William Cohen", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Open domain question answering using early fusion of knowledge bases and text", "year": "2018" }, { "authors": "Alon Talmor; Jonathan Berant", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "The web as a knowledge-base for answering complex questions", "year": "2018" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b39", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b40", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Ricardo Usbeck; Axel-Cyrille Ngonga Ngomo; Bastian Haarmann; Anastasia Krithara; Michael Röder; Giulio Napolitano", "journal": "Springer", "ref_id": "b41", "title": "7th open challenge on question answering over linked data (qald-7)", "year": "2017" }, { "authors": "Pat Verga; Haitian Sun; Livio Baldini Soares; William Cohen", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Adaptable and interpretable neural MemoryOver symbolic knowledge", "year": "2021" }, { "authors": "Salvatore Vivona; Kaveh Hassani", "journal": "", "ref_id": "b43", "title": "Relational graph representation learning for open-domain question answering", "year": "2019" }, { "authors": "Yizhong Wang; Yeganeh Kordi; Swaroop Mishra; Alisa Liu; Noah A Smith; Daniel Khashabi; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Self-instruct: Aligning language models with self-generated instructions", "year": "2023" }, { "authors": "Benjamin Weiser", "journal": "The New York Times", "ref_id": "b45", "title": "Here's what happens when your lawyer uses chatgpt", "year": "2023" }, { "authors": "Silei Xu; Giovanni Campagna; Jian Li; Monica S Lam", "journal": "Association for Computing Machinery", "ref_id": "b46", "title": "a. 
Schema2qa: High-quality and low-cost q&a agents for the structured web", "year": "2020" }, { "authors": "Silei Xu; Sina Semnani; Giovanni Campagna; Monica Lam", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "AutoQA: From databases to QA semantic parsers with only synthetic training data", "year": "2020" }, { "authors": "Xi Ye; Semih Yavuz; Kazuma Hashimoto; Yingbo Zhou; Caiming Xiong", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "RNG-KBQA: Generation augmented iterative ranking for knowledge base question answering", "year": "2022" }, { "authors": "Wen-Tau Yih; Ming-Wei Chang; Xiaodong He; Jianfeng Gao", "journal": "Association for Computational Linguistics", "ref_id": "b49", "title": "Semantic parsing via staged query graph generation: Question answering with knowledge base", "year": "2015" }, { "authors": "Wen-Tau Yih; Matthew Richardson; Chris Meek; Ming-Wei Chang; Jina Suh", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "The value of semantic parse labeling for knowledge base question answering", "year": "2016" }, { "authors": "Donghan Yu; Sheng Zhang; Patrick Ng; Henghui Zhu; Alexander Hanbo Li; Jun Wang; Yiqun Hu; William Wang; Zhiguo Wang; Bing Xiang", "journal": "", "ref_id": "b51", "title": "Decaf: Joint decoding of answers and logical forms for question answering over knowledge bases", "year": "2023" }, { "authors": "Tao Yu; Rui Zhang; Kai Yang; Michihiro Yasunaga; Dongxu Wang; Zifan Li; James Ma; Irene Li; Qingning Yao; Shanelle Roman; Zilin Zhang; Dragomir Radev", "journal": "Association for Computational Linguistics", "ref_id": "b52", "title": "Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task", "year": "2018" } ]
[]
10.18653/v1/2021.findings-emnlp.410
2024-01-31
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b40", "b54", "b41", "b17", "b15", "b39", "b13", "b27", "b9", "b43", "b0", "b30", "b22", "b33", "b32", "b20", "b33", "b37", "b32", "b6", "b56", "b30", "b46" ], "table_ref": [], "text": "Given a document or multiple documents in a source language (e.g., English), cross-lingual summarization (Wang et al., 2022a) aims to generate a summary in a different target language (e.g., Czech or German). It enables the rapid dissemination of relevant content across speakers of other languages. For instance, providing summaries of English news articles to Czech or German speakers; or making available to English speakers the content of product and service descriptions in foreign languages.\nRecent years have seen tremendous progress in abstractive summarization (Rush et al., 2015;Zhang et al., 2020) thanks to advances in neural network models and the availability of large-scale datasets (Sandhaus, 2008;Hermann et al., 2015;Grusky et al., 2018). While initial efforts have focused on English, more recently, with the advent of cross-lingual representations (Ruder et al., 2019) and large pre-trained models (Devlin et al., 2019;Liu et al., 2020), research on multilingual summarization (i.e., building monolingual summarization systems for different languages) has also gained momentum (Chi et al., 2020;Scialom et al., 2020;Aharoni et al., 2022).\nCross-lingual summarization faces the compounded challenge of having to tackle difficulties relating to both monolingual summarization (e.g., long inputs and outputs, hallucinations; Maynez et al. 2020) and machine translation (e.g., data imbalance, alignment across languages; Koehn and Knowles 2017). Recent work has shown that introducing an intermediate content planning step is helpful for summarization in English, resulting in higher quality summaries, especially in terms of faithfulness (Narayan et al., 2021(Narayan et al., , 2022;;Huot et al., 2023). In this work, we argue that content planning also has the potential for producing higher quality outputs for cross-lingual summarization. In particular, it provides a way of sharing task-specific knowledge across languages, while formalizing important aspects of the summarization task: identifying salient content in the source documents, organizing this information in a meaningful order, and standardizing it across different source and target language pairs. We present µPLAN, a cross-lingual summarization method that uses content planning as a crosslingual bridge (Figure 1). Building upon previous work (Narayan et al., 2021), we express our content plans as entity chains, i.e., ordered sequences of salient entities. Although more elaborate plan representations have been proposed in the literature (Wang et al., 2022b;Puduppully et al., 2022;Narayan et al., 2022), entities are a natural choice for our task for two reasons. They can mitigate hallucinations in generated summaries which are commonly related to entities (Cao et al., 2022;Zhao et al., 2020;Maynez et al., 2020) and are well-suited as a bridge across languages, thanks to the availability of multilingual knowledge bases (e.g., DBpedia) which represent entities in different languages. An interesting question for our summarization task is which language to use for the content plan, given that the source document and target summary are in different languages. 
We employ a multilingual knowledge base to align the entities across languages, which allows us to canonically transpose the plan to different languages without the use of machine translation.\nWe use a Transformer-based encoder-decoder model (Vaswani et al., 2017) that first encodes the document in the source language and then decodes to generate an intermediate plan representation and the summary in the target language conditioned on the plan and the input. We evaluate our method on the XWikis dataset (Perez-Beltrachini and Lapata, 2021), a cross-lingual abstractive summarization dataset derived from Wikipedia 2 articles aligned across four different languages (English, Czech, French, and German). We augment the training data for fine-tuning by annotating each target summary with its corresponding content plan.\nWe investigate two distinct cross-lingual tasks, namely from English to other languages (EN → ALL) and from other languages to English (ALL → EN). We demonstrate that models finetuned with our planning objective outperform regular generated summaries both in terms of ROUGE and faithfulness on the XWikis dataset across all language pairs, in both settings. Given the scarcity of cross-lingual datasets, we also investigate zero-2 https://www.wikipedia.org/ shot cross-lingual transfer to new language pairs and demonstrate that µPLAN models outperform comparison systems without planning components.\nOur contributions can be summarized as follows: (a) we introduce a training objective for crosslingual abstractive summarization that uses entity planning as a bridge between languages. Using automatic and human evaluation, we show that it yields better quality summaries and more effective zero-shot transfer to new language pairs than nonplanning baselines; and (b) we leverage a multilingual knowledge base to annotate the training data with plans, thus transposing entity names to their canonical designation in all languages, avoiding errors induced by mistranslation altogether. This strategy enables the mapping of entities that do not have an equivalent name in the target language to fully-localized paraphrases." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b44", "b43", "b57", "b7", "b34", "b25", "b5", "b16", "b14", "b36", "b31", "b38", "b28", "b33", "b32", "b33", "b19", "b3", "b2", "b11", "b21", "b8", "b47", "b52" ], "table_ref": [], "text": "Cross-lingual Summarization A key challenge in cross-lingual summarization is the scarcity of training data. Indeed, while creating large-scale multilingual summarization datasets has proven feasible (Straka et al., 2018;Scialom et al., 2020), naturally occurring documents in a source language paired with summaries in different target languages are rare. For this reason, existing cross-lingual approaches create large-scale synthetic data using machine translation (Zhu et al., 2019;Cao et al., 2020;Ouyang et al., 2019).\nCross-lingual benchmarks include WikiLingua (Ladhak et al., 2020), a dataset derived from multilingual how-to guides, which are relatively short and their summaries limited to brief instructional sentences. CrossSum (Bhattacharjee et al., 2021) contains over a million article and summary samples, aligned from the multilingual XL-Sum (Hasan et al., 2021) dataset, but the summaries are limited to one or two sentences. Fatima and Strube (2021) propose a Wikipedia-based cross-lingual dataset, but it only includes the English to German language direction. 
We work with XWikis (Perez-Beltrachini and Lapata, 2021), a cross-lingual dataset derived from Wikipedia with long input documents and long target summaries across four languages: English, Czech, French, and German. We compare these datasets in Appendix A.\nContent Plans for Summarization The idea of breaking down the generation task into smaller steps through a separate planning stage has proven helpful for data-to-text generation (Puduppully et al., 2019;Moryossef et al., 2019;Puduppully and Lapata, 2021;Liu and Chen, 2021) and lately for summarization and long-form question answering (Narayan et al., 2021(Narayan et al., , 2022)). Our work is closest to Narayan et al. (2021) who show that an intermediate planning step conceptualized as a sequence of salient entities could yield more faithful and entity-specific summaries. Herein, we explore whether content plans can serve as a cross-lingual bridge and enable task transfer between languages.\nZero-shot Cross-lingual Transfer A substantial portion of the work on zero-shot cross-lingual transfer has focused on classification tasks (Hu et al., 2020), such as XNLI (Artetxe and Schwenk, 2019), part-of-speech tagging, dependency parsing, named entity recognition (Ansell et al., 2021), and question answering (Conneau et al., 2020). Some recent work has also investigated generative tasks in the zero-shot setting. Johnson et al. (2017) show that by prepending a special token to the input text to indicate the target language of the translation, models learn to perform implicit bridging between language pairs unseen during training. Chen et al. (2021) perform zero-shot cross-lingual machine translation, by using parallel data in only one language pair and leveraging a multilingual encoder to support inference in other languages. Vu et al. (2022) study how to fine-tune language models on only one language to perform zero-shot cross-lingual summarization in other languages, by adding unlabeled multilingual data. Whitehouse et al. (2022) use Wikidata to improve zero-shot cross-lingual transfer for code-switching in a number of entity-centric downstream tasks. We also resort to Wikidata to obtain a canonical designation of entities across languages, however, the use of plans as a cross-lingual bridge for summarization is new to our knowledge.\n3 Plans as a Cross-Lingual Bridge" }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [ "b33" ], "table_ref": [], "text": "We formalize the cross-lingual abstractive summarization task as follows: Given an input document d in a source language SRC, generate a summary s in target language TGT. We model this as p(s|d).\nFor the content planning objective, our goal is to teach the model to first generate a content plan c for the summary as p(c|d), before generating the summary itself as p(s|c, d). Following Narayan et al. (2021), instead of modeling p(c|d) and p(s|c, d) separately, we train the model to generate the concatenated plan and summary sequence c; s. As a result, the model first generates the content plan c and then continues to generate the summary s conditioned on both c and d. In the following section, we describe how we annotate the data with content plans for this planning objective." }, { "figure_ref": [], "heading": "Content Plans", "publication_ref": [ "b33" ], "table_ref": [], "text": "Similarly to Narayan et al. (2021), we formulate the content plan as an ordered sequence of entities. Figure 2 illustrates our annotation process. 
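To make the planning objective concrete before turning to the annotation details, the following is a minimal sketch (not the exact implementation used here) of how a training target could be serialized as the concatenated plan-and-summary sequence c; s. The [CONTENT] and [SUMMARY] markers and the "|" entity separator are illustrative assumptions rather than the precise tokens used by the models.

```python
def build_planning_target(plan_entities, summary):
    # plan_entities: ordered list of salient entity names; summary: target-language text.
    # The decoder is supervised on the whole string, so it learns to emit the plan first
    # and then the summary conditioned on it (p(c|d) followed by p(s|c, d)).
    plan = " | ".join(plan_entities)
    return f"[CONTENT] {plan} [SUMMARY] {summary}"

# Entities and summary taken from the Figure 2 example:
target = build_planning_target(
    ["Hass avocado", "Southern California", "United States Postal Service",
     "horticulture", "Rudolph Hass"],
    "The Hass avocado was first grown and sold by Southern California mail carrier "
    "and amateur horticulturist Rudolph Hass, who also gave it his name.",
)
```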
We annotate each example with its corresponding content plan by extracting salient entities, i.e., entities that are important to mention when summarizing.\nWe extend this paradigm by linking each entity to its entry in a multilingual knowledge base. This way we obtain a canonical designation of each entity, removing morphology and selecting the most common designation out of multiple aliases. The knowledge base also provides disambiguation when it is needed. We use entity names in the content plans, instead of knowledge base indices, in order to leverage the natural language capabilities of pretrained language models.\nWe then use the inter-language information from the knowledge base to pivot content plans across languages. For each entity, we obtain its canonical designation in both the language of the source document and the language of the target summary. We provide an example of the multilingual mappings in our annotated content plans in Figure 2.\nThis strategy enables the mapping of entities that do not have an equivalent name in the target language to fully-localized names. And the model learns to generate a content plan of localized entities, avoiding errors induced by translation.\nFinally, we compose the content plan as a sequence of canonical entity names, each expressed in pairs in both the source and target language (Table 1). We designate the planning objective using these cross-lingual content plans as µPLAN." }, { "figure_ref": [], "heading": "Summarization Tasks", "publication_ref": [], "table_ref": [], "text": "We next define the summarization tasks considered in this work, and our assumptions about the crosslingual training data being available." }, { "figure_ref": [], "heading": "Cross-Lingual Tasks", "publication_ref": [ "b47" ], "table_ref": [], "text": "In what follows, let L be the set of all languages, SRC the language of the source document, and TGT the language of the target summary. We denote the cross-lingual data as D SRC→TGT , e.g., D EN→CS for Czech summaries aligned with English inputs. Analogously, we denote the monolingual data as D LANG , e.g., D CS for Czech summaries with Czech inputs.\nHerein, we investigate two specific cross-lingual tasks: (a) from English to other languages and (b) from other languages to English, which we denote as EN → ALL and ALL → EN, respectively. The EN → ALL task is the main focus of our work. The task is particularly interesting because it would make a large amount of English information available to speakers of other languages but also challenging since it involves a cross-lingual summarization model that can generate fluent text in many languages. We define the data for the EN → ALL task as:\nD EN→ALL = D EN ∪ TGT∈L-{EN} D EN→TGT ,\nand for the ALL → EN, task as:\nD ALL→EN = D EN ∪ SRC∈L-{EN} D SRC→EN .\nNote that both tasks have access to monolingual EN data. For models that do not use an intermediate planning step, each data example is a document and summary pair (d, s). For µPLAN models, each data example also includes a content plan, (d, c; s).\nZero-Shot Cross-Lingual Tasks Given the scarcity of cross-lingual datasets, we investigate whether µPLAN can help with zero-shot crosslingual transfer to new language pairs. For each target language TGT, we perform zero-shot transfer experiments on the EN → ALL task by holding out the EN → TGT cross-lingual data during fine-tuning. We then evaluate performance on the EN → TGT test data. 
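For concreteness, the two cross-lingual mixtures defined above could be assembled as sketched below; the load_split helper and the language codes are illustrative assumptions, not part of a released pipeline.

```python
LANGS = ["en", "cs", "de", "fr"]  # languages covered by XWikis

def en_to_all_mixture(load_split):
    # load_split(src, tgt) is an assumed helper returning the examples for one language pair.
    mixture = list(load_split("en", "en"))            # monolingual D_EN
    for tgt in [lang for lang in LANGS if lang != "en"]:
        mixture.extend(load_split("en", tgt))         # D_EN->TGT for every target language
    return mixture

def all_to_en_mixture(load_split):
    mixture = list(load_split("en", "en"))            # monolingual D_EN
    for src in [lang for lang in LANGS if lang != "en"]:
        mixture.extend(load_split(src, "en"))         # D_SRC->EN for every source language
    return mixture
```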
To ensure that the model maps the language token to the correct language and to prevent catastrophic forgetting of the TGT language during fine-tuning (Vu et al., 2022), we include TGT monolingual summarization data in the fine-tuning data mixture, under the assumption that monolingual data is easier to come by than crosslingual data. We denote this zero-shot cross-lingual transfer task as EN → TGT ZS and define as:\nD EN→TGTZS = D EN ∪ D TGT ∪ L∈L-{EN,TGT} D EN→L .\nFor greater generalization, we could use unlabeled monolingual data (without summaries), however, we leave this to future work.\n4 Experimental Setup" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "The XWikis dataset (Perez-Beltrachini and Lapata, 2021) was created from Wikipedia articles under the assumption that the body and lead paragraph constitute a document-summary pair. Crosslingual document-summary instances were derived by combining lead paragraphs and articles' bodies from language-aligned Wikipedia titles. Although XWikis covers only four languages, English (EN), Czech (CS), German (DE), and French (FR), the dataset creation procedure is general and applicable to any languages represented in Wikipedia.\nTable 2 shows the number of data samples for each language pair. Note that the EN → TGT language pairs are not parallel between all languages. Cross-lingual language pairs in the ALL → EN setting have separate training, validation and test splits, but in the EN → ALL setting there are only training and validation splits. Therefore, for all the EN → ALL cross-lingual language pairs, we separate the validation split into two, taking the first 250 examples for validation and the rest for testing.\nThe XWikis dataset provides the input documents as a list of section titles and paragraphs that constitute the body of the Wikipedia article to summarize. We format the input documents by concatenating the titles and paragraphs, marking each title with an end-of-title token EOT and each paragraph with an end-of-paragraph token EOP. We prepend the source language code and target language code to the input document for each cross-lingual document and summary pair.\nSince the XWikis dataset is derived from Wikipedia, we annotate the plans by extracting all the entities from the reference summaries that have embedded hyperlinks. We then exclude the ones that correspond to phonetic pronunciations. For each of the remaining hyperlinks, we query the Wikidata knowledge base3 to extract the ID of the entity (e.g., 'Q844837') corresponding to the hyperlink URL (e.g., https://en.wikipedia.org/ wiki/Southern_California). Querying Wikidata again for this entity ID allows us to retrieve its canonical name in different languages (e.g., 'Southern California' in English, or 'Südkalifornien' in German; see Figure 2). The XWikis dataset was generated from a 2016 Wikipedia data dump and we used one from 2023 for extracting the hyperlinks from the summaries. Therefore, for articles that went through significant changes between 2016 and 2023, the pages were not aligned and we did not annotate these examples with content plans. This problem affects about 4.5% of the training data. We create a filtered version of the training data that excludes these examples with missing content plans." 
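As a minimal sketch (not the exact annotation pipeline), the lookup described above can be expressed with standard Wikidata wbgetentities calls; the endpoint and parameters below are the public MediaWiki API, but the surrounding wiring is an assumption.

```python
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def wikidata_id_for(page_title, site="enwiki"):
    # Map a Wikipedia page title (taken from a summary hyperlink) to its Wikidata entity ID.
    resp = requests.get(WIKIDATA_API, params={
        "action": "wbgetentities", "sites": site, "titles": page_title,
        "format": "json",
    }).json()
    return next(iter(resp["entities"]))  # e.g. "Q844837" for "Southern California"

def canonical_labels(entity_id, languages=("en", "cs", "de", "fr")):
    # Retrieve the canonical entity name in each language of interest.
    resp = requests.get(WIKIDATA_API, params={
        "action": "wbgetentities", "ids": entity_id,
        "props": "labels", "languages": "|".join(languages), "format": "json",
    }).json()
    labels = resp["entities"][entity_id]["labels"]
    return {lang: info["value"] for lang, info in labels.items()}

# canonical_labels(wikidata_id_for("Southern California"))
# -> {"en": "Southern California", "de": "Südkalifornien", ...}
```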
}, { "figure_ref": [], "heading": "Comparison Models", "publication_ref": [ "b34", "b48", "b25", "b23", "b47", "b10", "b53", "b29" ], "table_ref": [ "tab_5" ], "text": "We demonstrate µPLAN on both the EN → ALL and ALL → EN tasks and compare it with a number of different modeling approaches.\nMachine Translation A common approach is to adopt a machine translation-based pipeline which can be used in two ways: (a) first translate the original document into the target language and then summarize the translated document or (b) first summarize the original document and then translate the summary (Ouyang et al., 2019;Wan et al., 2010;Ladhak et al., 2020). We denote the former approach as Translate-train (TR train ) and the latter as Translate-test (TR test ). We perform machine translation with Google Translate. Previous work (Kramchaninova and Defauw, 2022;Vu et al., 2022) has highlighted various limitations with these approaches such as dependence on the quality of available machine translation systems in a given language and in turn the availability of high-quality parallel data, a potential misalignment of the data after translation, and translationese artifacts (Clark et al., 2020).\nEnd-to-end Summarization This approach, which we denote as E2E, directly fine-tunes a multilingual pretrained model on the cross-lingual data (Perez-Beltrachini and Lapata, 2021). It does not incorporate a planning component, but avoids the potential error propagation problem of machine translation pipeline systems.\nµPLAN Variants We experiment with different plan formulations to establish which type of plan performs well as a cross-lingual bridge. The language of the source document being different from the language of the target summary raises the question of which language to use for the content plans. In the default µPLAN setup, entities in the plan are expressed in pairs, with their canonical name in both the language of the source document and the language of the target summary. In addition, we explore two alternatives: (a) entity names only in the source language and (b) entity names only in the target language. Table 3 presents examples of different language plans. Moreover, we experiment with the internal constitution of the plans: we provide the length of the gold plan during training [LENGTH], and shuffle entities to investigate the importance of the sequence order [SHUFFLE]. Since the quality of the plan annotations is dependent on the quality of the entity linking, we also investigate the impact of partially corrupted gold plans, by dropping a portion of the plan entities at random during training. We denote these experiments as [CORRUPT20] and [CORRUPT30], in which we drop 20% and 30% of the entities, respectively.\nModel Training All baselines and µPLAN variants are based on the mT5 model (Xue et al. 2021; XL 3.7B parameters) which we finetune with maximum input and output sequence lengths of 2,048 and 256 tokens, respectively. Our models are finetuned on Cloud TPU v3 with a learning rate of 0.002, a batch size of 128, up to 80,000 steps, evaluating every 1,000 steps. We select the best checkpoints by measuring ROUGE-L (see Section 5.1 for details) on 250 examples of the validation split for each language pair and take the best unweighted average across all language pairs. Note on LLMs We performed few-shot experiments with LLMs, however, these were consistently inferior to our fine-tuned systems confirming the observations of Maynez et al. (2023). 
It is particularly challenging to learn to plan and summarize simply from a few examples. We report LLM experiments (1-shot, no planning) in Appendix E." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Automatic Evaluation", "publication_ref": [ "b26", "b0", "b24", "b53", "b32", "b42", "b53", "b12", "b45" ], "table_ref": [ "tab_6", "tab_6", "tab_7", "tab_5", "tab_7", "tab_5", "tab_7", "tab_8", "tab_9" ], "text": "We automatically evaluate system output along the dimensions of summary relevance, summary faithfulness, and content plan relevance. For summary relevance, we use ROUGE (Lin, 2004) to compare system-generated summaries with gold-standard ones. Since the availability of word tokenizers differs for non-English languages, we follow Aharoni et al. (2022) and compute ROUGE with a Senten-cePiece tokenizer (Kudo and Richardson, 2018) trained on mC4 (Xue et al., 2021).\nIn terms of summary faithfulness, following Honovich et al. ( 2022 sifier that predicts whether the input document supports the output summary. In line with previous work (Narayan et al., 2022;Schuster et al., 2022), we split the summary into sentences for a more fine-grained evaluation. We predict the entailment of each sentence and average the entailment scores. We use an mT5-XXL model (Xue et al., 2021) trained on XNLI (Conneau et al., 2018), a multilingual NLI dataset. There are currently no cross-lingual datasets for NLI, however our preliminary analysis reported in Appendix B shows that an XNLI-trained mT5 model works well in predicting cross-lingual entailment. It has the added benefit of avoiding potential error propagation from introducing a machine translation step in the evaluation process (e.g., translating the document or the summary in English). Finally, we evaluate plan relevance, by comparing generated content plans against goldstandard ones. Specifically, we compute F1 scores on the entities in the predicted summaries against the corresponding reference entities.\nPlanning outperforms translation-based approaches Table 4 presents an overview of our results for the EN → ALL and ALL → EN tasks.\nWe report results on the filtered data, as we observed little difference overall between filtered and non-filtered training samples (results with nonfiltered training data are provided in Appendix D). Moreover, for the sake of brevity, we only present ROUGE-L results, however see Appendix C for additional metrics. We see that µPLAN consistently outperforms both the translation-based approaches and the non-planning baseline (E2E) in terms of ROUGE-L and XNLI scores on both EN → ALL and ALL → EN tasks. Note that TR train is the overall winner according to XNLI in the ALL → EN task. We hypothesize that the higher XLNI scores for TR train are to some extent an artifact of translation and the XNLI model. Indeed, machine translation tends to drop information during the translation process, which biases TR train towards higher XNLI scores. The other reason is that the XNLI model itself has been trained on more English data and just works better in this setting as it is faced with a simpler monolingual task (both the input document and summary are in English). 
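For reference, the faithfulness and plan-relevance metrics defined at the beginning of this section can be sketched as below; nli_entailment_prob and sent_split stand in for the mT5 XNLI classifier and a sentence splitter and are assumed interfaces rather than specific library calls.

```python
def faithfulness_score(document, summary, nli_entailment_prob, sent_split):
    # Split the summary into sentences, score entailment of each against the source
    # document, and average the sentence-level scores.
    sentences = sent_split(summary)
    if not sentences:
        return 0.0
    scores = [nli_entailment_prob(premise=document, hypothesis=s) for s in sentences]
    return sum(scores) / len(scores)

def plan_f1(predicted_entities, reference_entities):
    # Plan relevance: F1 between predicted and gold entity sets.
    pred, ref = set(predicted_entities), set(reference_entities)
    if not pred or not ref:
        return 0.0
    tp = len(pred & ref)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(ref)
    return 2 * precision * recall / (precision + recall)
```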
Previous work (Perez-Beltrachini and Lapata, 2021) has focused on ALL → EN tasks using mBART50 (Tang et al., 2020) and E2E models; they report an average ROUGE-L of 32.76 for the same language pairs shown in Table 4 (last row).\nBest plans include entities in source and target language We compare different types of plan formulations on the EN → ALL task and report our results in Table 5. Mixed language plans that contain entities in both the source and target language, which is the default µPLAN setting, deliver better results than plans with entities in only one language (marked here as SRC and TGT). Table 3 shows some plans generated by µPLAN under these different settings and compares them to the gold ones.\nPredicted and gold plans have similar length, measured by the number of entities in the plan (6 on average). We also find that gold and predicted plans have overlapping but not identical entities (the F1 score is around 0.4; see Tables 5 and3). However, we do not expect perfect overlap; gold summaries in XWikis are derived from lead paragraphs in Wikipedia articles, and as a result some of the entities in the gold plans might not even appear in the source document. This is corroborated by XNLI scores which are lower for oracle summaries compared to machine-generated ones. Providing information about the length of the gold plan during training, reported as LENGTH, does not affect the results very much and actually yields slightly lower metrics than the default µPLAN setup. The SHUFFLE metrics, for which the entity order is shuffled, are similar to the default setup. This result indicates that the order of the entities does not matter much for planning the summary generation.\nThe experiments with corrupted entity plans mimic the effects of an imperfect entity linking. At training time, we drop a percentage of the entities in the plan at random, denoted as CORRUPT20 and CORRUPT30, for 20% and 30%, respectively. We observe that µPLAN is robust to some degree of noise in the plan annotation process, as there is only a slight decrease in ROUGE-L and XNLI scores as the percentage of corruption increases.\nOracle plans show there is room for improvement For comparison, we report results when models have access to oracle content plans, which we denote as oracle. At inference time, the encoder first encodes the source document, while the decoder gets the gold plan as a forced prompt before generating the summary. These oracle experiments provide an upper bound of how µPLAN models would perform in a best case scenario. In Table 5, we see that the oracle metrics are higher by a wide margin, of around 10 ROUGE-L points, from the best predicted results. This behavior is expected and shows that models can correctly generate summaries from plans in the target language but also from aligned English plans. Moreover, these results confirm that µPLAN's mixed language plans provide additional information that models can leverage effectively. While ROUGE-L scores are much better, we note that oracle plan experiments obtain lower XNLI scores overall. This behavior is somewhat expected since the XWikis dataset was created by associating the leading paragraph of a Wikipedia page with the body of the article. Perez-Beltrachini and Lapata (2021) verified whether the lead paragraph constitutes a valid summary, by asking native speakers to ascertain for each sentence in the summary whether it was supported by the document. 
Overall, human judges viewed the summaries as an acceptable (but not perfect) overview of the Wikipedia document, with 60%-78% of the summary sentences being supported by the document, depending on language pairs. Planning enables zero-shot transfer Table 6 shows the results of our zero-shot cross-lingual transfer experiments. We observe that µPLAN delivers higher ROUGE-L and XNLI scores when evaluated on an unseen language pair. This indicates that an intermediate planning step helps transfer task knowledge to new language pairs. Planning enables domain transfer In addition to these zero-shot cross-lingual transfer experiments, we extend our analysis to zero-shot domain transfer by applying the trained models on data from another domain. For this experiment, we select the CrossSum dataset (Bhattacharjee et al 2021), a cross-lingual dataset with article-summary pairs derived from news articles. While Cross-Sum summaries are much shorter than the XWikis ones and do not necessarily call for an intermediate planning step for content selection and organization, previous experiments show that µPLAN brings improvements in faithfulness that might benefit CrossSum as well. We run inference on the test splits of CrossSum with the E2E and µPLAN models trained on the XWikis corpus and report results in Table 7. We observe that the µPLAN model yields much better XNLI scores for comparable ROUGE-L scores, compared to the E2E model without planning. ROUGE-L scores are overall low for both models because for many language pairs, the models exhibit catastrophic forgetting due to the mismatch of languages between the CrossSum and the XWikis datasets. When inspecting the EN→ FR direction, which is present in both XWikis and CrossSum, we observe that µPLAN brings improvements in both ROUGE-L and XNLI scores." }, { "figure_ref": [], "heading": "Human Evaluation", "publication_ref": [], "table_ref": [], "text": "In addition to automatic metrics, we also conducted a judgment elicitation study.\nSpecifically, we compared µPLAN, against the E2E system, and reference summaries. Bilingual raters were shown a document, alongside two summaries and were asked to provide pairwise references along the following dimensions: Coherence (is the summary easy to understand and grammatically correct?), Accuracy (is all the information in the summary attributable to the original text?), and Informativeness (does the summary capture important information from the original text?). We recruited 178 annotators (all native speakers) and elicited preferences for 100 summaries (test set) per language pair (EN → CS, EN → DE, EN → FR). Appendix F showcases our instructions and examples of summaries our annotators rated.\nWe present aggregate results in more accurate and informative (p < 0.05 using a Wilcoxon signed-rank test). Interestingly, our raters find µPLAN summaries on par with gold summaries across all dimensions (differences between them are not significant)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b4", "b33", "b32", "b20" ], "table_ref": [], "text": "In this work we present µPLAN, an approach to cross-lingual summarization that uses an intermediate planning step as a cross-lingual bridge. Since hallucinations and mistranslations in cross-lingual summarization are often tied to incorrect entities, we formulate the content plan as a sequence of entities expressing salient content and how it should be presented. 
Evaluation on the XWikis dataset demonstrates that this planning objective achieves state-of-the-art performance in EN → ALL and ALL → EN settings and enables zero-shot crosslingual transfer to new language pairs.\nIn this work, we use the embedded hyperlinks in Wikipedia articles to extract salient entities and align them on the Wikidata knowledge base. With recent entity annotation systems such as REFINED (Ayoola et al., 2022), the same operation can be applied on out-of-domain data, including the multilingual alignment of the entity names. Unlike latent variable-based intermediate representations, our content plans are interpretable (they are expressed in natural language) and can be easily edited, e.g., by filtering the entities at inference time or with a human in the loop (Narayan et al., 2021(Narayan et al., , 2022;;Huot et al., 2023). Using forced prompting methods as described in the oracle experiments, would also allow us to localize entity names at inference time from a knowledge base. In the future, we plan to explore the task transfer capabilities of µPLAN in low-resource settings as we cannot realistically expect to have large-scale cross-lingual data on all possible language pairs." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "An ethical consideration with generative language models is the problem of misinformation. While the work we present here makes a step towards improving the faithfulness and factual consistency of text generation systems, it is important to note that current systems are still far from perfect in this respect. They can make mistakes and thus their output should be checked and used with caution. A Cross-lingual Summarization Datasets to English and apply an NLI model trained on an English corpus. The second one is the multilingual NLI setting, which we denote as XNLI-m. For the cross-lingual language pairs, we translate the English document or summary such that both document and summary are in the same language (which is either the source or target language, depending on whether it is the EN → ALL or ALL → EN task). We then apply a multilingual NLI model. The last setting is the cross-lingual setting, which we denote as XNLIx. In this setting, we do not use translation, and directly apply the multilingual NLI model to the cross-lingual data." }, { "figure_ref": [], "heading": "C Experimental Results", "publication_ref": [], "table_ref": [ "tab_17" ], "text": "In Table 11 we present the full set of ROUGE scores for the EN → ALL and ALL → EN tasks. " }, { "figure_ref": [], "heading": "D Effects of Filtered Training Data", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "E Few-shot Prompting of LLMs", "publication_ref": [ "b49", "b1" ], "table_ref": [ "tab_19" ], "text": "LLMs have demonstrated promising results in few-shot settings for cross-lingual summarization (Wang et al., 2023). In Table 13, we report 1-shot results obtained using PaLM 2 (Anil et al., 2023), a 340B parameter LLM. We perform 1-shot experiments for all language pairs in the EN → ALL and ALL → EN tasks. For each language pair, the prompt is formulated as follows:\nFrom a document in [source language], write a summary in [target language].\n(1) Document: [example document] Summary: [example summary]\n(2) Document: [document] Summary:\nThe example document and summary are taken from the training splits. 
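A minimal sketch of how this 1-shot prompt is assembled is given below; the wording mirrors the template above, and the function name is illustrative.

```python
def one_shot_prompt(src_language, tgt_language, example_doc, example_summary, document):
    # Document truncation (2000 tokens) is handled by the caller.
    return (
        f"From a document in {src_language}, write a summary in {tgt_language}.\n\n"
        f"(1) Document: {example_doc}\n"
        f"Summary: {example_summary}\n\n"
        f"(2) Document: {document}\n"
        f"Summary:"
    )
```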
We truncate the input documents at 2000 tokens to fit within the model's maximum sequence input length. We limit the experiments to the 1-shot setting, since more than one data example exceeds the maximum sequence length.\nThese 1-shot LLM experiments underperformed overall compared to our finetuned baselines. The ROUGE-L scores are lower than both the E2E and µPLAN models and the NLI scores are much lower than all models. In the EN → CS task, the model often generated outputs in English instead of Czech. These results highlight some of the challenges of learning cross-lingual summarization from just a few examples.\nWhile the few-shot setting has its limitations, fine-tuning large language models (LLMs) is com- " }, { "figure_ref": [], "heading": "F Human Evaluation Study", "publication_ref": [], "table_ref": [ "tab_22", "tab_23" ], "text": "Figure 3 presents the experimental instructions used in our human elicitation study. To recruit our participants, we screened their language skills to determine whether they are native speakers, their education level and country of residence as well as origin. In addition, we created a screener test to de-termine the raters' suitability for the task. In total, we recruited 178 annotators across four languages. Our annotators were paid adequately by our suppliers adhering to the supplier code of conduct. Tables 15 and16 show examples of the summaries rated by our participants (gold-standard references or output generated by µPLAN and the E2E systems).\nHill of Tara (https://en.wikipedia.org/wiki/Hill_of_Tara) " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "In this task, you will be asked to read a web article in English and rate and compare different summaries of that article in another language. The summary outlines what the article is about, to get a reader interested in its content. Your job is to evaluate how helpful each summary would be to a user.\nA good summary should have the below properties: • Answer each of the below Yes/No questions about the summary:\ni.\n[Coherent] Is the summary easy to understand and grammatically correct?\nii.\n[Accurate] Is ALL the information in the summary attributable to the original text?\niii.\n[Informative] Does the summary capture interesting / relevant information from the original text?\n2.\nRate which summary is better using the side-by-side (SxS) rating scale." }, { "figure_ref": [], "heading": "Instructions", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Hass avocado", "publication_ref": [], "table_ref": [], "text": "History.\nAll commercial, fruit-bearing Hass avocado trees have been grown from grafted seedlings propagated from a single tree that was grown from a seed bought by Rudolph Hass in 1926 from A. R. Rideout of Whittier, California. At the time, Rideout was getting seeds from any source he could find, even restaurant food scraps. The cultivar this seed came from is not known and may already have been cross-pollinated when Hass bought it. In 1926, at his 1.5-acre grove at 430 West Road, La Habra Heights, California, Hass planted three seeds he had bought from Rideout, which yielded one strong seedling. After trying and failing at least twice to graft the seedling with branches from Fuerte avocado trees (the leading commercial cultivar at the time), Hass thought of cutting it down but a professional grafter named Caulkins told him the young tree was sound and strong, so he let it be. " } ]
Cross-lingual summarization aims to generate a summary in one language given input in a different language, allowing for the dissemination of relevant content among different language-speaking populations. The task is challenging mainly due to the paucity of cross-lingual datasets and the compounded difficulty of summarizing and translating. This work presents µPLAN, an approach to cross-lingual summarization that uses an intermediate planning step as a cross-lingual bridge. We formulate the plan as a sequence of entities capturing the summary's content and the order in which it should be communicated. Importantly, our plans abstract from surface form: using a multilingual knowledge base, we align entities to their canonical designation across languages and generate the summary conditioned on this cross-lingual bridge and the input. Automatic and human evaluation on the XWikis dataset (across four language pairs) demonstrates that our planning objective achieves state-of-the-art performance in terms of informativeness and faithfulness. Moreover, µPLAN models improve the zero-shot transfer to new cross-lingual language pairs compared to baselines without a planning component.
µPLAN: Summarizing using a Content Plan as Cross-Lingual Bridge
[ { "figure_caption": "Figure 1 :1Figure 1: Source document and content plan in English; target summaries in Czech, German, and French.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "The Hass avocado was first grown and sold by Southern California mail carrier and amateur horticulturist Rudolph Hass, who also gave it his name.", "figure_data": "SummaryKnowledge BaseQ5679460Q844837Q668687Q48803Q5679460ENDEContentHass avocadoSouthern CaliforniaHass AvocadoSüdkalifornienPlansUnited States Postal ServicehorticultureUnited States Postal ServiceGartenbauRudolph HassRudolph HassFigure 2: Plan annotation on an example summary (salient entities highlighted in yellow). After pivoting on theknowledge base, corresponding canonical entities in English are shown in the bottom left. Most times they matchthe surface form in the summary (in red), other times they have the same root (in green) but they could differ greatlywhen entities need disambiguation (in blue). The aligned German content plan is shown in the bottom right.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "FR CALET est un observatoire spatial développé par le Japon et installé en 2015 à bord de la Station spatiale internationale. Cet instrument analyse les rayons cosmiques et le rayonnement gamma à haute énergie avec comme objectif principal l'identification des éventuelles signatures de la matière noire. Summaries with annotated plans. Same color denotes alignment between entities in the plan and summary. Plans are entities in the language of the source document and (diacritic &) the language of the target summary.", "figure_data": "SummaryPlanEN → CS Richard Dagobert Brauer byl německý matematik žijícíGerman Empire & Německé císařství | mathe-v USA. Pracoval zejména v oblastech abstraktní algebrymatician & matematik | United States of Amer-a teorie čísel. Je také zakladatelem modulární teorieica & Spojené státy americké | algebra & alge-reprezentací.bra | number theory & teorie číselspace observatory & télescope spatial | Japan& Japon | International Space Station & sta-tion spatiale internationale | cosmic radiation &rayonnement cosmique | gamma ray & rayongamma | dark matter & matière noireDE → EN The TKS spacecraft (\"Transport Supply Spacecraft\",Hauptverwaltung für Raketen und Artillerie &GRAU index 11F72) was a Soviet spacecraft conceivedGRAU | Sowjetunion & Soviet Union | Raum-in the late 1960s for resupply flights to the military Al-schiff & spacecraft | Almas & Almazmaz space station.", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Number of data samples in the XWikis dataset and splits considered in this work. New splits for the EN → ALL language pairs are marked by † .", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Examples of generated and gold content plans for different source and target languages.", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "ROUGE-L and XNLI results per language pair and overall for the EN → ALL and ALL → EN tasks. 
Systems significantly different from µPLAN are underlined (using paired bootstrap resampling; p < 0.05).", "figure_data": "), we employ an entailment clas-", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison of different µPLAN plan formulations (including oracles) on the EN → ALL task.", "figure_data": "ROUGE-L XNLIF1µPLAN38.3047.390.40µPLAN SRC38.1447.720.41µPLAN TGT37.9747.370.40µPLAN LENGTH37.0945.710.37µPLAN SHUFFLE38.0146.250.40µPLAN CORRUPT20 38.3447.460.33µPLAN CORRUPT30 38.1746.550.30µPLAN oracle48.2840.831.00µPLAN oracle SRC47.9641.221.00µPLAN oracle TGT48.1340.841.00", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Zero-shot cross-lingual transfer results.", "figure_data": "ROUGE-LXNLIE2EµPLANE2EµPLANEN → CS ZS 15.1018.6434.9539.04EN → DE ZS 17.5019.1845.5148.80EN → FR ZS 18.5423.6145.5145.96", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "., ALL 9.15 9.33 31.38 43.53 EN → FR 22.03 23.10 33.39 47.63 Zero-shot domain transfer results (CrossSum).", "figure_data": "ROUGE-LXNLIE2E µPLANE2E µPLAN", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "", "figure_data": "(see Ap-pendix F for detailed analysis). µPLAN summariesare as coherent as E2E summaries but significantly", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Human evaluation results aggregated over three language pairs (EN → CS, EN → DE, EN → FR); statistically significant differences are underlined.", "figure_data": "", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Number of languages (Lang), average numberof document-summary pairs (Pairs), average summary(SumL) and document (DocL) length in terms of numberof tokens for different cross-lingual datasets.", "figure_id": "tab_13", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Table 10 compares different ways of computing NLI. It is computed on the summaries generated by the baseline E2E model on the EN → ALL and ALL → EN tasks. The first setting, denoted as ANLI, is the English setting, for which we translate the non-English document (ALL → EN) or summary (EN → ALL)", "figure_data": "", "figure_id": "tab_15", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Entailment metrics on English, multilingual, and cross-lingual settings.", "figure_data": "Table 12 compares the results obtained with thefiltered and non-filtered training data. 
Overall, theresults are similar, which is expected since the dif-ference in the number of training samples is rela-tively small.", "figure_id": "tab_16", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "ROUGE-1 and ROUGE-2 results per language pair and overall for the EN → ALL and ALL → EN tasks.", "figure_data": "EN → ALLALL → ENROUGE-LXNLIROUGE-1 / 2 / LXNLIE2E44.54 / 28.57 / 37.4042.7543.54 / 23.44 / 33.7937.58filtered44.87 / 28.65 / 37.5641.7743.51 / 23.55 / 33.9237.87", "figure_id": "tab_17", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Comparison of cross-lingual summarization results obtained with filtered and non-filtered training data.", "figure_data": "ROUGE-LXNLIEN → EN36.3736.87EN → CS28.6431.90EN → DE32.8331.68EN → FR39.9334.40EN → ALL34.4433.71ROUGE-LXNLIEN → EN36.3736.87CS → EN26.2729.00DE → EN34.9732.68FR → EN30.3924.44ALL → EN32.0030.75", "figure_id": "tab_18", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "One-shot prompting results with PaLM 2 per language pair and overall for the EN → ALL and ALL → EN tasks.", "figure_data": "putationally expensive, and not suited for studieswith many experiments.", "figure_id": "tab_19", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "La colline de Tara (en irlandais : \"Cnoc na Teamhrach\", « colline des rois ») est une colline du comtéde Dublin en République d'Irlande. µPLAN La colline de Tara (en irlandais \"Cnoc na Teamhrach\", « colline des rois ») est une colline située à l'est de Dublin, en Irlande. C'était autrefois le haut lieu des rois d'Irlande. Dans la mythologie celtique irlandaise, elle était la capitale des Tuatha Dé Danann. Reference Tara est un site archéologique d'Irlande dans le comté de Meath. Dans la mythologie celtique irlandaise, Tara est la capitale mythique de l'Irlande, située dans la cinquième province de Mide, dans le centre du pays : c'est la « colline des rois » (). est un rocher de l'Eifel, situé dans la commune de Waimes, près de Reichenstein/Monschau, en Belgique. µPLAN Le Richelsley est une formation rocheuse située dans la commune de Waimes, dans la province de Liège, en Région wallonne, en Belgique, près de la frontière allemande, à proximité de l'abbaye de Reichenstein/Monschau. Le Richelsley est surtout connu pour sa grande croix de six mètres de haut, qui a été érigée en 1890 par le prêtre Gerhard Joseph Arnoldy, qui travaillait de 1869 à 1914 à Kalterherberg et était le bâtisseur de l'actuelle cathédrale d'Eifel. Le roman \"Das Kreuz im Venn\" de Clara Viebig a fait du Richelsley un lieu de pèlerinage. Reference Le rocher de Richelsley est un important rocher isolé situé à l'est de la Belgique dans les Hautes Fagnes et faisant partie de la commune de Waimes. Sur ce rocher, se dresse une croix appelée Kreuz im Venn.", "figure_data": "Richelsley (https://en.wikipedia.org/wiki/Richelsley)", "figure_id": "tab_20", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Examples of system output (E2E, µPLAN) and gold-standard reference for the EN → FR setting. Only title and url are shown for input Wikipedia article, for the sake of brevity. Carduus\") sind eine Pflanzengattung in der Familie der Korbblütler (Asteraceae). Die etwa 90 bis 127 Arten sind fast weltweit verbreitet. µPLAN Die Stiele (\"Carduus\") sind eine Pflanzengattung in der Unterfamilie Carduoideae innerhalb der Familie der Korbblütler (Asteraceae). Die etwa 90 bis 127 Arten sind in den gemäßigten Gebieten der Nordhal. 
Reference Die Ringdisteln (\"Carduus\") sind eine Pflanzengattung in der Familie der Korbblütler (Asteraceae). Die etwa 90 Arten sind ursprünglich in Eurasien und Afrika verbreitet. Francesco Satolli (https://en.wikipedia.org/wiki/Francesco_Satolli) E2E Francesco Kardinal Satolli (* 21. Juli 1839 in Marsciano, Provinz Perugia, Italien; † 8. Januar 1910 in Rom) war ein Kardinal der Römischen Kirche. µPLAN Francesco Kardinal Satolli (* 21. Juli 1839 in Marsciano; † 8. Januar 1910 in Rom) war ein italienischer Geistlicher und Kurienkardinal. Reference Francesco di Paola Kardinal Satolli (* 21. Juli 1839 im Marsciano in der Nähe von Perugia; † 8. Januar 1910 in Rom) war ein italienischer Theologe, Kardinal und der erste Apostolische Delegat in die Vereinigten Staaten.", "figure_data": "", "figure_id": "tab_21", "figure_label": "14", "figure_type": "table" }, { "figure_caption": "Examples of system output (E2E, µPLAN) and gold-standard reference for the EN → DE setting. Only title and url are shown for input Wikipedia article, for the sake of brevity.Legendre's formula (https://en.wikipedia.org/wiki/Legendre%27s_formula)E2ELegendreova věta je matematické tvrzení z oblasti teorie čísel, které umožňuje počítat součty čísel a rozdělení jejich mocnin: Tedy například pokud \"n\" je kladné a \"p\" je velké číslo, pak µPLAN Legendrova věta je matematické tvrzení z oblasti teorie čísel, které říká, že pro každé prvočíslo \"p\" a každé přirozené číslo \"n\" existuje největší mocninná mocnina \"p\", která dělí \"n\". Tedy Reference Legendreův vzorec (také De Polignacův vzorec) dovoluje vypočítat nejvyšší exponent u prvočísla formula_1, kde formula_1 umocněné na tento exponent ještě dělí číslo formula_3 (faktoriál přirozeného čísla formula_4). Jedná se v podstatě o výpočet p-adické valuace čísla formula_3. Stack overflow (https://en.wikipedia.org/wiki/Stack_overflow) E2E Přetečení na zásobníku () je v informatice označení pro speciální případ přetečení na zásobníku. Přetečení na zásobníku nastává v důsledku umělého vyčerpání úložného prostoru programem. V důsledku toho dojde k přepsání obsahu na zásobník, což způsobí vznik segmentation fault. µPLAN Přetečení na zásobníku () je v informatice situace, kdy je v počítači program, který používá zásobník operační paměti, více než je v ní uložen. Přetečení na zásobníku vzniká většinou v důsledku dlouhých procesů, které konají nadbytečné údaje. Nejčastější příčinou přetečení na zásobníku je nekonečná rekurze. V programech s více vlákny je přetečení na zásobníku běžné. Reference Přetečení zásobníku je situace v programování, kdy dojde k pokusu uložit na zásobník volání více dat, než kolik se tam vejde. Velikost tohoto zásobníku je obvykle předem dána při startu programu v závislosti na architektuře systému, překladači, množství volné paměti atp. Když se program pokusí posunout vrchol zásobníku mimo vymezenou pamět', mluvíme o přetečení zásobníku. To má obvykle za následek pád programu.", "figure_data": "", "figure_id": "tab_22", "figure_label": "15", "figure_type": "table" }, { "figure_caption": "Examples of system output (E2E, µPLAN) and gold-standard reference for the EN → CZ setting. Only title and url are shown for input Wikipedia article, for the sake of brevity.", "figure_data": "", "figure_id": "tab_23", "figure_label": "16", "figure_type": "table" } ]
Fantine Huot; Joshua Maynez; Chris Alberti; Reinald Kim Amplayo; Priyanka Agrawal; Constanza Fierro; Shashi Narayan; Mirella Lapata
[ { "authors": "Roee Aharoni; Shashi Narayan; Joshua Maynez; Jonathan Herzig; Elizabeth Clark; Mirella Lapata", "journal": "", "ref_id": "b0", "title": "mface: Multilingual summarization with factual consistency evaluation", "year": "2022" }, { "authors": "Rohan Anil; Andrew M Dai; Orhan Firat; Melvin Johnson; Dmitry Lepikhin; Alexandre Passos; Siamak Shakeri; Emanuel Taropa; Paige Bailey; Zhifeng Chen", "journal": "", "ref_id": "b1", "title": "Palm 2 technical report", "year": "2023" }, { "authors": "Alan Ansell; Maria Edoardo; Jonas Ponti; Sebastian Pfeiffer; Goran Ruder; Ivan Glavaš; Anna Vulić; Korhonen", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "MAD-G: Multilingual adapter generation for efficient cross-lingual transfer", "year": "2021" }, { "authors": "Mikel Artetxe; Holger Schwenk", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b3", "title": "Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond", "year": "2019" }, { "authors": "Tom Ayoola; Shubhi Tyagi; Joseph Fisher; Christos Christodoulopoulos; Andrea Pierleoni", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Re-FinED: An efficient zero-shot-capable approach to end-to-end entity linking", "year": "2022" }, { "authors": "Abhik Bhattacharjee; Tahmid Hasan; Uddin Wasi; Yuan-Fang Ahmad; Yong-Bin Li; Rifat Kang; Shahriyar", "journal": "", "ref_id": "b5", "title": "Crosssum: Beyond englishcentric cross-lingual abstractive text summarization for 1500+ language pairs", "year": "2021" }, { "authors": "Meng Cao; Yue Dong; Jackie Cheung", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Hallucinated but factual! inspecting the factuality of hallucinations in abstractive summarization", "year": "2022" }, { "authors": "Yue Cao; Hui Liu; Xiaojun Wan", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Jointly learning to align and summarize for neural crosslingual summarization", "year": "2020" }, { "authors": "Guanhua Chen; Shuming Ma; Yun Chen; Li Dong; Dongdong Zhang; Jia Pan; Wenping Wang; Furu Wei", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Zero-shot cross-lingual transfer of neural machine translation with multilingual pretrained encoders", "year": "2021" }, { "authors": "Zewen Chi; Li Dong; Furu Wei; Wenhui Wang; Xian-Ling Mao; Heyan Huang", "journal": "AAAI Press", "ref_id": "b9", "title": "Cross-lingual natural language generation via pre-training", "year": "2020" }, { "authors": "Jonathan H Clark; Eunsol Choi; Michael Collins; Dan Garrette; Tom Kwiatkowski; Vitaly Nikolaev; Jennimaria Palomaki", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b10", "title": "TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages", "year": "2020" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Alexis Conneau; Ruty Rinott; Guillaume Lample; Adina Williams; Samuel Bowman; Holger Schwenk; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "XNLI: Evaluating 
crosslingual sentence representations", "year": "2018" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Mehwish Fatima; Michael Strube", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "A novel Wikipedia based dataset for monolingual and crosslingual summarization", "year": "2021" }, { "authors": "Max Grusky; Mor Naaman; Yoav Artzi", "journal": "", "ref_id": "b15", "title": "Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies", "year": "2018" }, { "authors": "Tahmid Hasan; Abhik Bhattacharjee; Md Saiful Islam; Kazi Samin; Yuan-Fang Li; Yong-Bin Kang; M Sohel Rahman; Rifat Shahriyar", "journal": "", "ref_id": "b16", "title": "Xl-sum: Largescale multilingual abstractive summarization for 44 languages", "year": "2021" }, { "authors": "Karl Moritz Hermann; Tomas Kocisky; Edward Grefenstette; Lasse Espeholt; Will Kay; Mustafa Suleyman; Phil Blunsom", "journal": "", "ref_id": "b17", "title": "Teaching machines to read and comprehend", "year": "2015" }, { "authors": "Inc Or Curran Associates; Roee Honovich; Jonathan Aharoni; Hagai Herzig; Doron Taitelbaum; Vered Kukliansy; Thomas Cohen; Idan Scialom; Avinatan Szpektor; Yossi Hassidim; Matias", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "TRUE: Re-evaluating factual consistency evaluation", "year": "2022" }, { "authors": "Junjie Hu; Sebastian Ruder; Aditya Siddhant; Graham Neubig; Orhan Firat; Melvin Johnson", "journal": "PMLR", "ref_id": "b19", "title": "XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation", "year": "2020" }, { "authors": "Fantine Huot; Joshua Maynez; Shashi Narayan; Reinald Kim Amplayo; Kuzman Ganchev; Annie Priyadarshini Louis; Anders Sandholm; Dipanjan Das; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Text-blueprint: An interactive platform for plan-based conditional generation", "year": "2023" }, { "authors": "Melvin Johnson; Mike Schuster; Quoc V Le; Maxim Krikun; Yonghui Wu; Zhifeng Chen; Nikhil Thorat; Fernanda Viégas; Martin Wattenberg; Greg Corrado; Macduff Hughes; Jeffrey Dean", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b21", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "year": "2017" }, { "authors": "Philipp Koehn; Rebecca Knowles", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Six challenges for neural machine translation", "year": "2017" }, { "authors": "Alina Kramchaninova; Arne Defauw", "journal": "European Association for Machine Translation", "ref_id": "b23", "title": "Synthetic data generation for multilingual domainadaptable question answering systems", "year": "2022" }, { "authors": "Taku Kudo; John Richardson", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018" }, { "authors": "Faisal Ladhak; Esin Durmus; Claire Cardie; Kathleen Mckeown", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "WikiLingua: A new benchmark dataset for cross-lingual abstractive 
summarization", "year": "2020" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Yinhan Liu; Jiatao Gu; Naman Goyal; Xian Li; Sergey Edunov; Marjan Ghazvininejad; Mike Lewis; Luke Zettlemoyer", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b27", "title": "Multilingual denoising pretraining for neural machine translation", "year": "2020" }, { "authors": "Zhengyuan Liu; Nancy Chen", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Controllable neural dialogue summarization with personal named entity planning", "year": "2021" }, { "authors": "Joshua Maynez; Priyanka Agrawal; Sebastian Gehrmann", "journal": "", "ref_id": "b29", "title": "Benchmarking large language model capabilities for conditional generation", "year": "2023" }, { "authors": "Joshua Maynez; Shashi Narayan; Bernd Bohnet; Ryan Mcdonald", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "On faithfulness and factuality in abstractive summarization", "year": "2020" }, { "authors": "Amit Moryossef; Yoav Goldberg; Ido Dagan", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Step-by-step: Separating planning from realization in neural data-to-text generation", "year": "2019" }, { "authors": "Shashi Narayan; Joshua Maynez; Reinald Kim Amplayo; Kuzman Ganchev; Annie Louis; Fantine Huot; Dipanjan Das; Mirella Lapata", "journal": "", "ref_id": "b32", "title": "Conditional generation with a question-answering blueprint", "year": "2022" }, { "authors": "Shashi Narayan; Yao Zhao; Joshua Maynez; Gonçalo Simões; Vitaly Nikolaev; Ryan Mcdonald", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b33", "title": "Planning with learned entity prompts for abstractive summarization", "year": "2021" }, { "authors": "Jessica Ouyang; Boya Song; Kathy Mckeown", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "A robust abstractive system for cross-lingual summarization", "year": "2019" }, { "authors": "Laura Perez-Beltrachini; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Models and datasets for cross-lingual summarisation", "year": "2021" }, { "authors": "Ratish Puduppully; Li Dong; Mirella Lapata", "journal": "", "ref_id": "b36", "title": "Data-to-text generation with content selection and planning", "year": "2019" }, { "authors": "Ratish Puduppully; Yao Fu; Mirella Lapata", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b37", "title": "Data-to-text generation with variational sequential planning", "year": "2022" }, { "authors": "Ratish Puduppully; Mirella Lapata", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b38", "title": "Data-totext generation with macro planning", "year": "2021" }, { "authors": "Sebastian Ruder; Ivan Vulić; Anders Søgaard", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b39", "title": "A survey of cross-lingual word embedding models", "year": "2019" }, { "authors": "Alexander M Rush; Sumit Chopra; Jason Weston", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "A neural attention model for abstractive sentence summarization", "year": "2015" }, { "authors": "Evan Sandhaus", "journal": 
"", "ref_id": "b41", "title": "The New York Times Annotated Corpus", "year": "2008" }, { "authors": "Tal Schuster; Sihao Chen; Senaka Buthpitiya; Alex Fabrikant; Donald Metzler", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Stretching sentence-pair NLI models to reason over long documents and clusters", "year": "2022" }, { "authors": "Thomas Scialom; Paul-Alexis Dray; Sylvain Lamprier; Benjamin Piwowarski; Jacopo Staiano", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "MLSUM: The multilingual summarization corpus", "year": "2020" }, { "authors": "Milan Straka; Nikita Mediankin; Tom Kocmi; Zdeněk Žabokrtský; Vojtěch Hudeček; Jan Hajič", "journal": "European Language Resources Association (ELRA", "ref_id": "b44", "title": "SumeCzech: Large Czech news-based summarization dataset", "year": "2018" }, { "authors": "Yuqing Tang; Chau Tran; Xian Li; Peng-Jen Chen; Naman Goyal; Vishrav Chaudhary; Jiatao Gu; Angela Fan", "journal": "", "ref_id": "b45", "title": "Multilingual translation with extensible multilingual pretraining and finetuning", "year": "2020" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b46", "title": "Attention is all you need", "year": "2017" }, { "authors": "Tu Vu; Aditya Barua; Brian Lester; Daniel Cer; Mohit Iyyer; Noah Constant", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Overcoming catastrophic forgetting in zero-shot cross-lingual generation", "year": "2022" }, { "authors": "Xiaojun Wan; Huiying Li; Jianguo Xiao", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "Cross-language document summarization based on machine translation quality prediction", "year": "2010" }, { "authors": "Jiaan Wang; Yunlong Liang; Fandong Meng; Zhixu Li; Jianfeng Qu; Jie Zhou", "journal": "", "ref_id": "b49", "title": "Crosslingual summarization via chatgpt", "year": "2023" }, { "authors": "Jiaan Wang; Fandong Meng; Duo Zheng; Yunlong Liang; Zhixu Li; Jianfeng Qu; Jie Zhou; ; ", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b50", "title": "A survey on cross-lingual summarization", "year": "2022" }, { "authors": "Ye Wang; Xiaojun Wan; Zhiping Cai", "journal": "Association for Computational Linguistics", "ref_id": "b51", "title": "Guiding abstractive dialogue summarization with content planning", "year": "2022" }, { "authors": "Chenxi Whitehouse; Fenia Christopoulou; Ignacio Iacobacci", "journal": "Association for Computational Linguistics", "ref_id": "b52", "title": "EntityCS: Improving zero-shot cross-lingual transfer with entity-centric code switching", "year": "2022" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "Association for Computational Linguistics", "ref_id": "b53", "title": "mT5: A massively multilingual pre-trained text-to-text transformer", "year": "2021" }, { "authors": "Jingqing Zhang; Yao Zhao; Mohammad Saleh; Peter Liu", "journal": "", "ref_id": "b54", "title": "PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b55", "title": "", "year": "" }, { "authors": "Zheng Zhao; Shay B Cohen; Bonnie Webber", "journal": "Association for Computational Linguistics", "ref_id": "b56", "title": "Reducing 
quantity hallucinations in abstractive summarization", "year": "2020" }, { "authors": "Junnan Zhu; Qian Wang; Yining Wang; Yu Zhou; Jiajun Zhang; Shaonan Wang; Chengqing Zong", "journal": "Association for Computational Linguistics", "ref_id": "b57", "title": "NCLS: Neural cross-lingual summarization", "year": "2019" } ]
[ { "formula_coordinates": [ 4, 325.49, 584.88, 179.57, 22.62 ], "formula_id": "formula_0", "formula_text": "D EN→ALL = D EN ∪ TGT∈L-{EN} D EN→TGT ," }, { "formula_coordinates": [ 4, 325.83, 642.38, 178.89, 22.62 ], "formula_id": "formula_1", "formula_text": "D ALL→EN = D EN ∪ SRC∈L-{EN} D SRC→EN ." }, { "formula_coordinates": [ 5, 70.88, 449.77, 218.24, 22.62 ], "formula_id": "formula_2", "formula_text": "D EN→TGTZS = D EN ∪ D TGT ∪ L∈L-{EN,TGT} D EN→L ." } ]
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b8", "b0", "b7", "b1" ], "table_ref": [], "text": "Recently, a prominent model for image segmentation called the Segment Anything model (SAM) has been introduced [6]. SAM serves as a robust foundation model for effectively segmenting 2D images in various scenarios. However, during the segmentation of 2D RGB images, it has been observed that SAM primarily relies on texture information, such as color, leading to over-segmentation results. Consequently, the challenge lies in finding a way to obtain segmentation results that incorporate more geometric information through the utilization of SAM.\nTo address this issue, we draw inspiration from the remarkable ability of humans to identify objects by visualizing depth maps. We first map a depth map (R H×W ) to the RGB space (R H×W ×3 ) by a colormap function and then feed the rendered depth image into SAM. Compared to RGB images, the rendered depth image ignores the texture information and focuses on the geometry information, as shown in Fig. 1. Notably, it is worth mentioning that while previous SAM-based projects [9] like SSA [1], Anything-3D [8], and SAM 3D [2] primarily employ RGB images as inputs, we are the first to utilize SAM for directly segmenting rendered depth images. " }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b5", "b6" ], "table_ref": [], "text": "Segment Anything Model (SAM). The Segment Anything model (SAM) [6] is a recently developed large Vision Transformer (ViT)-based model. SAM has been trained on an extensive visual corpus known as SA-1B. The training process on this large-scale dataset has endowed SAM with the remarkable ability to perform zero-shot segmentation on diverse styles of 2D images.\nOpen-Vocabulary Semantic Segmentation (OVSeg). Given the class candidates in the text format, Open-Vocabulary Semantic Segmentation (OVSeg) [7] can segment an image into semantic regions even if the categories are not seen during training." }, { "figure_ref": [ "fig_0" ], "heading": "Segment Any RGBD", "publication_ref": [ "b4" ], "table_ref": [], "text": "In this paper, we propose Segment Any RGBD (SAD) that leverages both SAM and OVSeg to achieve semantic segmentation results utilizing the geometry information derived from depth maps. The overview of SAD is shown in Fig. 2. The process can be divided into the following parts:\nRendering depth maps. We notice that depth maps tend to emphasize geometry information over texture information when compared to RGB images, as visually depicted in Fig. 1. Capitalizing on this characteristic, our approach involves initially utilizing a colormap function [5] to render the depth maps to the RGB space. We try different colormaps such as Viridis, Gray, Plasma, Cividis, and Purples, as shown in Fig. 1. Consequently, the rendered depth maps are employed as inputs for SAM. Segmentation with SAM. Following the rendering process, we apply SAM to the rendered depth images to generate initial SAM masks. It is worth noting that these initial SAM masks are classagnostic and still over-segmented, as illustrated in the Fig. 1.\nSemantic segmentation with OVSeg. By employing RGB images as input and leveraging text prompts, OVSeg exhibits the ability to generate coarse masks that encompass significant semantic information. 
These coarse semantic segmentation masks serve a dual role: firstly, they assist in guiding the clustering process of the over-segmented parts within the SAM masks, and secondly, they provide crucial category insights that contribute to refining the fine-grained SAM results.\nSemantic voting. For each pixel in the SAM mask, we first find its corresponding predicted class from the OVSeg mask. Subsequently, we assign the class of each segment based on the majority class of pixels contained within it. Following this, we can proceed to cluster adjacent segments that belong to the same class.\nFinally, the semantic segmentation results can be projected to 3D-world based on the depth map for stereoscopic visualizations. This projection enables a comprehensive understanding and visual representation of the segmented results in their spatial context." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Comparison with RGB Image Input", "publication_ref": [], "table_ref": [], "text": "We compare the proposed rendered depth image input with the RGB image input. RGB images predominantly capture texture information, while depth images primarily contain geometry information. As a result, RGB images tend to be more vibrant and colorful compared to the rendered depth images. Consequently, SAM produces a larger number of masks for RGB inputs compared to depth inputs, as illustrated in Fig. 3. The utilization of rendered depth images mitigates the issue of over-segmentation in SAM. For instance, when examining the table, the RGB images segment it into four distinct parts, with one part being misclassified as a chair in the semantic results (indicated by yellow circles in Fig. 3), while it is accurately classified in the depth image. It is also important to note that when two objects are in close proximity, they may be segmented as a single object in the depth image, as depicted by the chair in the red circle of Fig. 3. In such cases, the texture information present in RGB images becomes crucial for accurately identifying and distinguishing the objects." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Qualitative Results", "publication_ref": [ "b3", "b2" ], "table_ref": [], "text": "We present qualitative results on Sailvos3D [4] and ScanNet [3], which are depicted in Fig. 4 and Fig. 5. The figures clearly demonstrate the enhanced performance of our method in generating geometric semantic segmentation results when utilizing depth map inputs. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In summary, we introduce the Segment Any RGBD (SAD) model, which combines SAM and OVSeg for semantic segmentation using depth maps. SAD leverages the geometry information of depth maps by rendering them with RGB information and feeding them into SAM. Initial SAM masks are generated, which are refined using OVSeg's coarse semantic segmentation masks. The clustering process is then applied to group adjacent segments of the same class, improving the segmentation results' coherence. Finally, the semantic segmentation results are projected onto the 3D world based on the depth map, enabling comprehensive stereoscopic visualization. Overall, SAD enhances semantic segmentation by incorporating depth maps and leveraging both SAM and OVSeg, resulting in more accurate and context-aware segmentation results. This work opens up new possibilities for advancing semantic segmentation tasks and provides valuable insights into real-world applications." } ]
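To make the pipeline above more concrete, here is a minimal sketch of the two steps that do not depend on any particular checkpoint: rendering a depth map into RGB space before it is passed to SAM, and the per-segment semantic voting over OVSeg's per-pixel classes. It is an illustration rather than the authors' implementation; the SAM masks and the OVSeg class map are assumed to be given as a list of boolean arrays and an integer array, and all function names are ours.

```python
import numpy as np
from matplotlib import colormaps

def render_depth(depth: np.ndarray, cmap: str = "viridis") -> np.ndarray:
    """Map a (H, W) depth map into the RGB space expected by SAM."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    rgb = colormaps[cmap](d)[..., :3]           # drop the alpha channel
    return (rgb * 255).astype(np.uint8)         # (H, W, 3) rendered depth image

def semantic_voting(sam_masks, ovseg_classes: np.ndarray):
    """Assign each class-agnostic SAM segment the majority OVSeg class of its pixels."""
    labeled = []
    for mask in sam_masks:                      # mask: (H, W) boolean array from SAM
        votes = ovseg_classes[mask]
        if votes.size == 0:
            continue
        labeled.append((mask, int(np.bincount(votes).argmax())))
    return labeled                              # adjacent segments sharing a class can then be merged

# Toy usage with synthetic inputs only, to show the expected shapes.
depth = np.random.rand(64, 64)
rendered = render_depth(depth)                  # this image would be fed to SAM
toy_mask = np.zeros((64, 64), dtype=bool)
toy_mask[:32] = True
toy_classes = np.random.randint(0, 3, size=(64, 64))
print(rendered.shape, semantic_voting([toy_mask], toy_classes)[0][1])
```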
Figure 1: Segmentation results of depth map by SAM. In contrast to RGB images, segmentation results derived from depth maps inherently encompass a richer set of geometric information.
SAD: Segment Any RGBD
[ { "figure_caption": "Figure 2 :2Figure 2: The overview of the proposed SAD.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Segmentation results with the RGB image input and the rendered depth image inputs.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Qualitative results on the Sailvos3D dataset.Input to SAM SAM Masks with Class Semantic Masks 3D Visualization 3D Visualization", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Qualitative results on the ScannetV2 dataset.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" } ]
Jun Cen; Yizheng Wu; Kewei Wang; Xingyi Li; Jingkang Yang; Yixuan Pei; Lingdong Kong; Ziwei Liu; Qifeng Chen
[ { "authors": "Jiaqi Chen; Zeyu Yang; Li Zhang", "journal": "", "ref_id": "b0", "title": "Semantic segment anything", "year": "2023" }, { "authors": "", "journal": "Pointcept Contributors", "ref_id": "b1", "title": "Pointcept: A codebase for point cloud perception research", "year": "2023" }, { "authors": "Angela Dai; Angel X Chang; Manolis Savva; Maciej Halber; Thomas Funkhouser; Matthias Nießner", "journal": "", "ref_id": "b2", "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes", "year": "2017" }, { "authors": "Y.-T Hu; J Wang; R A Yeh; A G Schwing", "journal": "", "ref_id": "b3", "title": "SAIL-VOS 3D: A Synthetic Dataset and Baselines for Object Detection and 3D Mesh Reconstruction from Video Data", "year": "2021" }, { "authors": "J D Hunter", "journal": "Computing in Science & Engineering", "ref_id": "b4", "title": "Matplotlib: A 2d graphics environment", "year": "2007" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo", "journal": "", "ref_id": "b5", "title": "Segment anything", "year": "2023" }, { "authors": "Feng Liang; Bichen Wu; Xiaoliang Dai; Kunpeng Li; Yinan Zhao; Hang Zhang; Peizhao Zhang; Peter Vajda; Diana Marculescu", "journal": "", "ref_id": "b6", "title": "Open-vocabulary semantic segmentation with mask-adapted clip", "year": "2023" }, { "authors": "Qiuhong Shen; Xingyi Yang; Xinchao Wang", "journal": "", "ref_id": "b7", "title": "Anything-3d: Towards single-view anything reconstruction in the wild", "year": "2023" }, { "authors": "Chunhui Zhang; Li Liu; Yawen Cui; Guanjie Huang; Weilin Lin; Yiqian Yang; Yuehong Hu", "journal": "", "ref_id": "b8", "title": "A comprehensive survey on segment anything model for vision and beyond", "year": "2023" } ]
[]
10.18653/v1/2021.naacl-main.278
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6", "b21", "b25", "b21", "b13", "b21", "b13", "b12" ], "table_ref": [], "text": "Retrieving and reasoning about world knowledge is the core ability of question answering (QA) task (Gupta et al., 2019). Textual question answering (TQA) systems need to retrieve relevant evidence and conduct knowledge reasoning (Chen et al., 2017) on answering complex questions over multiple passages or facts (Yang et al., 2018;Qi et al., 2021). Recently, lots of tasks and datasets have been proposed and sparked significant progress of TQA in different scenarios (Zhu et al., 2021;Yang et al., 2018;Thorne et al., 2021). However, those datasets still have some limitations. On the one hand, most open domain question answering (ODQA) only focus on multi-hop reasoning of a single chain. For example, HotpotQA (Yang et al., 2018) devotes to addressing two-hop questions and BeerQA (Qi et al., 2021) requires a varying number of retrieval steps over multiple passages. The above only focus on one-chain retrieval and reasoning. On the other hand, some textual datasets include multiple discretization chains but only requires single-hop reasoning to answer question, such as WIKINLDB (Thorne et al., 2021) and eQASC (Jhamtani and Clark, 2020). Moreover, compared with knowledge-based question answering (KBQA), current TQA datasets do not thoroughly test the complexity and diversity of question types and reasoning types (Shi et al., 2022)." }, { "figure_ref": [ "fig_0" ], "heading": "Dataset Reasoning Types Evidence Text Type Multi-Chains Multi-Hops Evidence Structures", "publication_ref": [ "b21", "b13", "b11", "b26" ], "table_ref": [], "text": "TriviaQA (Joshi et al., 2017) -✗ passage ✗ ✗ 1 HotPotQA (Yang et al., 2018) 3 ✓ passage ✗ ✗ 1 BeerQA (Qi et al., 2021) 3 ✗ passage ✗ ✓ 3+ WikiNLDB (Thorne et al., 2021) 4 ✓ sentence ✓ ✗ 2 eQASC (Jhamtani and Clark, 2020) 2\n✓ sentence ✗ ✓ 1 ReasonGraphQA (Ours) 5 ✓ sentence ✓ ✓ 262\nTable 1: Comparison ReasonGraphQA with existing datasets of Complex TQA.\nIn fact, answering complex questions often requires a combination of retrieving multi-chains and using multi-hops reasoning to infer the answer. We refer to this process as Graph-Hop retrieval and reasoning (shorted as Graph-Hop). As shown in figure 1, to answer this question, system first retrieves the population of each city (multi-chain), and then uses multi-hop reasoning on each chain to infer the population value. Finally, it compares two values and identifies the city with the larger population. This process requires an evidence graph with multiple chains and hops. In this way, Graph-Hop provides a more fine-grained and adaptable representation for complex question answering tasks.\nIn addition, existing textual question answering systems still have trouble explaining explicitly why an answer is correct or not and \"how\" the answer is obtained step-by-step. Although the existing retrieval methods can directly retrieve the relevant passages (Mou et al., 2021;Rudra et al., 2021;Lu et al., 2020;Zhu et al., 2022), they cannot retrieve a structured evidence graph, which limits the ability of the model's reasoning and interpretation.\nTo address above problems, we introduce a benchmark called ReasonGraphQA and provides explanation evidence graphs to explicitly describe the reasoning process for solving complex questions. Evidence graphs can provide intermediate results and facilitate human understanding. 
It also allows for better control of the model behavior, enabling users to easily identify errors by inspecting the outputs of intermediate steps. Moreover, compared with other datasets (as shown in Table 1), ReasonGraphQA not only contains more diversified hops and chains evidences, but also cover more reasoning types of complex questions and richer explicitly evidence structures.\nWe also propose a specific Bidirectional Graph Retrieval (BGR) method to support Graph-Hop. This method retrieves evidence from both forward and backward directions, and then fuses them to construct evidence graphs and support to answer complex questions. We compared four types of retrieval and reasoning systems on the Rea-sonGraphQA dataset. Experimental results have shown that BGR achieved strong performance in both the retrieval task and the explanation graph task. However, their performance is still far from human-level performance in the explanation graph construction task, it is suggesting that further research should consider more on Graph-Hop.\nIn summary, our contributions are as follows:\n(1) We propose a Graph-Hop paradigm and construct a new benchmark ReasonGraphQA, which includes diverse question types and explicit reasoning processes to guide interpretable retrieval and question answering over textual databases in a fine-grained and comprehensive way. (2) We also propose a Bidirectional Graph Retrieval (BGR) method, which utilizes both forward reasoning and backward reasoning information. (3) Our evaluation of four retrieval systems on ReasonGraphQA demonstrates that Graph-Hop Retrieval is a promising approach. We also discuss potential future directions to address Graph-Hop challenges." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b13", "b26", "b18" ], "table_ref": [], "text": "Textual question answering (TQA) requires to retrieve evidence from a large corpus to answer natural language questions. Some researchers proposed a novel TQA task over natural language database (NLDB) and support natural language database queries such as filtering, comparison, and aggregation, where database consists of unordered sets of textual facts (Thorne et al., 2021;Zhu et al., 2022). It requires comprehensive reasoning and retrieval of text sentences (Wolfson et al., 2020). Despite the rapid progress in TQA, they ignore the problem of multi-hop retrieval in multi-chain fact sets that may appear in complex textual question answering. In comparison, the proposed ReasonGraphQA requires graph retrieval from large-scale textual databases. And we focus on discrete reasoning over textual evidence, which greatly evaluate the structured path modeling and discrete reasoning ability of QA systems over the textual databases. 1.1 million people live in Brussels.\n2) Generate Textual Facts 3 Graph-Hop Over Textual Database\nReasonGraphQA devotes to answering complex questions that need Graph-Hop (multi-hop multichain) over database. Both question and evidences of database are represented as natural language sentences, each sentence is stand-alone and contains one or multiple facts. Formally, given a question Q and a textual database E = {e 1 , . . . , e n }, system needs to: (1) retrieve an explicable reasoning graph G from the given textual database, ( 2) obtain the answer A based on the explanation graph G; The graph G is a directed acyclic graph composed of the evidences in E that are related to the question and used to reason the answer." 
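As a concrete reading of this formulation, the toy sketch below (an illustration, not dataset code) represents the textual database E as a dictionary of stand-alone sentences and the explanation graph G as a networkx directed acyclic graph over evidence identifiers, mirroring the two-chain, two-hop population example discussed above; the sentences are paraphrased for brevity.

```python
import networkx as nx

# Q: the complex question; E: a toy textual database of stand-alone facts.
Q = "Which city has a larger population, the capital of Belgium or the largest city in Switzerland?"
E = {
    "e1": "The capital of Belgium is Brussels.",
    "e2": "1.1 million people live in Brussels.",
    "e3": "The largest city in Switzerland is Zurich.",
    "e4": "Zurich has a population of about 420K.",
}

# G: the explanation graph -- two reasoning chains, each with two hops.
G = nx.DiGraph([("e1", "e2"), ("e3", "e4")])
assert nx.is_directed_acyclic_graph(G)    # the graph must be a DAG
A = "Brussels"                            # 1.1 million > 420K
```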
}, { "figure_ref": [ "fig_2", "fig_0" ], "heading": "Construction of ReasonGraphQA", "publication_ref": [], "table_ref": [], "text": "In this section, we present the construction process of ReasonGraphQA dataset. Constructing finegrained evidence graphs for complex questions is a non-trivial task. We develop an approach to automatically construct a dataset with complex questions, answers and explanation evidence graphs .\nFigure 2 illustrates the main construction process of ReasonGraphQA using the example in Figure 1." }, { "figure_ref": [ "fig_2" ], "heading": "Question-related Triples Finding", "publication_ref": [ "b12", "b19" ], "table_ref": [], "text": "We obtain complex questions and answers from the KQA-Pro dataset (Shi et al., 2022), a large-scale KBQA dataset (Wu et al., 2019), which requires reasoning over multiple pieces of evidence. To automate the generation of question-related evidence, we use structured queries \"KoPL program\" of the KQA-pro dataset and ground each programming procedure to Wikidata triples. As illustrated in Fig. 2 (1), the structured Golden Program, consisting of \"Relate\", \"Find\", and \"Select between\" operations, can identify five triples of Wikidata. By searching the target knowledge base (e.g., Wikidata), we can obtain factual facts needed to answer the question." }, { "figure_ref": [ "fig_2" ], "heading": "Textual Facts Generation", "publication_ref": [ "b1", "b1", "b24" ], "table_ref": [], "text": "Based on data-to-text work (Agarwal et al., 2021), we can convert the structured facts into unstructured texts. To improve the diversity, naturalness, and information of the generated text, we propose a method of building triple subgraphs by selecting 0-2 triples with the same head entity from Wikidata according to a certain probability and combine them into a subgraph. While ensuring that they do not overlap with other subgraphs to make sure textual facts remain independent. The subgraphs are then input into a pre-trained language model (T5) fine-tuned on the KELM (Agarwal et al., 2021) corpus to generate unstructured text. As shown in Figure 2 (2). To ensure completeness of entities in the triples, we use string matching to exclude missing text, and use BERTScore (Zhang et al., 2019) to select the most appropriate text evidence from multiple generated options as the correct evidence." }, { "figure_ref": [], "heading": "Textual Database Construction", "publication_ref": [ "b9" ], "table_ref": [], "text": "We obtain a large-scale textual database containing generated evidences (4.2). For each question, we can retrieve evidence from those large-scale sentences (e.g., more than 100 billion sentences). However, in our experimental environment (500000 sentences in total), we must consider computing efficiency and retrieval cost. Therefore, we have retrieved an appropriate number of sentences from the complete textual database to form a target textual database from which we select evidence for each question. Specifically, apart from the golden evidence, we also retrieve other sentences that are related to the question to form the target textual database. Additionally, to construct a task closer to the real retrieval scene, and to verify knowledgebased reasoning ability, we have added interference evidence to the database. 
In this paper, the interference-related evidence is obtained from the following three categories of methods (1/3 of each category): (a) SimCSE (Gao et al., 2021) is used to select evidence with similar semantics of the question; (b) We use the same head entity but different relation triples to regenerate evidence sentences; (c) We randomly select other textual evidence." }, { "figure_ref": [ "fig_2" ], "heading": "Evidence Graph Generation", "publication_ref": [], "table_ref": [], "text": "The reasoning graph of textual evidence is the key component of ReasonGraphQA. We extract and re-summarize the structure among golden triples with the programming language \"KoPL program\", and utilize network 2 to build the reasoning graph of sentences. In order to ensure the high quality of the evidence graph, we carefully follow these constraints during its construction. (1) Each evidence contains at least one knowledge fact; (2) Each question must be answered with a clear reasoning explanation graph G;\n(3) Each graph G must be a directed acyclic graph; (4) Any non-leaf node has at least one path to the root node; (5) All evidence cannot be repeated on the path to the root node (avoiding loops). Samples that do not meet these constraints are removed. An example of evidence graph is shown in figure 2. (3), which reflects the reasoning progress from question to answer." }, { "figure_ref": [], "heading": "Dataset Analysis", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "The probability of 8:1:1. Table 2 presents statistics on the graph size and structure of the dataset. The dataset includes four types of evidence graphs: \"single-chain single-hop,\" \"single-chain multi-hop,\" \"multi-chain single-hop,\" and \"multi-chain multihop,\" which account for 19.5%, 38.6%, 8.7%, and 33.2% of the dataset, respectively. There are 262 nonisomorphic graph structures in the dataset. The questions in the dataset are classified into five types: \"query\", \"comparison\", \"count\", \"boolean\", and \"qualifier\" based on nine asking strategies used in original KQA-Pro dataset (details in Appendix A.2). These diverse graph structures provide more detailed and interpretable evidence for complex questions." }, { "figure_ref": [], "heading": "Quality Evaluation", "publication_ref": [], "table_ref": [], "text": "To evaluate the quality of mapping facts from knowledge triples, 500 sampled facts were scored based on smoothness, faithfulness, and sufficiency. 98.2% (491/500) facts were smooth, with only 9 containing repeated text. 98.6% (493/500) facts were faithful to the relation of the triples, with only 7 containing additional information. Three facts replaced incorrect information with correct information, resulting in a faithfulness and sufficiency score of 0. The remaining four facts contained additional information that enriched the context. We conducted manual evaluation and found that the quality of the data set construction is relatively high. For example, 96% of facts in WiKiNLDB are loyal to relationships, while ReasonGraphQA is 98.6%. This demonstrates that the data set presented in this paper is suitable for model development and technical verification of complex question answering in textual databases (details in Appendix A.2)." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "In this section, we present our proposed retrievalbased question-answering model. This model follows the popular retrieval-reader architecture. 
" }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Bidirectional Retrieval", "publication_ref": [], "table_ref": [], "text": "We design a bidirectional retrieval method to improve graph-hop retrieval accuracy. \"In traditional chain retrieval, the model starts by searching for the first relevant evidence, and continues iteratively. However, the structure of evidence in graph retrieval is more complex, resulting in a higher error rate as the search depth increases. To mitigate this issue, we introduce backward retrieval, which starts at the leaf nodes of evidence graph and searches evidence from back to front. As a result, we obtain two evidence subgraphs,one from forward retrieval and one from backward retrieval, as depicted in Figure 3. By merging these two subgraphs, our bidirectional retrieval method can mitigate the problem of rapidly declining accuracy in the forward retrieval with increasing depth in the graph. Given a question Q and a candidate evidence base E = {e 1 , e 2 , • • • , e n }, we represent Q and E using BERT to obtain their representations, h 0 = BERT(Q). The retrieval process follows a depth-first search, where at each step, the current evidence node i may have multiple paths that are reachable. These paths are represented as\nH i = {h 1 i , • • • , h i k i }\n, where i k is the number of paths per node. These paths are matched one by one with the path code and evidence base E i (E i ⊆ E). To handle the complex structure of graph retrieval, we use a feedforward neural network (composed of linear layers and activation functions) instead of a similarity threshold to match the next layer of evidence nodes. Every time a new evidence node is retrieved, we use the Attention mechanism to combine the path set H i and the retrieved evidence node e i+1 to generate a new path set H i+1 . The whole process is illustrated in Figure 3. We repeat this process until no new nodes can be retrieved.\nEi = ∪ELU (F F N (hi, Ei)) hi ∈ Hi (1) Ei+1 = Ei+1 ∪ Ei, Hi+1 = hi ∪ ẽi | ẽi ∈ Ẽi (2)" }, { "figure_ref": [ "fig_3" ], "heading": "Subgraph Reconstruction", "publication_ref": [], "table_ref": [], "text": "The reconstruction process of the evidence graph is depicted in Figure 3. We utilize networkx3 to build two subgraphs using forward and reverse retrieval techniques. Reverse retrieval allows us to verify the accuracy of our findings. By intersecting the edges of the two subgraphs and removing any nonoverlapping nodes and edges, we can construct a complete evidence graph. This evidence graph visually demonstrates the reasoning process from the initial question to the final answer.\nG = G F ∪ G B if BSC(G F , G B ) > γ G B if BSC(G F , G B ) ≤ γ(3)\nWe first extract edges from the forward subgraph and the backward subgraph respectively, and then select by evaluating the BSC of the subgraph. If the BSC is less than γ, intersection of the bidirectional subgraphs is taken to reconstruct the graph, and new edges are not added twice for the existing nodes.\nBSC(G F , G B ) = Edge F ∩ Edge B Edge F ∪ Edge B(4)\nwhere Edge F ,Edge B is the edge set of forward and backward subgraghs. If the BSC is greater than γ, the backward subgraph is reserved. A threshold value of γ is used to determine whether the intersection of the two subgraphs should be used to construct the final evidence graph. 
The reason for using BSC and threshold value γ is that, it can effectively improve the retrieval performance, by preserving the integrity and accuracy of the final evidence graph, also it can help to prevent from adding unnecessary edges." }, { "figure_ref": [], "heading": "Answer Generation", "publication_ref": [ "b13", "b26", "b10" ], "table_ref": [], "text": "In order to generate an answer, the multiple evidences are fed into the reader as following.\nA = Reader T 5 ({e i |e i ∈ G})(5)\nwhere evidences are ordered according to the structure of the retrieved evidence graph G.\nTo measure the retrieval performance, We follow the previous settings (Thorne et al., 2021;Zhu et al., 2022) and use the classic T5 (Raffel et al., 2019) model as the fixed reader, but this can easily be adapted to other pre-trained language models." }, { "figure_ref": [], "heading": "Experimental", "publication_ref": [], "table_ref": [], "text": "In this section, we analyze the performance of different retrieval and reasoning systems on Rea-sonGraphQA, and investigate performance and limitations of our proposed graph-hop retrieval system." }, { "figure_ref": [], "heading": "Compared Baselines", "publication_ref": [ "b2", "b4", "b20", "b13", "b14", "b15", "b16" ], "table_ref": [], "text": "We compare retrieval models of two retrieval mechanisms representative. Single-Hop retrieval method that retrieves all evidence at once (Random, BM25 (Amati, 2009), DPR (Karpukhin et al., 2020)). Multi-hop retrieval methods retrieve one evidence iteratively in one step (GRR (Asai et al., 2020), MDR (Xiong et al., 2021), SSG (Thorne et al., 2021)). We use the code and parameter settings provided by the original papers for all baselines. For single-Hop retrieval models (BM25, DPR, SSG), we retrieve the top-k evidence, where k is the size of the golden evidence set.\nWe also explore the potential of large language models (LLM) in solving complex reasoning tasks through few-shot learning (Wei et al., 2022;Weng et al., 2022Weng et al., , 2023)). To this end, we have developed five reasoning graph prompts for LLM, detailed in the appendix A.6. These prompts aim to enable the construction of a reason graph by LLM.\nAll methods are tested in the Dev set at the end of each round, and the model with the highest retrieval accuracy in the Dev set is selected for testing. We repeate the process three times by replacing the random seeds and average them as the final result." }, { "figure_ref": [], "heading": "Implementation", "publication_ref": [ "b10" ], "table_ref": [], "text": "To measure retrieval mechanism in a fairer opendomain setting, We uniformly use T5 model (Raffel et al., 2019) 4 as reader, and input retrieval evidence of different methods into a fine-tuned T5 model to generate answer. Specifically, we provide the correct evidence and questions in the training set to 4 https://huggingface.co./t5-base the reader for training, three readers were trained by different random seeds. A bert-base-uncased model is chosen as text encoder for extracting feature. We use AdamW (Loshchilov and Hutter, 2018) with warm-up as the optimizer. The learning rate, epoch and batch size are set to 1×10 -5 , 20, 8 respectively. Text maximum length n was set as 30 and the d was set as 768." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b21", "b7", "b0" ], "table_ref": [], "text": "In retrieval task, correctness were measured in terms of Explanation Graph and Evidence Set. 
Following previous works (Yang et al., 2018;Dalvi et al., 2021), Exact Match (EM) , Precision, Recall and F1 was adopted. As for Explanation Graph evaluation, we used three indicators, Graph Matching (GM) evaluates whether the retrieved evidence graph is consistent with golden evidence graph. Graph Structure (GS) evaluates whether retrieved graph structure and golden graph structure are isomorphic, it will ignore nodes accuracy. Graph Editing Distance (GED) (Abu-Aisheh et al., 2015) measures how many steps does converting retrieved evidence graph to the golden one need. Then we use EM to measure the performance of QA task." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [ "tab_3", "tab_4", "tab_3" ], "text": "The graph structure and the set retrieval both play a critical role. As shown in Table 3, singlehop methods like DPR perform well in set recall and QA, while multi-hop methods like SSG excel in graph accuracy and QA. This highlights the importance of both the evidence graph structure and set retrieval for accurate question answering. This suggests that previous datasets (Qi et al., 2021), which only evaluate the accuracy of the retrieved set, are not sufficient for measuring QA performance. Additionally, as Table 4 shows, incorporating graph structure information into evidence results can significantly improve QA performance when using large language models.\nLLM is capable of constructing inference diagrams. In our LLM retrieval, as shown in Table 3, we discovered that while LLM has a low accuracy rate for the evidence set, it surpasses existing multi-hop retrieval in constructing inference graphs (especially for Instruct-GPT, Graph reasoning ability is close to Graph-Hop) which illustrates the reasoning potential of LLMs, which may be an important direction of future Graph-Hop research.\nGraph-Hop is more appropriate for Rea-sonGraphQA. We note that multi-hop retrieval systems have high precision but low recall, as true nodes at the same level are ignored when retrieving along one reasoning chain. However, BGR can improve recall to 95.227% by utilizing a bidirectional retrieval architecture. Additionally, Graph-Hop's Forward is better in evidence retrieval, while Backward has a higher graph construction capability. In the next section, we will further analyze Graph-Hop's performance and explain why BGR's performance is better after subgraph reconstruction." }, { "figure_ref": [ "fig_4", "fig_5", "fig_5", "fig_5" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Bidirectional Retrieval. To better understand cooperation mechanism of Forward retrieval and the Backward retrieval. We perform ablation study on retrieval direction. In Table 3 we can clearly find that backward retrieval has a higher performance in the explanation graph, and forward retrieval has a higher performance in evidence retrieval. The BGR has better performance in explanation graph task, evidence retrieval task. And BGR outperform both forward and backward in QA task. This shows that bidirectional subgraph reconstruction(BCD algorithm) can make up deficiency of both and achieve a balance.\nBidirectional BGR with balanced γ value performs best. As depicted in figure 4, we analyzed the effect of the value γ on the accuracy of retrieving the evidence set and graph. When γ=1, the final evidence graph is G B . When γ = 0, the evidence graph of samples that BSC ̸ = 0 is G F ∪ G B . 
We found that the accuracy of bidirectional BGR is higher than that of forward and backward BGR, because G F performs better in graph structure, while G B tends to retrieve more accurate evidence sets, and the introduction of γ achieves a balanced result in the evidence set and graph structure.\nWhile BGR has achieve strong performance, its still an on-going challenge for graph-hop QA task. This is a meaningful task that are expected to promote development of TQA in knowledge reasoning and interpretability. BGR adapts to different question types. We divide the test set into 5 different question types. Figure 5(A) shows detailed accuracy of We can find that the evidence retrieval ability of the BGR can adapt to different kinds of questions, especially \"Comparison\" and \"Bool\". However, when faced with the task of constructing evidence graph, it is easy to miss nodes and edges. Even in the \"Count\" question, the BGR cannot correctly predict any explanation graph. This proves that the graph construction task still has a certain complexity, and the BGR still has a large room for improvement in the construction of retrieval evidence graphs. BGR performs well in complex, multi-hop explanation graph structures. We classify and compare according to the graph structure, which are single-chain single-hop, single-chain multi-hop, multi-chain single-hop, and multi-chain multi-hop. In Figure 5, more complex structure graph show the better retrieval performance, which proves that BGR can efficiently retrieve evidence in complex text question answering. In addition, BGR has achieved the best performance in MCMH explanation graph structures compared with the other three types, which even close to the QA accuracy with perfect retrieval. It shows that BGR is suitable for graph-hop retrieval. However, the more complex the graph structure is, the more edges there are. We believe that the modeling between edges is challenging due to the high similarity of edges between different nodes, which encourages researchers to conduct further research on explanation graph retrieval in the future. More detailed experimental results are provided in A.4.\nThe construction of multi-chain and multi-hop explanation graph is still challenging. We have evaluated how varying hop and chain number of evidence graph structure influencing graph structure (GM, GS, GED), evidence set (F1, EM), and question answering (QA EM). Our findings reveal that retrieving evidence graphs and answering questions from more complex evidence structures remains a challenging task. Specifically, as shown in Figure 5, the graph structure performance of evidence graph retrieval is strong for simple graphs but poor for complex ones, and the Exact Match of evidence sets retrieval is poor in complex graph structures. This results in relatively lower performance in question answering for complex graph structure samples." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Our study introduces the ReasonGraphQA dataset, the first textual database QA dataset with an explanation graph, which provides complex structured retrieval assistance for graph retrieval systems. We have tested various traditional evidence retrieval methods on the ReasonGraphQA dataset and evaluated them manually. 
Additionally, we propose the graph-hop retrieval paradigm and develop a bidirectional graph retrieval model, which significantly improves the evidence retrieval and graph construction capabilities of complex question answering by reconstructing reasoning paths in different directions. Future research utilizing the Rea-sonGraphQA dataset can enable fine-grained analysis of the explanation graph output from models, leading to further advancements in real and complex QA environments. While the current methods have several limitations, This presents opportunities for future research to improve upon them." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "There are several limitations to our study. Firstly, the ReasonGraphQA dataset is built using a pretrained language model to convert triples into a text database, which may lead to slight differences from the actual evidence. Secondly, we found that some complex graphs may have multiple possible explanation graphs, which can affect the model's training. We have provided detailed statistics on the quality of the ReasonGraphQA dataset in the supplementary material. Thirdly, the bidirectional graph retrieval model (BGR) has a higher time and space complexity compared to other methods, as it needs to retrieve both breadth and depth. This may affect its performance in pure multi-hop tasks." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "Our research aimed to graph-hop retrieval in complex textual question answering. We use the pretraining language model to generate a large number of fluent evidences based knowledge base triples. we also realized that, due to our extensive use of pre-trained models with fact data from the Internet, the proposed method does not need manual annotation and reduce the carbon costs, it may be produce inappropriate text (For example, offensive, racially or gender-sensitive responses).\nWe have carefully considered the above issues and provided the following details: (1) All fact data used is collected from the Internet, and it is inevitable that offensive, racially or gender-sensitive evidence facts will occur. We delete the sentences of evidence facts that are offensive, racially or gender-sensitive as much as possible. (2) The quality of the ReasonGraphQA dataset will affect the credibility of the robustness evaluation. We hope to maximize the reliability and implementability of the system based on such evaluation benchmarks. (3) Finally, since the external knowledge base and KQA-Pro dataset are used to build the ReasonGraphQA, the information sources of these data also suffer from issues such as risk and bias. Reducing these potential risks requires ongoing research." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 More Comprehensive Related Work", "publication_ref": [ "b13", "b26", "b18", "b8", "b3" ], "table_ref": [], "text": "Textual question answering (TQA) requires retrieve evidence from a large corpus to answer natural language questions. Some researchers proposed a novel TQA task over natural language database (NLDB) and support natural language database queries such as filtering, comparison and aggregation, where database is consist of unordered sets of textual facts (Thorne et al., 2021;Zhu et al., 2022). Each fact is composed of text with different meanings rather than triples that unlike knowledge base QA. 
It requires comprehensive reasoning and retrieval of text sentences (Wolfson et al., 2020). These NLDB tasks challenge the model with discretization and interpretable reasoning, where querying natural language databases with filtering, comparison, numerical operation queries (Gupta et al., 2019;Dua et al., 2019), and other operations (Andor et al., 2019) still remains to be challenge.\nDespite the rapid progress in TQA, they ignore the problem of multi-hop retrieval in multi-chain fact sets that may appear in complex textual question answering. For example, eQASC (Jhamtani and Clark, 2020) and BeerQA (Qi et al., 2021) are limited in breadth search, and the WIKINLDB In comparison, the proposed ReasonGraphQA requires graph retrieval from large-scale textual databases. And we focus on the discrete reasoning over textual evidences, which greatly evaluate the structured path modeling and discrete reasoning ability of QA systems over textual database." }, { "figure_ref": [ "fig_7" ], "heading": "A.2 Data Analysis", "publication_ref": [], "table_ref": [], "text": "Each graph has an average of 5.3 edges and 4.2 nodes on everage. ReasonGraphQA contains 262 nonisomorphic graph structures, According to nine different ask strategies of KQA PRO, we divide questions into five types. in Figure 6, which includes \"Query\", \"Comparison\", \"Count\", \"Bool\", \"Qualifier\". the \"Comparison\" involves comparison of multiple evidences. The \"Query\" type inquires head or tail entity of relational knowledge, the \"Qualifier\" query for attributes and relations, the \"Count\" type's answer is number, and the \"bool\" is to judge correctness of a statement. Most question type involves a variety of graph reasoning.\nThe question types that involve the most graph structures are \"Queryname\", \"Count\" and \"Queryattribute\", which comprehensively involve value comparison, relational knowledge, and time knowledge. This further shows the complexity of our data set. We engaged three graduate students to conduct manual evaluation of a randomly selected portion of our dataset and the test set. These students, hailing from China, were provided with a set of samples marked by the author as a reference for their evaluation. They conducted a thorough evaluation according to established standards, working an average of 10 days. Work eight hours a day. We compensated them for their efforts with a total of $1750, which exceeds the average labor standard in China.\nIn Table ??, we can find that a large proportion of the samples only have some problems in the fluency of the evidence text and the problem text.\nWe manually inspect randomly selected samples and assess its quality of questions, facts and evidence graphs by human. We randomly sampled 500 samples, covering the 10 graph structure types with the largest number of samples. For questions, the annotator is required to score according to fluency and comprehensibility. Of the 500 questions, only 9 are considered unsmooth. This is because there is a typo in the question. All the questions are understandable.\nIn order to evaluate the quality of mapping knowledge map triples to facts, we evaluated 200 text databases (each text database contains 25 facts) and scored them from smoothness, faithfulness and sufficiency (whether irrelevant information is included). When and only when all facts meet the requirements, the fact base is considered to meet the requirements. 
All evidences of 182/200 text databases are fluent, and only one or two of the remaining 18 text databases do not meet the requirements of fluency.\nTo evaluate the quality of the mapping of facts from knowledge triples, 500 sampled facts were scored based on smoothness, faithfulness, and sufficiency. 98.2% (491/500) facts were smooth, with only 9 containing repeated text. 98.6% (493/500) facts were faithful to the relation of the triples, with only 7 containing additional information. Of these, 3 fact replaced incorrect information with correct information, resulting in a faithfulness and sufficiency score of 0. The remaining 4 facts contained additional information that enriched the context.\nIn addition to the separate evaluation of questions, facts, and evidence graphs, we also evaluated the overall quality of 100 randomly sampled databases (each containing 25 facts) using the six evaluation indicators mentioned above. Of these, 92 databases were deemed to be of good quality.\nWe believe that although some samples of Rea-sonGraphQA have problems, the overall score is high, and the average perfect sample score can reach 9.21, which reflects the high quality of our dataset." }, { "figure_ref": [], "heading": "A.4 Performance of Graph-Hop when different graph structures", "publication_ref": [], "table_ref": [ "tab_9", "tab_5" ], "text": "Table 9 presents a detailed analysis of the performance of Graph-Hop on four different types of graph structures. The results indicate that Graph-Hop demonstrates strong performance across all the structures tested. Among them, Graph-Hop particularly excels in its ability to navigate and understand multi-chain and multi-hop structures. These structures are known to be challenging for traditional graph traversal methods, making Graph-Hop's performance on these structures all the more noteworthy. Additionally, it should be noted that the results in Table 5 demonstrate that Graph-Hop is an effective and efficient tool for handling and understanding complex graph structures. " }, { "figure_ref": [], "heading": "Method Training Prediction EC", "publication_ref": [], "table_ref": [], "text": "Forward Retrieval 4 ± 1 0.15 ± 0.1 2.5 ± 0.5 Backward Retrieval 4 ± 1 0.1 ± 0.1 2.5 ± 0.5 BGR 7.5 ± 1 0.25 ± 0.1 5 ± 0.5" }, { "figure_ref": [], "heading": "A.5 Hyperparameter and Detailed Experimental Results", "publication_ref": [ "b17" ], "table_ref": [ "tab_10", "tab_8", "tab_6" ], "text": "All our experiments were conducted in a 10900k CPU computer with 128G memory and RTX3090 GPU. We conduct experiments using the PyTorch (Paszke et al., 2019) and the huggingface (Wolf et al., 2020) framework. We use linear decay of learning rate by 1 × 10 -6 and the Table 10 shows all our super parameter settings. We have counted the training time, prediction time and energy consumption in Table 8 for the BGR model.\nIn Table 6, we further evaluate the performance of the bidirectional method in different graph structures. We divided the ReasonGraphQA dataset into four parts according to the graph structure. They are single-chain single-hop, multi-chain single-hop, single-chain multi-hop, and multi-chain multi-hop. This helps to understand and analyze the performance of the current model for different structures.\nWe can find that multi-chain and multi-hop tasks are more difficult than other tasks, whether in graph construction or retrieval. 
The bidirectional method can help highlight the path representation advantage of bidirectional retrieval, and it can be significantly improved in multi-chain and multi-hop tasks. But it will slightly reduce the performance of the model for simple problems. We think that this is because for simple problems, the representations in bidirectional are mostly consistent, and there is no major conflict in the learning direction. Therefore, bidirectional is difficult to significantly improve the effect." }, { "figure_ref": [ "fig_7" ], "heading": "A.6 Large language Models Setting", "publication_ref": [ "b5", "b23" ], "table_ref": [], "text": "We evaluated the performance of the original GPT-3 (Brown et al., 2020) (code-davinci-001) model, the Instruct-GPT model (Ouyang et al., 2022) (code-davinci-002), and GLM (Zeng et al., 2022) model on the ReasonGraphQA datasets. All GPT models' predictions were obtained through OpenAI's API. Due to server limitations, the GLM model used Int8 inference on 8 RTX3090 with 512G RAM Memory.\nWe conducted all experiments in the few-shot setting, without any fine-tuning the orginal language model. Apart from the context, we have not provided any other prompt text.\nWhen incorporating graph structure into the input of a language model, the thought chain can serve as a useful approach. As depicted in Figure 6, we utilize the phrase \"Then\" to denote the relationship between adjacent nodes, and \"On the other hand\" to indicate the relationship between different chains. " }, { "figure_ref": [], "heading": "A.7 Evaluation Details", "publication_ref": [ "b21", "b7", "b0", "b24", "b22", "b10" ], "table_ref": [], "text": "We follow the previous work on the retrieval task and explanation graph task (Yang et al., 2018;Dalvi et al., 2021), and we consider three evaluation indicators. They are the accuracy of interpretation graph construction, the recall of evidence retrieval, and the accuracy of NLDB QA. Since the explanation graph can be expressed in many different forms, it needs to be evaluated comprehensively. Graph Matching (GM) is used to evaluate whether the structures of the two graphs are consistent. If all edges of two graphs are the same, they are considered to be consistent. We use the Graph Structure (GS) to evaluate whether the two graphs are isomorphic, which means that the two graphs may have different nodes but have the same graph structure. Graph Editing Distance (Abu-Aisheh et al., 2015) to a correct graph by calculating the addition, deletion, and replacement of nodes and edges, which can interpretably measure the distance between the predicted graph and the correct graph. Retrieval ability is essential to the task of NLDB. We use the F1 to valuate the retrieved evidence text.\nIn recent years, more and more researchers use the language model to evaluate text (Zhang et al., 2019;Yuan et al., 2021). It can evaluate the semantic level and has certain robustness. Therefore, we use the T5 model (Raffel et al., 2019) 5 trained in the ReasonGraphQA answer generation task as a reader to deeply consider the quality of the retrieved evidence. Specifically, we provide the correct evidence and questions in the training set to the reader for training and then provide the retrieval results obtained by the retrieval system to the reader model for a generation. We use different random numbers to train three groups of readers, and take the complete matching rate of their generated results as the evaluation index to measure the accuracy of QA. 
Novella in 1989, 1990, 2001, 2008 and 1987. 22. " } ]
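For completeness, the following sketch shows one way the three explanation-graph metrics described in the Evaluation Metrics section and Appendix A.7 could be computed with networkx; it is an approximation of the stated definitions rather than the evaluation script behind the reported numbers (GM compares edge sets, GS tests isomorphism while ignoring node identities, and GED counts edit operations).

```python
import networkx as nx

def graph_matching(pred: nx.DiGraph, gold: nx.DiGraph) -> bool:
    """GM: the prediction counts as correct only if all edges coincide with the gold graph."""
    return set(pred.edges()) == set(gold.edges())

def graph_structure(pred: nx.DiGraph, gold: nx.DiGraph) -> bool:
    """GS: structural match that ignores which evidence fills each node."""
    return nx.is_isomorphic(pred, gold)

def graph_edit_distance(pred: nx.DiGraph, gold: nx.DiGraph) -> float:
    """GED: number of node/edge edits needed to turn the prediction into the gold graph."""
    return nx.graph_edit_distance(pred, gold)

# Toy usage: a prediction that recovers only one of the two gold chains.
gold = nx.DiGraph([("e1", "e2"), ("e3", "e4")])
pred = nx.DiGraph([("e1", "e2")])
print(graph_matching(pred, gold), graph_structure(pred, gold), graph_edit_distance(pred, gold))
```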
In textual question answering (TQA) systems, complex questions often require retrieving multiple textual fact chains and performing multiple reasoning steps, while existing benchmarks are limited to single-chain or single-hop retrieval scenarios. In this paper, we propose Graph-Hop, a novel multi-chain and multi-hop retrieval and reasoning paradigm for complex question answering. We construct a new benchmark called ReasonGraphQA, which provides explicit and fine-grained evidence graphs for complex questions to support interpretable, comprehensive, and detailed reasoning; ReasonGraphQA also offers an advantage in reasoning diversity and scale. Moreover, we propose a strong Graph-Hop baseline, the Bidirectional Graph Retrieval (BGR) method, which generates an explanation graph of textual evidence for knowledge reasoning and question answering. We thoroughly evaluate existing evidence retrieval and reasoning models on ReasonGraphQA. Experiments highlight that Graph-Hop is a promising direction for answering complex questions, but it still has certain limitations. We further study mitigation strategies to meet these challenges and discuss future directions.
Towards Graph-hop Retrieval and Reasoning in Complex Question Answering over Textual Database
[ { "figure_caption": "Figure 1 :1Figure 1: An example of ReasonGraphQA, it requires multiple chains of fact sets and each chain involves twohop reasoning in answering this complex question.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "(More comprehensive related work is shown in Appendix A.1) : Which city has larger population, the capital of Belgium or the largest city in the Swiss? the largest city, Zurich) (Zurich, Population, 420K) ...... The capital of Belgium is Brussels which has a history of 1000 years.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: ReasonGraphQA construction process. We use Golden Program to generate explanation evidence graphs and create a text database for each question-answer pair. It consisting of three steps: finding question-relevant triples, generating textual evidence, and generating an explanation evidence graph.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Overview of our proposed Bidirectional Graph-hop Retrieval (BGR) method.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: F1 and GM changes with different γ.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The retrieval performance for different question and graph structure types.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Graph statistics of ReasonGraphQA", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Two different comparison methods when using LLM for zero-shut QA.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": ". The Martian Child won the Hugo Award for Best Novelette in 1995. David Gerrold is the winner of the Hugo Award. 23. David Gerrold was nominated for the Nebula Award for Best Novel. 24. John Kessel won the Nebula Award for Best Novelette for Pride and Prometheus in 2008. A: Q-16;Q-3;", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": ": The statistics of ReasonGraphQA, where SC,MC, SH and MH indicate single-chain, multi-chain,single-hop, and multi-hop, respectively.", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Main experimental results of BGR compared with three types of Retrieval methods on Retrieval-Reader architecture. In addition, we report the results of human in the test set to show the upper bound of human.", "figure_data": "ModelW/O Reason Graph With Reason GraphGPT-31.0523.55Instruct-GPT12.4345.15GLM4.467.15", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Experimental results of BGR at different hops and different chain numbers.", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Explanation graph example. 
We use the same color to represent the same nodes.", "figure_data": "TypeQuestionEvidenceAnswerGraphSigleChain SingleHopHow is Heaven's Gate related to Joseph Cotten?E.Heaven's Gate stars Joseph Cotten. ...ber Cast mem-QEQueryAE 1ComparisonMultiChain SingleHopWhich one has more area be-tween Billings and Juneau?E 1 .Billings area is 113.467037. E 2 .Juneau area is 8427.626992. ...JuneauQE 2A ComparisonE1.William and his parents, their home-SingleChain MultiHopthe Netherlands? Is William's home-town the capital ofE2.Amsterdam is located in the west of town is Amsterdam.Continent QE1E2BoolAthe Netherlands and is the capital of theNetherlands....E 1 1 . The capital of China is Beijing, which has a history of more than 3000E 1 1E 1 2ComparisonWhich city hasyearsMultiChainlarger population,Quebec QAMultiHopthe capital of China or the largest city in the United States?E 1 2 .21.886 million people live in Bei-jing.E 2 12 E 2ComparisonE 2 1 . New York is the largest city in theUnited States.E 2 2 .New York has a large population of8,510,000..................ExampleDataTriplet{Amsterdam, location, Western of Netherlands},Facts{Amsterdam, the capital of, the Netherlandsp}Please describe the following entities inInputone sentence: Amsterdam, location, Westernof Netherlands. the capital of, the NetherlandspAmsterdam is locatedOutputin the west of theNetherlands and isthe capital of the Netherlands.", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Example when using the pre-trained language model to generate facts", "figure_data": "", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Time (Hour) and Energy Consumption (KWH) statistics of Bidirectional Graph-hop Retrieval", "figure_data": "", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "We show the experimental results under different graph structures.", "figure_data": "GraphMethodExplanation Graph GM↑ GA↑ GED↓F1↑Retrieval Precision↑ Recall ↑EM↑QA EM Acc↑SCForward77.419 78.6290.63387.40184.76193.54877.41988.710SHBackward72.581 73.7900.63185.64582.30592.74272.58187.903Bidirectional 73.790 75.4030.66587.37583.86895.16173.79088.306MCForward50.370 53.3331.51180.02579.38354.07457.03777.037SHBackward47.407 52.5931.88969.01267.03772.96349.63075.556Bidirectional 49.630 52.5931.40085.35883.95190.01260.74181.481SCForward21.678 22.2022.29491.57890.41594.84373.77662.413MHBackward70.629 71.6780.67592.00991.30595.10573.07761.538Bidirectional 70.455 71.6780.68092.34791.64395.54272.72761.364MCForward9.53310.117 15.889 90.30590.46491.74261.47967.315MHBackward35.603 36.187 11.652 83.00484.33283.32869.75862.451Bidirectional 34.825 35.603 13.125 95.05594.91396.35164.39768.289HyperparameterValueBGR Encoderbert-base-uncasedBGR DecoderTransformersHidden Size768Num Layers12(Encoder) + 1(Decoder)γ0.2Dropout0.1Linear decay1e-6Learning Rate1e-5Reader Learning Rate1e-4Batch size8Max length30Num Epochs20GPU DRAM usage18GParams139M", "figure_id": "tab_9", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Hyper-parameter settings.", "figure_data": "", "figure_id": "tab_10", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "is a step of converting Q: Which area is smaller, St. Louis or Eau Claire? A: Let\"s think step by step. St. Louis area is 171.128084. On the other hand, Eau Claire area 89.579141 is in the French department of Eau Claire.", "figure_data": "Q: Which area is smaller, St. Louis or Eau Claire?(LM Input)1. 
St. Louis area is 171.128084.2. Eau Claire area 89.579141 is in the French department of Eau ClaireA:Eau Claire(Output)(A) W/O Reason Graph(LM Input)Therefore, theanswer isEau Claire(Output)(B) With Reason Graph", "figure_id": "tab_11", "figure_label": "", "figure_type": "table" }, { "figure_caption": "PROMPT FOR REASONGRAPHQA Q: Does the film Veronica Mars (whose release date is 2014-03-08) or Rambo (that succeeds Rambo III) have the shorter running time? 0. This Side of Resurrection is a drama film which was directed by Joaquim Sapinho and produced by the latter. The film was released on September 8, 2011. 1. The film Back on Track has a duration of +114 minutes. 2. Red Dust is a film that has a duration of +107 minutes. 3. The film Slack Bay was made in 2016 and has a 14 class in the Indie rating. It was directed by Bruno Dumont and stars Valeria Bruni Tedeschi. It was released on 26 January 2017. 4. The film Saving Mr. Banks was a feature film which was released in 2013 and 2014. Its title is for Hispanic America. The film stars Tom Hanks and Melanie Paxson. 5. Fierro is a Spanish language film which was released in 2007. It is a drama with Hector Calori and Roly Serrano as the main characters. 6. Me You Them is a film starring Lima Duarte and Regina Case. It was released on 16 May 2000 and 22 March 2001. The film's musical score was provided by Gilberto Gil. 7. The film Come What May has a duration of +114 minutes. 8. The biographical film La macchinazione was written and directed by David Grieco. It was released in 2016 and is a drama film with a 2017 release date. 9. The Rewrite is a 2014 film starring Allison Janney. The film has an AL rating. It was released on 13 November 2014 and on 25 December 2014. 10. The Rambo is a 90-minute television series. 11. The film Joyeux No l is set in 1914. Its stars are Rolando Villazón, Dany Boon and Joachim Bißmeier. The film was nominated for the Academy Award for Best International Feature Film. 12. The Italian film Open Your Eyes (1997) is about telepresence. Its main characters are Jorge de Juan and Isabel Serrano.", "figure_data": "The director of photography is Hans Burman. The film alsostars Ion Gabella.13. Robin Hood ( 1991 British film ) has a duration of +133minute.14. The film redoubtable has a duration of +107 minutes.15. The film Summer Games, a drama, was released in 2011.Its director was Rolando Colla and stars Giorgio Gobbi andAlessia Barela.16. Roland Verhavert is a director of films and is related to thecategory of films directed by Roland Verhavert.17. No Retreat, No Surrender 2 is a 1987 film directed byCorey Yuen. It stars Max Thayer and Matthias Hues. The filmwas released on 28 January 1988.18. The sequel to Rambo III was released in 2005.19. Forbidden Hours is a drama film that was released in 1927and 1928. It was directed by Harry Beaumont and starredRamon Novarro.20. Veronica Mars is 107 minutes long.21. Humidity is a drama film from Serbia which was releasedon 15 February 2016. The film's cast includes Slaven Do lo.22. Rango ( 2011 film ) has a duration of +107 minute.23. The Wonders is a film which was released in 2014 and2015. It stars Sabine Timoteo and Sam Louwyck.24. Risc vs. Reward is an extended play by Photek. It wasreleased in 1997 and its genre is downtempo. It was followedby Modus Operandi.A: Q-18-20;Q-18-10;", "figure_id": "tab_12", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Few-shot exemplars 1. 
Does the film Veronica Mars (whose release date is 2014-03-08) or Rambo (that succeeds Rambo III) have the shorter running time? 0. This Side of Resurrection is a drama film which was directed by Joaquim Sapinho and produced by the latter. The film was released on September 8, 2011. 1. The film Back on Track has a duration of +114 minutes. 2. Red Dust is a film that has a duration of +107 minutes. 3. The film Slack Bay was made in 2016 and has a 14 class in the Indie rating. It was directed by Bruno Dumont and stars Valeria Bruni Tedeschi. It was released on 26 January 2017. 4. The film Saving Mr. Banks was a feature film which was released in 2013 and 2014. Its title is for Hispanic America. The film stars Tom Hanks and Melanie Paxson. 5. Fierro is a Spanish language film which was released in 2007. It is a drama with Hector Calori and Roly Serrano as the main characters. 6. Me You Them is a film starring Lima Duarte and Regina Case. It was released on 16 May 2000 and 22 March 2001. The film's musical score was provided by Gilberto Gil. 7. The film Come What May has a duration of +114 minutes. 8. The biographical film La macchinazione was written and directed by David Grieco. It was released in 2016 and is a drama film with a 2017 release date. 9. The Rewrite is a 2014 film starring Allison Janney. The film has an AL rating. It was released on 13 November 2014 and on 25 December 2014. 10. The Rambo is a 90-minute television series. 11. The film Joyeux No l is set in 1914. Its stars are Rolando Villazón, Dany Boon and Joachim Bißmeier. The film was nominated for the Academy Award for Best International Feature Film. 12. The Italian film Open Your Eyes (1997) is about telepresence. Its main characters are Jorge de Juan and Isabel Serrano. The film Summer Games, a drama, was released in 2011. Its director was Rolando Colla and stars Giorgio Gobbi and Alessia Barela. 16. Roland Verhavert is a director of films and is related to the category of films directed by Roland Verhavert. 17. No Retreat, No Surrender 2 is a 1987 film directed by Corey Yuen. It stars Max Thayer and Matthias Hues. The film was released on 28 January 1988. 18. The sequel to Rambo III was released in 2005. 19. Forbidden Hours is a drama film that was released in 1927 and 1928. It was directed by Harry Beaumont and starred Ramon Novarro. 20. Veronica Mars is 107 minutes long. 21. Humidity is a drama film from Serbia which was released on 15 February 2016. The film's cast includes Slaven Do lo. 22. Rango ( 2011 film ) has a duration of +107 minute. 23. The Wonders is a film which was released in 2014 and 2015. It stars Sabine Timoteo and Sam Louwyck. 24. Risc vs. Reward is an extended play by Photek. It was released in 1997 and its genre is downtempo. It was followed by Modus Operandi.", "figure_data": "PROMPT FOR REASONGRAPHQAQ:", "figure_id": "tab_13", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Few-shot exemplars 2. Which area is smaller, St. Louis or Eau Claire? 0. The area of Belle Prairie City, Illinois is +0.45 square mile. 1. La Prairie , Illinois has an area of +0.23 square mile. 2. The area of Bellerive, Missouri is +0.873665 square kilometres. 3. Pierpont, in Missouri, has an area of +0.25 square mile. 4. Montclare, Chicago has an area of +2.56 square kilometres. 5. Fredericton is located in the area of 130680000. 6. Lewistown, Illinois has a surface area of +5.177494 square kilometres. 7. Old Shawneetown, Illinois has an area of +0.53 square mile. 8. St. 
Marys, Iowa has an area of +0.361422 square kilometres. 9. San Jose, Illinois has an area of +0.50 square mile. 10. La Loge Pas-de-Calais has an area of +0.68 square kilometres. 11. Eau Claire, Calgary has a surface area of +0.4 square kilometres. 12. Saunemin, Illinois has an area of +0.24 sq. mi. 13. St. Louis area is 171.128084.", "figure_data": "PROMPT FOR REASONGRAPHQAQ:", "figure_id": "tab_14", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Few-shot exemplars 3. Which work of Fritz Leiber Junior was awarded Nebula Award for Best Novella? 0. Brian Aldiss won the Nebula Award for Best Novella. 1. Frederik Pohl won the Nebula Award for Best Novel in 1976 for Gateway and in 1977 for Man Plus. 2. James Tiptree Jr. won the Nebula Award for Best Novella for the novel \"A Momentary Taste of Being\". He also won the Nebula Award for Best Novella for \"Houston, Houston, Do You Read?\" in 1975. He also won the Nebula Award for Best Novella in 1976 and in 1985. 3. Fritz Leiber Junior won the Nebula Award for Best Novella for Ill Met in Lankhmar. 4. The Locus Award for Best Novel is a literary award. It was formerly known as the Locus Award for Best Science Fiction Novel. 5. Fritz Leiber is a human being who wrote science fiction. He was influenced by Robert E. Howard. He won the Hugo Award for Best Novelette. He was nominated for the Hugo Award for Best Dramatic Presentation. He won the Geffen Award. 6. David Gerrold was nominated for the Nebula Award for Best Novella in 1998. 7. The winner of the Nebula Award for Best Novelette is Eugie Foster. 8. Robert Silverberg was the winner of the Hugo Award for Best Novelette. 9. In addition to the Nebula Award nominations, Le Guin was nominated for the Nebula Award for Best Novel for Powers. 11. Then there is the Nebula Award for Best Novel for Powers, the book that was nominated for the Nebula Award for Best Novel. 13. Jonathan Lethem was nominated for the Nebula Award for Best Novella in 2000. 14. Geoff Ryman's novelette What We Found won the Nebula Award for Best Novelette in 2007. He was nominated for the award in 2011. 15. Fritz Leiber was nominated for the Hugo Award for Best Novel. 16. Fritz Leiber Junior won the Nebula Award for Best Novella. 17. Samuel R. Delany won the Nebula Award for Best Novel for The Einstein Intersection in 1966. He also won the Nebula Award for Best Novel for Dhalgren in 1967. He also won the Nebula Award for Best Novel for Triton in 1975. 18. Fritz Leiber won the Hugo Award for Best Novelette. 19. David Gerrold won the Nebula Award for Best Novelette. 20. The novel Binti won the Nebula Award for Best Novella in 2015. 21. Lucius Shepard won the Hugo Award for Best Novella for Barnacle Bill the Spacer in 1993. He has also won the Hugo Award for Best", "figure_data": "PROMPT FOR REASONGRAPHQAQ:", "figure_id": "tab_15", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "Few-shot exemplars 4. What is the connection between Margot Kidder to Brian De Palma? 0. The drama genre Counterblast was created by Guy Morgan and Jack Whittingham. It stars Martin Miller and Nova Pilbeam. Margaretta Scott and Martin Miller are also in the cast. 1. The screenwriter of Mix Me a Person was Ian Dalrymple. The drama genre of the film is Drama. The cast includes Carole Ann Ford, Donald Sinden, Tony Booth and Sergei Nolbandov. 2. The romantic comedy Dear Heart stars Richard Deacon, Barbara Nichols, Ruth McDevitt and Martin Manulis. It was written by Tad Mosel. 3. 
The Spy Who Dumped Me was produced by Brian Grazer and Lionsgate. It is rated B15 by the RTC. It stars Justin Theroux and Mila Kunis. It is written in English. 4. Lyra Belacqua is a fictional human being created by Philip Pullman. She is a film character portrayed by Dakota Blue Richards and Dafne Keen. 5. Waiting for the Light is a comedy film starring Teri Garr, Shirley MacLaine and produced by Caldecot Chubb. It was written by Christopher Monger. 6. Paper Heart is a comedy-drama starring Michael Cera and Demetri Martin. The film was written by Charlyne Yi and produced by Charlyne Yi. It also stars Derek Waters. 7. Philippa Boyens is the wife of Paul Gittins, who has a child called Calum Gittins. 8. I, Cesar is a drama film released on 9 April 2003. Its stars are Maria de Medeiros and Karine Silla. It was directed by Richard Berry. 9. Narc is a drama (film and television) with a mystery genre. Ray Liotta and Alan van Sprang star. Joe Carnahan is the screenwriter. 10. The Halfway House is a drama produced by Ealing Studios and distributed by Ealing Studios. It stars Glynis Johns and Françoise Rosay. The producer is Michael Balcon. 11. The Object of My Affection is a romantic comedy directed by Nicholas Hytner. It stars Peter Maloney and Sarah Hyland. The screenplay was written by Wendy Wasserstein. 12. Mr. Denning Drives North is a mystery film made by London Films. Its screenwriter is Alec Coppel, it stars John Mills and Phyllis Calvert. It was produced by Stephen Mitchell. 13. The screenwriter of The Girl and the Millionaire is Peer Guldbrandsen. Paul Hagen 14. I Thank a Fool is a drama and mystery film written by Karl Tunberg. The cast includes Richard Wattis, Diane Cilento and Cyril Cusack. 15. The Wrong Man is a film noir written by Angus MacPhail. It stars Dayton Lummis, Esther Minciotti and Harold J. Stone. 16. Margot Kidder is married to Brian De Palma. 17. David Krumholtz starred in the episode \"Scorched\" in Numbers. He is the actor Charlie Eppes. 18. Amy Spettigue is a character in Where's Charley?. 19. The Impatient Alchemist is a 2002 mystery film directed by Patr cia Ferreira. Chete Lera and Miguel ngel Sola are stars. 20. Martha O'Driscoll and Lou Costello are both stars, as is production designer John B. Goodman. Here Come the Co-Eds was also a TV series. 21. Mary Lynn Rajskub, Brent Spiner and Broderick Johnson are the stars of Dude, Where's My Car? 22. Philip Merivale is married to Gladys Cooper and is the husband of Viva Birkett. He has a child called John Merivale. Merivale is a human being. 23. The Wicker Man is a drama with a mystery genre. It stars Ross Campbell, Ingrid Pitt and Lindsay Kemp. The music is by Paul Giovanni. 24. The US film Catch 44 was produced by Megan Ellison. Its cast includes Malin kerman, Deborah Ann Woll and Michael Benaroya. The film is about a Las Vegas Valley.", "figure_data": "PROMPT FOR REASONGRAPHQAQ:", "figure_id": "tab_16", "figure_label": "14", "figure_type": "table" } ]
Minjun Zhu; Yixuan Weng; Shizhu He; Kang Liu; Jun Zhao
[ { "authors": "Zeina Abu-Aisheh; Romain Raveaux; Jean-Yves Ramel; Patrick Martineau", "journal": "", "ref_id": "b0", "title": "An exact graph edit distance algorithm for solving pattern recognition problems", "year": "2015" }, { "authors": "Oshin Agarwal; Heming Ge; Siamak Shakeri; Rami Al-Rfou", "journal": "", "ref_id": "b1", "title": "Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training", "year": "2021" }, { "authors": "Giambattista Amati", "journal": "BM", "ref_id": "b2", "title": "", "year": "2009" }, { "authors": "Daniel Andor; Luheng He; Kenton Lee; Emily Pitler", "journal": "", "ref_id": "b3", "title": "Giving bert a calculator: Finding operations and arguments with reading comprehension. empirical methods in natural language processing", "year": "2019" }, { "authors": "Akari Asai; Kazuma Hashimoto; Hannaneh Hajishirzi; Richard Socher; Caiming Xiong", "journal": "", "ref_id": "b4", "title": "Learning to retrieve reasoning paths over wikipedia graph for question answering", "year": "2020" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b5", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Danqi Chen; Adam Fisch; Jason Weston; Antoine Bordes", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Reading Wikipedia to answer opendomain questions", "year": "2017" }, { "authors": "Bhavana Dalvi; Peter Jansen; Oyvind Tafjord; Zhengnan Xie; Hannah Smith; Leighanna Pipatanangkura; Peter Clark", "journal": "", "ref_id": "b7", "title": "Explaining answers with entailment trees. empirical methods in natural language processing", "year": "2021" }, { "authors": "Dheeru Dua; Yizhong Wang; Pradeep Dasigi; Gabriel Stanovsky; Sameer Singh; Matt Gardner", "journal": "", "ref_id": "b8", "title": "Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. 
north american chapter of the association for computational linguistics", "year": "2019" }, { "authors": "Tianyu Gao; Xingcheng Yao; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "SimCSE: Simple contrastive learning of sentence embeddings", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "", "ref_id": "b10", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2019" }, { "authors": "Koustav Rudra; Zeon ; Trevor Fernando; Avishek Anand", "journal": "arXiv: Information Retrieval", "ref_id": "b11", "title": "An in-depth analysis of passagelevel label transfer for contextual document ranking", "year": "2021" }, { "authors": "Jiaxin Shi; Shulin Cao; Liangming Pan; Yutong Xiang; Lei Hou; Juanzi Li; Hanwang Zhang; Bin He", "journal": "", "ref_id": "b12", "title": "Kqa pro: A dataset with explicit compositional programs for complex question answering over knowledge base", "year": "2022" }, { "authors": "James Thorne; Majid Yazdani; Marzieh Saeidi; Fabrizio Silvestri; Sebastian Riedel; Alon Halevy", "journal": "", "ref_id": "b13", "title": "Database reasoning over text", "year": "2021" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b14", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Yixuan Weng; Minjun Zhu; Shizhu He; Kang Liu; Jun Zhao", "journal": "", "ref_id": "b15", "title": "Large language models are reasoners with self-verification", "year": "2022" }, { "authors": "Yixuan Weng; Minjun Zhu; Fei Xia; Bin Li; Shizhu He; Kang Liu; Jun Zhao", "journal": "", "ref_id": "b16", "title": "Neural comprehension: Language models with compiled neural networks", "year": "2023" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander M Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Tomer Wolfson; Mor Geva; Ankit Gupta; Matt Gardner; Yoav Goldberg; Daniel Deutch; Jonathan Berant", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b18", "title": "Break it down: A question understanding benchmark", "year": "2020" }, { "authors": "Peiyun Wu; Xiaowang Zhang; Zhiyong Feng", "journal": "", "ref_id": "b19", "title": "A survey of question answering over knowledge base", "year": "2019" }, { "authors": "Wenhan Xiong; Lorraine Xiang; Srinivasan Li; Jingfei Iyer; Patrick Du; William Yang Lewis; Yashar Wang; Wen-Tau Mehdad; Sebastian Yih; Douwe Riedel; Barlas Kiela; Oguz", "journal": "", "ref_id": "b20", "title": "Answering complex open-domain questions with multi-hop dense retrieval", "year": "2021" }, { "authors": "Zhilin Yang; Peng Qi; Saizheng Zhang; Yoshua Bengio; William W Cohen; Ruslan Salakhutdinov; Christopher D Manning", "journal": "", "ref_id": "b21", "title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering. 
empirical methods in natural language processing", "year": "2018" }, { "authors": "Weizhe Yuan; Graham Neubig; Pengfei Liu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Bartscore: Evaluating generated text as text generation", "year": "2021" }, { "authors": "Aohan Zeng; Xiao Liu; Zhengxiao Du; Zihan Wang; Hanyu Lai; Ming Ding; Zhuoyi Yang; Yifan Xu; Wendi Zheng; Xiao Xia", "journal": "", "ref_id": "b23", "title": "Glm-130b: An open bilingual pre-trained model", "year": "2022" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "Learning", "ref_id": "b24", "title": "Bertscore: Evaluating text generation with bert", "year": "2019" }, { "authors": "Fengbin Zhu; Wenqiang Lei; Chao Wang; Jianming Zheng; Soujanya Poria; Tat-Seng Chua", "journal": "Artificial Intelligence", "ref_id": "b25", "title": "Retrieving and reading: A comprehensive survey on open-domain question answering", "year": "2021" }, { "authors": "Minjun Zhu; Yixuan Weng; Shizhu He; Kang Liu; Jun Zhao", "journal": "", "ref_id": "b26", "title": "Reasonchainqa: Text-based complex question answering with explainable evidence chains", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 119.86, 123.5, 333.86, 16.28 ], "formula_id": "formula_0", "formula_text": "✓ sentence ✗ ✓ 1 ReasonGraphQA (Ours) 5 ✓ sentence ✓ ✓ 262" }, { "formula_coordinates": [ 5, 83.84, 624.92, 91.51, 15.41 ], "formula_id": "formula_1", "formula_text": "H i = {h 1 i , • • • , h i k i }" }, { "formula_coordinates": [ 5, 309.82, 134.06, 215.2, 40.47 ], "formula_id": "formula_2", "formula_text": "Ei = ∪ELU (F F N (hi, Ei)) hi ∈ Hi (1) Ei+1 = Ei+1 ∪ Ei, Hi+1 = hi ∪ ẽi | ẽi ∈ Ẽi (2)" }, { "formula_coordinates": [ 5, 316.49, 355.7, 208.65, 26.94 ], "formula_id": "formula_3", "formula_text": "G = G F ∪ G B if BSC(G F , G B ) > γ G B if BSC(G F , G B ) ≤ γ(3)" }, { "formula_coordinates": [ 5, 333.8, 497.88, 191.34, 25.55 ], "formula_id": "formula_4", "formula_text": "BSC(G F , G B ) = Edge F ∩ Edge B Edge F ∪ Edge B(4)" }, { "formula_coordinates": [ 6, 114.23, 88.95, 175.63, 10.69 ], "formula_id": "formula_5", "formula_text": "A = Reader T 5 ({e i |e i ∈ G})(5)" }, { "formula_coordinates": [ 13, 76.84, 627.38, 214.91, 27.99 ], "formula_id": "formula_6", "formula_text": "Forward Retrieval 4 ± 1 0.15 ± 0.1 2.5 ± 0.5 Backward Retrieval 4 ± 1 0.1 ± 0.1 2.5 ± 0.5 BGR 7.5 ± 1 0.25 ± 0.1 5 ± 0.5" } ]
10.18653/v1/2022.sigmorphon-1.11
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Decompounding is the task of separating compound words into their single word constituents. Decompounding is used in user-facing tools such as dictionaries and morphological analyzers (Altinok, † Equal senior authorship. " }, { "figure_ref": [], "heading": "2018", "publication_ref": [ "b38", "b7", "b0", "b23", "b23", "b48", "b43", "b58", "b47", "b26", "b50", "b59", "b26" ], "table_ref": [], "text": "). Historically, it has also been widely used as a preprocessing step for other NLP tasks, e.g. for information retrieval (Monz and De Rijke, 2002;Braschler and Ripplinger, 2004), automatic speech recognition (Adda-Decker and Adda, 2000) and machine translation (Koehn and Knight, 2003).\nDecompounding can come in two similar yet different task formats: (i) compound segmentation and (ii) compound normalization (Ziering and van der Plas, 2016). Compound segmentation is the task of segmenting a word into its compound constituents, while preserving its surface form (e.g. bridesmaid → brides + maid). Compound normalization is the task of recovering the base form of each compound constituent (e.g. bridesmaid → bride + maid). 1Most prior work on decompounding has focused on the few languages with excessively productive compound formation such as Finnish, German and Swedish (Koehn and Knight, 2003;Shapiro, 2016;Riedl and Biemann, 2016). However, compound words occur in a large, diverse number of languages (Vogel and Scalise, 2010). Yet, datasets which annotate compounds with their segmented or normalized form sparsely exist, even in languages with high compound usage. As the first contribution of this work, we aim to address this issue by introducing a dataset of 255k compound words and their normalized form as well as non-compound words covering 56 languages obtained from Wiktionary (www.wiktionary.org).\nUsing our dataset, we then find that large language models (LLMs), which typically rely on subword-based tokenization (Sennrich et al., 2016;Kudo and Richardson, 2018), struggle with decompounding, as illustrated in Figure 1. Performance is especially low for compounds where subword boundaries do not coincide with compound constituent boundaries; we term compounds with this property 'hard' compounds (Figure 2).\nIn order to create a more effective decompounding model, we then formulate compound segmentation and normalization as a sequence-to-sequence learning task (Sutskever et al., 2014) and train a byte-level ByT5 model (Xue et al., 2022) using a two-stage framework. In the first stage, we use a novel self-supervised hyphen-prediction objective to learn compound segmentation without any labeled data. In the second stage, we turn the model into a compound normalization model via supervised training on our Wiktionary data. In addition, we introduce a procedure to predict the segmentation of any compound word based on its normalized form, effectively making compound segmentation a subtask of normalization. Finally, we demonstrate that decompounding has real-world applications by investigating compound segmentation for language model tokenization. We apply compound segmentation as pretokenization during training of a SentencePiece tokenizer (Kudo and Richardson, 2018), which results in fewer hard compounds while incurring no extra cost during training and inference of the language model (i.e. 
the only extra cost occurs during creation of the tokenizer).\nOur Stage 1 models outperform the best prior unsupervised models by 13.9% accuracy on average, while our (supervised) Stage 2 models outperform all prior language-specific decompounding tools. Furthermore, a model trained with a Com-poundPiece tokenizer achieves a 5.5% improved performance on compound normalization over an otherwise equivalent SentencePiece model." }, { "figure_ref": [], "heading": "Contributions. 1)", "publication_ref": [], "table_ref": [], "text": "We introduce a dataset for decompounding of 255k words across 56 languages obtained from Wiktionary. 2) We show that a byte-level language model can efficiently decompound words via a two-stage training framework, whereas current subword-based LLMs fall short.\n3) We present a way to improve subword tokenization by performing compound segmentation during creation of the tokenizer. 4) We make our code, models and dataset publicly available at github.com/bminixhofer/compoundpiece." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b28", "b23", "b27", "b30", "b57", "b23", "b32", "b23", "b54", "b6", "b1", "b43", "b14", "b17", "b13", "b35", "b18", "b4", "b55", "b21", "b41", "b31", "b24", "b48", "b56", "b15", "b42", "b26", "b47", "b39", "b34", "b61", "b46", "b16", "b36" ], "table_ref": [], "text": "Decompounding. Early work in decompounding used word frequency lists along with manually specified suffixes (e.g., a connective -s-) to segment and normalize German compounds (Langer, 1998;Koehn and Knight, 2003). Subsequently, multiple submissions to the Morpho Challenge in morphological segmentation (Kurimo et al., 2010) explicitly or implicitly made use of compound segmentation (Lignos, 2010;Virpioja et al., 2011). Later work replaced the fixed list of suffixes used in Koehn and Knight (2003) by learned morphological operations from parallel corpora (Macherey et al., 2011) or from pre-lemmatized corpora of non-compound words (Ziering and van der Plas, 2016). Another branch of work added more linguistic knowledge in the form of black-and white-lists to the paradigm of Koehn and Knight (2003), resulting in JWordSplitter 2 (German) and nl-splitter 3 (Dutch); this has only been done for a couple of languages due to its knowledge-intensive nature. CharSplit (Tuggener, 2016) achieves high performance for German by relying on the frequency of character n-grams appearing within the compound.\nWhile the approaches above use (at most) light supervision, there exist supervised approaches which learn directly from an annotated corpus of compounds and their constituents, along with optional auxiliary signals (Biemann et al., 2008;Alfonseca et al., 2008). In contrast, SECOS (Riedl and Biemann, 2016) is a fully unsupervised and language-agnostic method achieving competitive performance by using word embeddings along with word frequencies for semantic compound segmen- tation. Our method improves over SECOS in the unsupervised case and provides a unified alternative to prior language-specific decompounding tools via additional training on labelled data.\nRelation to Morphological Segmentation. Decompounding can be seen as a special case of morphological segmentation (Batsuren et al., 2022a). 
However, a large amount of work in morphological segmentation focuses on derivational and inflectional morphology (Cotterell et al., 2016;Faruqui et al., 2016;Cotterell et al., 2018;McCarthy et al., 2019;Goldman et al., 2022), which is reflected by datasets such as UniMorph (Batsuren et al., 2022b) and MorphyNet (Batsuren et al., 2021) annotating inflectional and derivational affixes, but not compound constituents. The SIGMORPHON-2022 Shared Task (Batsuren et al., 2022a, SMST 2022) breaks this pattern by providing a dataset for segmentation into compound constituents in addition to inflectional and derivational affixes. We improve on the SMST 2022 dataset by broadening coverage from 9 to 56 languages, as well as handling negatives (i.e., non-compounds) more carefully ( §3.1).\nDecompounding Datasets. Besides the SMST 2022 dataset, datasets for decompounding include AuCoPro (van Zaanen et al., 2014) for Dutch and Afrikaans, and the GermaNet dataset for German (Henrich and Hinrichs, 2011). Although there is a significant amount of work studying compound terms in languages with highly productive compound formation beyond German and Dutch, such as Finnish and Greek (Pollatsek et al., 2000;Lindén and Pirinen, 2009;Koliopoulou, 2014;Shapiro, 2016;Virkkunen et al., 2018), to the best of our knowledge there exist no public datasets for decompounding in these languages (and beyond).\nLinguistically Informed Tokenization. Various studies have tried augmenting or replacing the 'linguistically uninformed' subword-tokenizers used in contemporary LMs (Devlin et al., 2019;Raffel et al., 2020, inter alia) such as SentencePiece (Kudo and Richardson, 2018) and BPE (Sennrich et al., 2016) with linguistic knowledge. Using manually constructed morphological analyzers before applying BPE (Pan et al., 2020) or after generation (Matthews et al., 2018) has led to improvements, but is limited by the availability (and quality) of morphological analyzers across many languages.\nUnsupervised morphological segmentation has not shown consistent improvements (Zhou, 2018;Saleva and Lignos, 2021;Domingo et al., 2023); see Mielke et al. (2021) for additional discussion." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Dataset Construction", "publication_ref": [ "b23", "b43", "b54", "b21", "b55" ], "table_ref": [], "text": "We use words categorized as compound terms on Wiktionary to create a dataset for decompounding. The information on Wiktionary allows associating compound terms with their corresponding normalized constituents. Since Wiktionary only annotates the top-level split,4 we recursively split constituents into their smallest parts by checking if the top-level constituents are themselves compound words. Many prior decompounding tools do not evaluate performance on negative examples (i.e. non-compound words; Koehn and Knight, 2003;Riedl and Biemann, 2016;Tuggener, 2016) since most prior datasets do not contain any (Henrich and Hinrichs, 2011;van Zaanen et al., 2014). It is not trivial to obtain negative examples from Wiktionary since a large amount of compound words are not categorized as such, leading to many false negatives. 
We solve this issue by using all normalized compound constituents as negative examples, since by definition the compound constituents can also appear on their own as non-compound words.\nNote that this way of obtaining negative examples is biased against words which never occur inside compounds; however, we found this to be a rather weak bias (Appendix E). We include every language with at least 100 words, leading to a dataset which covers 56 languages. The number of training examples is shown in Figure 3, example words in Figure 4. We select up to 1,000 words (but at most 50% of total words) in every language as evaluation data. See Appendix A for further details concerning the dataset." }, { "figure_ref": [], "heading": "Two-Stage Training", "publication_ref": [ "b37" ], "table_ref": [], "text": "To overcome the problem of data scarcity in lowresource languages, we introduce a two-stage training procedure for creating dedicated decompounding models. In Stage 1, we train on the selfsupervised objective of restoring hyphenation in words extracted from a large-scale Web corpus, leading to a self-supervised compound segmentation model. In Stage 2, we fine-tune the model on compounds and their normalized constituents from an annotated corpus in a supervised fashion, turning it into a compound normalization model.\nStage 1: Self-Supervised Compound Segmentation. This stage is motivated by the fact that hyphen characters can be seen as a high-precision, lowrecall indicator of compound constituent boundaries, in the same way that newline characters are a high-precision, low-recall indicator of sentence boundaries (Minixhofer et al., 2023). We use this natural segmentation into compound constituents to create a compound segmentation model without requiring any labeled data. First, we obtain all words containing a hyphen plus an equivalent amount of non-hyphenated words from a corpus of unannotated text. Hyphens primarily have two uses: (1) as a compound boundary and (2) to indicate the word continues on the next line. We only want to retain hyphens when they function as compound boundaries, so we filter the instances of ( 2) by discarding all words where the hyphenated form of the word occurs x ≤ e -6 times less frequent than the non-hyphenated form. 5We strip all words of hyphens and train a seq2seq LM to predict the original (hyphenated) form of each word. We introduce a logit bias b added to the logit of the token representing a hyphen to skew generation towards or away from hyphenation at inference time. Training on this data enables effective compound segmentation without relying on human annotations, as demonstrated later in §5.\nStage 2: Supervised Compound Normalization. In the second stage, we improve upon the Stage 1 model by additional training on labeled data, where the inputs are individual compounds, and the target is to predict the normalized constituents of each compound, separated by a hyphen. Training exclusively on compound normalization allows using data from the collected Wiktionary dataset, which contains compound terms along with their normalized constituents across many languages, but does not contain compound segmentation annotations." }, { "figure_ref": [ "fig_2" ], "heading": "Turning Normalization into Segmentation", "publication_ref": [ "b29", "b19" ], "table_ref": [], "text": "Considering the scarcity of annotated compound segmentation data, it is infeasible to train a multilingual model directly on segmentation. 
Thus, we introduce a method to predict a segmentation given the normalized constituents. Let x be a word of length n. In addition, we have k normalized com-pound constituents c = {c 1 , ..., c k } (e.g. predicted by the Stage 2 model). Our aim is to find boundaries r = {r 0 , ..., r k }, r 0 = 0, r k = n giving rise to the segmentation s = {x[r 0 : r 1 ], ..., x[r k-1 : r k ]}. We approach this problem by minimizing the edit distance of each segment to its corresponding normalized constituent. This leads to an optimization problem where the cost C(s) indicates the total edits needed to turn all segments into their corresponding normalized constituents:\nC(s) = k i=1 L(s i , c i ).\nHere, L is an edit distance metric such as Levenshtein distance (Levenshtein et al., 1966). The optimal segmentation s ⋆ is the segmentation with the minimal cost: s ⋆ = arg min s C(s).\nIn case of ties, we prefer segmentations with higher edit cost for segments with lower indices due to the preference for languages in our training set for suffixation over prefixation (Hammarström, 2021). 6 There is a total of n k-1 possible segmentations, so solving the optimization problem via enumeration of all solutions is only feasible for short words (Figure 5). We introduce a more efficient search algorithm which is capable of quickly finding the optimal segmentation of long words by enumerating candidates in order of a lower bound on the edit distance in Appendix B. This method can be used to turn the normalization predictions of a model into segmentation. We also use it on the ground-truth normalization from Wiktionary, making it possible to approximate compound segmentation performance by comparing the segmentation corresponding to the ground-truth normalization to the segmentation produced by the model normalization predictions." }, { "figure_ref": [ "fig_4" ], "heading": "Reducing Hard Compounds", "publication_ref": [ "b11", "b59", "b15", "b42", "b8", "b36" ], "table_ref": [], "text": "We define hard compounds relative to a particular tokenizer as compound words where the constituent boundaries do not coincide with token boundaries set by the tokenizer. More formally, a compound word made up of k constituents and l subwords is hard if the constituent boundaries r = {r 0 , ..., r k } are not a subset of the token boundaries t = {t 0 , ..., t l } i.e. r ̸ ⊂ t. We hypothesize that hard compounds may impair language model performance due to the nontrivial relation of subwords to the compound word. In contrast, in easy compounds the word is naturally decomposed into its constituents. The increased difficulty of hard compounds is apparent on the sequence-to-sequence compound segmentation task: for an easy compound, all tokens can be copied to the output (only the special separator tokens must be inserted). On the other hand, for hard compounds, the tokens change, requiring knowledge of the characters within each token.\nTokenizers where every possible constituent boundary is a token boundary trivially do not give rise to any hard compounds. This includes character-level (Clark et al., 2022;Tay et al., 2022b) as well as byte-level tokenizers (Xue et al., 2022). However, many contemporary language models use subword-based tokenizers to increase efficiency (Devlin et al., 2019;Raffel et al., 2020;Brown et al., 2020). We propose a modification to subword tokenization to reduce the number of hard compounds while keeping the efficiency advantages.\nSubword tokenizers typically segment text into pre-tokens (e.g. 
by splitting on whitespace) before applying their subword tokenization algorithm (Mielke et al., 2021). We propose modifying pretokenization by applying compound segmentation in addition to splitting on whitespace. This modification is only done during creation of the tokenizer, thus incurring no additional cost once the tokenizer has been created. We refer to tokenizers created in this way as CompoundPiece tokenizers. The modified pretokenization tries to create more subwords which do not span compound constituent boundaries, thus decreasing the fraction of hard compounds (Figure 6). It aims to turn the dual-route model for computing the meaning of complex (compound) words proposed by Hofmann et al. ( 2021) into a single-route model which always computes the meaning of compounds from their constitutent subwords, and never stores a compound word as a single subword." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Data", "publication_ref": [ "b60" ], "table_ref": [], "text": "We obtain Stage 1 data by selecting all words containing a hyphen from a subset of the mC4 corpus (Xue et al., 2021) which results in 25M hyphenated words. As negative examples, we choose the n most common words from mC4 such that there is an equivalent amount of non-hyphenated and hyphenated words in every language. Regarding the Stage 2 data, see Section §3.1 before." }, { "figure_ref": [], "heading": "Training", "publication_ref": [ "b59", "b42", "b60", "b44", "b42" ], "table_ref": [], "text": "We train a decompounding model using a two-stage framework ( §3) covering 56 languages. We use ByT5 (Xue et al., 2022) as our main pretrained model and the main starting point since it directly ingests Unicode bytes instead of using subword tokenization, leading to zero hard compounds. We compare our approach against the subword-based T5 (Raffel et al., 2020), Flan-T5 (Chung et al., 2022) and mT5 (Xue et al., 2021) trained with the same two-stage framework. We use t5x (Roberts et al., 2022) for training with a batch size of 512 and a maximum sequence length of 64 tokens, otherwise matching T5 pretraining (Raffel et al., 2020). The setup is the same for Stage 1 and Stage 2." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b21", "b55", "b40" ], "table_ref": [], "text": "Metric. We measure performance via averaged accuracy, i.e., the ratio of examples which are entirely correctly segmented or normalized.\nDatasets. Besides our new Wiktionary evaluation subset, we use the established datasets for particular languages: GermaNet (Henrich and Hinrichs, 2011), AuCoPro for Dutch (van Zaanen et al., 2014) as well the subset containing compound-only words across 6 languages from the SIGMORPHON 2022 Shared Task (Batsuren et al., 2022a). 7 Baselines. We use SECOS as the main unsupervised baseline, as well as CharSplit, JWS and nlsplitter as baselines using different amounts of supervision. For the SIGMORPHON 2022 Shared Task dataset, we compare against the task winner, DeepSPIN-3 (Peters and Martins, 2022).\nLanguages. For clarity of presentation, we present results on Danish, German, English, Spanish, Estonian, Greek, Persian, Finnish, Hungarian, Kazakh, Latvian, Dutch, Polish and Swedish as a linguistically diverse subset of languages with productive compound formation in the main paper. For the full evaluation across all languages, see Appendix C." 
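To make the notion of a hard compound from §3.4 concrete, a minimal check of whether a compound is hard under a given tokenization could look as follows; the subword splits in the example are hypothetical and not taken from any specific tokenizer.

```python
# A compound is "hard" under a tokenizer if its constituent boundaries are not a
# subset of the token boundaries produced by that tokenizer (r not a subset of t).

def is_hard_compound(constituents, subword_pieces):
    """True if the constituent boundaries are not a subset of the subword boundaries."""
    def ends(pieces):
        out, total = set(), 0
        for p in pieces:
            total += len(p)
            out.add(total)
        return out
    return not ends(constituents).issubset(ends(subword_pieces))

if __name__ == "__main__":
    # 'brides|maid' vs. hypothetical subwords 'bride|smaid': the boundary after
    # 6 characters is missing from the token boundaries, so the compound is hard.
    print(is_hard_compound(["brides", "maid"], ["bride", "smaid"]))   # True
    print(is_hard_compound(["brides", "maid"], ["brides", "maid"]))   # False
```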
}, { "figure_ref": [ "fig_5" ], "heading": "Results and Discussion", "publication_ref": [ "b42", "b12", "b60", "b45", "b12", "b26", "b25" ], "table_ref": [ "tab_3", "tab_5" ], "text": "Main compound segmentation results are shown in Table 1. For the self-supervised models, we choose the logit bias b = 3 to bias generation towards hyphenated words.8 ByT5 outperforms subwordbased models by a large margin with an absolute 8.9% improvement over the best subword-based model after Stage 1 training, and a 3.7% improvement after Stage 2 training. Comparing models not trained on any annotated data, the self-supervised ByT5 outperforms SECOS on 13 out of 14 languages, and by 13.9% on average.\nWe further compare against language-specific and supervised methods in Table 2. Our ByT5based model outperforms all prior methods on every dataset. Since GermaNet tests compound head segmentation (i.e., even if a word contains multiple constituents, it is only split into a head and a modifier) we count an example as correctly segmented if either the first constituent matches the modifier or the last constituent matches the head.\nEvaluating LLMs on Decompounding. We also evaluate in-context learning performance of multiple LLMs on compound segmentation. We use T5 models with 770M, 3B and 11B parameters (Raffel et al., 2020) as well as the UL2 model with 20B parameters (Tay et al., 2022a) since all of them use the same tokenizer, enabling performance comparisons on hard compounds across LLMs. We use the model versions fine-tuned on the Flan dataset collection (Chung et al., 2022), matching our prompt to the style of instructions in the Flan collection (Appendix D). Zero-to 16-shot results are shown in Figure 7. Although the LLMs perform non-trivially well on easy compounds, performance is close to zero (<3%) on hard compounds. Intriguingly, UL2 20B performs worse than Flan T5 XXL (11B), reversing the trend seen on other tasks (Tay et al., 2022a). All the LLMs perform considerably worse than our ByT5-based model; see Figure 1 Reducing Hard Compounds via Compound-Piece. To evaluate our method of reducing the number of hard compounds in subword-based language models ( §3.4), we train CompoundPiece models in two configurations: (i) multilingual tokenizers across all 56 languages and (ii) separate monolingual tokenizers for every language. For the multilingual tokenizers, we sample languages with p(L) ∝ |L| α where p(L) is the probability of sampling text from a language L with |L| texts as in prior work (Conneau et al., 2020). We use a subsample of 10M texts from the mC4 corpus (Xue et al., 2021) with α = 0.2. The vocabulary size is 250k for the multilingual and 32k for the monolin-gual tokenizers, following prior work (Rust et al., 2021;Conneau et al., 2020).\nWe use our fine-tuned ByT5 model for traintime pretokenization into compound constituents and SentencePiece (Kudo and Richardson, 2018) with Unigram LM (Kudo, 2018) as the subword tokenization applied after pretokenization. As a baseline, we train SentencePiece tokenizers with pretokenization into words (split by whitespace) on the same data. Table 3 shows the percentage of hard compounds for every tokenizer. Compound-Piece reduces the number of hard compounds from 27.1% → 9.7% on average in the monolingual case. In the multilingual case, there is a less marked common tokens in other languages is likely the lead cause for the increased number of hard compounds in the multilingual tokenizers. 
It could potentially be solved by adjusting token probability based on the input language; we leave this to future work.\nTo more thoroughly evaluate our tokenization, we train multilingual T5 models using Sentence-Piece and CompoundPiece. We use the same sampling ratio (α = 0.2) of mC4 as for creating the tokenizer, but instead use a subset of 500M texts. We match the architecture and the pretraining setup of the mT5-base model, but train for a total of 65.5B tokens. 9 We evaluate the model on the decompounding task. Results are shown in Table 5.\nAblation Studies. We quantify the impact of the most significant design choices of our model in Table 6. Although filtering hyphens-as-newlineindicator ( §4.1) removes only 300k words (<1%) from the pretraining data, it increases performance on negatives by a large margin. Removing Stage 1 training (i.e., fine-tuning directly on the Wiktionary data instead) consistently decreases performance." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We systematically investigated word decompounding tasks of compound segmentation and normalization on a wide scale and in multilingual contexts. To this end, we introduced a dataset of 255k words including compounds and non-compounds across 56 languages from Wiktionary, which allowed us to evaluate performance of LLMs on decompounding. We found that current LLMs' performance is limited due to hard compounds which arise when subword token boundaries do not coincide with compound constituent boundaries. We then introduced dedicated models for decompounding which use byte-level tokenization to entirely avoid hard compounds. Finally, we used our decompounding models to create novel CompoundPiece tokenizers, keeping the efficiency advantages of subword tokenization while strongly decreasing the amount of hard compounds; this increases the performance of CompoundPiece models over comparable Senten-cePiece models on the decompounding tasks." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Although self-supervised training in Stage 1 allows for decompounding without any annotated training data, Stage 2 training is limited to languages with sufficient entries in Wiktionary: this excludes extremely low-resource languages. Furthermore, due to computational constraints we have not trained larger models using CompoundPiece tokenization; hence we are unable to report on its benefits at larger scales and on tasks besides decompounding.\nPatrick Ziering and Lonneke van der Plas. 2016. Towards unsupervised and language-independent compound splitting using inflectional morphological transformations. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-653, San Diego, California. Association for Computational Linguistics." }, { "figure_ref": [], "heading": "A Dataset Statistics", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "Statistics for the training and validation splits of the Wiktionary dataset are shown in Table 7." }, { "figure_ref": [], "heading": "B Efficient Segmentation Algorithm", "publication_ref": [ "b33", "b20" ], "table_ref": [], "text": "Pseudocode of the brute-force algorithm to turn normalization into segmentation is shown in Algorithm 1. 
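For reference, a compact Python version of this brute-force search (including the tie-breaking preference from §3.3, as we read it: among equal total costs, place the edits on earlier segments) could look as follows; it is an illustrative sketch rather than the exact released implementation, and the more efficient candidate ordering of Algorithm 2 is omitted.

```python
# Brute-force normalization-to-segmentation: enumerate all placements of k-1
# boundaries in the word and keep the segmentation whose segments have the
# smallest total Levenshtein distance to the normalized constituents.
from itertools import combinations

def levenshtein(a, b):
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def segment_from_normalization(word, constituents):
    k, n = len(constituents), len(word)
    best, best_key = None, None
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        segs = [word[bounds[i]:bounds[i + 1]] for i in range(k)]
        costs = [levenshtein(s, c) for s, c in zip(segs, constituents)]
        # Tie-break: among equal total costs, prefer higher edit cost on earlier
        # segments (e.g. a linking -s- stays with the first constituent).
        key = (sum(costs), tuple(-c for c in costs))
        if best_key is None or key < best_key:
            best, best_key = segs, key
    return best

if __name__ == "__main__":
    print(segment_from_normalization("bridesmaid", ["bride", "maid"]))
    # -> ['brides', 'maid']: the linking -s- is kept with the first constituent
```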
Since enumerating all possible segmentations is only feasible for short words ( §3.3) we introduce a more efficient algorithm (Algorithm 2) where candidate segmentations are ordered such that segmentations with constituents closest in length to the corresponding normalized constituents appear first. Assuming insertions and deletions both have a cost of one (as is the case in standard Levenshtein distance), constituents are thus sorted in increasing order of a lower bound on edit distance. The procedure can stop once the lower bound on edit distance reaches the cost of the best solution found so far since by that point it is impossible for a better solution to be found. Note that the normalization-to-segmentation problem is related to sequence partitioning (Manne and Sorevik, 1995;Han et al., 1992) where the aim is to find a partition of a sequence such that the maximum cost across partitions of some cost function is minimized. However, since our goal is to find the partitioning with the minimum aggregated cost, algorithms for conventional sequence partitioning are not applicable." }, { "figure_ref": [], "heading": "C Results for All Languages", "publication_ref": [], "table_ref": [], "text": "Segmentation accuracy for all languages is shown in Tables 891011." }, { "figure_ref": [ "fig_7" ], "heading": "D LLM Prompts", "publication_ref": [], "table_ref": [], "text": "The prompt used for LLM evaluations ( §5) is shown in Figure 8. The prompt was chosen among 10 prompts to maximize performance on Flan T5 Large. For 2-to 16-shot results, we provide 50% positive (compound) and 50% negative (noncompound) examples in a random order." }, { "figure_ref": [], "heading": "E Quantifying Negative Collection Bias", "publication_ref": [ "b49" ], "table_ref": [], "text": "We conduct an experiment to measure the extent of the bias against words which do not occur inside compounds in our data collection methodology ( §3.1). In particular, we quantify the bias against long non-compound words, which usually would not occur inside compounds. We took a random sample of 500 words each from word frequency lists in English and German (Speer, 2022), manually removed compound words, and compared the length statistics of this (unbiased) sample of non-compounds to our non-compound dataset.\nWhile words in our non-compound dataset are indeed shorter on average (6.0 vs. 6.7 chars for English, 6.7 vs. 7.1 chars for German), with less than one character length difference on average, there is only a weak length bias in data collection.\nWe also found qualitatively that our noncompound dataset contains a wide variety of words since compounding is typically a process that can occur for many different root words. \nData: Compound x, norm. constituents c. Result: Optimal segmentation s ⋆ . k ← ∥c∥, n ← ∥x∥ r 0 ← 0, r n ← n best_cost ← ∞ for r 1 , ..., r n-1 ∈ [n] k-1 do Compute s, C(s) /*" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "Ivan Vulić is supported by a personal Royal Society University Research Fellowship 'Inclusive and Sustainable Language Technology for a Truly Multilingual World' (no 221137; 2022-).\nResearch supported with Cloud TPUs from Google's TPU Research Cloud (TRC).\nWe thank Sebastian Ruder and Srini Narayanan for helpful feedback on a draft of this paper." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "/danielnaber/jwordsplitter 3 github.com/bminixhofer/" } ]
While many languages possess processes of joining two or more words to create compound words, previous studies have typically been limited to languages with excessively productive compound formation (e.g., German, Dutch) and there is no public dataset containing compound and non-compound words across a large number of languages. In this work, we systematically study decompounding, the task of splitting compound words into their constituents, at a wide scale. We first address the data gap by introducing a dataset of 255k compound and non-compound words across 56 diverse languages obtained from Wiktionary. We then use this dataset to evaluate an array of Large Language Models (LLMs) on the decompounding task. We find that LLMs perform poorly, especially on words which are tokenized unfavorably by subword tokenization. We thus introduce a novel methodology to train dedicated models for decompounding. The proposed two-stage procedure relies on a fully self-supervised objective in the first stage, while the second, supervised learning stage optionally fine-tunes the model on the annotated Wiktionary data. Our self-supervised models outperform the prior best unsupervised decompounding models by 13.9% accuracy on average. Our fine-tuned models outperform all prior (language-specific) decompounding tools. Furthermore, we use our models to leverage decompounding during the creation of a subword tokenizer, which we refer to as CompoundPiece. CompoundPiece tokenizes compound words more favorably on average, leading to improved performance on decompounding over an otherwise equivalent model using SentencePiece tokenization.
CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models
[ { "figure_caption": "Figure 1 :Figure 2 :12Figure 1: In-context learning performance of LLMs on compound segmentation vs. our method ( §5).Constituents Subwords", "figure_data": "", "figure_id": "fig_0", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Example words in the Wiktionary dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Turning compound normalization into segmentation by minimizing edit distance ( §3.3).", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "among the bright flowerbeds. among the bright flowerbeds. among the bright flowerbeds.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: (a) no pretokenization, (b) pretokenization by splitting on whitespace, (c) our pretokenization.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Few-shot in-context learning performance of LLMs on easy positives, hard positives, negatives and across all examples. Hard negatives are the same across all LLMs since they use the same tokenizer.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Prompts used to evaluate LLM in-context learning compound segmentation performance.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 3: Number of positive and negative examples across languages in the Wiktionary dataset.", "figure_data": "10 4Positive (Compound) Negative (Non-Compound)10 210 210 4fi en de nl sv hu th is da eo te hy ru ta pl hi la es it ml cs et yo ga bn fa tr cy uk kk ro af el gu lv ca yi ka fr sq eu fy az mk bg lt gl pt ky be mg mt id he pa skWordConstituentsLanguage‫ﻧﯿﺎ‬ ‫ھﻢ‬ (sibling)‫ھﻢ‬ (same) + ‫ﻧﯿﺎ‬ (ancestor)Persianakiratis (horizon)akis (eye) + ratas (circle)Lithuanianшекара (border)шек (limit) + apa (distance)KazakhAbenteuer (adventure)NoneGermanરાે કડ વાહ (cashflow)રાે કડ (cash) + વાહ (stream)Gujarati", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": ".", "figure_data": "dadeenesetelfafihukklvnlplsvMacro Avg.SECOS 30.0 66.5 41.2 29.0 23.4 5.3 1.4 53.1 38.8 5.0 13.9 46.8 22.2 32.229.2T555.3 56.1 85.9 69.8 29.0 0.0 0.0 31.6 48.6 16.9 29.6 44.9 36.1 53.139.8PS1Flan-T5 58.4 58.5 89.1 71.0 37.0 0.0 0.0 33.0 53.4 17.6 41.7 44.8 40.3 56.5 mT5 25.8 38.8 79.7 58.3 18.6 21.6 3.9 24.1 18.8 45.0 20.2 23.0 32.9 21.9 ByT5 75.6 76.0 91.3 77.2 51.6 40.9 20.9 52.7 70.0 75.9 41.7 57.2 51.8 64.842.9 30.9 60.5T586.3 96.0 95.4 82.5 77.7 0.0 0.0 98.2 89.1 18.3 69.1 94.0 78.0 89.669.6S1+S2Flan-T5 86.6 95.3 95.5 83.2 80.9 0.0 0.0 98.3 87.3 16.5 68.2 93.6 77.4 89.4 mT5 87.1 94.1 95.4 82.3 83.2 73.1 62.1 97.1 90.4 86.7 76.7 93.4 84.1 90.069.5 85.4ByT592.2 96.6 97.8 87.1 92.6 86.1 76.6 98.8 97.2 91.7 84.8 97.5 91.2 94.391.7SECOS 96.1 86.6 93.8 97.4 98.6 99.7 100 88.2 95.5 100 100 94.1 96.9 97.396.0T588.5 91.8 91.7 88.7 82.3 100 100 82.2 93.8 74.0 87.4 83.7 90.6 91.889.0NS1Flan-T5 88.5 92.1 91.3 89.9 82.3 100 100 82.9 91.6 72.9 87.0 87.0 90.4 92.4 mT5 92.7 92.8 90.9 92.3 89.9 95.3 99.3 88.2 98.0 88.0 95.9 89.1 94.5 94.8 ByT5 89.0 89.7 88.4 81.5 76.0 95.7 97.3 77.6 87.1 72.1 87.7 80.3 91.4 87.889.2 93.0 85.8T593.3 94.5 98.3 97.8 95.1 100 100 95.4 99.2 91.1 97.4 
97.5 98.1 96.796.7S1+S2Flan-T5 94.1 95.5 97.9 95.9 95.8 100 100 96.7 98.6 92.6 96.7 97.5 97.1 96.7 mT5 93.8 96.2 99.2 97.4 97.9 96.3 98.7 94.1 98.6 96.9 98.1 96.7 97.9 97.396.8 97.1ByT595.2 96.2 98.3 98.8 97.9 97.3 97.3 95.4 99.7 99.2 98.9 97.9 99.0 97.697.8SECOS 53.5 72.4 53.9 63.2 56.0 60.9 52.2 58.4 59.0 50.7 61.0 58.1 57.8 53.657.9T567.1 66.5 87.3 79.3 52.1 59.0 51.5 39.3 64.7 44.4 61.2 54.2 62.1 65.861.0AllS1Flan-T5 69.1 68.3 89.6 80.5 56.6 59.0 51.5 40.6 67.0 44.2 66.5 54.9 64.2 68.3 mT5 49.6 54.6 82.4 75.3 49.5 65.1 53.1 33.8 47.0 65.7 61.6 38.8 62.3 45.9 ByT5 80.4 80.0 90.6 79.4 62.2 73.2 60.3 56.5 76.1 74.1 66.9 62.7 70.7 72.462.9 56.0 71.8T588.8 95.6 96.1 90.2 85.2 59.0 51.5 97.8 92.7 53.4 84.6 94.8 87.6 91.983.5S1+S2Flan-T5 89.3 95.4 96.1 89.6 87.3 59.0 51.5 98.1 91.3 53.2 83.7 94.5 86.8 91.8 mT5 89.5 94.7 96.3 89.8 89.6 86.8 80.9 96.6 93.3 91.6 88.4 94.2 90.7 92.483.4 91.1ByT593.3 96.5 97.9 92.9 94.9 92.7 87.3 98.3 98.1 95.3 92.5 97.6 94.9 95.494.8", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison against supervised and rule-based baseline models. We use the subset of compound-only words from the Sigmorphon Shared Task (SMST) 2022 data which covers 7 languages(Batsuren et al., 2022a).", "figure_data": "Segmentation NormalizationWordMonolingual Segmentation Segmentation Token Origin MultilingualP N All P N All_tu: es, sk, itGermaNetJWS CharSplit SECOS ByT5 (S1+S2) 97.9 -83.7 -95.1 -83.6 -83.7 53.4 -95.1 --83.6 --97.9 79.6 -53.4 --79.6tugboat _tug, boat mindstate _mind, state _mindst, ate _tu, gbo, atgbo: yo, mg, fr at: id, hu, la _mindst: da ate: it, et, enOurs (de)JWS CharSplit SECOS59.7 97.6 70.8 43.2 97.6 59.1 84.7 29.5 68.6 ---66.5 86.6 72.4 ---coatrack _coat, rack _coa, track_coa: gl, ro track: hu, th, daByT5 (S1+S2) 96.6 96.2 96.5 89.8 96.2 91.7nl-splitter74.5 -74.5 67.1 -67.1AuCoPro-nlSECOS59.7 -59.7 ---ByT5 (S1+S2) 91.7 -91.7 76.2 -76.2nl-splitter61.2 96.7 69.7 47.0 91.2 57.6Ours (nl)SECOS46.8 94.1 58.1 ---ByT5 (S1+S2) 97.5 97.9 97.6 87.8 97.9 90.2SMST 2022DeepSpin-3 ByT5 (S1+S2) 92.5 -88.6 -88.6 87.3 -92.5 88.6 -87.3 88.6LanguageMultilingual SPM (mT5) SPM CPM SPM CPM MonolingualDanish15.5 16.5 12.4 24.75.9German9.9 10.38.2 14.61.8English7.58.24.66.83.7Spanish29.0 24.9 18.7 14.2 10.3Estonian25.5 29.5 15.2 35.47.2Greek39.9 33.6 23.1 28.9 14.9Persian38.6 46.1 37.2 70.9 41.8Finnish25.1 25.1 20.3 10.35.1Hungarian13.8 17.1 10.1 26.13.7Kazakh14.4 13.79.0 28.44.0Latvian20.2 23.8 16.1 47.5 11.7Dutch12.8 15.4 10.2 17.23.3Polish45.7 42.5 33.1 33.6 17.0Swedish13.9 17.7 12.5 21.35.4Macro Avg.22.3 23.2 16.5 27.19.7Table 3: Percentage of hard compounds after segmenta-tion with different tokenizers. SPM (mT5) is the Sen-tencePiece tokenizer used by mT5 (Xue et al., 2021).SentencePiece (SPM) and CompoundPiece (CPM) tok-enizers are trained on text in all 56 languages (Multilin-gual) and for every language separately (Monolingual).improvement of 23.2% → 16.5%. This may bebecause tokens from different languages interferewith the segmentation of any given word. We testthis hypothesis by computing plausible token ori-gins for tokens in the multilingual tokenizer. Thisis done by checking which monolingual tokeniz-ers also contain the token in their vocabulary, andordering the result by unigram token probability.Examples are shown in Table 4. 
Interference from", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Example compound words which are easy for the monolingual but hard for the multilingual Com-poundPiece tokenizer. \"_\" indicates whitespace.", "figure_data": "LanguageSegmentation SPM-T5 CPM-T5 SPM-T5 CPM-T5 NormalizationDanish77.877.765.569.1German81.080.761.563.8English84.985.882.984.0Spanish75.274.750.155.2Estonian78.684.555.161.3Greek70.670.047.157.8Persian58.261.246.658.1Finnish72.874.159.059.6Hungarian76.276.973.376.2Kazakh72.975.759.074.4Latvian75.269.153.557.3Dutch78.280.760.964.9Polish65.865.642.646.7Swedish76.277.361.065.6Macro Avg.74.675.358.463.9Table 5: Accuracy of our multilingual T5 models trainedwith SentencePiece (SPM-T5) and CompoundPiece(CPM-T5) on segmentation and normalization.SegmentationNormalizationPNAllPNAllByT5 (S1)50.8 82.5 66.6 28.5 82.5 55.2-hyphen filtering53.8 62.3 58.9 30.3 62.3 47.0ByT5 (S1+S2)80.9 98.0 89.8 58.2 97.8 78.5-S179.3 97.3 88.6 56.8 97.1 77.4", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation studies on not filtering hyphens-asnewline-indicator and on skipping Stage 1 training.", "figure_data": "", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "see §3.3 */ if C(s) < best_cost then s best ← s best_cost ← C(s) end end s ⋆ ← s best Algorithm 1: Naïve brute-force segmentation. Data: Compound x, norm. constituents c. Result: Optimal segmentation s ⋆. k ← |c|, n ← |x| r 0 ← 0, r k ← n best_cost ← ∞ /* ∆ is the total difference in length of the normalized constituents to the word. */ ∆ = ni |c i | lower_bound ← |∆| while lower_bound < best_cost do offsets = {x | |x| = k, x i = ∆} lower_bound ← lower_bound + 1 for o 1 , ..., o k ∈ offsets do r 1 , ..., r k-1 = |c 1 | + o 1 , ..., n-1 i=1 |c i | + o i Compute s, C(s) /* see §3.3 */ if C(s) < best_cost then s best ← s best_cost ← C(s) end end end s ⋆ ← s bestAlgorithm 2: Segmentation by enumerating candidates in order of increased lower bound on edit distance. 
Statistics of the Wiktionary dataset.", "figure_data": "TrainingValidationLanguageiso#Positive#NegativeTotal#Positive#NegativeTotalAfrikaansaf326193519322197519Azerbaijaniaz78971758589174Belarusianbe324779403878Bulgarianbg71891606892160Bengalibn301334635304331635Catalanca220218438219218437Czechcs388358746392354746Welshcy308273581299281580Danishda2145129834436443561000Germande207437846285897082921000Greekel216292508208299507Englishen228966480293767592411000Esperantoeo109784919465594411000Spanishes433401834417417834Estonianet349315664376288664Basqueeu1029820098101199Persianfa268314582282300582Finnishfi6994813314832628481521000Frenchfr149135284135148283Western Frisian fy92851779086176Irishga332322654328325653Galiciangl70791498069149Gujaratigu227279506221285506Hebrewhe293463184462Hindihi47256910414785221000Hungarianhu5238316284006443561000Armenianhy87274516175094911000Indonesianid264571323870Icelandicis2333160339365924081000Italianit452352804437366803Georgianka137156293149143292Kazakhkk244292536278258536Kirghizky394584394483Latinla450410860452407859Lithuanianlt65941597683159Latvianlv244249493223269492Malagasymg354277324577Macedonianmk75941697990169Malayalamml318435753331421752Maltesemt353671363571Dutchnl151845258204427612391000Panjabipa243458193958Polishpl62855611845234771000Portuguesept405797534497Romanianro272261533268265533Russianru75371814715074931000Slovaksk262854252954Albaniansq124113237109127236Swedishsv88834172130556713291000Tamilta65671013664845161000Telugute89490918035074931000Thaith4287275470416143861000Turkishtr295287582310271581Ukrainianuk281291572277295572Yiddishyi162218380176203379Yorubayo349312661348312660Total16471358757223470175391393831477", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" } ]
Benjamin Minixhofer; Jonas Pfeiffer; Ivan Vulić
[ { "authors": "Martine Adda; - Decker; Gilles Adda", "journal": "", "ref_id": "b0", "title": "Morphological decomposition for asr in german", "year": "2000" }, { "authors": "Enrique Alfonseca; Slaven Bilac; Stefan Pharies", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Decompounding query keywords from compounding languages", "year": "2008" }, { "authors": "Duygu Altinok", "journal": "", "ref_id": "b2", "title": "Demorphy, german language morphological analyzer", "year": "2018" }, { "authors": "Khuyagbaatar Batsuren; Gábor Bella; Aryaman Arora; Viktor Martinovic; Kyle Gorman; Zdeněk Žabokrtský; Amarsanaa Ganbold; Šárka Dohnalová; Magda Ševčíková; Kateřina Pelegrinová; Fausto Giunchiglia; Ryan Cotterell; Ekaterina Vylomova", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "The SIGMORPHON 2022 shared task on morpheme segmentation", "year": "2022" }, { "authors": "Khuyagbaatar Batsuren; Gábor Bella; Fausto Giunchiglia", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "MorphyNet: a large multilingual database of derivational and inflectional morphology", "year": "2021" }, { "authors": "Khuyagbaatar Batsuren; Omer Goldman; Salam Khalifa; Nizar Habash; Witold Kieraś; Gábor Bella; Brian Leonard; Garrett Nicolai; Kyle Gorman; Yustinus Ghanggo Ate; Maria Ryskina; Sabrina Mielke; Elena Budianskaya; Charbel El-Khaissi; Tiago Pimentel; Michael Gasser; William Abbott Lane; Mohit Raj; Matt Coler; Jaime Rafael Montoya Samame; Delio Siticonatzi Camaiteri; Esaú Zumaeta Rojas; Didier López Francis; Arturo Oncevay; Juan López Bautista; Gema ; Celeste Silva Villegas; Lucas Torroba Hennigen; Adam Ek; David Guriel; Peter Dirix; Jean-Philippe Bernardy; Andrey Scherbakov; Aziyana Bayyr-Ool; Antonios Anastasopoulos; Roberto Zariquiey; Karina Sheifer; Sofya Ganieva; Hilaria Cruz; Ritván Karahóǧa; Stella Markantonatou; George Pavlidis; Matvey Plugaryov; Elena Klyachko; Ali Salehi; Candy Angulo; Jatayu Baxi; Andrew Krizhanovsky; Natalia Krizhanovskaya; Elizabeth Salesky; Clara Vania; Sardana Ivanova; Jennifer White; Rowan Hall Maudslay; Josef Valvoda; Ran Zmigrod; Paula Czarnowska; Irene Nikkarinen; Aelita Salchak; Brijesh Bhatt; Christopher Straughn; Zoey Liu; Jonathan North Washington; Yuval Pinter; Duygu Ataman; Marcin Wolinski; Totok Suhardijanto; Anna Yablonskaya; Niklas Stoehr; Hossep Dolatian; Zahroh Nuriah; Shyam Ratan; Francis M Tyers; M Edoardo; Grant Ponti; Aryaman Aiton; Richard J Arora; Ritesh Hatcher; Jeremiah Kumar; Daria Young; Anastasia Rodionova; Taras Yemelina; Igor Andrushko; Polina Marchenko; Alexandra Mashkovtseva; Emily Serova; Maria Prud'hommeaux; Fausto Nepomniashchaya; Eleanor Giunchiglia; Mans Chodroff; Miikka Hulden; Silfverberg; D Arya; David Mc-Carthy; Ryan Yarowsky; Reut Cotterell; Ekaterina Tsarfaty; Vylomova", "journal": "European Language Resources Association", "ref_id": "b5", "title": "UniMorph 4.0: Universal Morphology", "year": "2022" }, { "authors": "Chris Biemann; Uwe Quasthoff; Gerhard Heyer; Florian Holz", "journal": "European Language Resources Association (ELRA", "ref_id": "b6", "title": "ASV toolbox: a modular collection of language exploration tools", "year": "2008" }, { "authors": "Martin Braschler; Bärbel Ripplinger", "journal": "Information Retrieval", "ref_id": "b7", "title": "How effective is stemming and decompounding for german text retrieval?", "year": "2004" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla 
Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b8", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b9", "title": "", "year": "" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma", "journal": "", "ref_id": "b10", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Jonathan H Clark; Dan Garrette; Iulia Turc; John Wieting", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b11", "title": "Canine: Pre-training an efficient tokenization-free encoder for language representation", "year": "2022" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Ryan Cotterell; Christo Kirov; John Sylak-Glassman; Géraldine Walther; Ekaterina Vylomova; D Arya; Katharina Mc-Carthy; Sabrina J Kann; Garrett Mielke; Miikka Nicolai; David Silfverberg; Jason Yarowsky; Mans Eisner; Hulden", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "The CoNLL-SIGMORPHON 2018 shared task: Universal morphological reinflection", "year": "2018" }, { "authors": "Ryan Cotterell; Tim Vieira; Hinrich Schütze", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "A joint model of orthography and morphological segmentation", "year": "2016" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Miguel Domingo; Mercedes García-Martínez; Alexandre Helle; Francisco Casacuberta; Manuel Herranz", "journal": "Springer", "ref_id": "b16", "title": "How much does tokenization affect neural machine translation?", "year": "2019" }, { "authors": "Manaal Faruqui; Yulia Tsvetkov; Graham Neubig; Chris Dyer", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Morphological inflection generation using character sequence to sequence learning", "year": "2016" }, { "authors": "Omer Goldman; David Guriel; Reut Tsarfaty", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "un)solving morphological inflection: Lemma overlap artificially inflates models' performance", "year": "2022" }, { "authors": "Harald Hammarström", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Measuring prefixation and suffixation in the languages of the world", "year": "2021" }, { "authors": "Yijie Han; Bhagirath Narahari; H-A Choi", "journal": "Information Processing Letters", "ref_id": "b20", "title": "Mapping a chain task to chained processors", "year": "1992" }, { "authors": "Verena 
Henrich; Erhard Hinrichs", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Determining immediate constituents of compounds in Ger-maNet", "year": "2011" }, { "authors": "Valentin Hofmann; Janet Pierrehumbert; Hinrich Schütze", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Superbizarre is not superb: Derivational morphology improves BERT's interpretation of complex words", "year": "2021" }, { "authors": "Philipp Koehn; Kevin Knight", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Empirical methods for compound splitting", "year": "2003" }, { "authors": "Maria Koliopoulou", "journal": "Journal of Greek Linguistics", "ref_id": "b24", "title": "Issues of modern greek and german compounding: a contrastive approach", "year": "2014" }, { "authors": "Taku Kudo", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "year": "2018" }, { "authors": "Taku Kudo; John Richardson", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018" }, { "authors": "Mikko Kurimo; Sami Virpioja; Ville Turunen; Krista Lagus", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Morpho challenge 2005-2010: Evaluations and results", "year": "2010" }, { "authors": "Stefan Langer", "journal": "", "ref_id": "b28", "title": "Zur morphologie und semantik von nominalkomposita", "year": "1998" }, { "authors": " Vladimir I Levenshtein", "journal": "Soviet Union", "ref_id": "b29", "title": "Binary codes capable of correcting deletions, insertions, and reversals", "year": "1966" }, { "authors": "Constantine Lignos", "journal": "", "ref_id": "b30", "title": "Learning from unseen data", "year": "2010" }, { "authors": "Krister Lindén; Tommi Pirinen", "journal": "Northern European Association for Language Technology (NEALT", "ref_id": "b31", "title": "Weighted finite-state morphological analysis of Finnish compounding with HFST-LEXC", "year": "2009" }, { "authors": "Klaus Macherey; Andrew Dai; David Talbot; Ashok Popat; Franz Och", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Language-independent compound splitting with morphological operations", "year": "2011" }, { "authors": "Fredrik Manne; Tor Sorevik", "journal": "Journal of Algorithms", "ref_id": "b33", "title": "Optimal partitioning of sequences", "year": "1995" }, { "authors": "Austin Matthews; Graham Neubig; Chris Dyer", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Using morphological knowledge in openvocabulary neural language models", "year": "2018" }, { "authors": "D Arya; Ekaterina Mccarthy; Shijie Vylomova; Chaitanya Wu; Lawrence Malaviya; Garrett Wolf-Sonkin; Christo Nicolai; Miikka Kirov; Sabrina J Silfverberg; Jeffrey Mielke; Ryan Heinz; Mans Cotterell; Hulden", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "The SIGMORPHON 2019 shared task: Morphological analysis in context and crosslingual transfer for inflection", "year": "2019" }, { "authors": "Sabrina J Mielke; Zaid Alyafeai; Elizabeth Salesky; Colin Raffel; Manan Dey; Matthias Gallé; Arun Raja; Chenglei Si; Wilson Y Lee; Benoît Sagot", "journal": "", "ref_id": "b36", "title": "Between 
words and characters: A brief history of open-vocabulary modeling and tokenization in nlp", "year": "2021" }, { "authors": "Benjamin Minixhofer; Jonas Pfeiffer; Ivan Vulić", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Where's the point? self-supervised multilingual punctuation-agnostic sentence segmentation", "year": "2023" }, { "authors": "Christof Monz; Maarten De; Rijke ", "journal": "Springer", "ref_id": "b38", "title": "Shallow morphological analysis in monolingual information retrieval for dutch, german, and italian", "year": "2001-09-03" }, { "authors": "Yirong Pan; Xiao Li; Yating Yang; Rui Dong", "journal": "", "ref_id": "b39", "title": "Morphological word segmentation on agglutinative languages for neural machine translation", "year": "2020" }, { "authors": "Ben Peters; Andre F T Martins", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Beyond characters: Subword-level morpheme segmentation", "year": "2022" }, { "authors": "Alexander Pollatsek; Jukka Hyönä; Raymond Bertram", "journal": "Journal of Experimental Psychology: Human perception and performance", "ref_id": "b41", "title": "The role of morphological constituents in reading finnish compound words", "year": "2000" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b42", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Martin Riedl; Chris Biemann", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Unsupervised compound splitting with distributional semantics rivals supervised methods", "year": "2016" }, { "authors": "Adam Roberts; Hyung Won Chung; Anselm Levskaya; Gaurav Mishra; James Bradbury; Daniel Andor; Sharan Narang; Brian Lester; Colin Gaffney; Afroz Mohiuddin; Curtis Hawthorne; Aitor Lewkowycz; Alex Salcianu; Jacob Marc Van Zee; Sebastian Austin; Livio Baldini Goodman; Haitang Soares; Sasha Hu; Aakanksha Tsvyashchenko; Jasmijn Chowdhery; Jannis Bastings; Xavier Bulian; Jianmo Garcia; Andrew Ni; Kathleen Chen; Jonathan H Kenealy; Stephan Clark; Dan Lee; James Garrette; Colin Lee-Thorp; Noam Raffel; Marvin Shazeer; Maarten Ritter; Alexandre Bosma; Jeremy Passos; Noah Maitin-Shepard; Mark Fiedel; Brennan Omernick; Ryan Saeta; Alexander Sepassi; Joshua Spiridonov; Andrea Newlan; Gesmundo", "journal": "", "ref_id": "b44", "title": "Scaling up models and data with t5x and seqio", "year": "2022" }, { "authors": "Phillip Rust; Jonas Pfeiffer; Ivan Vulić; Sebastian Ruder; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "How good is your tokenizer? 
on the monolingual performance of multilingual language models", "year": "2021" }, { "authors": "Jonne Saleva; Constantine Lignos", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "The effectiveness of morphology-aware segmentation in low-resource neural machine translation", "year": "2021" }, { "authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Neural machine translation of rare words with subword units", "year": "2016" }, { "authors": "Naomi Tachikawa Shapiro", "journal": "", "ref_id": "b48", "title": "Splitting compounds with ngrams", "year": "2016" }, { "authors": "Robyn Speer", "journal": "", "ref_id": "b49", "title": "rspeer/wordfreq", "year": "2022" }, { "authors": "Ilya Sutskever; Oriol Vinyals; Quoc V Le", "journal": "", "ref_id": "b50", "title": "Sequence to sequence learning with neural networks", "year": "2014" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b51", "title": "", "year": "" }, { "authors": "Yi Tay; Mostafa Dehghani; Xavier Vinh Q Tran; Dara Garcia; Tal Bahri; Huaixiu Schuster; Neil Steven Zheng; Donald Houlsby; Metzler", "journal": "", "ref_id": "b52", "title": "Unifying language learning paradigms", "year": "2022" }, { "authors": "Yi Tay; Q Vinh; Sebastian Tran; Jai Ruder; Hyung Won Gupta; Dara Chung; Zhen Bahri; Simon Qin; Cong Baumgartner; Donald Yu; Metzler", "journal": "", "ref_id": "b53", "title": "Charformer: Fast character transformers via gradientbased subword tokenization", "year": "2022" }, { "authors": "Don Tuggener", "journal": "", "ref_id": "b54", "title": "Incremental coreference resolution for German", "year": "2016" }, { "authors": "Gerhard Menno Van Zaanen; Suzanne Van Huyssteen; Chris Aussems; Roald Emmery; Eiselen", "journal": "European Language Resources Association (ELRA", "ref_id": "b55", "title": "The development of Dutch and Afrikaans language resources for compound boundary analysis", "year": "2014" }, { "authors": "Johanna Päivi; Juraj Virkkunen; Heini Simko; Martti Tapani Henriikka Kallio; Vainio", "journal": "International Speech Communications Association", "ref_id": "b56", "title": "Prosodic features of finnish compound words", "year": "2018" }, { "authors": "Sami Virpioja; T Ville; Sebastian Turunen; Oskar Spiegler; Mikko Kohonen; Kurimo", "journal": "Traitement Automatique des Langues", "ref_id": "b57", "title": "Empirical comparison of evaluation methods for unsupervised learning of morphology", "year": "2011" }, { "authors": "Irene Vogel; Sergio Scalise", "journal": "Cross-Disciplinary Issues in Compounding", "ref_id": "b58", "title": "Crossdisciplinary issues in compounding", "year": "2010" }, { "authors": "Linting Xue; Aditya Barua; Noah Constant; Rami Al-Rfou; Sharan Narang; Mihir Kale; Adam Roberts; Colin Raffel", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b59", "title": "ByT5: Towards a token-free future with pre-trained byte-to-byte models", "year": "2022" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "Association for Computational Linguistics", "ref_id": "b60", "title": "mT5: A massively multilingual pre-trained text-to-text transformer", "year": "2021" }, { "authors": "Giulio Zhou", "journal": "", "ref_id": "b61", "title": "Morphological zero-shot neural machine translation", "year": "2018" } ]
[ { "formula_coordinates": [ 5, 132.27, 221.69, 95.46, 33.71 ], "formula_id": "formula_0", "formula_text": "C(s) = k i=1 L(s i , c i )." }, { "formula_coordinates": [ 14, 317.05, 73.84, 185.91, 107.82 ], "formula_id": "formula_1", "formula_text": "Data: Compound x, norm. constituents c. Result: Optimal segmentation s ⋆ . k ← ∥c∥, n ← ∥x∥ r 0 ← 0, r n ← n best_cost ← ∞ for r 1 , ..., r n-1 ∈ [n] k-1 do Compute s, C(s) /*" } ]
10.18653/v1/2023.acl-short.117
2023-10-27
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b10", "b39", "b28", "b14", "b24", "b30", "b29", "b2", "b8", "b16", "b0", "b28", "b44", "b23", "b44", "b40", "b10", "b41", "b15", "b36" ], "table_ref": [], "text": "Text-to-SQL parsing, the task of mapping a natural language utterance to a SQL query, has found wide applications in building language agents for databases and piqued significant research interest in recent years (Deng et al., 2021;Yu et al., 2021;Rajkumar et al., 2022;Hongjin et al., 2023;Ni et al., 2023). To develop a text-to-SQL parser, a prevalent approach is to collect labeled data and train a model via supervised learning (Shaw et al., 2021;Scholak et al., 2021). While effective, this approach necessitates a considerable amount of training data, which is costly to obtain because annotating SQL queries requires programming expertise.\nAs an alternative to supervised learning, incontext learning (Brown et al., 2020), an emergent capability of large language models (LLMs), alleviates the need for large-scale data. With only a few examples, in-context learning enables LLMs to demonstrate performance comparable to or even better than fully supervised models on many NLP tasks, such as question answering, machine translation, and natural language inference (Chowdhery et al., 2022;Kojima et al., 2022;Wei et al., 2022b,a;Brohan et al., 2023). When applied to text-to-SQL parsing, in-context learning has also shown encouraging results (Rajkumar et al., 2022;Chang et al., 2023b;Liu et al., 2023a), but there is still much room for improvement.\nWe hypothesize that a crucial aspect of LLMs to improve for text-to-SQL parsing is their multistep reasoning ability. Even for a seemingly simple question, such as \"What is the ID of Kyle,\" a model has to ground it to the given database schema, infer the relational algebra among schema items, and construct syntactically correct SQL clauses. To enhance LLMs' reasoning capabilities, chain of thought (CoT) style prompting methods (Wei et al., 2022b;Zhou et al., 2023) are proposed and have shown promising results. However, how to apply CoT-style prompting to text-to-SQL parsing remains under-explored, and we fill this gap by systematically exploring CoT-style prompting for textto-SQL parsing. Specifically, we seek to answer two research questions: (RQ1) Which prompting style is better, generating all reasoning steps in one pass, or iterative prompting and problem solving? (RQ2) Do more detailed reasoning steps lead to better results for text-to-SQL parsing?\nTo address these questions, we adapt two widely used prompting methods for text-to-SQL parsing. As the first method, we apply chain-of-thought prompting (Wei et al., 2022b) by drawing an anal- ogy between its problem-solving process and the execution procedure of a SQL query (Figure 1A). Referring to the logical execution order of SQL clauses (Narechania et al., 2021), we compose the intermediate execution steps in natural language and prompt LLMs to derive them before generating the SQL query. For the second method, we follow Zhou et al. (2023) to apply least-to-most prompting in two stages: (1) problem reduction: generate a series of sub-questions from the original question and (2) problem solving: iteratively translate each subquestion into its corresponding SQL query, with the original question as the last sub-question, as shown in Figure 1B. 
With a careful analysis (Section 5.2), we find that directly applying these two methods for text-to-SQL parsing tends to introduce error propagation issues frequently. Also, the iterative process in least-to-most prompting incurs more computational costs to generate each SQL query.\nTherefore, we propose a new CoT-style prompting method called question-decomposition prompting (QDecomp, Figure 1C). Similar to chainof-thought prompting, QDecomp generates a sequence of reasoning steps followed by the natural language question in one pass. Instead of generating the intermediate execution steps, we instruct LLMs to decompose the original complex question, akin to the problem reduction stage in least-to-most prompting. Furthermore, to help LLMs ground database schemas, we design a variant of question decomposition prompting (QDecomp+InterCOL, Figure 1D) by incrementally including the table and column names involved in each sub-question.\nWe conduct comprehensive evaluations on two cross-domain text-to-SQL datasets, Spider (Yu et al., 2018) and Spider Realistic (Deng et al., 2021). Compared to the standard prompting method without reasoning steps, QDecomp + In-terCOL brings 5.2 and 6.5 point absolute gains on the Spider development set and the Spider Realistic set, respectively. It also brings 2.4 and 1.5 point absolute gains compared to least-to-most prompting. Our results suggest that it may be unnecessary to perform iterative prompting, which is also computationally costly (RQ1). Besides, our analysis shows that our QDecomp+InterCOL method reduces the chance of error propagation by providing less detailed reasoning steps and generating the SQL query in one pass (RQ2). Meanwhile, it includes key schema information in reasoning, which is still beneficial to database grounding. Further, we evaluate the robustness of our proposed methods by varying the number, selection, and format of in-context examples, providing useful guidelines for designing text-to-SQL prompting strategies. We also extend our evaluation to three single-domain datasets (Zelle and Mooney, 1996;Iyer et al., 2017;Yaghmazadeh et al., 2017) and show our proposed method can achieve strong performance consistently across different datasets." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b2", "b8", "b44", "b13", "b21", "b35", "b34", "b9", "b41", "b40", "b31", "b39", "b10", "b5", "b28", "b14", "b27", "b26", "b24" ], "table_ref": [], "text": "LLM and CoT-Style Prompting. As large language models (LLMs) advance (Brown et al., 2020;Chowdhery et al., 2022), in-context learning emerged as a new paradigm in natural language processing (Liu et al., 2023b). Although LLMs can achieve outstanding performance by prompting them with few-shot examples in context, they struggle with tasks that require multi-step reasoning. As a solution, Wei et al. (2022b) proposed chain-ofthought prompting. By explicitly describing intermediate reasoning steps to answer a complex question in the prompts, chain-of-thought prompting improves the accuracy of LLMs by a large margin across many natural language reasoning tasks. Besides, Zhou et al. (2023) proposed least-to-most prompting to solve complex problems in two stages. The method first prompts LLMs to generate a list of sub-questions as a decomposition of the given problem. Then, it uses the sub-questions to guide LLMs to incrementally solve each of them and derive a correct final answer. 
However, how to apply these two CoT-style prompting methods to text-to-SQL parsing remains under-explored.\nWe fill this gap by systematically exploring several CoT-style prompting methods for the task. In particular, we propose a new CoT-style prompting method that guides LLMs to perform reasoning via question decomposition. Question decomposition is a method that converts a complex problem into a sequence of simpler sub-questions (Gupta and Lewis, 2018;Min et al., 2019). Our work refers to existing question decomposition methods for text-to-SQL parsing (Wolfson et al., 2020, 2022) and presents a novel CoT-style prompting method to improve LLMs' performance. We conduct comprehensive experiments and show that our question decomposition prompting outperforms chain-of-thought prompting and least-to-most prompting on several text-to-SQL datasets. Our experiments validate our hypothesis that text-to-SQL parsing indeed requires multi-step reasoning, and carefully designed CoT-style prompting can help LLMs achieve higher parsing accuracy.\nText-to-SQL Semantic Parsing. Text-to-SQL semantic parsing has long been studied to build language agents for database applications (Dahl et al., 1994;Zelle and Mooney, 1996). Since the release of Spider (Yu et al., 2018), a cross-database text-to-SQL benchmark, many parsers have been developed on top of language models to better understand various database schemas (Wang et al., 2020;Yu et al., 2021;Deng et al., 2021). Recent work has started to explore the potential of LLMs, such as Codex (Chen et al., 2021), in text-to-SQL parsing by including database schemas in the prompts (Rajkumar et al., 2022) or retrieving similar questions as few-shot examples (Hongjin et al., 2023). Orthogonal to these methods, our question decomposition prompting teaches LLMs to perform multi-step reasoning for text-to-SQL parsing without additional engineering efforts. With a few in-context examples, an LLM, such as Codex in our experiments, can learn to decompose natural language questions and predict table and column names (Section 3) incrementally in each step.\nOur method demonstrates comparable performance to RASAT+PICARD (Qi et al., 2022), a fine-tuned text-to-SQL parser, on the Spider development set without using relational structures or constrained decoding. Compared to other LLM-based methods, it achieves better execution accuracy than DIN-SQL (Pourreza and Rafiei, 2023) on the Spider development set in a single pass, while DIN-SQL requires iterative prompting. Although our method shows lower execution accuracy than LEVER (Ni et al., 2023), we note that LEVER's verifier model is fine-tuned on the full Spider training set, which may have extra advantages over our method. Also, LEVER uses the execution results of SQL queries, which provides extra information for better database grounding. We leave the incorporation of database contents beyond table and column names into our method as future work." }, { "figure_ref": [ "fig_0" ], "heading": "Prompting for Multi-Step Reasoning in Text-to-SQL", "publication_ref": [ "b5" ], "table_ref": [], "text": "In this section, we outline three CoT-style prompting methods that teach an LLM to perform multi-step reasoning. We first describe how we adapt chain-of-thought and least-to-most prompting for text-to-SQL parsing. Then, we propose a novel prompting method, question decomposition prompting (QDecomp), and its variant QDecomp+InterCOL.
Figure 1 demonstrates different prompting methods, and we provide more examples in Appendix A. For all experiments, we use Codex (Chen et al., 2021), code-davinci-002, as the LLM. The experiments were conducted between January and March 2023 through OpenAI API, using greedy decoding with temperature 0." }, { "figure_ref": [ "fig_0" ], "heading": "Chain-of-Thought Prompting", "publication_ref": [ "b23" ], "table_ref": [], "text": "Chain-of-thought prompting (Wei et al., 2022b) aims to improve LLMs' reasoning ability by generating a series of intermediate steps before predicting the final answer. For text-to-SQL parsing, one challenge is how to come up with the reasoning steps to predict the SQL query (i.e., final answer in our case). In our work, we use each clause in the SQL query to compose a reasoning step in CoT prompting. Specifically, inspired by Narechania et al. (2021), we use natural language templates to describe each SQL clause and chain them in the logical execution order of the SQL query. For example, the logical execution order for the SQL query in Figure 1A is first the FROM clause, then the WHERE clause, and finally the SELECT clause. Following this order, we assemble the natural language description of each clause in the query to compose its CoT reasoning steps." }, { "figure_ref": [ "fig_0" ], "heading": "Least-to-Most Prompting", "publication_ref": [ "b44", "b34" ], "table_ref": [], "text": "Unlike chain-of-thought prompting, which instructs LLMs to generate all reasoning steps in a single pass, least-to-most prompting (Zhou et al., 2023) tackles complex questions by prompting LLMs in two stages: problem reduction and problem solving. During problem reduction, it prompts the LLM to generate a series of sub-questions from the original complex question. During problem solving, it prompts the LLM with one sub-question at a time and iteratively builds up the final solution.\nTo derive the sub-questions for problem reduction, we segment the original question following three principles: (1) If the question has multiple sentences, we treat each sentence as a sub-question.\n(2) We further decompose each sentence by conjunction words (such as \"and,\" \"or,\" and \"but\") and prepositions (such as \"for,\" \"with,\" and \"without\").\n(3) For each decomposition, we remove words and phrases that may leak the information in any subsequent questions. This segmentation allows the LLM to focus on parsing each sub-question, thereby decreasing the complexity of the original problem (Wolfson et al., 2022).\nFor instance, the question \"Show first name, last name, age for all female students. Their sex is F.\" in Figure 1B would derive two sub-questions: (a) \"Show first name, last name, age for all students.\" (b) \"Show first name, last name, age for all female students. Their sex is F.\" This decomposition follows principle ( 1) and ( 3) by removing the second sentence and the information-leaking word \"female\" from the original question to construct the first step. For the first sub-question, the LLM only needs to construct the SELECT and FROM clauses. Then for the second sub-question, the LLM can build upon the SQL query generated for the first sub-question, and focus solely on the WHERE clause." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Question Decomposition Prompting", "publication_ref": [ "b31" ], "table_ref": [], "text": "We propose a new prompting method, question decomposition prompting (QDecomp). 
Similar to chain-of-thought, QDecomp generates intermediate reasoning steps and the final SQL query in a single pass. Instead of using the logical execution procedure of SQL as in CoT, we follow the problem reduction stage in least-to-most prompting and instruct the LLM to decompose the original complex question as the reasoning steps. Through this design, we hope to explore (1) the potential advantage of using question decomposition over the logical execution procedure of SQL clauses for composing reasoning steps; (2) whether an iterative process as in least-to-most prompting is necessary.\nOn top of that, we propose a variant, QDe-comp+InterCOL, to alleviate the well-known table/column linking issue in text-to-SQL parsing (Wang et al., 2020). Specifically, we augment the in-context examples to prompt the LLM to identify any corresponding table/column names when generating each sub-question. Given a subquestion and its corresponding SQL parse, we annotate all table-column pairs mentioned in the parse as ground-truth. For star operators ( * ), we sample a random column from tables mentioned in the same (sub-)query. If a table-column pair has been mentioned in the SQL parse of a sub-question, we would exclude it from the annotations of all subsequent steps. If a sub-question does not have any table-column pairs to annotate, we randomly choose one pair from preceding steps.\nWe include examples of these two methods in Figure 1C and 1D. Following the same decomposition method in least-to-most prompting, the example has two sub-questions. In Figure 1D, for the first sub-question, \"Show first name, last name, age for all students,\" we expect the model to highlight the table \"student\" and its columns \"fname,\" \"lname,\" and \"age,\" as they appear in the SQL parse of this sub-question. Then, for the follow-up question, the model is expected to identify the table \"student\" and its column \"sex,\" which is not mentioned in the previous step.\nIn addition to the prompting methods mentioned above, we also include the standard prompting method as the baseline in our experiments. It uses question-SQL pairs as in-context examples to prompt LLMs to directly parse a natural language question to its corresponding SQL query without generating any intermediate reasoning step." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b40", "b10" ], "table_ref": [], "text": "Spider (Yu et al., 2018). Spider is a commonly used benchmark to evaluate text-to-SQL parsing in a cross-database setting, which requires models to generalize to novel database schemas. The dataset consists of 7,000 question-query pairs in the training set and 1,034 pairs in the development set, covering 200 different databases and 138 domains. In this paper, due to the unavailability of the test set, we evaluate on the Spider development set to demonstrate the effectiveness of our question decomposition prompting methods.\nSpider Realistic (Deng et al., 2021). Spider Realistic is a more challenging version of the Spider development set. It modifies the natural language questions in Spider by removing or paraphrasing explicit mentions of column names to generate a more realistic dataset that reflects real-world scenarios, where questions rarely contain explicit mentions of column names. The final dataset comprises a total of 508 question-query pairs." 
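Before turning to how in-context examples are selected, the sketch below illustrates the table/column annotation rule of QDecomp+InterCOL described in the prompting section above: each sub-question is annotated only with table-column pairs not mentioned in earlier steps, falling back to one previously seen pair when nothing new appears. It assumes the (table, column) pairs have already been extracted from each sub-question's gold SQL parse (including the random-column handling of star operators); this is an illustrative sketch, not the authors' released code.

```python
import random

def intercol_annotations(pairs_per_step):
    """pairs_per_step: one set of (table, column) pairs per sub-question, taken from its
    gold SQL parse. Returns the per-step InterCOL annotations."""
    seen, annotations = set(), []
    for pairs in pairs_per_step:
        new = sorted(p for p in pairs if p not in seen)
        if not new and seen:
            # No unseen pairs for this step: repeat one pair from the preceding steps.
            new = [random.choice(sorted(seen))]
        annotations.append(new)
        seen.update(pairs)
    return annotations

# Mirroring the student example above: the second sub-question only adds the "sex" column.
steps = [
    {("student", "fname"), ("student", "lname"), ("student", "age")},
    {("student", "fname"), ("student", "lname"), ("student", "age"), ("student", "sex")},
]
print(intercol_annotations(steps))
```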
}, { "figure_ref": [], "heading": "In-context Example Selection", "publication_ref": [], "table_ref": [], "text": "To show the robustness of question decomposition prompting, we consider two ways of choosing in-context examples: random selection and difficulty-based selection. In our main results, we use random selection for its simplicity and ease of replication. Additionally, in Section 5.3, we compare results obtained using random selection with those obtained using difficulty-based selection.\nFor " }, { "figure_ref": [], "heading": "Prompt Formats", "publication_ref": [ "b28" ], "table_ref": [], "text": "We also experiment with two prompt formats introduced by Rajkumar et al. (2022), API Docs and Create Table + Select 3. Both formats have their own advantages and can be utilized together with any prompting method in Section 3.\nAPI Docs format represents database schemas as Python API comments, which only includes the table and column names. This format reduces the prompt length for each example, so we may include more in-context demonstrations from databases in different domains to increase diversity. In comparison, Create Table + Select 3 format adheres more closely to the SQLite standards, but with much longer prompts2 . It represents a database schema using the CREATE TABLE command, which provides more information, such as column data types and foreign key declaration. Besides, this format includes the results of executing SELECT * FROM T LIMIT 3 for each table T in the database as SQL comments. In Section 5.3, we show that API Docs format can achieve competitive performance compared to the Create Table + Select 3 format. Thus, we primarily use the API Docs format in our experiments due to its efficiency." }, { "figure_ref": [], "heading": "Evaluation Metric", "publication_ref": [ "b43", "b20" ], "table_ref": [ "tab_0" ], "text": "We use test-suite execution accuracy (Zhong et al., 2020) formats. Leveraging the idea of \"code coverage\" in software testing (Miller and Maloney, 1963), the metric synthesizes a large number of databases as \"test cases\" and compares the execution results of the predicted and gold SQL queries on all of them. In this way, test-suite accuracy reduces the number of false positives (i.e., semantically different SQL queries that happen to have the same execution result) in standard execution accuracy, which compares execution results on only one database.\nAs shown in Table 1, standard prompting's testsuite accuracy falls behind least-to-most prompting. However, their standard execution accuracy results are very close, which might be misleading." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "By analyzing our experimental results, we seek to answer the following two research questions:\n• RQ1: Which prompting style is better, generating all reasoning steps in one pass, or iterative prompting and problem solving?\n• RQ2: Do more detailed reasoning steps lead to better results for text-to-SQL parsing?" }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [], "text": "Through comprehensive experiments on Spider Dev and Spider Realistic ( ). We will present more analysis on this contrast in Section 5.3.\nAdditionally, the experiments show that iteratively solving a series of sub-questions may not be necessary for text-to-SQL parsing (RQ1). 
Although chain-of-thought prompting (56.8%) underperforms least-to-most prompting (66.0%) on the Spider development set, these two methods have several distinct designs other than iterative prompting, so we cannot directly answer RQ1 by comparing them. With our QDecomp prompting, we show that generating sub-questions and the SQL query in a single pass can also achieve improved accuracy. Thus, iterative prompting, which is computationally costly, is not necessary when prompting LLMs to reason for text-to-SQL parsing.\nAnother interesting finding is that chain-of-thought prompting performs even worse than the standard prompting method. We analyze the reason in the next section, which helps answer RQ2." }, { "figure_ref": [], "heading": "Chain-of-Thought", "publication_ref": [], "table_ref": [], "text": "Question: Which dogs are of the rarest breed? Show their names and treatment dates.\n# This query chooses records from the dogs table, followed by joining the breeds table on the breed_code column. It then joins the treatments table on the dog_id column. It then groups the results by breed_name." }, { "figure_ref": [], "heading": "Error Analysis", "publication_ref": [ "b40" ], "table_ref": [ "tab_1", "tab_1", "tab_3" ], "text": "We conduct a quantitative error analysis of all four prompting methods with the component matching accuracy (Yu et al., 2018) on the Spider development set. Component matching accuracy is a fine-grained exact match metric that evaluates five SQL components, including SELECT clauses, WHERE clauses, GROUP BY clauses, ORDER BY clauses, and KEYWORDS (all SQL keywords, operators, and column names). Since exact match is too strict, we also consider a component to be correct if the whole SQL query's test-suite accuracy is 1.\nAs shown in Table 2, our QDecomp and QDecomp+InterCOL prompts achieve better performance than other CoT-style prompting methods across all five SQL components. Further analysis shows that chain-of-thought prompting underperforms standard prompting because it provides very detailed reasoning steps. Translating such detailed steps is error-prone and incurs more error propagation issues. For example, in Table 3, Codex follows its reasoning steps faithfully to generate the corresponding SQL query, but the reasoning steps themselves have several errors, such as choosing the \"breed_name\" column instead of the \"name\" column in the SELECT clause. Least-to-most prompting makes improvements by providing reasoning steps at a higher level (via the problem reduction phase). However, it sometimes still cannot translate a sub-question into the correct SQL query, especially when involving hard components, such as JOIN clauses, GROUP BY clauses, and ORDER BY clauses (Table 2). We include an error example in Table 4. As a result, the errors are propagated to subsequent reasoning steps, leading to an incorrect final SQL parse. In contrast, QDecomp+InterCOL prompting outperforms these two methods because it does not instruct Codex to generate detailed reasoning steps or intermediate SQL queries. In this way, it reduces the possibility of accumulating mistakes in reasoning steps." }, { "figure_ref": [], "heading": "Robustness to Prompt Design", "publication_ref": [], "table_ref": [], "text": "To further validate our conclusions in the main experiments, we conduct additional experiments to test the robustness of all four prompting methods in this section.
Because chain-of-thought prompting already underperforms the standard prompting without reasoning, we omit this method in this and the next section. Based on this conjecture, we extend this experiment to compare QDecomp+InterCOL and other prompting methods. As shown in Table 6, QDecomp+InterCOL prompting achieves the best performance across all settings, demonstrating its robustness. However, least-to-most prompting does not benefit from G1 or G3 examples and shows decreased accuracy. We believe this performance drop is because its iterative prompting generates one reasoning step at a time, which is relatively independent of the overall reasoning step length." }, { "figure_ref": [], "heading": "Number of In-Context Examples.", "publication_ref": [], "table_ref": [], "text": "Intuitively, performances of all prompting methods improve as the number of in-context examples increases (Table 7). We found that our QDecomp prompting is the most robust and consistently achieves better performance than standard prompting. However, least-to-most prompting underperforms standard prompting when the number of examples is less than 8. In addition, we note that in our preliminary experiments, further increasing the number of examples only leads to minor gains. Hence, we use 8 in-context examples in our main experiments." }, { "figure_ref": [], "heading": "Results on Other Text-to-SQL Datasets", "publication_ref": [ "b41", "b15", "b36" ], "table_ref": [], "text": "Besides the Spider datasets, we further compare QDecomp (+InterCOL) to standard and least-to-most prompting on other datasets including GeoQuery (Zelle and Mooney, 1996;Iyer et al., 2017), IMDB (Yaghmazadeh et al., 2017), and Yelp (Yaghmazadeh et al., 2017). Since the database schemas and SQL queries in these datasets are more complex than those in the Spider datasets, we also use 4-shot in-context examples in this experiment.\nAs shown in Table 9, QDecomp (+InterCOL) consistently achieves the best performance for all three datasets. Moreover, we observe that least-to-most prompting underperforms standard prompting on IMDB and Yelp, which may be related to both iterative prompting and error propagation (Section 5.2). For example, least-to-most prompting would decompose the question \"Find all movies that were produced by Netflix\" into two sub-questions: 1) \"Find all movies\" and 2) \"Find all movies that were produced by Netflix.\" Then, in the iterative solving stage, there are many correct SQL queries using different tables and columns for the first sub-question. Without seeing the second sub-question, it is hard for the LLM to pinpoint the correct ones. As a result, the LLM would include redundant or wrong schema items in the SQL parse for the first sub-question, which are propagated to subsequent steps. Since QDecomp (+InterCOL) instructs the LLM to generate the SQL query after all sub-questions are derived, it maintains a global view of all reasoning steps and mitigates such error propagation issues." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [ "b37", "b38", "b17", "b42", "b11", "b12", "b23", "b22" ], "table_ref": [], "text": "In this paper, we systematically explore CoT-style prompting to enhance LLMs' reasoning capability for text-to-SQL parsing.
We design reasoning steps in order to apply two existing methods, chain-of-thought and least-to-most prompting, and propose new question decomposition prompting methods.
Through comprehensive experiments, we demonstrate: (1) Iterative prompting may not be necessary for reasoning in text-to-SQL parsing.
(2) Using detailed reasoning steps (in CoT) or intermediate SQL queries (in least-to-most prompting) is error-prone and aggravates error propagation.
Our question decomposition prompting serves as one of the first attempts to mitigate the error propagation issue in LLMs' multi-step reasoning, and we highlight this problem as a meaningful future direction. For example, we can further reduce errors in intermediate reasoning steps by incorporating our method into an interactive semantic parsing framework (Yao et al., 2019, 2020; Li et al., 2020; Zeng et al., 2020; Chen et al., 2023a,b). Since the decomposed sub-questions are in natural language, this interactive approach enables database users to easily spot the errors in each sub-question. Then, they can collaborate with LLMs by editing the sub-questions directly or providing natural language feedback (Elgohary et al., 2020, 2021; Narechania et al., 2021; Mo et al., 2022), which should further improve text-to-SQL parsing accuracy." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Experiments on other large language models. Our study focused on conducting experiments using Codex as the LLM, since it was available at no cost and showed impressive performance in text-to-SQL parsing among LLMs before GPT-4 (Rajkumar et al., 2022). To gain a comprehensive understanding of different CoT-style promptings for text-to-SQL, future research should explore the effects of these promptings on more recent, more powerful LLM models, such as GPT-4 (if budget allows). By doing so, we can determine whether the improvements achieved by our proposed promptings are consistent across different LLMs.
Experiments on robustness. In our work, we mainly test robustness from the prompt design perspective, such as how to select in-context examples, the number of in-context examples, and the prompt format of in-context examples. It would also be valuable to investigate our prompting methods under different databases, natural language questions, or SQL perturbations (Chang et al., 2023a). This broader exploration would enable us to evaluate the robustness of our prompting methods across diverse scenarios."
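The appendix below lists the full prompt texts. As a complementary sketch, the snippet below shows one way such an API-docs-style QDecomp prompt could be assembled programmatically from a schema, a question, and (for in-context examples) a gold decomposition and SQL query. The helper names and the schema data structure are illustrative assumptions, not part of any released code.

```python
def format_schema(tables):
    """tables: dict mapping table name -> list of column names."""
    lines = ["### SQLite SQL tables, with their properties:", "#"]
    for name, cols in tables.items():
        lines.append(f"# {name} ({', '.join(cols)})")
    lines.append("#")
    return "\n".join(lines)

def format_qdecomp_example(tables, question, sub_questions=None, sql=None):
    """Build one API-docs-style block; sub_questions and sql are provided
    only for in-context examples and omitted for the test question."""
    parts = [format_schema(tables), f"### Question: {question}", "decompose the question"]
    if sub_questions is not None:
        parts += [f"{i}. {q}" for i, q in enumerate(sub_questions, start=1)]
        # The last sub-question restates the original question, as in the appendix.
        parts.append(f"# Thus, the answer for the question is: {sub_questions[-1]}")
        parts.append(sql)
    return "\n".join(parts)

# Example usage with a simplified concert_singer schema from Spider.
tables = {
    "stadium": ["stadium_id", "location", "name", "capacity"],
    "singer": ["singer_id", "name", "country", "age"],
}
print(format_qdecomp_example(tables, "How many singers do we have?"))
```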
}, { "figure_ref": [], "heading": "A Example Prompts", "publication_ref": [], "table_ref": [], "text": "### SQLite SQL tables, with their properties: # # medicine (id, name, trade_name, fda_approved) # enzyme (id, name, location, product, chromosome, omim, porphyria) # medicine_enzyme_interaction (enzyme_id, medicine_id, interaction_type) # ### SQLite SQL tables, with their properties: # class (class_code, crs_code, class_section, class_time, class_room, prof_num) # course (crs_code, dept_code, crs_description, crs_credit) # department (dept_code, dept_name, school_code, emp_num, dept_address) # employee (emp_num, emp_lname, emp_initial, emp_jobcode, emp_hiredate, emp_dob) # enroll (class_code, stu_num, enroll_grade) # professor (emp_num, dept_code, prof_office, prof_extension, prof_high_degree) # student (stu_num, stu_lname, stu_fname, stu_init, stu_dob, stu_hrs, stu_class, stu_gpa, stu_transfer, dept_code, stu_phone, prof_num) # To answer the question \"Find the first names and offices of all instructors who have taught some course and the course description and the department name.\", we need to know: \"Find the first names and offices of all instructors.\", \"Find the first names and offices of all instructors who have taught some course.\", \"Find the first names and offices of all instructors who have taught some course and the course description.\". ### SQLite SQL tables, with their properties: # station (station_id, name, annual_entry_exit, annual_interchanges, total_passengers, location, main_services, number_of_platforms) # train (train_id, name, time, service) # train_station (train_id, station_id) # To answer the question \"Show all train names and times in stations in London in descending order by train time.\", we need to know: ### SQLite SQL tables, with their properties: # document_types (document_type_code, document_description) # documents (document_id, document_type_code, grant_id, sent_date, response_-received_date, other_details) # grants (grant_id, organisation_id, grant_amount, grant_start_date, grant_end_date, other_details) # organisation_types (organisation_type, organisation_type_description) # organisations (organisation_id, organisation_type, organisation_details) # project_outcomes (project_id, outcome_code, outcome_details) # project_staff (staff_id, project_id, role_code, date_from, date_to, other_details) # projects (project_id, organisation_id, project_details) # research_outcomes (outcome_code, outcome_description) # research_staff (staff_id, employer_organisation_id, staff_details) # staff_roles (role_code, role_description) # tasks (task_id, project_id, task_details, eg agree objectives) # ### Question: Find out the send dates of the documents with the grant amount of more than 5000 were granted by organisation type described as \"Research\". decompose the question 1. Find out the send dates of the documents. 2. Find out the send dates of the documents with the grant amount of more than 5000. 3. Find out the send dates of the documents with the grant amount of more than 5000 were granted by organisation type described as \"Research\". # Thus, the answer for the question is: Find out the send dates of the documents with the grant amount of more than 5000 were granted by organisation type described as \"Research\". 
SELECT T1.sent_date FROM documents AS T1 JOIN Grants AS T2 ON T1.grant_id = T2.grant_id JOIN Organisations AS T3 ON T2.organisation_id = T3.organisation_id JOIN organisation_Types AS T4 ON T3.organisation_type = T4.organisation_type WHERE T2.grant_amount > 5000 AND T4.organisation_type_description = 'Research' ### SQLite SQL tables, with their properties: # stadium (stadium_id, location, name, capacity, highest, lowest, average) # singer (singer_id, name, country, song_name, song_release_year, age, is_male) # concert (concert_id, concert_name, theme, stadium_id, year) # singer_in_concert (concert_id, singer_id) # ### Question: How many singers do we have? decompose the question ### SQLite SQL tables, with their properties: # document_types (document_type_code, document_description) # documents (document_id, document_type_code, grant_id, sent_date, response_-received_date, other_details) # grants (grant_id, organisation_id, grant_amount, grant_start_date, grant_end_date, other_details) # organisation_types (organisation_type, organisation_type_description) # organisations (organisation_id, organisation_type, organisation_details) # project_outcomes (project_id, outcome_code, outcome_details) # project_staff (staff_id, project_id, role_code, date_from, date_to, other_details) # projects (project_id, organisation_id, project_details) # research_outcomes (outcome_code, outcome_description) # research_staff (staff_id, employer_organisation_id, staff_details) # staff_roles (role_code, role_description) # tasks (task_id, project_id, task_details, eg agree objectives) # ### Question: Find out the send dates of the documents with the grant amount of more than 5000 were granted by organisation type described as \"Research\". decompose the question 1. Find out the send dates of the documents. SQL table (column): documents (sent_date) 2. Find out the send dates of the documents with the grant amount of more than 5000. SQL table (column): grants (grant_amount, grant_id) 3. Find out the send dates of the documents with the grant amount of more than 5000 were granted by organisation type described as \"Research\". SQL table (column): organisation_Types (organisation_type_description, organisation_type), organisations (organisation_type, organisation_id) # Thus, the answer for the question is: Find out the send dates of the documents with the grant amount of more than 5000 were granted by organisation type described as \"Research\". SELECT T1.sent_date FROM documents AS T1 JOIN Grants AS T2 ON T1.grant_id = T2.grant_id JOIN Organisations AS T3 ON T2.organisation_id = T3.organisation_id JOIN organisation_Types AS T4 ON T3.organisation_type = T4.organisation_type WHERE T2.grant_amount > 5000 AND T4.organisation_type_description = 'Research' ### SQLite SQL tables, with their properties: # stadium (stadium_id, location, name, capacity, highest, lowest, average) # singer (singer_id, name, country, song_name, song_release_year, age, is_male) # concert (concert_id, concert_name, theme, stadium_id, year) # singer_in_concert (concert_id, singer_id) # ### Question: How many singers do we have? decompose the question " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank the anonymous reviewers and colleagues from the OSU NLP group for their thoughtful comments. This research was sored in part by a sponsored research award by Cisco Research, NSF IIS-1815674, NSF CAREER #1942980, NSF OAC-2112606, and Ohio Supercomputer Center (Center, 1987). 
The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein." } ]
In-context learning with large language models (LLMs) has recently attracted increasing attention due to its superior few-shot performance on various tasks. However, its performance on text-to-SQL parsing still has much room for improvement. In this paper, we hypothesize that a crucial aspect of LLMs to improve for text-to-SQL parsing is their multi-step reasoning ability. Thus, we systematically study how to enhance LLMs' reasoning ability through chain of thought (CoT) style prompting, including the original chain-of-thought prompting (Wei et al., 2022b) and least-to-most prompting (Zhou et al., 2023). Our experiments demonstrate that iterative prompting as in Zhou et al. (2023) may be unnecessary for text-to-SQL parsing, and using detailed reasoning steps tends to have more error propagation issues. Based on these findings, we propose a new CoT-style prompting method for text-to-SQL parsing. It brings 5.2 and 6.5 point absolute gains on the Spider development set and the Spider Realistic set, respectively, compared to the standard prompting method without reasoning steps, and 2.4 and 1.5 point absolute gains compared to the least-to-most prompting method.
Exploring Chain of Thought Style Prompting for Text-to-SQL
[ { "figure_caption": "Figure 1 :1Figure 1: Example model input and expected outputs for four CoT style prompting methods applied to text-to-SQL parsing: A. Chain-of-Thought, B. Least-to-Most, C. QDecomp, and D. QDecomp + InterCOL, where C and D are our proposed methods.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "random selection, we uniformly sample incontext examples from the Spider training set at random. For difficulty-based selection, we first group the Spider training examples into four difficulty levels, pre-defined by Yu et al. (2018), including easy, medium, hard, and extra-hard. Then, we devise three methods to randomly select in-context examples based on their difficulties: (G1) sampling an equal number of examples at each difficulty level, (G2) sampling the same number of examples from the hard level and the extra-hard level respectively, and (G3) sample all examples from the extra-hard level.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "It then orders the results by COUNT(DISTINCT T1.dog_id) in descending order. It then selects the breed_name and date_of_treatment columns. # Thus, the answer for the question is: Which dogs are of the rarest breed? Show their names and treatment dates. SELECT T2.breed_name , T3.date_of_treatment FROM dogs AS T1 JOIN breeds AS T2 ON T1.breed_code = T2.breed_code JOIN treatments AS T3 ON T1.dog_id = T3.dog_id GROUP BY T2.breed_name ORDER BY COUNT(DISTINCT T1.dog_id) DESC Gold SELECT T1.name, T2.date_of_treatment FROM dogs AS T1 JOIN Treatments AS T2 ON T1.dog_id = T2.dog_id WHERE T1.breed_code = (SELECT breed_code FROM Dogs GROUP BY breed_code ORDER BY count( * ) ASC LIMIT 1) Table 3: An example error made by Codex with chain-of-thought prompting. Question How many United Airlines flights go to City 'Aberdeen'? Least-to-Most Q: How many United Airlines flights go to City? A:SELECT count( * ) FROM flights WHERE airline = 'United Airlines' AND destairport = 'City' Q: How many United Airlines flights go to City 'Aberdeen'? A:SELECT count( * ) FROM flights WHERE airline = 'United Airlines' AND destairport = 'Aberdeen' Gold Q: How many United Airlines flights? A:SELECT count( * ) FROM flights AS T1 JOIN airlines AS T2 ON T2.uid = T1.airline WHERE T2.airline = 'United Airlines' Q: How many United Airlines flights go to City 'Aberdeen'? A:SELECT count( * ) FROM flights AS T1 JOIN airports AS T2 ON T1.destairport = T2.airportcode JOIN airlines AS T3 ON T3.uid = T1.airline WHERE T2.city = 'Aberdeen' AND T3.airline = 'United Airlines'", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An example for API docs prompt format, introduced by Rajkumar et al. (2022), on", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: An example prompt under chain-of-thought + API docs prompting for 1-shot on Spider.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: An example prompt under least-to-most + API docs prompting (problem reduction) for 1-shot on Spider. 
### SQLite SQL tables, with their properties: # class (class_code, crs_code, class_section, class_time, class_room, prof_num) # course (crs_code, dept_code, crs_description, crs_credit) # department (dept_code, dept_name, school_code, emp_num, dept_address) # employee (emp_num, emp_lname, emp_initial, emp_jobcode, emp_hiredate, emp_dob) # enroll (class_code, stu_num, enroll_grade) # professor (emp_num, dept_code, prof_office, prof_extension, prof_high_degree) # student (stu_num, stu_lname, stu_fname, stu_init, stu_dob, stu_hrs, stu_class, stu_gpa, stu_transfer, dept_code, stu_phone, prof_num) # Q: Find the first names and offices of all instructors. A: SELECT T1.emp_fname , T2.prof_office FROM employee AS T1 JOIN professor AS T2 ON T1.emp_num = T2.emp_num Q: Find the first names and offices of all instructors who have taught some course. A: SELECT T2.emp_fname , T4.prof_office FROM CLASS AS T1 JOIN employee AS T2 ON T1.prof_num = T2.emp_num JOIN course AS T3 ON T1.crs_code = T3.crs_code JOIN professor AS T4 ON T2.emp_num = T4.emp_num Q: Find the first names and offices of all instructors who have taught some course and the course description. A: SELECT T2.emp_fname , T4.prof_office , T3.crs_description FROM CLASS AS T1 JOIN employee AS T2 ON T1.prof_num = T2.emp_num JOIN course AS T3 ON T1.crs_code = T3.crs_code JOIN professor AS T4 ON T2.emp_num = T4.emp_num Q: Find the first names and offices of all instructors who have taught some course and the course description and the department name. A: SELECT T2.emp_fname , T4.prof_office , T3.crs_description , T5.dept_name FROM CLASS AS T1 JOIN employee AS T2 ON T1.prof_num = T2.emp_num JOIN course AS T3 ON T1.crs_code = T3.crs_code JOIN professor AS T4 ON T2.emp_num = T4.emp_num JOIN department AS T5 ON T4.dept_code = T5.dept_code ### SQLite SQL tables, with their properties: # station (station_id, name, annual_entry_exit, annual_interchanges, total_passengers, location, main_services, number_of_platforms) # train (train_id, name, time, service) # train_station (train_id, station_id) # Q: Show all train names and times.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure7: An example prompt under least-to-most + API docs prompting (problem solving) for 1-shot on Spider. The same prompt will be used to solve the next sub-question after we get the generated SQL query for the first sub-question.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: An example prompt under QDecomp + API docs prompting for 1-shot on Spider.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: An example prompt under QDecomp+InterCOL + API docs prompting for 1-shot on Spider.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "to evaluate different prompting methods, incontext example selection strategies, and prompt 8-shot test-suite (TS) accuracy of Codex on Spider Dev and Spider Realistic using different prompting methods and API Doc format. In-context examples are randomly selected except for the two rows marked with G3, where we only use extra-hard SQL queries (Section 4.2). We also include the overall standard execution accuracy (EX) in parenthesis for reference. 
For each method, we repeat the experiments with 5 different sets of in-context examples and report the average performances with their standard deviation. * We were not able to run G3 example selection on Spider Realistic before Codex became unavailable.", "figure_data": "MethodSpider DevSpider RealisticEasy Medium Hard Extra HardOverall TS (Overall EX)Overall TS (Overall EX)Standard86.865.350.336.063.2 ± 2.51 (68.7 ± 4.08) 51.0 ± 4.29 (62.5 ± 4.01)Chain-of-Thought73.964.544.623.456.8 ± 5.83 (53.9 ± 7.21) 50.3 ± 4.94 (53.4 ± 9.19)Least-to-Most88.168.752.939.566.0 ± 2.48 (68.9 ± 3.44) 55.0 ± 2.51 (63.3 ± 2.73)Least-to-Most (G3) 80.364.652.845.363.3 ± 1.95 (73.8 ± 1.72)- *QDecomp89.871.353.138.667.4 ± 1.89 (70.7 ± 2.80) 55.8 ± 2.01 (65.8 ± 2.29)+ InterCOL89.674.152.438.168.4 ± 2.05 (69.7 ± 5.82) 56.5 ± 2.05 (63.3 ± 4.19)+ InterCOL (G3)88.771.156.845.768.8 ± 1.16 (78.2 ± 1.07)- *SELECT WHERE GROUP BY ORDER BY KEYWORDSStandard89.866.174.783.084.2Chain-of-Thought83.570.767.172.876.9Least-to-Most90.070.772.582.484.3QDecomp91.270.777.285.186.4+ InterCOL91.472.476.685.386.0", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "8-shot component matching accuracy of Codex on the Spider development set.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "), we show thatour proposed question decomposition (QDecomp)", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "An example error made by Codex with least-to-most prompting.", "figure_data": "", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Table 6: 8-shot test-suite accuracy of Codex on the Spider dev set using different in-context example selection methods.", "figure_data": "Spider DevSelection Method Easy Medium Hard Extra HardOverallRandom89.674.152.438.168.4 ± 2.05G189.875.651.738.869.0 ± 2.18G287.472.250.439.466.9 ± 2.31G388.771.156.845.768.8 ± 1.16Table 5: 8-shot test-suite accuracy of Codex on Spider Dev using QDecomp+InterCOL prompting with different in-contextexample selection methods.RandomG1G2G3Standard63.264.1 60.2 58.2Least-to-Most65.862.6 61.2 63.3QDecomp67.468.2 65.2 66.6+ InterCOL68.469.0 66.9 68.8", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Test-suite accuracy of Codex on the Spider dev set using different numbers of in-context examples. We do not have 0-shot results for the proposed methods as they need at least one example to learn how to solve the task step by step.", "figure_data": "Selection of In-Context Examples. Besides ran-dom selection, we evaluate the efficacy of QDe-comp+InterCOL and other prompting methodswith in-context examples at various difficulty lev-els. As Table 5 suggests, QDecomp+InterCOLenables Codex to learn to reason for SQL queries atdifferent difficulty levels from in-context examples.When using G1 examples, Codex learns to generateSQL queries and reasoning steps of various lengthsfrom G1 examples. Thus, it is less likely to gener-ate redundant SQL clauses or reasoning steps andachieves the highest accuracy for SQL queries ateasy and medium level. When using G3 examples,", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Format of In-Context Examples. 
Finally, we show the performance of Codex using two prompt formats, API docs and Create Table + Select 3 API docs Create Table + Select 3", "figure_data": "Standard63.964.1Least-to-Most62.163.8QDecomp66.666.2+ InterCOL66.564.3Table 8: 4-shot test-suite accuracy of Codex on the Spider devset using different prompt formats.(Table 8). Due to OpenAI's prompt length restric-tions, we use 4 in-context examples in this exper-iment. Although Create Table + Select 3 formatincludes foreign key information and database con-tent, compared with API docs, it brings a negligibleimprovement in performance for standard prompt-ing and a (slight) decrease for QDecomp and QDe-comp+InterCOL prompting methods. Nonetheless,QDecomp is still the best prompting method underthis format. Therefore, we use API docs as our de-fault format due to its efficiency and leave furtherexperiments for future work.", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Figure 3: An example for Create Table + Select 3 prompt format, introduced by Rajkumar et al. (2022), on Spider.Figure4: An example prompt under the standard API docs prompting for 2-shot on Spider. This query chooses records from the Book_Club table, followed by a WHERE clause that selects records where the year column is greater than 1989. It then groups the results by the category column. It then filters the results where the count of each category is greater than or equal to 2. It then selects the category column.", "figure_data": "### SQLite SQL tables, with their properties: ### SQLite SQL tables, with their properties:# ## medicine (id, name, trade_name, fda_approved) # book_club (book_club_id, year, author_or_editor, book_title, publisher, category,# enzyme (id, name, location, product, chromosome, omim, porphyria) result)# medicine_enzyme_interaction (enzyme_id, medicine_id, interaction_type) # movie (movie_id, title, year, director, budget_million, gross_worldwide)# # culture_company (company_name, type, incorporated_in, group_equity_shareholding,### What is the total count of enzymes? book_club_id, movie_id)SELECT count( * ) FROM enzyme #### List categories that have at least two books after year 1989.### SQLite SQL tables, with their properties: # Let's think step by step#(ID INTEGER PRIMARY KEY, Grape TEXT UNIQUE, Color TEXT ) / * 3 example rows: # buildings (id, name, city, height, stories, status) # companies (id, name, headquarters, industry, sales_billion, profits_billion, assets_billion, market_value_billion) # office_locations (building_id, company_id, move_in_year) # ### Show the industries shared by companies whose headquarters are \"USA\" and # # Thus, the answer for the question is: List categories that have at least twoSELECT * FROM grapes LIMIT 3; companies whose headquarters are \"China\". 
books after year 1989.ID Grape Color SELECT Industry FROM Companies WHERE Headquarters = \"USA\" INTERSECT SELECT Industry SELECT category FROM book_club WHERE YEAR > 1989 GROUP BY category HAVING count( * )1 Barbera Red FROM Companies WHERE Headquarters = \"China\" >= 22 Cabernet Franc Red3 Cabernet Sauvingnon Red ### SQLite SQL tables, with their properties:/ # ### SQLite SQL tables, with their properties:# stadium (stadium_id, location, name, capacity, highest, lowest, average) #CREATE TABLE appellations ( # singer (singer_id, name, country, song_name, song_release_year, age, is_male) # stadium (stadium_id, location, name, capacity, highest, lowest, average)No INTEGER PRIMARY KEY, # concert (concert_id, concert_name, theme, stadium_id, year) # singer (singer_id, name, country, song_name, song_release_year, age, is_male)Appelation TEXT UNIQUE, # singer_in_concert (concert_id, singer_id) # concert (concert_id, concert_name, theme, stadium_id, year)County TEXT, # # singer_in_concert (concert_id, singer_id)State TEXT, ### How many singers do we have? #Area TEXT, ### How many singers do we have?isAVA TEXT)/ *3 example rows:SELECT * FROM appellations LIMIT 3;No Appelation County State Area isAVA1 Alexander Valley Sonoma California North Coast Yes2 Amador County Amador California Sierra Foothills No3 Amador-Mendocino-Sonoma Counties N/A California N/A No/CREATE TABLE wine (No INTEGER,Grape TEXT,Winery TEXT,Appelation TEXT,State TEXT,Name TEXT,Year INTEGER,Price INTEGER,Score INTEGER,Cases INTEGER,Drink TEXT,FOREIGN KEY (Grape) REFERENCES grapes(Grape),FOREIGN KEY (Appelation) REFERENCES appellations(Appelation))/ *3 example rows:SELECT * FROM wine LIMIT 3;No Grape Winery Appelation State Name Year Price Score Cases Drink1 Zinfandel Robert Biale St. Helena California Old Kraft Vineyard 2008 44 93 275now2 Zinfandel Chiarello Family Napa Valley California Giana 2008 35 93 480 now3 Zinfandel Robert Biale Napa Valley California Black Chicken 2008 40 91 2700 2012/", "figure_id": "tab_8", "figure_label": "grapes", "figure_type": "table" } ]
Chang-You Tai; Ziru Chen; Tianshu Zhang; Xiang Deng; Huan Sun
[ { "authors": "Anthony Brohan; Yevgen Chebotar; Chelsea Finn; Karol Hausman; Alexander Herzog; Daniel Ho; Julian Ibarz; Alex Irpan; Eric Jang; Ryan Julian", "journal": "", "ref_id": "b0", "title": "Do as i can, not as i say: Grounding language in robotic affordances", "year": "2023" }, { "authors": " Pmlr", "journal": "", "ref_id": "b1", "title": "", "year": "" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Shuaichen Chang; Jun Wang; Mingwen Dong; Lin Pan; Henghui Zhu; Alexander Hanbo Li; Wuwei Lan; Sheng Zhang; Jiarong Jiang; Joseph Lilien; Steve Ash; William Yang Wang; Zhiguo Wang; Vittorio Castelli; Patrick Ng; Bing Xiang; ; ", "journal": "", "ref_id": "b3", "title": "Dr.spider: A diagnostic evaluation benchmark towards text-to-SQL robustness", "year": "2023" }, { "authors": "Shuaichen Chang; Jun Wang; Mingwen Dong; Lin Pan; Henghui Zhu; Alexander Hanbo Li; Wuwei Lan; Sheng Zhang; Jiarong Jiang; Joseph Lilien", "journal": "", "ref_id": "b4", "title": "Dr. spider: A diagnostic evaluation benchmark towards text-to-sql robustness", "year": "2023" }, { "authors": "Mark Chen; Jerry Tworek; Heewoo Jun; Qiming Yuan; Henrique Ponde De Oliveira Pinto; Jared Kaplan; Harri Edwards; Yuri Burda; Nicholas Joseph; Greg Brockman", "journal": "", "ref_id": "b5", "title": "Evaluating large language models trained on code", "year": "2021" }, { "authors": "Shijie Chen; Ziru Chen; Huan Sun; Yu Su", "journal": "", "ref_id": "b6", "title": "Error detection for text-to-sql semantic parsing", "year": "2023" }, { "authors": "Ziru Chen; Shijie Chen; Michael White; Raymond Mooney; Ali Payani; Jayanth Srinivasa; Yu Su; Huan Sun", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Text-to-SQL error correction with language models of code", "year": "2023" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b8", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Deborah A Dahl; Madeleine Bates; William M Michael K Brown; Kate Fisher; David S Hunicke-Smith; Christine Pallett; Alexander Pao; Elizabeth Rudnicky; Shriberg", "journal": "", "ref_id": "b9", "title": "Expanding the scope of the atis task: The atis-3 corpus", "year": "1994-03-08" }, { "authors": "Xiang Deng; Ahmed Hassan; Christopher Meek; Oleksandr Polozov; Huan Sun; Matthew Richardson", "journal": "", "ref_id": "b10", "title": "Structure-grounded pretraining for text-to-sql", "year": "2021" }, { "authors": "Ahmed Elgohary; Saghar Hosseini; Ahmed Hassan; Awadallah ", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Speak to your parser: Interactive text-to-SQL with natural language feedback", "year": "2020" }, { "authors": "Ahmed Elgohary; Christopher Meek; Matthew Richardson; Adam Fourney; Gonzalo Ramos; Ahmed Hassan; Awadallah ", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "NL-EDIT: Correcting semantic parse errors through natural language interaction", "year": "2021" }, { "authors": "Nitish Gupta; Mike Lewis", "journal": "", "ref_id": "b13", "title": "Neural compositional denotational 
semantics for question answering", "year": "2018" }, { "authors": "Jungo Su Hongjin; Chen Henry Kasai; Weijia Wu; Tianlu Shi; Jiayi Wang; Rui Xin; Mari Zhang; Luke Ostendorf; Noah A Zettlemoyer; Smith", "journal": "", "ref_id": "b14", "title": "Selective annotation makes language models better fewshot learners", "year": "2023" }, { "authors": "Srinivasan Iyer; Ioannis Konstas; Alvin Cheung; Jayant Krishnamurthy; Luke Zettlemoyer", "journal": "", "ref_id": "b15", "title": "Learning a neural semantic parser from user feedback", "year": "2017" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "", "ref_id": "b16", "title": "Large language models are zero-shot reasoners", "year": "2022" }, { "authors": "Yuntao Li; Bei Chen; Qian Liu; Yan Gao; Jian-Guang Lou; Yan Zhang; Dongmei Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "what do you mean by that?\" a parser-independent interactive approach for enhancing text-to-SQL", "year": "2020" }, { "authors": "Aiwei Liu; Xuming Hu; Lijie Wen; Philip S Yu", "journal": "", "ref_id": "b18", "title": "a. A comprehensive evaluation of chatgpt's zero-shot text-to-sql capability", "year": "2023" }, { "authors": "Pengfei Liu; Weizhe Yuan; Jinlan Fu; Zhengbao Jiang; Hiroaki Hayashi; Graham Neubig", "journal": "ACM Computing Surveys", "ref_id": "b19", "title": "Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing", "year": "2023" }, { "authors": "Joan C Miller; Clifford J Maloney", "journal": "Commun. ACM", "ref_id": "b20", "title": "Systematic mistake analysis of digital computer programs", "year": "1963" }, { "authors": "Sewon Min; Victor Zhong; Luke Zettlemoyer; Hannaneh Hajishirzi", "journal": "", "ref_id": "b21", "title": "Multi-hop reading comprehension through question decomposition and rescoring", "year": "2019" }, { "authors": "Lingbo Mo; Ashley Lewis; Huan Sun; Michael White", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Towards transparent interactive semantic parsing via step-by-step correction", "year": "2022" }, { "authors": "Arpit Narechania; Adam Fourney; Bongshin Lee; Gonzalo Ramos", "journal": "", "ref_id": "b23", "title": "Diy: Assessing the correctness of natural language to sql systems", "year": "2021" }, { "authors": "Ansong Ni; Srini Iyer; Dragomir Radev; Veselin Stoyanov; Wen-Tau Yih; Sida Wang; Xi Victoria; Lin ", "journal": "", "ref_id": "b24", "title": "LEVER: Learning to verify language-to-code generation with execution", "year": "2023" }, { "authors": " Pmlr", "journal": "", "ref_id": "b25", "title": "", "year": "" }, { "authors": "Mohammadreza Pourreza; Davood Rafiei", "journal": "", "ref_id": "b26", "title": "Din-sql: Decomposed in-context learning of textto-sql with self-correction", "year": "2023" }, { "authors": "Jiexing Qi; Jingyao Tang; Ziwei He; Xiangpeng Wan; Yu Cheng; Chenghu Zhou; Xinbing Wang; Quanshi Zhang; Zhouhan Lin", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "RASAT: Integrating relational structures into pretrained Seq2Seq model for text-to-SQL", "year": "2022" }, { "authors": "Nitarshan Rajkumar; Raymond Li; Dzmitry Bahdanau", "journal": "", "ref_id": "b28", "title": "Evaluating the text-to-sql capabilities of large language models", "year": "2022" }, { "authors": "Torsten Scholak; Nathan Schucher; Dzmitry Bahdanau", "journal": "", "ref_id": "b29", "title": "Picard: Parsing incrementally for 
constrained auto-regressive decoding from language models", "year": "2021" }, { "authors": "Peter Shaw; Ming-Wei Chang; Panupong Pasupat; Kristina Toutanova", "journal": "", "ref_id": "b30", "title": "Compositional generalization and natural language variation: Can a semantic parsing approach handle both", "year": "2021" }, { "authors": "Bailin Wang; Richard Shin; Xiaodong Liu; Oleksandr Polozov; Matthew Richardson", "journal": "", "ref_id": "b31", "title": "Rat-sql: Relation-aware schema encoding and linking for textto-sql parsers", "year": "2020" }, { "authors": "Jason Wei; Yi Tay; Rishi Bommasani; Colin Raffel; Barret Zoph; Sebastian Borgeaud; Dani Yogatama; Maarten Bosma; Denny Zhou; Donald Metzler", "journal": "Transactions on Machine Learning Research", "ref_id": "b32", "title": "Emergent abilities of large language models", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Fei Xia; Ed H Chi; V Quoc; Denny Le; Zhou", "journal": "", "ref_id": "b33", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Tomer Wolfson; Daniel Deutch; Jonathan Berant", "journal": "", "ref_id": "b34", "title": "Weakly supervised text-to-sql parsing through question decomposition", "year": "2022" }, { "authors": "Tomer Wolfson; Mor Geva; Ankit Gupta; Matt Gardner; Yoav Goldberg; Daniel Deutch; Jonathan Berant", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b35", "title": "Break it down: A question understanding benchmark", "year": "2020" }, { "authors": "Navid Yaghmazadeh; Yuepeng Wang; Isil Dillig; Thomas Dillig", "journal": "OOPSLA", "ref_id": "b36", "title": "Sqlizer: query synthesis from natural language", "year": "2017" }, { "authors": "Ziyu Yao; Yu Su; Huan Sun; Wen-Tau Yih", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Model-based interactive semantic parsing: A unified framework and a text-to-SQL case study", "year": "2019" }, { "authors": "Ziyu Yao; Yiqi Tang; Wen-Tau Yih; Huan Sun; Yu Su", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "An imitation game for learning semantic parsers from user interaction", "year": "2020" }, { "authors": "Tao Yu; Chien-Sheng Wu; Xi Victoria Lin; Yi Chern Tan; Xinyi Yang; Dragomir Radev; Caiming Xiong", "journal": "", "ref_id": "b39", "title": "Grappa: Grammar-augmented pre-training for table semantic parsing", "year": "2021" }, { "authors": "Tao Yu; Rui Zhang; Kai Yang; Michihiro Yasunaga; Dongxu Wang; Zifan Li; James Ma; Irene Li; Qingning Yao; Shanelle Roman", "journal": "", "ref_id": "b40", "title": "Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task", "year": "2018" }, { "authors": "M John; Raymond J Zelle; Mooney", "journal": "", "ref_id": "b41", "title": "Learning to parse database queries using inductive logic programming", "year": "1996" }, { "authors": "Jichuan Zeng; Xi Victoria Lin; C H Steven; Richard Hoi; Caiming Socher; Michael Xiong; Irwin Lyu; King", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Photon: A robust cross-domain textto-SQL system", "year": "2020" }, { "authors": "Ruiqi Zhong; Tao Yu; Dan Klein", "journal": "", "ref_id": "b43", "title": "Semantic evaluation for text-to-sql with distilled test suites", "year": "2020" }, { "authors": "Denny Zhou; Nathanael Schärli; Le Hou; Jason Wei; Nathan Scales; Xuezhi Wang; Dale Schuurmans; Claire 
Cui; Olivier Bousquet; Ed H Quoc V Le; Chi", "journal": "", "ref_id": "b44", "title": "Least-to-most prompting enables complex reasoning in large language models", "year": "2023" } ]
[]
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b6", "b13", "b8", "b8", "b14", "b19", "b2", "b1", "b15", "b1", "b15", "b9" ], "table_ref": [], "text": "In recent years, reinforcement learning (RL) has achieved huge success in various aspects (Le et al., 2022;Li et al., 2022;Silver et al., 2018), especially in the field of games. However, due to the increased safety requirements in practice, researchers are starting to consider the constraint satisfaction in RL. Compared with unconstrained RL, constrained RL (CRL) incorporates certain constraints during the process of maximizing cumulated rewards, which provides a framework to model several important topics in RL, such as safe RL (Paternain et al., 2022), highlighting the importance of this problem in industrial applications.\nThe current methods for solving the CRL problem can be mainly classified into two categories: primal-dual method (Paternain et al., 2022;Stooke et al., 2020;Zhang et al., 2020;Altman, 1999) and feasible region method (Achiam et al., 2017;Yang et al., 2020). The primal-dual method introduces the Lagrangian multiplier to convert the constrained optimization problem into an unconstrained dual problem by penalizing the infeasible behaviours, promising the CRL problem to be resolved in a first-order manner. Despite the primal-dual framework providing a way to solve CRL in first-order manner, the update of the dual variable, i.e., the Lagrangian multiplier, tends to be slow and unstable, affecting the overall convergent speed of the algorithms. In contrast, the feasible region method provides a faster learning method by introducing the concept of the feasible region into the trust region method. With either searching in the feasible region (Achiam et al., 2017) or projecting into the feasible region (Yang et al., 2020), the feasible region method can guarantee the generated policies stay in the feasible region. However, the introduction of the feasible region in the proposed method relies on computationally expensive second-order optimization using the inverse Fisher information matrix. This approach can lead to inaccurate estimations of the feasible region and potential constraint violations, as reported in previous studies (Ray et al., 2019).\nTo address the existing issues mentioned above, this paper proposed the Constrained Proximal Policy Optimization (CPPO) algorithm to solve the CRL problem in a first-order, easy-to-implement way. CPPO employs a two-step Expectation-Maximization approach to solve the problem by firstly calculating the optimal policy (E-step) and then conducting a first-order update to reduce the distance between the current policy and the optimal policy (M-step), eliminating the usage of the Lagrangian multiplier and the second-order optimization. The main contributions of this work are summarized as follows:\n• To our best knowledge, the proposed method is the first first-order feasible region method without using dual variables or second-order optimization, which significantly reduces the difficulties in tuning hyperparameters and the computing complexity.\n• An Expectation-Maximization (EM) framework based on advantage value and probability ratio is proposed for solving the CRL problem efficiently. 
By converting the CRL problem into a probabilistic inference problem, the CRL problem can be solved in a first-order manner without dual variables.
• To solve the convex optimization problem in the E-step, we establish the relationship between the probability ratios and the KL divergence, and develop an iterative heuristic algorithm from a geometric perspective.
• A recovery update is developed for when the current policy violates the constraint. Inspired by bang-bang control, this update strategy can improve constraint satisfaction and reduce the switching frequency between the normal update and the recovery update.
• The proposed method is evaluated in several benchmark environments. The results demonstrate its comparable performance with other baselines in complex environments.
This paper is organized as follows. Section 2 introduces the concept of constrained Markov decision processes and presents an overview of related work in the field. The Expectation-Maximization framework and the technical details of the proposed constrained proximal policy optimization method are presented in Section 3. Section 4 verifies the effectiveness of the proposed method through several testing scenarios, and an ablation study is conducted to show the effectiveness of the proposed recovery update. Section 5 states the limitations and the broader impact of the proposed method. Finally, a conclusion is drawn in Section 6.
2 Preliminary and Related Work" }, { "figure_ref": [], "heading": "Constrained Markov Decision Process", "publication_ref": [], "table_ref": [], "text": "A constrained Markov decision process (CMDP) is a mathematical framework for modelling decision-making problems subject to a set of cost constraints. A CMDP can be defined by a tuple (S, A, P, r, γ, µ, C), where S is the state space, A is the action space, P : S × A × S → (0, 1) is the transition kernel, r : S × A → R is the reward function, γ ∈ (0, 1) is the discount factor, µ : S → (0, 1) is the initial state distribution, and $C := \{c_i \mid c_i : S \times A \rightarrow \mathbb{R},\ i = 1, 2, \dots, m\}$ is the set of m cost functions. For simplicity, we only consider a CRL problem with one constraint in the following paper and use c to represent the cost function. Note that, although we restrict our discussion to the case with only one constraint, the method proposed in this paper can be naturally extended to the multiple-constraint case. However, the result may not be as elegant as in the one-constraint case.
Compared with the common Markov decision process (MDP), a CMDP introduces a constraint on the cumulative cost to restrict the agent's policies. Considering a policy $\pi(a \mid s) : S \times A \rightarrow (0, 1)$, the goal of an MDP is to find the π that maximizes the expected discounted return $J_r(\pi) = \mathbb{E}_{\tau}[\sum_{t=0}^{\infty} \gamma^t r(s_t)]$, where τ denotes the trajectories generated under π. Based on these settings, a CMDP applies a threshold d on the expected discounted cost return $J_c(\pi) = \mathbb{E}_{\tau}[\sum_{t=0}^{\infty} \gamma^t c(s_t)]$. Thus, the CMDP problem can be formulated as finding a policy $\pi^*$ such that $\pi^* = \arg\max_{\pi} J_r(\pi)$ s.t. $J_c(\pi^*) \le d$.
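As a small illustration of the quantities just defined, the sketch below computes Monte-Carlo estimates of the discounted return $J_r$ and the discounted cost return $J_c$ from sampled trajectories and checks the constraint $J_c(\pi) \le d$. It is only an illustrative estimator under an assumed trajectory format, not part of the CPPO algorithm itself; the threshold value is an arbitrary example.

```python
import numpy as np

def discounted_return(values, gamma):
    """Compute sum_t gamma^t * values[t] for one trajectory."""
    discounts = gamma ** np.arange(len(values))
    return float(np.sum(discounts * np.asarray(values)))

def estimate_objectives(trajectories, gamma):
    """trajectories: list of dicts holding per-step 'rewards' and 'costs' arrays.
    Returns Monte-Carlo estimates of J_r(pi) and J_c(pi)."""
    j_r = np.mean([discounted_return(t["rewards"], gamma) for t in trajectories])
    j_c = np.mean([discounted_return(t["costs"], gamma) for t in trajectories])
    return j_r, j_c

# Example: check whether the sampled policy satisfies an (illustrative) cost threshold d.
trajs = [{"rewards": np.random.rand(100), "costs": np.random.rand(100) * 0.1}
         for _ in range(8)]
j_r, j_c = estimate_objectives(trajs, gamma=0.99)
d = 25.0  # hypothetical cost constraint for this example
print(f"J_r = {j_r:.2f}, J_c = {j_c:.2f}, feasible: {j_c <= d}")
```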
The advantage function A and the cost advantage function A_c are defined as $A(s_t, a_t) = Q(s_t, a_t) - V(s_t)$ and $A_c(s_t, a_t) = Q_c(s_t, a_t) - V_c(s_t)$, where $Q(s_t, a_t) = \mathbb{E}_{\tau}[\sum_{t=0}^{\infty} \gamma^t r \mid s_0 = s_t, a_0 = a_t]$ and $V(s_t) = \mathbb{E}_{\tau}[\sum_{t=0}^{\infty} \gamma^t r \mid s_0 = s_t]$ are the corresponding Q-value and V-value for the reward function, and $Q_c(s_t, a_t) = \mathbb{E}_{\tau}[\sum_{t=0}^{\infty} \gamma^t c \mid s_0 = s_t, a_0 = a_t]$ and $V_c(s_t) = \mathbb{E}_{\tau}[\sum_{t=0}^{\infty} \gamma^t c \mid s_0 = s_t]$ are the corresponding Q-value and V-value for the cost function. Note that both A and A_c in the batch are centered to move their means to 0, respectively." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Proximal Policy Optimization (PPO)", "publication_ref": [ "b12", "b17", "b18" ], "table_ref": [], "text": "Proximal policy optimization (PPO) (Schulman et al., 2017) is an on-policy RL algorithm renowned for its stable performance and easy implementation. Based on first-order optimization, PPO addresses the unconstrained RL problem through the surrogate objective function proposed in Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a). With the clipping and early-stopping tricks, PPO can keep the new policy within the trust region. Thanks to its stability and superior performance, the PPO algorithm has been employed in various subfields of RL, such as multi-agent RL (Yu et al., 2021) and meta-RL (Yu et al., 2020). However, due to the extra constraint requirements, the direct application of PPO to CRL problems is not feasible. The extra constraint requires PPO to be restricted not only by the trust region but also by the constraint feasible region, which significantly increases the difficulty of conducting first-order optimization. Despite the difficulties in directly applying PPO to CRL, researchers are still searching for a PPO-like method that solves CRL problems with stable and superior performance." }, { "figure_ref": [], "heading": "Constrained Reinforcement Learning", "publication_ref": [ "b8", "b14", "b19", "b1", "b15", "b14", "b1", "b9", "b19", "b7", "b19", "b7", "b0" ], "table_ref": [], "text": "The current methods for solving the CRL problem can be mainly divided into two categories: primal-dual methods (Paternain et al., 2022;Stooke et al., 2020;Zhang et al., 2020) and feasible region methods (Achiam et al., 2017;Yang et al., 2020). The primal-dual method converts the original problem into a convex dual problem by introducing the Lagrangian multiplier. By updating the policy parameters and the Lagrangian multiplier iteratively, the policies obtained by the primal-dual method gradually converge towards a feasible solution. However, the use of the Lagrange multiplier introduces extra hyperparameters into the algorithm and slows down its convergence due to the characteristic of the integral controller. Stooke et al. (2020) try to solve this issue by introducing PID control into the update of the Lagrangian multiplier, but this modification introduces more hyperparameters and makes the algorithm more complex. Different from the primal-dual method, the feasible region method estimates the feasible region within the trust region using a linear approximation and subsequently determines the new policy based on the estimated feasible region. A representative method is constrained policy optimization (CPO).
By converting the CRL problem to a quadratically constrained linear program, CPO (Achiam et al., 2017) can solve the problem efficiently. However, uncertainties inside the environment may cause an inaccurate cost assessment, which affects the estimation of the feasible region and causes the learned policy to fail to meet the constraint requirements, as shown in Ray et al. (2019). Another issue of CPO is that it uses the Fisher information matrix to estimate the KL divergence in a quadratic approximation, which is computationally complex and inflexible with respect to the network structure.
To address the second-order issue in CRL, several researchers (Zhang et al., 2020;Liu et al., 2022) proposed EM-based algorithms in a first-order manner. FOCOPS (Zhang et al., 2020) obtains the optimal policy from advantage values, akin to maximum entropy RL, and performs a first-order update to reduce the KL divergence between the current policy and the optimal policy. Despite its significant improvement in performance compared to CPO, FOCOPS still necessitates the use of a primal-dual method to attain a feasible optimal policy, which introduces many hyperparameters and results in a more complex tuning process. CVPO (Liu et al., 2022) extends the maximum a posteriori policy optimization (MPO) (Abdolmaleki et al., 2018) method to the CRL problem, allowing for the efficient calculation of the optimal policy from the Q-value in an off-policy manner. However, this algorithm still requires the primal-dual framework in the optimal policy calculation and necessitates additional sampling during training, increasing the complexity of implementation. Thus, the development of a simple-to-implement, first-order algorithm with superior performance remains a foremost goal for researchers in the CRL subfield." }, { "figure_ref": [], "heading": "Constrained Proximal Policy Optimization (CPPO)", "publication_ref": [], "table_ref": [], "text": "As mentioned in Section 2, existing CRL methods often require second-order optimization for feasible region estimation or the use of dual variables for cost satisfaction. These approaches can be computationally expensive or result in slow convergence. To address these challenges, we propose a two-step approach in an EM fashion named Constrained Proximal Policy Optimization (CPPO); the details are presented in this section." }, { "figure_ref": [], "heading": "Modelling CRL as Inference", "publication_ref": [ "b1" ], "table_ref": [], "text": "Instead of directly pursuing an optimal policy to maximize rewards, our approach conceptualizes the constrained reinforcement learning (CRL) problem as a probabilistic inference problem. This is achieved by assessing the reward performance and constraint satisfaction of state-action pairs and subsequently increasing the likelihood of those pairs that demonstrate superior reward performance while adhering to the constraint requirement. Suppose the event that a state-action pair under policy π_θ maximizes reward is represented by the optimality variable O; we assume the likelihood of a state-action pair being optimal is proportional to the exponential of its advantage value: p(O = 1 | (s, a)) ∝ exp(A(s, a)/α), where α is a temperature parameter. Let q(a | s) denote the feasible posterior distribution estimated from the sampled trajectories under the current policy π, p_π(a | s) the probability distribution under policy π, and θ the policy parameters.
We can obtain the following evidence lower bound (ELBO) J(q, θ) using the surrogate function (see Appendix B for a detailed proof):
$$\log p_{\pi_\theta}(O = 1) \ge \mathbb{E}_{s \sim d^\pi, a \sim \pi}\left[\frac{q(a \mid s)}{p_\pi(a \mid s)} A(s, a)\right] - \alpha D_{KL}(q \parallel \pi_\theta) + \alpha \log p(\theta) = J(q, \theta), \tag{1}$$
where $d^\pi$ is the state distribution under the current policy π and p(θ) is a prior distribution over the policy parameters. Considering that q(a | s) is a feasible policy distribution, we also have the following constraint (Achiam et al., 2017):
$$J_c(\pi) + \frac{1}{1-\gamma} \mathbb{E}_{s \sim d^\pi, a \sim \pi}\left[\frac{q(a \mid s)}{p_\pi(a \mid s)} A_c(s, a)\right] \le d, \tag{2}$$
where d is the cost constraint. By performing iterative optimization of the feasible posterior distribution q (E-step) and the policy parameters θ (M-step), the lower bound J(q, θ) can be increased, resulting in an enhancement of the likelihood of state-action pairs that have the potential to maximize rewards." }, { "figure_ref": [], "heading": "E-Step", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Surrogate Constrained Policy Optimization", "publication_ref": [ "b7" ], "table_ref": [], "text": "As mentioned in the previous section, we first optimize the feasible posterior distribution q to maximize the ELBO in the E-step. The feasible posterior distribution q plays a crucial role in determining the upper bound of the ELBO since the KL divergence is non-negative. Consequently, q needs to be theoretically optimal to maximize the ELBO. By converting the soft KL constraint in Equation (1) into a hard constraint and combining it with the cost constraint in Equation (2), the optimization problem for q can be expressed as follows:
$$\underset{q}{\text{maximize}}\ \mathbb{E}_{s \sim d^\pi, a \sim \pi}\left[\frac{q(a \mid s)}{p_\pi(a \mid s)} A(s, a)\right] \quad \text{s.t.}\ J_c(\pi) + \frac{1}{1-\gamma} \mathbb{E}_{s \sim d^\pi, a \sim \pi}\left[\frac{q(a \mid s)}{p_\pi(a \mid s)} A_c(s, a)\right] \le d, \quad D_{KL}(q \parallel \pi) \le \delta, \tag{3}$$
where δ is the reverse KL divergence bound that determines the trust region. During the E-step, it is important to note that the optimization is independent of θ, meaning that the policy π_θ remains fixed to the current sampled policy π. Even if we know the closed-form expression of $p_{\pi_\theta}$, it is impractical to solve for the closed-form expression of q from Equation (3), as we would still need the closed-form expression of $d^\pi$. Therefore, we opt to represent the solution of q in a non-parametric manner by calculating the probability ratio $v = q(a \mid s)/p_\pi(a \mid s)$ for the sampled state-action pairs, allowing us to avoid explicitly parameterizing q and instead leverage the probability ratio to guide the optimization process. After relaxing the reverse KL divergence constraint with the estimated reverse KL divergence calculated through importance sampling, we can obtain
$$\underset{v}{\text{maximize}}\ \mathbb{E}_{s \sim d^\pi, a \sim \pi}[v A(s, a)] \quad \text{s.t.}\ \mathbb{E}_{s \sim d^\pi, a \sim \pi}[v A_c(s, a)] \le d', \quad \mathbb{E}_{s \sim d^\pi, a \sim \pi}[v \log v] \le \delta, \tag{4}$$
where d' is the scaled cost margin $d' = (1 - \gamma)(d - J_c(\pi))$. Although Equation (4) is a convex optimization problem that can be directly solved through existing convex optimization algorithms, the non-polynomial KL constraint tends to make the optimization computationally expensive. To overcome this issue, the following proposition is proposed to relax Equation (4) into a linear optimization problem with a quadratic constraint.
Proposition 3.1. Denote by v the probability ratios $q(a \mid s)/p_\pi(a \mid s)$ calculated from sampled trajectories. If there are a sufficient number of sampled v, we have $\mathbb{E}[v] = 1$ and $\mathbb{E}[v \log v] \le \mathrm{Var}(v - 1)$.
With Proposition 3.1, the relationship between the reverse KL divergence and the l2-norm of the vector v - 1 is established.
Also, considering that the expectation of v equals 1, the optimization variable can be changed from v to v - 1. Let $\mathbf{v}$ denote the vector consisting of the values v - 1. Replacing the reverse KL divergence constraint with the l2-norm constraint, Equation (4) can be rewritten in the form of vector multiplication:
$$\underset{\mathbf{v}}{\text{maximize}}\ \mathbf{v} \cdot A \quad \text{s.t.}\ \mathbf{v} \cdot A_c \le N d', \quad \|\mathbf{v}\|_2 \le 2N\delta', \quad \mathrm{E}(\mathbf{v}) = 0, \quad \mathbf{v} > -1 \ \text{element-wise}, \tag{5}$$
where A and A_c are the advantage value vectors for reward and cost (for all sampled state-action pairs in one rollout), respectively, N is the number of state-action pair samples, δ' is the l2-norm bound, and the element-wise lower bound of $\mathbf{v}$ is -1, as v > 0. Thus, the optimal feasible posterior distribution q, expressed through v, can be obtained by solving the aforementioned optimization problem. Remark 3.2. By replacing the non-polynomial KL constraint with an l2-norm constraint, the original optimization problem in Equation (4) can be reformulated as a geometric problem. This reformulation enables the use of the proposed heuristic method to efficiently solve the problem without the need for dual variables. Remark 3.3. Our proposed method builds upon the idea presented in CVPO (Liu et al., 2022) of treating the CRL problem as a probabilistic inference problem. However, our approach improves upon their idea in two significant ways. Firstly, the probabilistic inference problem in our method is constructed based on the advantage value, which is more effective in reducing the bias in estimating the cost return than the Q-value used in CVPO. Secondly, while CVPO tries to directly calculate the value of q(a|s), our method employs the probability ratio v to represent q. By replacing q(a|s) with v, our method only needs to find a vector of v whose elements are positive and satisfy E[v] = 1, thereby negating the need to sample multiple actions in one state to calculate the extra normalizer that ensures q is a valid distribution. This results in a significant reduction in computational complexity." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Recovery update", "publication_ref": [ "b9", "b4" ], "table_ref": [], "text": "Although the optimal solution q in Section 3.2.1 is applicable when the current policy is out of the feasible region, the inconsistency between the optimal q and π_θ and the inaccurate cost evaluations tend to result in the generation of infeasible policies, as demonstrated in Ray et al. (2019), where CPO fails to satisfy the constraint. To overcome this issue, a recovery update strategy is proposed for pushing the agent back to the feasible region. This strategy aims to minimize costs while preserving or minimizing any reduction in the overall reward return. In the event that it is not possible to recover from the infeasible region without compromising the reward return, the strategy aims to identify an optimal policy within the feasible region that minimizes the adverse impact on the reward return. The optimization problem in the recovery update can be expressed as
$$\begin{cases} \underset{\mathbf{v}}{\text{maximize}}\ \mathbf{v} \cdot A, & \text{if no } \mathbf{v} \text{ with } \mathbf{v} \cdot A \ge 0 \text{ exists when } \mathbf{v} \cdot A_c \le N d', \\ \underset{\mathbf{v}}{\text{minimize}}\ \mathbf{v} \cdot A_c, & \text{otherwise}, \end{cases} \quad \text{s.t.}\ \|\mathbf{v}\|_2 \le 2N\delta', \ \mathrm{E}(\mathbf{v}) = 0, \ \mathbf{v} > -1 \ \text{element-wise}. \tag{6}$$
Figure 1 illustrates the recovery update strategy from a geometric perspective. The blue, red, and yellow arrows represent the direction of minimizing cost, maximizing reward, and the recovery update, respectively. The reward preservation region is defined by the zero reward boundary, which is depicted as the dashed line perpendicular to the red arrow.
As a result, the semi-circle encompassing the red arrow indicates a positive increment in reward. Case 1 and Case 3 illustrate the case when the reward preservation region has an intersection with the feasible region. In these cases, we choose the direction of minimizing cost within the reward preservation region, e.g., the recovery update direction is coincident with the dashed line in Case 1, and the recovery update direction is coincident with the blue arrow in Case 3. Case 2 shows the case when there is no intersection between the reward preservation region and the feasible region. In this case, the direction with the least damage to reward is chosen. If we use an angle α to represent the direction of update, then we can have α = Clip(α, max(θ f , θ A + π/2), π), where θ A represents the direction of A, θ f is the minimum angle that can point toward the feasible region. To further improve the constraint satisfaction performance, a switching mechanism inspired by bangbang control (Lasalle, 1960) is introduced. As shown in Figure 2, the agent will initially conduct normal update in Section 3.2.1; when the agent violates the cost constraint, it will switch to recovery update to reduce the cost until the cost is lower than the lower switch cost. By incorporating this switching mechanism, a margin is created between the lower switch cost and the cost constraint. This margin allows for a period of normal updates before the recovery update strategy is invoked.\nAs a result, this mechanism prevents frequent switching between the two strategies, leading to improved performance in both reward collection and cost satisfaction. This switching mechanism effectively balances the exploration of reward-maximizing actions with the need to maintain constraint satisfaction." }, { "figure_ref": [], "heading": "Heuristic algorithm from geometric interpretation", "publication_ref": [], "table_ref": [], "text": "Section 3.2 and Section 3.4 provide a framework for solving CRL problem in theory. However, solving Equation ( 5) and Equation (6) in Section 3.2 is a tricky task in practice. To reduce the computation complexity, an iterative heuristic algorithm is proposed to solve this optimization problem from geometric interpretation. Recall Equation ( 5), the l 2 -norm can be interpreted as a radius constraint from the geometric perspective. Additionally, both the objective function and the cost function are linear, indicating that the optimal solution lies on the boundary of the feasible region. By disregarding the element-wise bounds in Equation ( 5), we can consider the optimization problem as finding a optimal angle θ ′ on the A-A c plane, in accordance with Theorem 3.4. The optimal solution can be expressed as v = 2N δ ′ (cos θ ′ Ãc + sin θ ′ Ã), where à and Ãc are the orthogonal unit vectors of A and A c respectively. Considering Assumption 3.5, we proposed a iterative heuristic algorithm to solve Equation ( 5) by firstly calculating the optimal angle θ ′ regardless the element-wise bound and obtain a initial solution v, then clip v according to the element-wise bound and mask the clipped value, and iteratively update the rest unmasked elements according to aforementioned steps until all elements in v are satisfy the element-wise bound. The detailed steps are outlined in Appendix C. For the recovery update in Section 3.2.2, the same algorithm can be used to find the angle that satisfy v\n• A c = N d ′ or v • A = 0.\nTheorem 3.4. Given a feasible optimization problem of the form:\nmaximize v v • A s.t. 
v • A c ≤ D, ∥v∥ 2 ≤ 2N δ ′ , E(v) = E(A) = E(A c ) = 0,\nwhere v, A, and A c are N -dimensional vectors, then the optimal solution v will lie in the A-A c plane determined by A c and A. Assumption 3.5. If the optimization problem in Theorem 3.4 has an optimal solution v opt = [v 1 , v 2 , . . . ], and the same problem with an element-wise lower bound constraint b has an optimal solution v ′ opt = [v ′ 1 , v ′ 2 , . . . ], then v ′ t = b for all t where v t ≤ b. Remark 3.6. By utilizing the proposed heuristic algorithm, the optimal solution to Equation (5) can be obtained in just a few iterations. The time complexity of each iteration is O(n), where n represents the number of unmasked elements. As a result, the computational complexity is significantly reduced compared to conventional convex optimization methods." }, { "figure_ref": [], "heading": "M-Step", "publication_ref": [ "b7", "b0", "b16" ], "table_ref": [], "text": "After determining the optimal feasible posterior distribution q that maximizes the upper bound of the ELBO, an M-step is implemented to maximize the ELBO by updating the policy parameters θ in a supervised learning manner. Recalling the definition of the ELBO in Equation (1) in Section 3.1 and dropping the part that is independent of θ, we obtain the following optimization problem\nmaximize θ -αD KL (q ∥ π θ ) + α log p(θ),(7)\nNote that if we assume p(θ) is a Gaussian distribution, then log p(θ) can be converted into D KL (π ∥ π θ ) (see Appendix B for details). Using the same trick as in Section 3.2.1 to convert the soft KL constraint into a hard KL constraint, the supervised learning problem in the M-step can be expressed as\nminimize θ D KL (q ∥ π θ ) s.t. D KL (π ∥ π θ ) ≤ δ,(8)\nNote that D KL (π ∥ π θ ) is constrained to be lower than δ so that the current policy π can still be reached during the E-step of the next update iteration, which makes the update more robust.\nFor Equation (7), it is common practice to directly minimize the KL divergence, as in CVPO (Liu et al., 2022) and MPO (Abdolmaleki et al., 2018). However, recalling Equation (6), it is evident that the values of the surrogate reward and cost are deeply connected to the projection of v onto the A-A c plane, while the KL divergence can hardly reflect this kind of relationship between v and the surrogate values. Consequently, we choose to replace the original KL objective function with the l 2 -norm E [∥v -p π θ /p π ∥ 2 ], where v is the optimal probability ratio obtained in the E-step and p π θ /p π is the probability ratio under the policy parameter θ. With this replacement, the optimization problem can be treated as a fixed-target tracking control problem. This perspective enables us to plan tracking trajectories that can consistently satisfy the cost constraint, enhancing the ability to maintain cost satisfaction throughout the learning process. The optimization problem after the replacement can be rewritten as\nminimize θ E [∥v -p π θ /p π ∥ 2 ] s.t. D KL (π ∥ π θ ) ≤ δ,(9)\nTo ensure that the tracking trajectories satisfy the cost constraint at nearly all locations, we calculate several recovery targets v ′ under different δ ′′ and guide p π θ /p π toward different targets according to the l 2 -norm of p π θ /p π , so that even when ∥p π θ /p π ∥ 2 is much smaller than 2N δ ′ , the new policy can still satisfy the cost constraint.
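As a concrete (and simplified) illustration of Equation (9), the tracking objective can be written in a few lines of PyTorch; the function below is our own sketch, assumes the E-step has already produced the target ratios v, and omits the forward-KL clipping and the recovery-target schedule discussed next.

import torch

def m_step_tracking_loss(log_prob_new, log_prob_old, v_target):
    # Sketch of the M-step objective in Equation (9): make the current probability
    # ratio p_pi_theta / p_pi track the optimal ratio v obtained in the E-step.
    ratio = torch.exp(log_prob_new - log_prob_old.detach())  # p_pi_theta / p_pi
    return torch.mean((v_target.detach() - ratio) ** 2)      # E[ ||v - p_pi_theta/p_pi||^2 ]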
Moreover, inspired by proportional navigation (Yanushevsky, 2018), we also modify the recovery update gradient from (v -p π θ /p π ) ∂π θ /∂θ to (β(v -p π θ /p π ) + (1 -β)A ′ c ) ∂π θ /∂θ to reduce the cost during the tracking, where A ′ c is the projection of v -p π θ /p π onto the cost advantage vector A c . In accordance with Theorem 3.7, a lower-bound clipping mechanism similar to that of PPO is applied when updating p π θ /p π in the M-step to satisfy the forward KL constraint (see Appendix C for details).\nTheorem 3.7. For a probability ratio vector v, if the variance of v is constant, then the upper bound of the approximated forward KL divergence D KL (π ∥ π θ ) will decrease as the element-wise lower bound of v increases.\nApart from the E-step and M-step introduced in Section 3.2 and Section 3.4, our method shares the same Generalized Advantage Estimator (GAE) technique (Schulman et al., 2015b) with PPO in calculating the advantage values A and A c . The main steps of CPPO are summarized in Appendix C." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Experiment", "publication_ref": [ "b9", "b1", "b1", "b9", "b9" ], "table_ref": [], "text": "In this section, the Safety Gym benchmark environments (Ray et al., 2019) and the Circle environment (Achiam et al., 2017) are used to verify and evaluate the performance of the proposed method. Five test scenarios, namely CarPush, PointGoal, PointPush, PointCircle, and AntCircle, are evaluated. Detailed information about the test scenarios can be found in Appendix D. Three algorithms are chosen as benchmarks to compare the learning curves and the constraint satisfaction: CPO (Achiam et al., 2017), the PPO-Lagrangian method (simplified as PPO_lag), and the TRPO-Lagrangian method (simplified as TRPO_lag) (Ray et al., 2019). CPO is chosen as the representative of the feasible region methods. PPO_lag and TRPO_lag are treated as applications of the primal-dual method in first-order and second-order optimization, respectively. TRPO and PPO are also used in this section as unconstrained performance references. For a fair comparison, all of the algorithms use the same policy network and critic network. The details of the hyperparameter settings are listed in Appendix E. Performance and Constraint Satisfaction: Figure 3 compares the learning curves of the proposed method and other benchmark algorithms in terms of the episodic return and the episodic cost. The first row records the undiscounted episodic return for performance comparison, and the second row shows the learning curves of the episodic cost for constraint satisfaction analysis, where the red dashed line indicates the cost constraint. The learning curves for the Push and Goal environments are averaged over 6 random seeds, while those for the Circle environments are averaged over 4 random seeds.\nThe curve itself represents the mean value, and the shaded region indicates the standard deviation. In terms of performance comparison, it was observed that CPO can achieve the highest reward return in PointGoal and PointCircle. The proposed CPPO method, on the other hand, achieves similar or even higher reward returns in the remaining test scenarios. However, when considering constraint satisfaction, CPO fails to satisfy the constraint in all four tasks due to approximation errors, as previously reported in Ray et al. (2019). In contrast, CPPO successfully satisfies the constraint in all five environments, showing the effectiveness of the proposed recovery update.
Referring to the learning curves in the Circle scenarios, it can be seen that the primal-dual based CRL methods, i.e., PPO_lag and TRPO_lag, suffer from the slow and unstable update of the dual variable, causing conservative performance in PointCircle and slow cost satisfaction in AntCircle. On the other hand, CPPO achieves a faster learning speed in the Circle environments by eliminating the need for the dual variable. Overall, the experimental results demonstrate the effectiveness of CPPO in solving the CRL problem.\nAblation Study: An ablation study was conducted to investigate the impact of the recovery update in CPPO. Figure 4 presents the reward performance and cost satisfaction of CPPO with and without the recovery update in the PointCircle environment. The results indicate that without the recovery update, CPPO achieves higher reward performance; however, the cost reaches 15, which significantly violates the cost constraint. In contrast, when the recovery update is applied, CPPO successfully satisfies the constraint, thereby demonstrating the importance of the recovery update in ensuring constraint satisfaction." }, { "figure_ref": [], "heading": "Limitations and Broader Impact", "publication_ref": [], "table_ref": [], "text": "Although our proposed method has shown its capability in the test scenarios, there still exist some limitations. Firstly, CPPO is an on-policy constrained RL method, which suffers from lower sampling efficiency compared to off-policy algorithms, potentially limiting its applicability in real-world scenarios.\nAdditionally, the convergence of our method is not yet proven. However, we believe that our work will offer researchers a new EM perspective for using PPO-like algorithms to solve the problem of constrained RL, thereby leading to the development of more efficient and stable constrained RL algorithms." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b3" ], "table_ref": [], "text": "In this paper, we have introduced a novel first-order Constrained Reinforcement Learning (CRL) method called CPPO. Our approach avoids the use of the primal-dual framework and instead treats the CRL problem as a probabilistic inference problem. By utilizing the Expectation-Maximization (EM) framework, we address the CRL problem through two key steps: the E-step, which focuses on deriving a theoretically optimal policy distribution, and the M-step, which aims to minimize the difference between the current policy and the optimal policy. Through the non-parametric representation of the policy using probability ratios, we convert the CRL problem into a convex optimization problem with a clear geometric interpretation. As a result, we propose an iterative heuristic algorithm that efficiently solves this optimization problem without relying on the dual variable. Furthermore, we introduce a recovery update strategy to handle approximation errors in cost evaluation and ensure constraint satisfaction when the current policy is infeasible. This strategy mitigates the impact of approximation errors and strengthens the capability of our method to satisfy constraints. Notably, our proposed method does not require second-order optimization techniques or the use of the primal-dual framework, which simplifies the optimization process. Empirical experiments have been conducted to validate the effectiveness of our proposed method. The results demonstrate that our approach achieves comparable or even superior performance compared to other baseline methods.
This showcases the advantages of our method in terms of simplicity, efficiency, and performance in the field of Constrained Reinforcement Learning.\nA Proof for Propositions and Theorems Proposition 3.1 Denote v as the probability ratios q(a|s) pπ (a|s) calculated from sampled trajectories. If there are sufficient number of sampled v, we have E\n[v] = 1 and E [v log v] ≤ Var(v -1).\nProof. Denote pπ(s, a) and q(s, a) are the probability density function of the state-action distribution under different policies. Considering the divergence between q and pπ are small, we assume that the policy change will not cause the change in state distribution. We denote d(s) as the probability density function of the state distribution. From the definition of the probability density function, we know that pπ(s, a) d (s, a) = 1, Considering the current trajectories are sampled under policy pπ, we can obtain that\nE[v] = pπ(s, a) q(a | s) pπ(a | s) d (s, a) = pπ(s, a) q(a | s) × d(s) pπ(a | s) × d(s) d (s, a) = pπ(s, a) q(s, a) pπ(s, a) d (s, a) = q(s, a) d (s, a) = 1,(10)\nas p(s, a) is a probability density function. Therefore,\nE[v] = 1 is proven. Q.E.D\nTheorem 3.7 For a probability ratio vector v, if the variance of v is constant, then the upper bound of the approximated forward KL divergence DKL(π ∥ π θ ), will decrease as the element-wise lower bound of v increase.\nProof. Using the same symbol in Proof of Proposition 3.1, v is the vector consists of v = pπ θ (s,a)\npπ (s,a) and the definition of the forward KL divergence DKL(π ∥ π θ ) can be expressed as\nDKL(π ∥ π θ ) = pπ(s, a) log pπ(s, a) pπ θ (s, a) = -E [log v] = - log v N = -log( N i=1 vi) 1 N , (11\n)\nwhere N is the number of elements in v. According to the Theorem in Cartwright & Field (1978), we obtain that\nE(v) - N i=1 v 1 N i ≤ 1 2N min vi N i=1 (vi -E(v)) 2 . (12\n)\nAs we know\nE(v) = 1 from Proposition 3.1, N i=1 v 1 N i > 0, and N i=1 (vi -E(v)) 2 = N • Var(v) , we have N i=1 v 1 N i ≥ 1 - Var(v) 2 min vi log N i=1 v 1 N i ≥ log 1 - Var(v) 2 min vi DKL(π ∥ π θ ) ≤ -log 1 - Var(v) 2 min vi ≈ Var(v) 2 min vi . (13\n)\nAs Var(v) is a constant, Equation (13) proves the upper bound of DKL(π ∥ π θ ) is Var(v) 2 min v i , showing that the upper bound of DKL(π ∥ π θ ), will decrease as the element-wise lower bound of v, min vi, increase.\nQ.E.D Theorem 3.4 Given a feasible optimization problem of the form:\nmaximize v v • A s.t. v • Ac ≤ D ∥v∥2 ≤ 2N δ E(v) = E(A) = E(Ac) = 0\nwhere v, A, and Ac are N -dimensional vectors, then the optimal solution v will lie in the A-Ac plane determined by Ac and A.\nProof. Assuming v, A, and Ac can be represented by three orthonormal basis vectors i, j, and k, where v = a1i + b1j + c1k, A = a2i + b2j, and Ac = a3i, then the optimization problem becomes:\nmaximize a 1 ,b 1 a1a2 + b1b2 s.t. a1 ≤ D/a3 a 2 1 + b 2 1 ≤ 4N 2 δ 2 -c 2 1 (14)\nFrom the geometric interpretation, we can find the optimal solution of the above problem always exists on the circle a 2 1 + b 2 1 = 4N 2 δ 2 -c 2 1 . By increasing the radius of the circle, the line a1a2 + b1b2 will have a larger intercept. Thus, the aforementioned problem will get its optimal solution when c1 = 0, i.e., v will lie in the A-Ac plane determined by Ac and A. = log E s∼d q ,a∼q p(O = 1|(s, a)) * pπ θ (s, a) * p(θ) q(s, a)\n≥ E s∼d q ,a∼q log p(O = 1|(s, a)) + log pπ(s, a) q(s, a) + log p(θ)(15)\nwhere d q is the state distribution under theoretical optimal distribution q. 
If we assume that the sampled policy π and q is enough close that d π = d q , then log pπ θ (O = 1) ≥ E s∼d q ,a∼q log p(O = 1|(s, a)) + E s∼d q ,a∼q log pπ θ (s, a) q(s, a) + log p(θ)\n∝ E s∼d π ,a∼q [A(s, a)] + αE s∼d π ,a∼q log pπ θ (a|s) q(a|s) + α log p(θ)\n= E s∼d π ,a∼π q(a|s) pπ(a|s) A(s, a) -αDKL(q ∥ π θ ) + α log p(θ)\nThus, the ELBO in Equation ( 1) is obtained." }, { "figure_ref": [], "heading": "B.2 Derivation in M-step", "publication_ref": [], "table_ref": [], "text": "Recall Equation (7) in Section 3.4, we have following optimization problem\nmaximize θ -αDKL(q ∥ π θ ) + α log p(θ). (17\n)\nConsider θ is a Gaussian prior around the policy parameter of sampled policy θ, i.e., θ ∼ N ( θ, F θ β ). Therefore, the problem above will become maximize\nθ -αDKL(q ∥ π θ ) -αβ(θ -θ) T F -1 θ (θ -θ).(18)\nNote that (θ -θ) T F -1 θ (θ -θ) is the second order estimation of DKL(π ∥ π θ ), we have maximize\nθ -DKL(q ∥ π θ ) -βDKL(π ∥ π θ ).(19)\nBy converting the soft KL constraint into a hard constraint, we can obtain\nminimize θ DKL(q ∥ π θ ) s.t. DKL(π ∥ π θ ) ≤ δ,(20)\nwhich is the same optimization problem as in Equation ( 8)." }, { "figure_ref": [], "heading": "C Details in heuristic algorithm and M-step C.1 Heuristic algorithm", "publication_ref": [], "table_ref": [], "text": "The detailed steps of iteratively heuristic algorithm are shown in Algorithm 1. Note that, after masking, the masked elements are removed from the original vector, which means the size of v ′ ,A ′ c , and A ′ is smaller than v ,Ac, and A. \n′ c & A ′ . Subtract the mean of A ′ c and A ′ to obtain A ′′ c & A ′′ . Initial a new zero vector v ′′ with the same size of v ′ Calculate the l 2 -norm bound of v ′′ , i.e., δ ′ , using D(v ′′ ) = E(v ′′2 ) -E(v ′′ ) 2 . Using QR decomposition to orthonormalize A ′′ and A ′′ c into orthogonal unit vectors Ã′′ c = kA ′′ c and Ã′′ . Find θ ′ that maximize v ′′ A ′′ while satisfy v ′′ A ′′ c ≤ N d ′ -v m A m c -M • mean(v ′ ) • mean(A ′ c ), where M is the number of element in v m , v ′′ = 2N δ ′ (cos θ ′ Ã′′ c + sin θ ′ Ã′′ ).\nConcatenate v m and v ′′ + mean(v ′ ) according to the recorded location to obtain the new v end while Obtain optimal probability ratio v = v + 1." }, { "figure_ref": [ "fig_6" ], "heading": "C.2 Modified update gradient in M-step when conducting recovery update", "publication_ref": [], "table_ref": [], "text": "In the recovery update process described in Section 3.2.2, the gradient update in the M-step is modified from\n(v - pπ θ pπ ) ∂π θ ∂θ to ((β(v - pπ θ pπ ) + (1 -β)A ′ c) ∂πθ ∂θ ,\nwhere A ′ c is the projection of v -pπ θ pπ onto the cost advantage vector Ac.\nIn Figure 5, the tracking trajectories with and without gradient modification are compared. The yellow target point represents the location of v, and the blue start point represents the initial location of pπ θ pπ . The dashed optimal trajectory demonstrates that the optimal way to approach v is to first enter the feasible region quickly and then follow the zero reward boundary. This approach allows the agent to satisfy the constraint while preserving the reward return for most of the trajectory. The blue line represents the trajectory before gradient modification. In this case, pπ θ pπ directly heads towards v, leading to a violation of the cost constraint during the initial part of the tracking. On the other hand, the orange line represents the trajectory after gradient modification, which closely follows the optimal path at the beginning of the tracking. 
This modification ensures that the agent can satisfy the constraint throughout the entire tracking path. " }, { "figure_ref": [], "heading": "C.3 Clipping in M-step for constrain KL divergence", "publication_ref": [], "table_ref": [], "text": "To satisfy the KL constraint in Equation ( 9), we employ a clipping technique similar to PPO to constrain the KL divergence in the M-step. In this case, we clip the lower bound of pπ θ pπ to 0.6. Considering the original loss function E (v -pπ θ pπ ) 2 , after taking the derivative, it can be rewritten as -E (v - " }, { "figure_ref": [], "heading": "D Details about test environments", "publication_ref": [ "b1" ], "table_ref": [ "tab_1" ], "text": "The environment parameters used in our experiments are listed in Table 1. The implementation of the Safety Gym environment can be found at https://github.com/openai/safety-gym as an open-source project. Similarly, the open-source implementation of the Circle environment can be found at https://github.com/ymzhang01/mujococircle. The PointCircle environment was created based on this open-source implementation, following the same settings as described in Achiam et al. (2017). " }, { "figure_ref": [], "heading": "E Details for experiments", "publication_ref": [], "table_ref": [], "text": "The hyperparameters of proposed method and baseline methods are shown in Table 2. The baseline methods are modified from https://github.com/openai/safety-starter-agents to a Pytorch version. The experiments are conducted on a HPC with 24 nodes, each node has 32 CPU cores and 2 Nvidia A100 GPUs.\nNote that, in CPPO, setting the KL divergence constraint to 0.02 does not directly determine the value of δ ′ in Equation (5). Although Proposition 3.1 states that Var(v) determines the upper bound of the reverse KL divergence, it does not provide a lower bound for the reverse KL divergence. Consequently, the update step may become very small. To address this issue, we can consider the inequality\n(2 log 2 -1)(x -1) 2 + (x -1) ≤ x log x,\nwhich holds for x values smaller than 2. This inequality implies that (2 log 2 -1)Var(v) could serve as a lower bound for the reverse KL divergence. In order to prevent the KL divergence from becoming too small, we" }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This work was supported by King's College London. The work has been performed using resources from the Cirrus UK National Tier-2 HPC Service at EPCC funded by the University of Edinburgh and EPSRC (EP/P020267/1) and King's Computational Research, Engineering and Technology Environment (CREATE) in King's College London from https://doi.org/10.18742/rnvf-m076." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "choose δ ′ = 0.02/(2 log 2 -1), ensuring that the reverse KL divergence of the optimal v lies within the range (0.02, 0.02/(2 log 2 -1)).\nRemark E.1. By applying Cantelli's inequality, we can derive the inequality Pr(v ≥ 2) ≤ Var(v) Var(v)+1 . In the case where Var(v) is sufficiently small, this upper bound can be approximated as Var(v). Since Var(v) = 0.02 is a small value, it validates the aforementioned assumption that v is smaller than 2." } ]
The problem of constrained reinforcement learning (CRL) holds significant importance as it provides a framework for addressing critical safety satisfaction concerns in the field of reinforcement learning (RL). However, with the introduction of constraint satisfaction, the current CRL methods necessitate the utilization of second-order optimization or primal-dual frameworks with additional Lagrangian multipliers, resulting in increased complexity and inefficiency during implementation. To address these issues, we propose a novel first-order feasible method named Constrained Proximal Policy Optimization (CPPO). By treating the CRL problem as a probabilistic inference problem, our approach integrates the Expectation-Maximization framework to solve it through two steps: 1) calculating the optimal policy distribution within the feasible region (E-step), and 2) conducting a first-order update to adjust the current policy towards the optimal policy obtained in the E-step (M-step). We establish the relationship between the probability ratios and the KL divergence to convert the E-step into a convex optimization problem. Furthermore, we develop an iterative heuristic algorithm from a geometric perspective to solve this problem. Additionally, we introduce a conservative update mechanism to overcome the constraint violation issue that occurs in existing feasible region methods. Empirical evaluations conducted in complex and uncertain environments validate the effectiveness of our proposed method, as it performs at least as well as other baselines.
Constrained Proximal Policy Optimization
[ { "figure_caption": "Figure 1 :1Figure 1: The illustration of recovery update.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: The switch mechanism inspired by bang-bang control. Once the current policy violates the cost constraint, the agent will switch to recovery update until it reaches the switch cost.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: The learning curves for comparison, CPPO is the method proposed in this paper.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The comparison between CPPO with and without recovery update in PointCircle.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "BDerivation in EM framework B.1 Derivation of evidence lower bound Following the definition in Section 3.1, we have p(O = 1|(s, a)) ∝ exp(A(s, a)/α). Assume the likelihood of acting a under s and θ is p(a|s, θ) = pπ θ (a|s) * p(θ) Then we can obtain following evidence lower bound(ELBO) log pπ θ (O = 1) = log p(O = 1|(s, a)) * pπ θ (s, a) * p(θ) d (s, a)", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Algorithm 11Iteratively Heuristic Algorithm Input: Advantage vector A, A c Using QR decomposition to orthonormalize Aand A c into orthogonal unit vectors Ãc = kA c and Ã. Find θ ′ that makes v = 2N δ ′ (cos θ ′ Ãc + sin θ ′ Ã) become the optimal solution of the problem in Theorem 3.4. while v violates element-wise lower bound constraint do Clip the value in v to element-wise lower bound. Record the clipped values and corresponding cost advantage value in v m and A m c , mask these clipped values and their corresponding advantage values to obtain new vector v ′ and corresponding advantage vectors A", "figure_data": "", "figure_id": "fig_5", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The tracking trajectories with and without modification.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "this form, (v -pπ θ pπ ) can be treated as the advantage value in PPO, which does not require gradient. Therefore, following the clip technique in PPO, the new loss function can be expressed as -E min (va function that clips the value smaller than 0.6 to 0.6.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "C.4 The outline of CPPO method Algorithm 2 CPPO Outline Input: Policy network π θ , Value V , V c while Stopping criteria not met do Rollout sampling from the environment, generate trajectories τ ∼ π θ . Calculate advantage value A and A c from τ . if Current policy violates the constraint then Conduct recovery update in E-step to optimal policy v. else Conduct normal update in E-step to optimal policy v. end if Conduct M-step according to Equation (9) to update policy parameter θ based on v. Update value networks using GAE. 
end while", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "THE ENVIRONMENT PARAMETERS", "figure_data": "ENVIRONMENTCARPUSH POINTGOAL POINTPUSH POINTCIRCLE ANTCIRCLEBATCH SIZE3 × 10 43 × 10 43 × 10 410003 × 10 4TOTAL STEPS1 × 10 71 × 10 71 × 10 72 × 10 51 × 10 7ROLLOUT LENGTH10001000100050500CONSTRAINT252525550", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
Chengbin Xuan; Feng Zhang; Faliang Yin; Hak-Keung Lam
[ { "authors": "A Abdolmaleki; J T Springenberg; Y Tassa; R Munos; N Heess; M Riedmiller", "journal": "", "ref_id": "b0", "title": "Maximum a posteriori policy optimisation", "year": "2018" }, { "authors": "J Achiam; D Held; A Tamar; P Abbeel", "journal": "PMLR", "ref_id": "b1", "title": "Constrained policy optimization", "year": "2017" }, { "authors": "E Altman", "journal": "CRC press", "ref_id": "b2", "title": "Constrained Markov decision processes", "year": "1999" }, { "authors": "D Cartwright; M Field", "journal": "", "ref_id": "b3", "title": "A refinement of the arithmetic mean-geometric mean inequality", "year": "1978" }, { "authors": "J Lasalle", "journal": "", "ref_id": "b4", "title": "The 'bang-bang' principle", "year": "1960" }, { "authors": "N Le; V S Rathour; K Yamazaki; K Luu; M Savvides", "journal": "Artificial Intelligence Review", "ref_id": "b5", "title": "Deep reinforcement learning in computer vision: a comprehensive survey", "year": "2022" }, { "authors": "Q Li; Z Peng; L Feng; Q Zhang; Z Xue; B Zhou; Metadrive", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b6", "title": "Composing diverse driving scenarios for generalizable reinforcement learning", "year": "2022" }, { "authors": "Z Liu; Z Cen; V Isenbaev; W Liu; S Wu; B Li; D Zhao", "journal": "PMLR", "ref_id": "b7", "title": "Constrained variational policy optimization for safe reinforcement learning", "year": "2022" }, { "authors": "S Paternain; M Calvo-Fullana; L F Chamon; A Ribeiro", "journal": "IEEE Transactions on Automatic Control", "ref_id": "b8", "title": "Safe policies for reinforcement learning via primal-dual methods", "year": "2022" }, { "authors": "A Ray; J Achiam; D Amodei", "journal": "", "ref_id": "b9", "title": "Benchmarking safe exploration in deep reinforcement learning", "year": "2019" }, { "authors": "J Schulman; S Levine; P Abbeel; M Jordan; P Moritz", "journal": "PMLR", "ref_id": "b10", "title": "Trust region policy optimization", "year": "2015" }, { "authors": "J Schulman; P Moritz; S Levine; M Jordan; P Abbeel", "journal": "", "ref_id": "b11", "title": "High-dimensional continuous control using generalized advantage estimation", "year": "2015" }, { "authors": "J Schulman; F Wolski; P Dhariwal; A Radford; O Klimov", "journal": "", "ref_id": "b12", "title": "Proximal policy optimization algorithms", "year": "2017" }, { "authors": "D Silver; T Hubert; J Schrittwieser; I Antonoglou; M Lai; A Guez; M Lanctot; L Sifre; D Kumaran; T Graepel", "journal": "Science", "ref_id": "b13", "title": "A general reinforcement learning algorithm that masters chess, shogi, and go through self-play", "year": "2018" }, { "authors": "A Stooke; J Achiam; P Abbeel", "journal": "PMLR", "ref_id": "b14", "title": "Responsive safety in reinforcement learning by pid lagrangian methods", "year": "2020" }, { "authors": "T.-Y Yang; J Rosca; K Narasimhan; P J Ramadge", "journal": "", "ref_id": "b15", "title": "Projection-based constrained policy optimization", "year": "2020" }, { "authors": "R Yanushevsky", "journal": "CRC Press", "ref_id": "b16", "title": "Modern missile guidance", "year": "2018" }, { "authors": "C Yu; A Velu; E Vinitsky; Y Wang; A Bayen; Y Wu", "journal": "", "ref_id": "b17", "title": "The surprising effectiveness of ppo in cooperative, multi-agent games", "year": "2021" }, { "authors": "T Yu; D Quillen; Z He; R Julian; K Hausman; C Finn; S Levine", "journal": "PMLR", "ref_id": "b18", "title": "Meta-world: A benchmark and evaluation for multi-task and meta 
reinforcement learning", "year": "2020" }, { "authors": "Y Zhang; Q Vuong; K Ross", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b19", "title": "First order constrained optimization in policy space", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 303.51, 702.29, 200.49, 9.65 ], "formula_id": "formula_0", "formula_text": "C := {c i ∈ C | c i : S × A → R, i = 1, 2, . . . , m}" }, { "formula_coordinates": [ 3, 410.41, 126.74, 94.84, 14.11 ], "formula_id": "formula_1", "formula_text": ") = E τ [ ∞ t=0 γ t r(s t )]," }, { "formula_coordinates": [ 3, 279.63, 148.55, 114.21, 14.11 ], "formula_id": "formula_2", "formula_text": "J c (π) = E τ [ ∞ t=0 γ t c(s t )]." }, { "formula_coordinates": [ 3, 108, 173.34, 397.17, 34.05 ], "formula_id": "formula_3", "formula_text": "A(s t , a t ) = Q(s t , a t ) -V (s t ) and A c (s t , a t ) = Q c (s t , a t ) -V c (s t ) where Q(s t , a t ) = E τ [ ∞ t=0 γ t r | s 0 = s t , a 0 = a t ] and V (s t ) = E τ [ ∞ t=0 γ t r | s 0 = s t ]" }, { "formula_coordinates": [ 3, 125.62, 205.27, 347.65, 14.11 ], "formula_id": "formula_4", "formula_text": "Q c (s t , a t ) = E τ [ ∞ t=0 γ t c | s 0 = s t , a 0 = a t ] and V c (s t ) = E τ [ ∞ t=0 γ t c | s 0 = s t ]" }, { "formula_coordinates": [ 4, 118.79, 422.19, 385.88, 23.22 ], "formula_id": "formula_5", "formula_text": "log p π θ (O = 1) ≥ E s∼d π ,a∼π q(a|s) p π (a|s) A(s, a) -αD KL (q ∥ π θ ) + α log p(θ) = J (q, θ),(1)" }, { "formula_coordinates": [ 4, 203.48, 484.36, 301.19, 23.22 ], "formula_id": "formula_6", "formula_text": "J c (π) + 1 1 -γ E s∼d π ,a∼π q(a|s) p π (a|s) A c (s, a) ≤ d,(2)" }, { "formula_coordinates": [ 4, 141.94, 674.08, 362.72, 51.12 ], "formula_id": "formula_7", "formula_text": "maximize q E s∼d π ,a∼π q(a|s) p π (a|s) A(s, a) s.t. J c (π) + 1 1 -γ E s∼d π ,a∼π q(a|s) p π (a|s) A c (s, a) ≤ d, D KL (q ∥ π) ≤ δ,(3)" }, { "formula_coordinates": [ 5, 107.64, 185.06, 397.03, 72.93 ], "formula_id": "formula_8", "formula_text": "maximize v E s∼d π ,a∼π [vA(s, a)] s.t. E s∼d π ,a∼π [vA c (s, a)] ≤ d ′ E s∼d π a∼π [v log v] ≤ δ. (4) where d ′ the scaled cost margin d ′ = (1 -γ)(d -J c (π)). Although Equation (" }, { "formula_coordinates": [ 5, 315.65, 323.09, 155.66, 9.3 ], "formula_id": "formula_9", "formula_text": "E[v] = 1 and E [v log v] ≤ Var(v -1)." }, { "formula_coordinates": [ 5, 230.25, 398.93, 274.42, 42.65 ], "formula_id": "formula_10", "formula_text": "v v • A s.t. v • A c ≤ N d ′ , ∥v∥ 2 ≤ 2N δ ′ E(v) = 0, v > -1 element-wise,(5)" }, { "formula_coordinates": [ 6, 178.84, 124.72, 321.95, 50.22 ], "formula_id": "formula_11", "formula_text": "if v • A ≥ 0 not exists when v • A c ≤ N d ′ : maximize v v • A else: minimize v v • A c s.t. ∥v∥ 2 ≤2N δ ′ , E(v) = 0, v > -1 element-wise. (6" }, { "formula_coordinates": [ 6, 500.8, 146.49, 3.87, 8.64 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 7, 116.42, 291.42, 104.15, 11.23 ], "formula_id": "formula_13", "formula_text": "• A c = N d ′ or v • A = 0." }, { "formula_coordinates": [ 7, 222.61, 324.84, 166.79, 43 ], "formula_id": "formula_14", "formula_text": "maximize v v • A s.t. v • A c ≤ D, ∥v∥ 2 ≤ 2N δ ′ E(v) = E(A) = E(A c ) = 0" }, { "formula_coordinates": [ 7, 247.69, 626.72, 256.98, 28.77 ], "formula_id": "formula_15", "formula_text": "minimize θ D KL (q ∥ π θ ) s.t. D KL (π ∥ π θ ) ≤ δ,(8)" }, { "formula_coordinates": [ 8, 190.81, 164.16, 313.86, 23.22 ], "formula_id": "formula_16", "formula_text": "minimize θ E ∥v - p π θ p π ∥ 2 s.t. 
D KL (π ∥ π θ ) ≤ δ,(9)" }, { "formula_coordinates": [ 8, 219.19, 243.44, 178.91, 15.38 ], "formula_id": "formula_17", "formula_text": "pπ θ pπ ) ∂π θ ∂θ to ((β(v - pπ θ pπ ) + (1 -β)A ′ c ) ∂π θ" }, { "formula_coordinates": [ 12, 258.06, 111.29, 138.77, 8.37 ], "formula_id": "formula_18", "formula_text": "[v] = 1 and E [v log v] ≤ Var(v -1)." }, { "formula_coordinates": [ 12, 224.83, 186.15, 279.77, 87.87 ], "formula_id": "formula_19", "formula_text": "E[v] = pπ(s, a) q(a | s) pπ(a | s) d (s, a) = pπ(s, a) q(a | s) × d(s) pπ(a | s) × d(s) d (s, a) = pπ(s, a) q(s, a) pπ(s, a) d (s, a) = q(s, a) d (s, a) = 1,(10)" }, { "formula_coordinates": [ 12, 303.94, 284.86, 95.81, 8.37 ], "formula_id": "formula_20", "formula_text": "E[v] = 1 is proven. Q.E.D" }, { "formula_coordinates": [ 12, 228.69, 375.61, 272.17, 88.78 ], "formula_id": "formula_21", "formula_text": "DKL(π ∥ π θ ) = pπ(s, a) log pπ(s, a) pπ θ (s, a) = -E [log v] = - log v N = -log( N i=1 vi) 1 N , (11" }, { "formula_coordinates": [ 12, 500.87, 416.02, 3.73, 7.77 ], "formula_id": "formula_22", "formula_text": ")" }, { "formula_coordinates": [ 12, 218.25, 481.82, 282.62, 26.84 ], "formula_id": "formula_23", "formula_text": "E(v) - N i=1 v 1 N i ≤ 1 2N min vi N i=1 (vi -E(v)) 2 . (12" }, { "formula_coordinates": [ 12, 500.87, 491.28, 3.73, 7.77 ], "formula_id": "formula_24", "formula_text": ")" }, { "formula_coordinates": [ 12, 154.47, 512.59, 349.53, 102.53 ], "formula_id": "formula_25", "formula_text": "E(v) = 1 from Proposition 3.1, N i=1 v 1 N i > 0, and N i=1 (vi -E(v)) 2 = N • Var(v) , we have N i=1 v 1 N i ≥ 1 - Var(v) 2 min vi log N i=1 v 1 N i ≥ log 1 - Var(v) 2 min vi DKL(π ∥ π θ ) ≤ -log 1 - Var(v) 2 min vi ≈ Var(v) 2 min vi . (13" }, { "formula_coordinates": [ 12, 500.87, 569.8, 3.73, 7.77 ], "formula_id": "formula_26", "formula_text": ")" }, { "formula_coordinates": [ 12, 199.83, 685.33, 212.35, 38.9 ], "formula_id": "formula_27", "formula_text": "maximize v v • A s.t. v • Ac ≤ D ∥v∥2 ≤ 2N δ E(v) = E(A) = E(Ac) = 0" }, { "formula_coordinates": [ 13, 239.57, 132.55, 265.04, 42.76 ], "formula_id": "formula_28", "formula_text": "maximize a 1 ,b 1 a1a2 + b1b2 s.t. a1 ≤ D/a3 a 2 1 + b 2 1 ≤ 4N 2 δ 2 -c 2 1 (14)" }, { "formula_coordinates": [ 13, 220.98, 359.68, 283.62, 38.37 ], "formula_id": "formula_29", "formula_text": "≥ E s∼d q ,a∼q log p(O = 1|(s, a)) + log pπ(s, a) q(s, a) + log p(θ)(15)" }, { "formula_coordinates": [ 13, 229.36, 562.01, 271.51, 12.93 ], "formula_id": "formula_31", "formula_text": "maximize θ -αDKL(q ∥ π θ ) + α log p(θ). (17" }, { "formula_coordinates": [ 13, 500.87, 562.3, 3.73, 7.77 ], "formula_id": "formula_32", "formula_text": ")" }, { "formula_coordinates": [ 13, 220.38, 608.96, 284.22, 15.3 ], "formula_id": "formula_33", "formula_text": "θ -αDKL(q ∥ π θ ) -αβ(θ -θ) T F -1 θ (θ -θ).(18)" }, { "formula_coordinates": [ 13, 241.29, 648.7, 263.31, 12.93 ], "formula_id": "formula_34", "formula_text": "θ -DKL(q ∥ π θ ) -βDKL(π ∥ π θ ).(19)" }, { "formula_coordinates": [ 13, 252.09, 681.58, 252.51, 25.95 ], "formula_id": "formula_35", "formula_text": "minimize θ DKL(q ∥ π θ ) s.t. DKL(π ∥ π θ ) ≤ δ,(20)" }, { "formula_coordinates": [ 14, 127.57, 270.53, 377.68, 93.86 ], "formula_id": "formula_36", "formula_text": "′ c & A ′ . Subtract the mean of A ′ c and A ′ to obtain A ′′ c & A ′′ . Initial a new zero vector v ′′ with the same size of v ′ Calculate the l 2 -norm bound of v ′′ , i.e., δ ′ , using D(v ′′ ) = E(v ′′2 ) -E(v ′′ ) 2 . 
Using QR decomposition to orthonormalize A ′′ and A ′′ c into orthogonal unit vectors Ã′′ c = kA ′′ c and Ã′′ . Find θ ′ that maximize v ′′ A ′′ while satisfy v ′′ A ′′ c ≤ N d ′ -v m A m c -M • mean(v ′ ) • mean(A ′ c ), where M is the number of element in v m , v ′′ = 2N δ ′ (cos θ ′ Ã′′ c + sin θ ′ Ã′′ )." }, { "formula_coordinates": [ 14, 106.92, 453.01, 196, 13.83 ], "formula_id": "formula_37", "formula_text": "(v - pπ θ pπ ) ∂π θ ∂θ to ((β(v - pπ θ pπ ) + (1 -β)A ′ c) ∂πθ ∂θ ," } ]
10.18653/v1/2023.acl-long.245
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b25", "b28", "b20" ], "table_ref": [], "text": "In our pursuit of knowledge and understanding, we often rely on factual questions to uncover the truth about the world around us. However, a critical aspect that is often overlooked is the presence of underlying temporal constraints within these questions. Time, an important dimension in the physical world, emerges as a ubiquitous and formidable constraint that can significantly impact the accuracy and relevance of the answer. Consider, for instance, the seemingly straightforward question: \"Who is Figure 1: An example of a time-sensitive factual question from TimeQA (Chen et al., 2021): (a) illustrates the conventional process of question answering with LLMs, and (b) presents the proposed approach QAaP. The temporal information is colored with light blue and the potential answers are in green.\nthe current president of the United States?\". While the answer may appear apparent, its validity depends entirely on the specific point in time when the question is posed. This realization underscores the importance of recognizing and accounting for these temporal constraints in our quest for reliable and meaningful information.\nRecently, LLMs have exhibited excellent intelligence in question answering and challenged the dominance of traditional search engines. Nevertheless, despite their impressive performance, relying solely on LLMs for question answering still faces many shortcomings and limitations. As illustrated in Fig. 1 (a), current approaches to utilizing LLMs for question answering can be primarily categorized into two types: (1) relying on the internal knowledge of LLMs to generate direct answers, such as Chain-of-Thought (CoT) prompt (Wei et al., 2022) and (2) presenting LLMs with retrieved documents for reading and subsequent answering, such as ReAct (Yao et al., 2022). However, there are several challenges in answering time-sensitive factual questions with LLMs: 1) LLMs are insensitive to numbers (Nye et al., 2022) and struggle to comprehend the sequential, overlapping and inclusive relationships between dates, which can be attributed to their nature as probabilistic models and their inability to perform rigorous reasoning as symbolic systems; 2) it is difficult for LLMs to directly locate the relevant information and provide the correct answer, as relevant facts may often be scattered across a long document and expressed in diverse ways, as a result, simply searching for keywords in the question or relying on retrievers may fail to identify the relevant paragraphs; 3) it is very likely that the answer provided by LLMs fails to satisfy the constraint and it is hard to verify the correctness of the answer since they are given in natural language.\nIn this work, we endeavor to bridge the gap between existing approaches and the specific challenge posed by time-sensitive factual questions that require both rich world knowledge and intricate reasoning. To tackle the aforementioned challenges, we seek to harness LLMs' strong ability in natural language and programming language understanding to reframe the Question Answering task as Programming (QAaP), depicted in Fig. 1 (b). Specifically, our method consists of two phases: 1.Represent all as codes, since LLMs cannot rigorously reason by themselves, we endeavor to harness their strong coding ability to transform the question and context into well-structured codes. 
We first Parse the given question into a python dict, then Extract relevant information from the provided context and store these items in a python list. This allows a comprehensive gathering and organizing of relevant information dispersed throughout the documents. Besides, the collected code format information helps to facilitate the subsequent processing and avoid reasoning based on surface-level text semantics, 2.Choose answer through programming, due to the notorious hallucination inherent in LLMs, it is necessary to check if the extracted contents are faithful to the corresponding context. Furthermore, as there may be multiple potential answers, we need to reason out the best-matching answer to the question. Since all the obtained in-formation is represented as codes, we can easily construct two functions Check and Match to reduce hallucination and ensure accuracy.\nWith our approach, we move beyond the traditional paradigm of using LLMs to directly generate an answer or read a given context and then provide an answer, avoiding the inability to verify whether the answer satisfies the constraints set out in the question. Furthermore, by storing intermediate information in code, we overcome the length limit of the model input, thus empowering LLMs to read through the passages and uncover the eligible answer concealed within the lengthy documents. Experimental evaluation on multiple time-sensitive question-answering datasets shows that our approach consistently outperforms strong baselines and approximates the performance of supervised methods. Notably, we achieve up to 14.5%, 10.5% and 8.6% absolute improvements on TimeQA, TempQuestions and TimeQuestions over state-of-the-art few-shot methods.\nIn summary, our contributions are as follows.\n• Our experimental results reveal that LLMs cannot accurately answer factual questions with time constraints, even induced with a Chain-of-Thought prompt. 2 Related Work" }, { "figure_ref": [], "heading": "LLMs augmented with tools", "publication_ref": [ "b0", "b11", "b3", "b22", "b4", "b8", "b1", "b18" ], "table_ref": [], "text": "Recently, the rapid advancement of LLMs has brought great progress to the field of natural language processing (Brown et al., 2020;Hoffmann et al., 2022;Chowdhery et al., 2022). However, relying solely on LLMs' own capabilities is still limited, considering that human intelligence lies in the use of tools, many works have explored augmenting LLMs with tools (Schick et al., 2023;Mi-alon et al., 2023). Cobbe et al. (2021) introduces an extra calculator to LLMs for effectively solving math word problems, and Gao et al. (2022a) applies retriever to obtain evidence for verifying the truthfulness of contents generated by LLMs. Similarly, Gou et al. (2023) employs various kinds of tools to correct the outputs of LLMs.\nThere have also been some works utilizing LLMs' coding ability to offload the solution to an external solver (Gao et al., 2022b;Chen et al., 2022;Lyu et al., 2023). 
However, there are several differences between those code-prompt works and ours: 1) prior works mostly were solving mathematical and symbolic problems, while in this work we mainly focus on answering the factual question that requires temporal reasoning; 2) prior works only utilized LLMs' own knowledge for problem solving, while we propose to apply LLMs' coding ability to represent both LLMs' internal knowledge and external knowledge (e.g., Wikipedia articles) as same-format codes, which enables us to easily enhance LLMs with different sources of knowledge, and also facilitates desired processing; 3) prior works did not explore verifying the correctness of LLMs-generated codes, while we propose to incorporate Check and Match steps to mitigate LLMs' hallucination and ensure accuracy." }, { "figure_ref": [], "heading": "Reasoning with LLMs", "publication_ref": [ "b20", "b36", "b25", "b17", "b18", "b1", "b28", "b25", "b37", "b36" ], "table_ref": [], "text": "Reasoning ability is a hallmark of human intelligence, which allows adapting from limited data and accomplishing unseen sophisticated tasks. Nye et al. (2022) shows that letting language model output intermediate steps improves performance. Zhou et al. (2022) introduces Least-to-Most prompt that instructs LLMs to decompose a problem into multiple sub-problems and solve them one by one. Wei et al. (2022) proposes Chain-of-Thought (CoT) prompting, which induces LLMs to reason step by step and greatly boosts LLMs' reasoning ability. Following works try to improve CoT in many ways, including reducing human efforts in exemplar construction (Kojima et al., 2022;Zhang et al., 2022), improving faithfulness of reasoning process (Lyu et al., 2023), code style CoT (Gao et al., 2022b;Chen et al., 2022) and allowing LLMs to interact with outside environment (Yao et al., 2022).\nAll above works have showcased LLMs' remarkable capabilities in solving a wide range of complex reasoning tasks, including commonsense reasoning (Wei et al., 2022), mathematical reasoning (Zhu et al., 2023) and symbolic reasoning (Zhou et al., 2022) tasks. In this work, we mainly solve timesensitive factual questions, which are more like a cross of the former tasks. Although the content that the question asks may only relate to commonsense and world knowledge, it contains a strict symbolic constraint, which is time, and understanding temporal relationships requires advanced reasoning ability." }, { "figure_ref": [], "heading": "Temporal reasoning", "publication_ref": [ "b23", "b21", "b30", "b12", "b33", "b26", "b10" ], "table_ref": [], "text": "There are many works that focus on temporal reasoning before the era of LLMs. Those methods are mainly based on KBQA systems (Talukdar et al., 2012;Jia et al., 2018b;Saxena et al., 2021) or MRC models (Zaheer et al., 2020;Izacard and Grave, 2021;Zhang and Yamana, 2022;Yang et al., 2022;Zhang et al., 2023b). There are many aspects in which our work differs from these earlier studies: 1) our method is few-shot, while they are mainly supervised; 2) we only use a simple Wikipedia search engine, while KBQA requires high-quality annotated knowledge bases; 3) we are able to verify if the constraint is satisfied, while MRC methods cannot. Nevertheless, these supervised methods are very strong and we include their results in the experiments." 
}, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Task definition", "publication_ref": [], "table_ref": [], "text": "In this work, we focus on solving time-sensitive factual questions with LLMs. The questions predominantly fall into four kinds of Wh-questions: 'What', 'Which', 'Where' and 'Who'. These types of questions seek specific information related to entities, events, and temporal aspects, forming the backbone of many real-world information-seeking scenarios.\nAs shown in the top left of Fig. 2, given a factual question Q \"Salomón Rondón played for which team in Mar 2019?\", which contains a time constraint Q t \"in Mar 2019\", LLMs need to provide the appropriate answer A \"Newcastle United\", which best matches the constraint Q t set out in Q. We use K i and K e to refer to models' internal knowledge and external knowledge (e.g. Wikipedia) respectively. The context C presented to a model can come from either K i or K e as shown in the left of Fig. 2. We use EI to refer to the extracted information. Query & answer_key query = {\"subject\": \"Salomón Rondón\", \"relation\": \"play for\", \"object\": None, \"time\": { \"start\": datetime(2019, 3, 1), \"end\": datetime(2019, 3, 31) }} answer_key = \"object\" Item 2 {\"subject\": \"Salomón Rondón\", \"relation\": \"play for\", \"object\": \"Newcastle United\", \"time\": {\"start\": datetime (2018,8,6), \"end\": datetime(2019, 7, 18)}}" }, { "figure_ref": [], "heading": "Context 2", "publication_ref": [], "table_ref": [], "text": "In his first match at Arena CSKA, Rondón scored his first goal for the Moscow team and also gave an assist , …, he was voted the best CSKA player of the month of March ." }, { "figure_ref": [], "heading": "Item 4", "publication_ref": [], "table_ref": [], "text": "{\"subject\": \"Salomón Rondón\", \"relation\": \"play for\", \"object\": \"Arena CSKA\", \"time\": {\"start\": datetime(2019, 3, 1), \"end\": datetime(2019, 3, 31) }}" }, { "figure_ref": [], "heading": "Constraint", "publication_ref": [], "table_ref": [], "text": "\"time\": {\"start\": datetime(2019, 3, 1), \"end\": datetime(2019, 3, 31)}}" }, { "figure_ref": [], "heading": "Candidates", "publication_ref": [], "table_ref": [], "text": "{\"subject\": \"Salomón Rondón\", \"relation\": \"play for\", \"object\": \"Newcastle United\", \"time\": {\"start\": datetime (2018,8,6), \"end\": datetime(2019, 7, 18) }} Match Score: 0.09 {\"subject\": \"Salomón Rondón\", \"relation\": \"play for\", \"object\": \"Newcastle United\", \"time\": {\"start\": datetime (2018,8,6), \"end\": None }} Match Score: 0.001 " }, { "figure_ref": [], "heading": "Internal Knowledge 𝑲𝑲 𝒊𝒊", "publication_ref": [], "table_ref": [], "text": "Extracted Information 𝑬𝑬𝑬𝑬 Item 1\n{\"subject\": \"Salomón Rondón\", \"relation\": \"play for\", \"object\": \"Newcastle United\", \"time\": {\"start\": datetime(2018, 8, 1), \"end\": None }}" }, { "figure_ref": [], "heading": "…", "publication_ref": [], "table_ref": [], "text": "Figure 2: The whole framework of QAaP. Relevant and irrelevant temporal information is highlighted in light blue and blue, respectively. The correct answer is green; otherwise, it is red. Text related to potential answers is in bold." }, { "figure_ref": [], "heading": "Represent all as codes", "publication_ref": [], "table_ref": [], "text": "Parse. 
We prompt LLMs to parse the given question Q into a python dictionary variable named query q, which contains four keys: subject, relation, object, time and their corresponding value s, r, o and t. In addition to the query, we define another variable answer_key to specify where the final answer should be placed at.\nExtract. Given a text segment from the document either generated by LLMs or retrieved from Wikipedia as context C i , we prompt LLMs to extract information related to the question as illustrated in the middle of Fig. 2. Each item of the ex-\ntracted information EI i = s i j , r i j , o i j , t i j N j=1\nis also represented as a python dictionary similar to the query and stored in a predefined python list information, so as to comprehensively gather relevant information that appears in different parts of a long document." }, { "figure_ref": [], "heading": "Choose answer through programming", "publication_ref": [ "b2", "b16" ], "table_ref": [], "text": "As mentioned before, LLMs are not sensitive to numbers and often suffer from hallucinations. For example, LLMs may trickily fill in the time that appeared in the question into the extracted items regardless of the context, resulting in a perfectly temporal matching but incorrect answer. To cope with such a phenomenon, thanks to the fact that the extracted information is presented in code form, we can easily construct two functions Check and Match to verify the faithfulness of extraction and find the best matching answer.\nCheck. We first check if the extracted item is in the same format as query, which ensures the answer is correctly placed at answer_key, and that the extracted time should appear in the corresponding context. When external knowledge K e is accessible, we also check whether the items extracted from LLMs' internal knowledge have shown up in those extracted from K e because LLMs often generate fake facts.\nAs exemplified in Fig. 2, the answer Newcastle United generated by LLMs also appears in the Wikipedia passage, therefore we keep the extracted item. Note the middle bottom of Fig. 2, the extracted time 2019 of the 4th item does not show up in the corresponding Context 2 and therefore it is removed during the Check step. Match. In order to choose the best matching answer from numerous candidate answers, we employ the intersection over union (IoU) of time between the question Q and the candidate X as the match score, which is defined in Eq. ( 1), and sort the candidates according to it. When the question specifies both the start Q ts and the end Q te of the time constraint, this measurement quantifies the degree of temporal alignment between the question and each candidate. If the time constraint in query only contains a start or an end, we choose the inverse of the absolute value of the differences between Q ts and X ts or Q te and X te as match score. We consider several widely-used temporal reasoning datasets: TimeQA (Chen et al., 2021), Tem-pQuestions (Jia et al., 2018a) and TimeQuestions (Jia et al., 2021).\nTimeQA is a time-sensitive question answering dataset curated by extracting time-evolving facts from WikiData and aligning them with Wikipedia pages with the help of human workers. The dataset comprises two subsets with different levels of difficulty. 
The easy subset contains documents in which the asked-about temporal information is explicitly stated, while the hard subset requires more advanced reasoning skills, as the temporal information is implicit and cannot be retrieved through keyword searching.
TempQuestions and TimeQuestions are two similar temporal QA datasets compiled from a variety of general-purpose KG-QA benchmarks. Questions are divided into four classes: explicit temporal, implicit temporal, temporal answer and ordinal constraints. We choose the explicit subset and remove the questions without any temporal signal. This is because implicit questions require extra steps to uncover the time implied by the entities or events involved; since we only use a simple Wikipedia search engine and many ambiguous entities share the same name (e.g., a song, movie, or person), it is difficult for LLMs to determine which entity the question refers to, and strong retrievers would be needed to collect relevant documents. It is worth noting that our method is agnostic to the way external documents are retrieved and can be seamlessly combined with off-the-shelf retrievers, which we leave for future work." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b25", "b28" ], "table_ref": [], "text": "For comparison under a few-shot setting, we employ the well-known method CoT (Wei et al., 2022) as a baseline. In addition, we include the recently popular method ReAct (Yao et al., 2022), which incorporates external knowledge and allows LLMs to execute search and lookup actions to interact with the Wikipedia website. Finally, we present results from state-of-the-art fine-tuned models for a comprehensive comparison." }, { "figure_ref": [], "heading": "Implementation details", "publication_ref": [ "b29" ], "table_ref": [], "text": "To fully utilize LLMs' internal knowledge K i , following Yu et al. (2023) we elicit K i by prompting LLMs to generate a background document for the given question. Additionally, in order to incorporate external knowledge K e , we add a search process to the prompt and apply a Wikipedia search engine similar to ReAct. We treat the documents from K i and K e in the same way. For all experiments, we use gpt-3.5-turbo as the backbone model unless otherwise specified. Details of the prompts can be found in Appendix C." }, { "figure_ref": [], "heading": "Main results", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Table 1 presents the main results on the three temporal reasoning datasets. QAaP consistently demonstrates superior performance across all datasets under the few-shot setting. The improvement might be explained by two reasons: one is that with QAaP we are able to comprehensively extract relevant information from the provided knowledge sources and store it efficiently, which facilitates the subsequent processing steps. The other is that rather than relying solely on LLMs to reason over raw context, QAaP chooses the final answer through programming. Since everything is represented as code, QAaP minimizes the need for human intervention to check the extracted contents and find the best matching answer. However, there is still a non-negligible gap compared to the best supervised models, which validates the difficulty of this task and indicates that more effort is needed in future work."
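To make these two points concrete, the sketch below mirrors the running example of Fig. 2: the query, answer_key and information variables follow the paper's prompts, while the check routine is only a simplified illustration of the verification performed in the Check step, not necessarily the exact released implementation.

```python
from datetime import datetime

# Parse step: the question is turned into a structured query; the answer slot is empty.
query = {"subject": "Salomón Rondón", "relation": "play for", "object": None,
         "time": {"start": datetime(2019, 3, 1), "end": datetime(2019, 3, 31)}}
answer_key = "object"

# Extract step: items gathered from internal or external knowledge are appended here.
information = [
    {"subject": "Salomón Rondón", "relation": "play for", "object": "Newcastle United",
     "time": {"start": datetime(2018, 8, 6), "end": datetime(2019, 7, 18)}},
]

def check(item, context, external_items=None):
    """Simplified Check step: same keys as the query, a filled answer slot,
    extracted years that actually occur in the context, and (when external
    knowledge is available) corroboration by externally extracted items."""
    if set(item) != set(query) or not item.get(answer_key):
        return False
    for endpoint in (item["time"].get("start"), item["time"].get("end")):
        if endpoint is not None and str(endpoint.year) not in context:
            return False
    if external_items:
        return any(item[answer_key] == e.get(answer_key) for e in external_items)
    return True
```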
}, { "figure_ref": [], "heading": "Ablation study", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Can LLMs find the answer provided enough context?", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "To investigate if LLMs can answer factual questions with time constraints provided enough context, we implement a variant of few-shot CoT by feeding it with the same context used for our approach. Specifically, we present the text segments to them sequentially and ask if they can answer the question based on the context until they say yes and give the answer. Moreover, we lower the difficulty by using golden paragraphs as context, which consists of the paragraph containing the ground truth and three paragraphs before and after it. As demonstrated in Table 2, though given the same context, QAaP achieves a much higher exact match score compared to directly reading and answering. The performance of the CoT-variant is even worse than CoT, which exposes LLMs' limitations in discriminating the answer that meets the constraint from the other irrelevant ones. Significant performance improvements are observed when golden paragraphs are provided. This result highlights the challenges faced by LLMs in accurately identifying the correct answer from lengthy documents. Notably, QAaP surpasses CoT-variant * , even though the latter directly accesses the golden paragraphs. This finding further underscores the advantage of our proposed method." }, { "figure_ref": [], "heading": "How much does Check alleviate hallucination?", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "To examine the effect of Check-in alleviating hallucination, we explore it along with two axes: 1) check if the time t of extracted items appears in context and 2) check if the items extracted from LLMs' internal knowledge K i are also included in those extracted from external knowledge K e . As shown in Table 3, compared to checking the extracted time, checking the contents obtained from K i can bring more improvement. This discovery implies that the degree of hallucination is more severe when LLMs are allowed to generate content directly. However, even when conditioned on a given context, LLMs may still make up contents that do not align with the provided context. The experimental results affirm the vital role played by the Check step in mitigating hallucination, shedding light on the substantial presence of hallucination in LLMs. These findings emphasize the importance of verifying the contents generated by LLMs to guarantee accuracy and reliability." }, { "figure_ref": [], "heading": "Can LLMs directly identify the best matching answer to the question?", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "We are curious about whether it is trivial for LLMs to find the best matching answer given the parsed query and extracted information. To this end, we remove the Match step and prompt LLMs to provide the answer through in-context learning.\nThe results are presented in Table 4, it can be observed that without the Match step, the exact match score relatively decreased by 8.1% on TimeQA-Hard, which stresses the indispensability of the Match step when the constraint necessitates rigorous reasoning to satisfy. In contrast, there is only a slight drop in performance on TimeQA-Easy and TempQuestions. This can be explained by the fact that we did not design a very sophisticated Match function for selecting the best answer. 
Additionally, the strong result of directly matching further substantiates the advantages of representing the question and context as codes, which can facilitate better reasoning for LLMs and significantly improve the accuracy compared to reading then answering methods.\nIt is worth noting that the Check step plays a key role in the final performance when the Match step is absent. Without the Check and Match steps, LLMs face challenges in directly identifying the most appropriate answer from multiple candidates, as there are many interfering items. The results indicate that these two steps are complementary and indispensable." }, { "figure_ref": [], "heading": "Are LLMs capable of writing qualified", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "Check and Match functions?\nIn our previous experiments, the Check and Match functions were manually constructed. However, in this section, we aim to explore the potential of leveraging LLMs to perform these tasks. Specifically, we provide clear instructions for the Check and Match processes, then we randomly select and concatenate an instance {Q, q, C, EI} obtained from previous steps to the instruction as input prompt, finally we present the prompt to the LLMs and require them to write the desired functions. The results, as shown in Table 5 demonstrate the feasibility of replacing human effort with LLMs. Both ChatGPT and GPT-4 exhibit the capability to comprehend the task and provide an eligible solution.\nThis promising result suggests the potential direction for future research to explore how LLMs can self-verify their answers through programming. While LLMs have shown impressive performance in many NLP tasks, ensuring the accuracy and reliability of their generated responses remains a challenge. Exploring the integration of these verification mechanisms within LLMs can lead to more robust and trustworthy question-answering systems." }, { "figure_ref": [ "fig_2" ], "heading": "Case study", "publication_ref": [], "table_ref": [], "text": "A typical exemplar is presented in Fig. 3. CoT encounters hallucinations and makes up facts that don't exist, which exposes its limitation when the question asked is beyond its scope of knowledge. Even given the golden paragraphs, CoT-variant * fails to reason correctly and mistakes Margrete Aamot Øverland as the final answer. ReAct tries to find the relevant sentences by looking up keywords, however, neither the year nor the keyword in the question appears in the passage. This is one of the challenges we mentioned earlier, without strong retrievers LLMs are usually unable to solve these questions, resulting in performance upper bounded by external retrievers.\nAs a comparison, results generated by QAaP are shown at the bottom. We omit the process of Check and Match steps for the simplicity of illustration. First, LLMs parse the question into a Python dictionary named query and generate a relevant background document with their internal knowledge.\nQUESTION: Who was Arnulf Øverland 's spouse between May 1944 and Jun 1944? CONTEXT: Arnulf Øverland was a Norwegian author, poet, and critic. ... Personal life . In 1918 he had married the singer Hildur Arntzen . Their marriage was dissolved in 1939 . In 1940 , he married Bartholine Eufemia Leganger ( 1903Leganger ( -1995 ) ) . They separated shortly after , and were officially divorced in 1945. 
Øverland was married to journalist Margrete Aamot Øverland ```python query = {\"subject\": \"Arnulf Øverland\", \"relation\": \"spouse\", \"object\": None, \"time\": {\"start\": datetime(1944, 5, 1), \"end\": datetime(1944, 6, 30)}} answer_key = \"object\" ``G enerate a background document from Wikipedia to answer the given question: Arnulf Øverland was a Norwegian author, poet, and critic. He was married to the author and translator Berit Ås from 1945 until his death in 1968. Extract information relevant to the query: ```python information.append({\"subject\": \"Arnulf Øverland\", \"relation\": \"spouse\", \"object\": \"Berit Ås\", \"time\": {\"start\": datetime(1945, 1, 1), \"end\": datetime(1968, 12, 31)}}) ``C ontext: <CONTEXT> Extract information relevant to the query: ```python information.append({\"subject\": \"Arnulf Øverland\", \"relation\": \"spouse\", \"object\": \"Hildur Arntzen\", \"time\": {\"start\": datetime(1918, 1, 1), \"end\": datetime(1939, 12, 31)}}) information.append({\"subject\": \"Arnulf Øverland\", \"relation\": \"spouse\", \"object\": \"Bartholine Eufemia Leganger\", \"time\":{\"start\": datetime(1940, 1, 1), \"end\": datetime(1945, 12, 31)}}) information.append({\"subject\": \"Arnulf Øverland\", \"relation\": \"spouse\", \"object\": \"Margrete Aamot Øverland\", \"time\": {\"start\": datetime( 1945, Here again, the LLMs hallucinated a non-existing fact that \"Arnulf Øverland was married to Berit Ås\". After extracting the relevant information, we perform the Check step to verify the faithfulness of extraction and the reliability of contents generated by LLMs. As the extracted information is structured as code, we can easily filter the wrong and fake items and therefore mitigate the hallucination, which is proved in Sec. 4.3.2. Finally, we choose the best matching answer through the Match step.\n5 Discussion and Future Work\nOur findings reveal the inherent difficulty faced by LLMs in accurately answering seemingly straightforward factual questions when they involve specific time constraints. This can be attributed to the nature of neural networks. A well-known problem with neural networks is that they are black-box models and do not have symbolic reasoning capabilities. Consequently, even the most powerful language models may struggle to perform rigorous reasoning. However, in this work, instead of relying solely on LLMs to directly provide the answer, we reframe the question-answering task as programming. By leveraging LLMs to represent the question and context as codes, we can easily alleviate the hallucination and improve answer accuracy through the subsequent Check and Match processes.\nWe hope our work can shed light on future work in exploring how to enhance LLMs' reasoning ability with more advanced means and tools. Additionally, we anticipate there will be more human-in-theloop approaches to effectively reduce the hallucination while only requiring minimal labor efforts, for example, incorporating structured knowledge to automate the verification process. Moreover, we mainly focus on solving factual questions with time constraints, but many factual questions in real life may contain various types of constraints, such as number, order, location, etc. For different tasks, our method can be easily adapted to deal with other types of constraints, we just need to represent the constraint into an appropriate class in python and define the metric that measures how well is the constraint satisfied in the Match function. 
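As a purely illustrative sketch of this idea (the class name and scoring metric below are our assumptions for exposition, not part of QAaP's released code), a numeric constraint could be represented and scored as follows:

```python
from dataclasses import dataclass

@dataclass
class NumberConstraint:
    """Hypothetical constraint type: the answer should be as close as possible
    to a target number, optionally within a tolerance."""
    target: float
    tolerance: float = 0.0

    def match_score(self, value: float) -> float:
        # Higher is better: 1.0 within the tolerance, then decaying with distance.
        diff = abs(value - self.target)
        return 1.0 if diff <= self.tolerance else 1.0 / (1.0 + diff)

# The Match step stays the same: rank the checked candidates by the constraint's score.
constraint = NumberConstraint(target=25000, tolerance=500)
candidates = [{"object": "Stadium A", "value": 30000},
              {"object": "Stadium B", "value": 24800}]
best = max(candidates, key=lambda c: constraint.match_score(c["value"]))
assert best["object"] == "Stadium B"
```

Analogous classes for other constraint types (e.g., order or location) would only need to redefine match_score, leaving the Parse, Extract and Check steps untouched.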
Furthermore, it is convenient to cope with different kinds of hallucination by incorporating additional check processes into the Check function. We leave that for future work.
We view our effort as a first step towards solving open-domain questions with strict constraints, and we hope this work can inspire further work on tackling questions with constraints in more general scenarios. A promising research direction is to enable LLMs to achieve accuracy comparable to search engines while maintaining the simplicity of interaction, which would greatly boost the practicality of LLMs." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work we propose a novel approach, QAaP (Question Answering as Programming), to tackle the challenges posed by time-sensitive factual questions. By leveraging LLMs' exceptional abilities in natural language understanding and programming, QAaP can transform diversely expressed text into well-structured code, enabling LLMs to capture both the desired knowledge and the underlying constraints, particularly the temporal aspects. Experiments demonstrate that existing LLMs face significant difficulty in effectively comprehending the temporal constraint stated in the question, while our approach consistently demonstrates superior performance over strong baselines with LLMs. We hope this work can shed light on future research directions for enhancing LLMs' reasoning ability to tackle real-world questions with various constraints and for developing more efficient methods to reduce the hallucinations LLMs frequently encounter." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b27", "b18", "b7", "b24" ], "table_ref": [ "tab_3" ], "text": "The results on multiple datasets demonstrate the effectiveness of our proposed framework QAaP, which can improve the accuracy of LLMs in answering time-sensitive factual questions. However, there are still limitations in our work:
1) We do not evaluate our method on other question-answering datasets with different kinds of constraints. The main reason is that there are limited relevant datasets. We also do not include question-answering datasets that require multi-hop retrieval to collect enough related documents, like HotpotQA (Yang et al., 2018), since a strong retriever is needed and the accuracy may mainly depend on the retriever; however, our approach is agnostic to the way external knowledge is introduced and can be combined with off-the-shelf retrievers seamlessly.
2) When solving questions requiring only commonsense or world knowledge, QAaP may not be necessary because there is no constraint in the question that needs rigorous reasoning to satisfy. This limitation can also be found in Table 1 of Lyu et al. (2023), where faithful-CoT does not help on the StrategyQA dataset (Geva et al., 2021).
3) We only report experimental results with one backbone model, gpt-3.5-turbo. This is mainly due to the high cost of other OpenAI models. However, to further prove the effectiveness of our method, we conduct some small-scale experiments with text-davinci-003 and include the results in Appendix B, which verify that our approach performs better than other baselines with different backbone models. We leave exploring other open-source models (Wang et al., 2022) for future work." }, { "figure_ref": [], "heading": "A Dataset Statistics", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "Dataset statistics are shown in Table 6."
}, { "figure_ref": [], "heading": "B Additional Experiments", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "We conduct experiments with text-davinci-003 and present the results in this section. Due to the high cost of OpenAI APIs, we only sample 100 questions for each dataset with random seed set to 0. As shown in Table 7, the results confirm the effectiveness of our method." }, { "figure_ref": [], "heading": "C Full Prompts", "publication_ref": [], "table_ref": [], "text": "TimeQA prompt is 3-shots and divided into three parts illustrated in Fig. 4, Fig. 5, Fig. 6. We have included other prompts in the supplementary materials and will open source all of them for further research. Solve a question answering task by first parsing the question, figuring out what the question is asking, representing the query as a python dictionary with the key where the answer should be stored specified as either the subject or object, with the initial value of None. Secondly, decide what entities to search to find the answer. Thirdly, generate a background document that contains information relevant to the question being asked. Then read the generated document and the searched passage, step by step extract the information that directly and explicitly relates to the question, place the answer as the value of the answer key in the dictionary. For example, 'XXX joined A team in 2017, ..., in 2019, B team signed a contract with XXX', it is easy to know that B team and A team are mutually exclusive, therefore the termination time of A team is in 2019. Represent the extracted information as a dictionary and adding it to a list. If the context does not tell any useful information, extract nothing.\nHere are some examples. Question:Which school did David Jolly go to in Jan 1989? Question parsing: ```python query = {\"subject\": \"David Jolly\", \"relation\": \"go to school\", \"object\": None, \"time\": {\"start\": datetime(1989, 1, 1), \"end\": datetime(1989, 1, 31)}} answer_key = \"object\" ``S earch: ```python entities_to_search = [\"David Jolly\"] ``G enerate a background document from Wikipedia to answer the given question: David Jolly is an American politician who served as the U.S. Representative for Florida's 13th congressional district from 2014 to 2017. He graduated from Indian Rocks Christian School in 1990. Extract information relevant to the query: ```python information.append({\"subject\": \"David Jolly\", \"relation\": \"go to school\", \"object\": \"Indian Rocks Christian School\", \"time\": {\"start\": datetime(1986, 1, 1), \"end\": datetime (1990,12,31) Extract information relevant to the query: ```python information.append({\"subject\": \"David Jolly\", \"relation\": \"go to school\", \"object\": \"Emory University\", \"time\": {\"start\": datetime(1990, 1, 1), \"end\": datetime(1994, 12, 31)}}) information.append({\"subject\": \"David Jolly\", \"relation\": \"go to school\", \"object\": \"George Mason University\", \"time\": {\"start\": datetime(1995, 1, 1), \"end\": datetime(2001, 12, 31)}}) ```F igure 4: TimeQA prompt part 1." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [ "b35", "b13" ], "table_ref": [], "text": "In this work, our proposed approach QAaP can effectively solve factual questions requiring temporal reasoning, however, there is still a potential social risk. 
The main concerning risk of our work may be the hallucination of utilized LLMs (Zhou et al., 2021;Ji et al., 2023;Zhang et al., 2023a), since the method we develop is aimed at answering factual questions, when it is applied to LLMs that are deployed into real production environments, it may provide erroneous and misleading answers to users due to the hallucination happened during the Parsing and Extract step. Similar challenges exist in the computer vision field too (He et al., 2023a,b). This can be tackled by designing more sophisticated Check and Match steps, which we also highlight in previous experiments and is proved effective in alleviating hallucination, thus helping reduce potential risks posed to society. Extract information relevant to the query: ```python information.append({\"subject\": \"Crispin Blunt\", \"relation\": \"hold position\", \"object\": \"Member of Parliament\", \"time\": {\"start\": datetime(1997, 1, 1), \"end\": None}}) information.append({\"subject\": \"Crispin Blunt\", \"relation\": \"hold position\", \"object\": \" Extract information relevant to the query: ```python information.append({\"subject\": \"Crispin Blunt\", \"relation\": \"hold position\", \"object\": \"Member of Parliament\", \"time\": {\"start\": datetime(1997, 1, 1), \"end\": None}}) information.append({\"subject\": \"Crispin Blunt\", \"relation\": \"hold position\", \"object\": \"Parliamentary Under-Secretary of State for Prisons and Youth Justice\", \"time\": {\"start\": datetime(2010, 5, 1), \"end\": datetime(2012, 9, 30)}}) ```F igure 5: TimeQA prompt part 2.\nQuestion:What's the capital of Klamath County, California between Aug 1854 and Jun 1855? Question parsing: ```python query = {\"subject\": None, \"relation\": \"capital of\", \"object\": \"Klamath County, California\", \"time\": {\"start\": datetime(1854, 8, 1), \"end\": datetime( 1855 ``G enerate a background document from Wikipedia to answer the given question: Klamath County, California was a county of California from 1851 to 1874. The county seat was Crescent City from 1851 to 1854 and then Yreka from 1854 to 1874. Extract information relevant to the query: ```python information.append({\"subject\": \"Crescent City\", \"relation\": \"capital of\", \"object\": \"Klamath County, California\", \"time\": {\"start\": datetime(1851, 1, 1), \"end\": datetime(1854, 12, 31)}}) information.append({\"subject\": \"Yreka\", \"relation\": \"capital of\", \"object\": \"Klamath County, California\", \"time\": {\"start\": datetime(1854, 1, 1), \"end\": datetime(1874, 12, 31)}}) ``C ontext: Klamath County , California Klamath County was a county of California from 1851 to 1874 . During its existence , the county seat moved twice and ultimately portions of the territory it once had were carved up and added to nearby counties . The original county seat was Trinidad , on the countys southwestern coast . In 1854 the county seat was moved to Crescent City , because of its larger population . But the western portion of the county was unrepresentative of the mining interests in the eastern portion of the county , and so , in 1856 , the county seat was moved inland , to Orleans Bar , now Orleans . In 1857 , Del Norte County , including Crescent City , was split off from Klamath County . In 1874 Klamath County was finally abolished , divided between Siskiyou and Humboldt counties . 
Extract information relevant to the query: ```python information.append({\"subject\": \"Trinidad\", \"relation\": \"capital of\", \"object\": \"Klamath County, California\", \"time\": {\"start\": datetime(1851, 1, 1), \"end\": datetime(1853, 12, 31)}}) information.append({\"subject\": \"Crescent City\", \"relation\": \"capital of\", \"object\": \"Klamath County, California\", \"time\": {\"start\": datetime(1854, 1, 1), \"end\": datetime(1855, 12, 31)}}) information.append({\"subject\": \"Orleans\", \"relation\": \"capital of\", \"object\": \"Klamath County, California\", \"time\": {\"start\": datetime(1856, 1, 1), \"end\": datetime(1856, 12, 31)}}) ```
Figure 6: TimeQA prompt part 3." } ]
Question answering plays a pivotal role in human daily life because it involves our acquisition of knowledge about the world. However, due to the dynamic and ever-changing nature of real-world facts, the answer can be completely different when the time constraint in the question changes. Recently, Large Language Models (LLMs) have shown remarkable intelligence in question answering, yet our experiments reveal that the aforementioned problems still pose a significant challenge to existing LLMs. This can be attributed to the LLMs' inability to perform rigorous reasoning based on surface-level text semantics. To overcome this limitation, rather than requiring LLMs to directly answer the question, we propose a novel approach where we reframe the Question Answering task as Programming (QAaP). Concretely, by leveraging modern LLMs' superior capability in understanding both natural language and programming language, we endeavor to harness LLMs to represent diversely expressed text as well-structured code and to select the best matching answer from multiple candidates through programming. We evaluate our QAaP framework on several time-sensitive question answering datasets and achieve decent improvements of up to 14.5% over strong baselines.
Question Answering as Programming for Solving Time-Sensitive Questions
[ { "figure_caption": "2018 , Rondón joined Newcastle United on a one-year loan swap , with Dwight Gayle heading in the opposite direction. ... On 19 July 2019 , Rondón signed with Dalian Professional F.C . ( then Dalian Yifang ) of the Chinese Super League. Question Salomón Rondón played for which team in Mar 2019?", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "during June 1945. GOLDEN ANSWER: Bartholine Eufemia Leganger COT (WITHOUT CONTEXT): Arnulf Øverland was a Norwegian author, poet, and social commentator. He was married to pianist and composer Harriet Cohen from 1937 until her death in 1967. There is no answer. ✗ REACT: Thought 1: I need to search Arnulf Øverland and find out who his spouse was between May 1944 and Jun 1944. Action 1: Search[Arnulf Øverland] Observation 1: Arnulf Øverland. Ole Peter Arnulf Øverland ( 27 April 1889 -25 March 1968 ) was a Norwegian poet and artist. He is principally known for his poetry which served to inspire the Norwegian resistance movement during the German occupation of Norway during World War II . Thought 2: The paragraph does not tell who Arnulf Øverland's spouse was between May 1944 and Jun 1944, maybe I can look up the years between 1944 and 1944. First I can look up \"1944\". Action 2: Lookup[1944] Observation 2: No more results. Thought 3: There is no results for \"1944\", let's look up \"spouse\". Action 3: Lookup[spouse] Observation 3: No more results. Thought 4: The passage does not tell who Arnulf Øverland's spouse was between May 1944 and Jun 1944. Action 4: Finish[] ✗ QAAP: Question Parsing:", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Examples of correct and incorrect answers were obtained with different methods on TimeQA. Interference answers or wrong parts and time information not related to the question are highlighted in red and blue respectively. Correct answer and relevant time are in green and light blue. The text implies potential answers is in bold.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "MS(Q, X) = min (Qt e , Xt e ) -max (Qt s , Xt s ) max (Qt e , Xt e ) -min (Qt s , Xt s )", "figure_data": "(1)Finally, we determine the answer by selecting thecandidate that achieves the highest score. TheMatch step ensures that the chosen answer alignsclosely with the time constraint specified by thequestion, thus guaranteeing the accuracy and rel-evance of the answer. We also discuss applyingLLMs for accomplishing the Check and Matchfunctions in Sec. 4.3.4.4 Experiments4.1 Experimental setup4.1.1 Datasets", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "QAaP consistently demonstrates superior performance across all the datasets under a few-shot setting. While CoT performs well on the TempQuestions dataset by relying solely on the LLMs' internal knowledge, it falls short on TimeQA and TimeQuestions. This discrepancy can be attributed to the limited coverage of the LLMs' K i in memorizing the rich facts of the open world. On the other hand, despite having access to external knowledge K e , ReAct still un-Results on temporal reasoning tasks. The previous SoTA supervised models for TimeQA, TempQuestions and TimeQuestions are FiD(Chen et al., 2021), TEQUILA(Jia et al., 2018b) and EXAQT(Jia et al., 2021) respectively. 
The best scores are in bold and the second are underlined.", "figure_data": "derperforms, indicating its potential shortcomingsin locating relevant information from provided doc-uments. Impressively, QAaP outperforms othermethods on all the datasets, exhibiting substan-tial improvements. Specifically, it achieves a re-markable enhancement of 13.9%, 14.5%, 10.5%,and 8.6% in exact match scores on TimeQA-Easy,TimeQA-Hard, TempQuestions, and TimeQues-tions, respectively. The above experimental resultsclearly demonstrate the effectiveness of QAaP.The improvement might be explained by tworeasons: one is that with QAaP we are able to com-prehensively extract relevant information from the", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation study of providing enough context in CoT. Results are exact match scores. K e refers to external knowledge.", "figure_data": "", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study of checking different aspects.", "figure_data": "Check t Ki Easy Hard TimeQATempQuestions✗✗32.827.655.2✗✔45.037.059.3✔✗39.832.156.6✔✔48.239.660.3", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation study of letting LLMs choose the best answer from extracted candidates.", "figure_data": "Match CheckTimeQA Easy HardTempQuestions✗✗34.226.754.2✗✔48.136.459.3ModelTimeQA Easy HardTempQuestionsChatGPT 47.938.656.9GPT-448.238.556.9Manual48.239.660.3", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Employing LLMs for constructing Check and Match functions. ChatGPT and GPT-4 can be accessed via https://chat.openai.com.", "figure_data": "", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "for future work. Dataset statistics.", "figure_data": "DatasetSplit# of samplesTimeQAEasy2997TimeQAHard3078TempQuestions Explicit297TimeQuestionsExplicit942", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Results on temporal reasoning tasks with text-davinci-003. The best scores are in bold and the second are underlined.", "figure_data": "BackboneMethodTimeQA-Easy EM F1TimeQA-Hard EM F1TempQuestions EM F1TimeQuestions EM F1Few-shottext-davinci-003CoT17.026.215.028.039.049.633.044.8ReAct30.042.726.037.433.042.222.031.6QAaP38.050.138.049.447.051.940.048.9", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": ": David Jolly David Wilson Jolly ( born October 31 , 1972 ) is an American attorney , former lobbyist , and politician who served as the U.S . Representative for Floridas 13th congressional district , based in Pinellas County , from 2014 to 2017 . He was subsequently reelected in November 2014 , winning 75 percent of the vote , but was unseated in 2016 by former Governor Charlie Crist after court-ordered redistricting made his district more Democratic . In September 2018 , Jolly announced he had left the Republican Party .Extract information relevant to the query:There is nothing relevant to the query. Context: Early life . Jolly was born in Dunedin , Florida , the son of Judith and Lawson Jolly , a Baptist pastor . He received his B.A . degree from Emory University in 1994 and his J.D . degree from the George Mason University School of Law in 2001 .", "figure_data": "", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" } ]
Xinyu Zhu; Cheng Yang; Bei Chen; Siheng Li; Jian-Guang Lou; Yujiu Yang
[ { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Wenhu Chen; Xueguang Ma; Xinyi Wang; William W Cohen", "journal": "", "ref_id": "b1", "title": "Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks", "year": "2022" }, { "authors": "Wenhu Chen; Xinyi Wang; William Yang; Wang ", "journal": "", "ref_id": "b2", "title": "A dataset for answering time-sensitive questions", "year": "2021" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh; Kensen Shi; Sasha Tsvyashchenko; Joshua Maynez; Abhishek Rao; Parker Barnes; Yi Tay; Noam Shazeer; Emily Vinodkumar Prabhakaran; Nan Reif; Ben Du; Reiner Hutchinson; James Pope; Jacob Bradbury; Michael Austin; Guy Isard; Pengcheng Gur-Ari; Toju Yin; Anselm Duke; Sanjay Levskaya; Sunipa Ghemawat; Henryk Dev; Xavier Michalewski; Vedant Garcia; Kevin Misra; Liam Robinson; Denny Fedus; Daphne Zhou; David Ippolito; Hyeontaek Luan; Barret Lim; Alexander Zoph; Ryan Spiridonov; David Sepassi; Shivani Dohan; Mark Agrawal; Andrew M Omernick; Thanumalayan Dai; Marie Sankaranarayana Pillai; Aitor Pellat; Erica Lewkowycz; Rewon Moreira; Oleksandr Child; Katherine Polozov; Zongwei Lee; Xuezhi Zhou; Brennan Wang; Mark Saeta; Orhan Diaz; Michele Firat; Jason Catasta; Kathy Wei; Douglas Meier-Hellstern; Jeff Eck; Slav Dean; Noah Petrov; Fiedel", "journal": "", "ref_id": "b3", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Karl Cobbe; Vineet Kosaraju; Mohammad Bavarian; Jacob Hilton; Reiichiro Nakano; Christopher Hesse; John Schulman", "journal": "", "ref_id": "b4", "title": "Training verifiers to solve math word problems", "year": "2021" }, { "authors": "Luyu Gao; Zhuyun Dai; Panupong Pasupat; Anthony Chen; Arun Tejasvi Chaganty; Yicheng Fan; Y Vincent; Ni Zhao; Hongrae Lao; Da-Cheng Lee; Juan", "journal": "", "ref_id": "b5", "title": "Rarr: Researching and revising what language models say, using language models", "year": "2022" }, { "authors": "Luyu Gao; Aman Madaan; Shuyan Zhou; Uri Alon; Pengfei Liu; Yiming Yang; Jamie Callan; Graham Neubig", "journal": "", "ref_id": "b6", "title": "PAL: program-aided language models", "year": "2022" }, { "authors": "Mor Geva; Daniel Khashabi; Elad Segal; Tushar Khot; Dan Roth; Jonathan Berant", "journal": "Trans. Assoc. Comput. Linguistics", "ref_id": "b7", "title": "Did aristotle use a laptop? 
A question answering benchmark with implicit reasoning strategies", "year": "2021" }, { "authors": "Zhibin Gou; Zhihong Shao; Yeyun Gong; Yelong Shen; Yujiu Yang; Nan Duan; Weizhu Chen", "journal": "", "ref_id": "b8", "title": "Critic: Large language models can self-correct with tool-interactive critiquing", "year": "2023" }, { "authors": "Chunming He; Kai Li; Yachao Zhang; Guoxia Xu; Longxiang Tang; Yulun Zhang; Zhenhua Guo; Xiu Li", "journal": "", "ref_id": "b9", "title": "Weakly-supervised concealed object segmentation with sam-based pseudo labeling and multi-scale feature grouping", "year": "2023" }, { "authors": "Chunming He; Kai Li; Yachao Zhang; Yulun Zhang; Zhenhua Guo; Xiu Li; Martin Danelljan; Fisher Yu", "journal": "", "ref_id": "b10", "title": "Strategic preys make acute predators: Enhancing camouflaged object detectors by generating camouflaged objects", "year": "2023" }, { "authors": "Jordan Hoffmann; Sebastian Borgeaud; Arthur Mensch; Elena Buchatskaya; Trevor Cai; Eliza Rutherford; Diego De Las; Lisa Anne Casas; Johannes Hendricks; Aidan Welbl; Tom Clark; Eric Hennigan; Katie Noland; George Millican; Bogdan Van Den Driessche; Aurelia Damoc; Simon Guy; Karen Osindero; Erich Simonyan; Jack W Elsen; Oriol Rae; Laurent Vinyals; Sifre", "journal": "", "ref_id": "b11", "title": "Training compute-optimal large language models", "year": "2022" }, { "authors": "Gautier Izacard; Edouard Grave", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Leveraging passage retrieval with generative models for open domain question answering", "year": "2021" }, { "authors": "Ziwei Ji; Nayeon Lee; Rita Frieske; Tiezheng Yu; Dan Su; Yan Xu; Etsuko Ishii; Yejin Bang; Andrea Madotto; Pascale Fung", "journal": "ACM Comput. Surv", "ref_id": "b13", "title": "Survey of hallucination in natural language generation", "year": "2023" }, { "authors": "Zhen Jia; Abdalghani Abujabal; Rishiraj Saha Roy; Jannik Strötgen; Gerhard Weikum", "journal": "ACM", "ref_id": "b14", "title": "Tempquestions: A benchmark for temporal question answering", "year": "2018" }, { "authors": "Zhen Jia; Abdalghani Abujabal; Rishiraj Saha Roy; Jannik Strötgen; Gerhard Weikum", "journal": "ACM", "ref_id": "b15", "title": "TEQUILA: temporal question answering over knowledge bases", "year": "2018" }, { "authors": "Zhen Jia; Soumajit Pramanik; Rishiraj Saha Roy; Gerhard Weikum", "journal": "ACM", "ref_id": "b16", "title": "Complex temporal question answering on knowledge graphs", "year": "2021" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "", "ref_id": "b17", "title": "Large language models are zero-shot reasoners", "year": "2022" }, { "authors": "Qing Lyu; Shreya Havaldar; Adam Stein; Li Zhang; Delip Rao; Eric Wong; Marianna Apidianaki; Chris Callison-Burch", "journal": "", "ref_id": "b18", "title": "Faithful chain-ofthought reasoning", "year": "2023" }, { "authors": "Grégoire Mialon; Roberto Dessì; Maria Lomeli; Christoforos Nalmpantis; Ramakanth Pasunuru; Roberta Raileanu; Timo Baptiste Rozière; Jane Schick; Asli Dwivedi-Yu; Edouard Celikyilmaz; Yann Grave; Thomas Lecun; Scialom", "journal": "", "ref_id": "b19", "title": "Augmented language models: a survey", "year": "2023" }, { "authors": "Maxwell Nye; Anders Johan Andreassen; Guy Gur-Ari; Henryk Michalewski; Jacob Austin; David Bieber; David Dohan; Aitor Lewkowycz; Maarten Bosma; David Luan; Charles Sutton; Augustus Odena", "journal": "", "ref_id": "b20", "title": "Show your work: 
Scratchpads for intermediate computation with language models", "year": "2022" }, { "authors": "Apoorv Saxena; Soumen Chakrabarti; Partha P Talukdar", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Question answering over temporal knowledge graphs", "year": "2021" }, { "authors": "Timo Schick; Jane Dwivedi-Yu; Roberto Dessì; Roberta Raileanu; Maria Lomeli; Luke Zettlemoyer; Nicola Cancedda; Thomas Scialom", "journal": "", "ref_id": "b22", "title": "Toolformer: Language models can teach themselves to use tools", "year": "2023" }, { "authors": "Partha Pratim Talukdar; Derry Wijaya; Tom M Mitchell", "journal": "ACM", "ref_id": "b23", "title": "Acquiring temporal constraints between relations", "year": "2012" }, { "authors": "Junjie Wang; Yuxiang Zhang; Lin Zhang; Ping Yang; Xinyu Gao; Ziwei Wu; Xiaoqun Dong; Junqing He; Jianheng Zhuo; Qi Yang; Yongfeng Huang; Xiayu Li; Yanghan Wu; Junyu Lu; Xinyu Zhu; Weifeng Chen; Ting Han; Kunhao Pan; Rui Wang; Hao Wang; Xiaojun Wu; Zhongshen Zeng; Chongpei Chen; Ruyi Gan; Jiaxing Zhang", "journal": "", "ref_id": "b24", "title": "Fengshenbang 1.0: Being the foundation of chinese cognitive intelligence", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed H Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b25", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Ping Yang; Junjie Wang; Ruyi Gan; Xinyu Zhu; Lin Zhang; Ziwei Wu; Xinyu Gao; Jiaxing Zhang; Tetsuya Sakai", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Zero-shot learners for natural language understanding via a unified multiple choice perspective", "year": "2022" }, { "authors": "Zhilin Yang; Peng Qi; Saizheng Zhang; Yoshua Bengio; William W Cohen; Ruslan Salakhutdinov; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering", "year": "2018" }, { "authors": "Shunyu Yao; Jeffrey Zhao; Dian Yu; Nan Du; Izhak Shafran; Karthik Narasimhan; Yuan Cao", "journal": "", "ref_id": "b28", "title": "React: Synergizing reasoning and acting in language models", "year": "2022" }, { "authors": "Wenhao Yu; Dan Iter; Shuohang Wang; Yichong Xu; Mingxuan Ju; Soumya Sanyal; Chenguang Zhu; Michael Zeng; Meng Jiang", "journal": "", "ref_id": "b29", "title": "Generate rather than retrieve: Large language models are strong context generators", "year": "2023" }, { "authors": "Manzil Zaheer; Guru Guruganesh; Avinava Kumar; Joshua Dubey; Chris Ainslie; Santiago Alberti; Philip Ontañón; Anirudh Pham; Qifan Ravula; Li Wang; Amr Yang; Ahmed", "journal": "", "ref_id": "b30", "title": "Big bird: Transformers for longer sequences", "year": "2020" }, { "authors": "Muru Zhang; Ofir Press; William Merrill; Alisa Liu; Noah A Smith", "journal": "", "ref_id": "b31", "title": "a. 
How language model hallucinations can snowball", "year": "2023" }, { "authors": "Yuxiang Zhang; Junjie Wang; Xinyu Zhu; Tetsuya Sakai; Hayato Yamana", "journal": "", "ref_id": "b32", "title": "Ner-to-mrc: Namedentity recognition completely solving as machine reading comprehension", "year": "2023" }, { "authors": "Yuxiang Zhang; Hayato Yamana", "journal": "European Language Resources Association", "ref_id": "b33", "title": "HRCA+: advanced multiple-choice machine reading comprehension method", "year": "2022" }, { "authors": "Zhuosheng Zhang; Aston Zhang; Mu Li; Alex Smola", "journal": "", "ref_id": "b34", "title": "Automatic chain of thought prompting in large language models", "year": "2022" }, { "authors": "Chunting Zhou; Graham Neubig; Jiatao Gu; Mona T Diab; Francisco Guzmán; Luke Zettlemoyer; Marjan Ghazvininejad", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Detecting hallucinated content in conditional neural sequence generation", "year": "2021" }, { "authors": "Denny Zhou; Nathanael Scharli; Le Hou; Jason Wei; Nathan Scales; Xuezhi Wang; Dale Schuurmans; Olivier Bousquet; Quoc Le; Ed Huai Hsin; Chi ", "journal": "", "ref_id": "b36", "title": "Least-to-most prompting enables complex reasoning in large language models", "year": "2022" }, { "authors": "Xinyu Zhu; Junjie Wang; Lin Zhang; Yuxiang Zhang; Yongfeng Huang; Ruyi Gan; Jiaxing Zhang; Yujiu Yang", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Solving math word problems via cooperative reasoning induced language models", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 70.87, 558.87, 217.77, 23.97 ], "formula_id": "formula_0", "formula_text": "tracted information EI i = s i j , r i j , o i j , t i j N j=1" } ]
10.18653/v1/n19-1423
2023-05-23
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b11", "b25", "b33" ], "table_ref": [], "text": "Multilingual pre-trained models (Conneau et al., 2020a;Xue et al., 2021) have demonstrated impressive performance on natural language understanding (NLU) tasks across different languages (Hu et al., 2020;Ruder et al., 2021). These models are typically trained on large amounts of unlabeled data in hundreds of languages. Recent large language models (Brown et al., 2020;Chowdhery et al., 2022) display surprising multilingual capabilities despite being pre-trained predominantly on English data. However, all of these models share a key limitation: representations of all languages compete for the model's limited capacity. As a result, models perform poorly with an increasing number of pre-training languages and on languages with less pre-training data. This is also known as the \"curse of multilinguality\" (Conneau et al., 2020a). Natural language generation (NLG) tasks present another challenge for current multilingual models, which may overfit to the training languages and partially forget their generation ability in the target language (Vu et al., 2022), generating text with the correct meaning in the wrong language. We refer to this as the \"source language hallucination problem\".\nTo address these two limitations, we propose the modular multilingual T5 (mmT5, Figure 1), the first modular multilingual generative model. During pre-training, mmT5 allocates a small amount of language-specific parameters to increase capacity for multilingual modeling. At fine-tuning time, we freeze the language-specific modules while tuning the shared parameters, allowing direct adaptation to a target language by swapping to the corresponding language-specific module.\nHowever, we observe an additional challenge for mmT5: the fine-tuned shared representations may drift away from the frozen modular representations in the decoder. The modular model is thus susceptible to generating text in the incorrect language, similar to its non-modular counterparts. To ameliorate this, we propose to freeze a subset of shared decoder parameters, which shows large improvements in zero-shot cross-lingual generation for modular generative models.\nIn general, we find that mmT5 is an effective model that overcomes the two limitations of multilingual sequence-to-sequence models: 1) mmT5 alleviates the curse of multilinguality by adding additional model capacity to different languages during pre-training. It outperforms both standard baselines as well as mT5 (Xue et al., 2021) at the same parameter sizes on a representative set of multilingual NLU and NLG tasks; 2) mmT5 resolves the source language hallucination problem with impressive ability on zero-shot cross-lingual text generation. Our analysis ( §6.4) shows that mT5 only generates text in the target language 7% of the time for a zero-shot multilingual summarization task, while mmT5 generates text in the correct language for 99% of examples." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b21", "b22", "b9", "b23", "b28", "b23", "b31", "b32", "b18", "b30", "b27", "b5", "b1", "b24", "b26", "b3", "b6", "b20" ], "table_ref": [], "text": "Modular language models Much work has focused on post-hoc modularity of pre-trained multilingual models, i.e., modular representations are added to existing dense models. The most commonly used modules are known as adapters (Rebuffi et al., 2017(Rebuffi et al., , 2018;;Houlsby et al., 2019). 
They enable specialization to new data settings (Chen et al., 2019;Rücklé et al., 2020), combination of new and existing knowledge (Stickland and Murray, 2019;Wang et al., 2021a;Pfeiffer et al., 2021a;Lauscher et al., 2020a;Mahabadi et al., 2021;Poth et al., 2021), and adaptation to new cross-lingual (Pfeiffer et al., 2020(Pfeiffer et al., , 2021c;;Üstün et al., 2020;Vidoni et al., 2020;Ansell et al., 2021b,a;Wang et al., 2021b) and NMT scenarios (Bapna and Firat, 2019;Philip et al., 2020;Chronopoulou et al., 2020;Le et al., 2021;Üstün et al., 2021;Stickland et al., 2021;Garcia et al., 2021;Dua et al., 2022).\nOur approach, in contrast, uses modularity a priori, i.e., modularity is integrated into the module architecture as an inductive bias. Such modularity is similar to parameter sharing strategies commonly defined in multi-task learning (Ruder, 2017) as well as to mixture-of-experts approaches (MoE; Shazeer et al., 2017), which have been used to scale mod-els to trillion parameters (Fedus et al., 2021) and for domain-specific pre-training of LMs (Gururangan et al., 2021). The most related work to ours is X-Mod (Pfeiffer et al., 2022), which pre-trains an encoder-only BERT-style model in a modular fashion. Their model, however, cannot be used for natural language generation and underperforms our model on NLU tasks (see Section 4)." }, { "figure_ref": [], "heading": "Limitations of multilingual language models", "publication_ref": [ "b11", "b23", "b33" ], "table_ref": [], "text": "State-of-the-art multilingual LMs are pre-trained on large amounts of multilingual data in around 100 languages. Prior work has demonstrated, however, that models' performance deteriorates with increasing language coverage given the same fixed capacity, known as the curse of multilinguality (Conneau et al., 2020b). Prior studies also found that models perform poorly on languages that are underrepresented in pre-training (Wu and Dredze, 2020;Hu et al., 2020;Lauscher et al., 2020b;Artetxe et al., 2020;Pfeiffer et al., 2020Pfeiffer et al., , 2021c;;Chau et al., 2020;Ponti et al., 2020). For natural language generation, multilingual models have been observed to overfit to the source language and fail to generate text consistently in the correct target language (Vu et al., 2022)." }, { "figure_ref": [], "heading": "mmT5", "publication_ref": [], "table_ref": [], "text": "Standard multilingual models update the same model parameters for hundreds of languages during pre-training, resulting in the curse of multilinguality where different languages compete for the limited model capacity (Conneau et al., 2020a). We propose mmT5, the first modular sequenceto-sequence multilingual model that allocates language specific modules during pre-training. In this section, we discuss the architecture of mmT5, its training and fine-tuning methods, and our strategies to resolve the source language hallucination problem with mmT5." }, { "figure_ref": [ "fig_0" ], "heading": "Modeling", "publication_ref": [ "b8", "b20" ], "table_ref": [], "text": "First, we describe the overall architecture of mmT5. We augment a standard Transformer encoderdecoder model with language-specific modules at every transformer layer (see Figure 1). 
The selection of modules (i.e., fixed routing; Pfeiffer et al., 2023) is performed via the language ID provided with each example1 ; all tokens of an example are passed through the same language-specific module.\nWe use bottleneck adapters as the languagespecific module because they perform better at smaller model sizes compared to other modular methods such as continuous prompts (Karimi Mahabadi et al., 2021;He et al., 2022). We place a module after the feed-forward component in each layer. In contrast to Pfeiffer et al. (2022) that only experimented with encoder-only models, we focus on a more general sequence-to-sequence model following the T5 architecture (Raffel et al., 2020).\nWe add N × L modular components to the T5 architecture where L is the number of layers of the model and N corresponds to the number of languages which the model is pre-trained on. The transformer weights are shared across languages while the modular component provides the model with language-specific capacity. During a forward pass, each input is first passed through the shared transformer weights and then routed through the corresponding language-specific module based on the language of the input. We follow this procedure for all transformer layers until the representations are passed to the shared prediction head." }, { "figure_ref": [], "heading": "Modular Pre-training, Fine-tuning, and Inference", "publication_ref": [], "table_ref": [], "text": "We pre-train both language-specific modules and shared parameters jointly. During fine-tuning, we freeze all language-specific modules and only update the shared parameters. This paradigm allows us to more effectively adapt the fine-tuned model to any of the languages included in the pre-training data by simply switching to the corresponding language-specific module. At inference, the module corresponding to the target language is used together with the fine-tuned shared parameters." }, { "figure_ref": [], "heading": "Overcoming Modular Representation Drift", "publication_ref": [ "b33" ], "table_ref": [], "text": "When fine-tuning the modular model for transfer settings in §5, we observe a scenario of modular representation drift: we find that the shared parameters that are updated during task-specific training drift away from the modular parameters and become thus less compatible with modules that are used for inference. In practice, this leads to a loss of compositional generalization where the modular model generates text in the incorrect language, similar to its non-modular counterparts (Vu et al., 2022); see §6.4. In order to ameliorate this drift, we propose to freeze parts of the model, with a focus on the decoder. We find that freezing the decoder feedforward parameters provides the biggest benefit (see §6.1 for the detailed ablation) and almost completely eliminates the source language hallucination problem in modular models.2 " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "Pre-training Details We pre-train mmT5 on data from 100 languages in mC4 (Xue et al., 2021) following the general pre-training setup of mT5 (Xue et al., 2021), if not specified otherwise. We pretrain mmT5 at two model sizes: small (300M parameters), and base (580M parameters). We train model variants with an input sequence length of 1024 and a target sequence length of 256 for 1M update steps with a batch size of 1024. The bottleneck size of each module is half of the hidden dimension of the transformer model. 
For instance, as the base variant has a hidden dimension of 768, we set the bottleneck size to 384. 3 We additionally pretrain a non-modular variant of our modular model, mT5 S , where all parameters are shared across all languages. The mT5 S variant uses exactly the same hyper-parameters and pre-training setup as mmT5. To ensure that the models are directly comparable and have exactly the same number of parameters, we add shared bottleneck layers to mT5 S in the same configuration as in mmT5." }, { "figure_ref": [], "heading": "Experimental setting", "publication_ref": [ "b7", "b4", "b11", "b0", "b20" ], "table_ref": [ "tab_0" ], "text": "We conduct experiments across datasets in zero-shot cross-lingual transfer and multilingual training scenarios. For zero-shot cross-lingual transfer, we train the model on a subset of languages (e.g., only English) and evaluate the model on held-out data of the same task in other languages. In multilingual training, we finetune the model on multiple languages of the same task, and evaluate the model on the same set of languages. As the language-specific modular components are replaced at inference time, we do not update the parameters of the modular components (i.e., we freeze the modules). We do the same for our shared model variants, in order for the number of trainable parameters to be equal for comparable scenarios. 4 For each dataset, we select the best model checkpoint based on performance on the validation set. et al., 2018) natural language inference dataset; on XL-Sum (Hasan et al., 2021) for summarization;5 and MASSIVE (FitzGerald et al., 2022) for semantic parsing. 6 We mainly fine-tune the model on English training data and evaluate on the target languages (Hu et al., 2020). For XL-Sum, we additionally evaluate in a multi-source zero-shot transfer setting where we train jointly on data in Arabic, English, Japanese and Chinese (XL-Sum ar,en,ja,zh ).\nFor multilingual training, we evaluate on semantic parsing (MASSIVE) and summarization (XL-Sum) datasets. For each dataset, we fine-tune and evaluate the model on all languages jointly.\nBaselines Our main comparison method is mT5 S , a shared model that is pre-trained with the same hyper-parameters, setup, and number of parameters as our modular model. We also compare to the published results of the mT5 encoderdecoder model (Xue et al., 2021). In addition, we compare to several encoder-only models including mBERT (Devlin et al., 2019), X-Mod (Pfeiffer et al., 2022), andXLM-R (Conneau et al., 2020b). Encoder-only models are generally smaller as they lack a decoder but cannot easily be used for generation tasks. We provide an overview of the model sizes of the baselines and our method in Table 1. Decoder Freezing Configurations To overcome the modular representation drift described in §3.3, we experiment with different configurations of freezing parts of the model when fine-tuning the model on a downstream task. We experiment with freezing the LayerNorm (LN), self-attention (Att), cross-attention (CrossAtt) and feed-forward component (FFN) in the encoder (Enc) and decoder (Dec) parts of the transformer model. We ablate freezing configurations in §6.1 and report test results of the freezing configuration that performs best on the dev set for each dataset for mmT5. For dense models, we observe no impact with freezing and report results using full fine-tuning." 
}, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Pre-training", "publication_ref": [], "table_ref": [], "text": "We first compare the language modeling perplexities of different model sizes for mmT5 and mT5 S during pre-training in Figure 2. We find that mmT5 significantly outperforms its fully shared counterpart during the early stages of pre-training and maintains the gap throughout the pre-training process. From an efficiency perspective, mmT5 only requires 282k and 220k update steps respectively for the small and base versions to achieve the same final perplexity as the mT5 S models at 1M update steps. This corresponds to a ≈ 4× efficiency boost when training a modular multilingual model compared to a fully dense one." }, { "figure_ref": [], "heading": "Fine-tuning", "publication_ref": [], "table_ref": [ "tab_1", "tab_2" ], "text": "We present our main results on the test sets for zero-shot cross-lingual transfer and multilingual training scenarios in Tables 2 and 3, respectively." }, { "figure_ref": [], "heading": "Zero-Shot", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "mmT5 outperforms both the original mT5 as well as mT5 S across all model sizes. It achieves performance similar to XLM-R at the same parameter size, despite its encoder-decoder configuration, and significantly outperforms X-Mod, the only other modular model.
Zero-shot For zero-shot cross-lingual transfer scenarios, we see large gains for generative tasks in particular. For question answering (XQuAD and TyDiQA), we observe an average relative F1 improvement of 5.5 and 6.3 for the small and base models respectively. For summarization, we see larger zero-shot gains when jointly training on more than one language. We suspect that this is due to the increase in training data and due to positive transfer during multi-source training, which modular methods are better able to harness. This is in line with previous findings that multi-source training improves cross-lingual transfer in adapter-based setups (Ansell et al., 2021c). We also see a gain of 6.1 EM points on MASSIVE. The smallest gains are achieved for the classification task XNLI. Here, mmT5 improves over the baselines only by 1-2.4 accuracy points. We hypothesize that due to the constrained formulation of the task, which only requires predicting a single token, the full multilingual generative capabilities of mmT5 are under-utilized. Overall, we see a clear trend that our modular models significantly outperform their respective dense counterparts, especially for generation tasks.
Multilingual training For multilingual training in Table 3, we also find that the modular models outperform their dense counterparts across all tasks we experiment with. Here we find the largest gains for semantic parsing (MASSIVE). For summarization (XL-Sum), we see smaller, but still consistent gains. These results indicate that modular representations are not only useful in transfer settings but that mmT5 can also leverage labeled data in the target language to deliver superior performance compared to the standard non-modular models.
6 Analysis and Ablations" }, { "figure_ref": [], "heading": "Impact of Freezing Configuration", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "We investigate the impact of the freezing configuration on the performance of the model.
In Table 5, we compare the best-performing freezing configurations with a non-frozen baseline for mmT5 base (we show the results of all freezing configurations in Appendix A.1). We observe significant improvements when freezing the feed-forward layer of the decoder during fine-tuning, particularly in zero-shot scenarios. For multilingual training, freezing of the decoder has less effect on the performance. We also find that freezing parts of the decoder has no effect on the dense mT5 S model across all tasks (see Appendix A.1)." }, { "figure_ref": [], "heading": "Impact of Bottleneck Size", "publication_ref": [], "table_ref": [], "text": "We experiment with different bottleneck sizes of the modular components to understand the impact of providing each language with more capacity. We report results for XQuAD and XNLI in Figure 3 using mmT5 base and bottleneck sizes of 96, 192, 384, and 768. We find that across these tasks the bottleneck size has little effect on the downstream task performance, with only 0.5-2 absolute points difference between the larger and the smaller bottleneck sizes. This suggests that it is sufficient to provide the model with only a small amount of language-specific parameters in order to learn idiosyncratic information and mitigate catastrophic interference, and highlights the parameter-efficiency of modular models." }, { "figure_ref": [ "fig_3" ], "heading": "Impact of Model Size", "publication_ref": [], "table_ref": [], "text": "In Figure 4, we plot the performance difference of mmT5 and mT5 S for the small and base variants. We find that the modular model outperforms the dense variant across model sizes with a similar gap, indicating that the positive effect of modularity may not diminish at scale." }, { "figure_ref": [ "fig_4" ], "heading": "Source Language Hallucination", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "We perform an analysis of the generated text on the XL-Sum dev sets for mT5 S and mmT5 models trained in a zero-shot setting on XL-Sum ar,en,ja,zh using full fine-tuning and a decoder freezing configuration. We automatically detect the language of the generated text using the Language Detection feature of the Google Cloud Translation API (Caswell et al., 2020). We show the results in Figure 6. We find that most models tend to generate text in one of the source languages (in this setting: Arabic, English, Japanese, and Chinese). This holds true also for mmT5 when we fine-tune the decoder. However, when freezing the decoder we observe a dramatic improvement in the target language generation rate from 1% to 99% of examples for mmT5, essentially solving the issue of source language hallucination in cross-lingual transfer scenarios. This improvement in language consistency also helps explain the significant improvement of the modular model over its dense counterparts on natural language generation tasks. More granular results are given in Appendix Table 10.
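The language-identification step behind this analysis can be approximated with any off-the-shelf language identifier. The sketch below uses the open-source langdetect package as a stand-in for the Cloud API and assumes the generated summaries are available as (target language, text) pairs; both choices are ours and not part of the original setup.

```python
from collections import Counter
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0  # make langdetect deterministic across runs

def language_consistency(generations):
    """Fraction of generations identified as the target language.

    `generations` is an iterable of (target_lang, generated_text) pairs,
    e.g. [("sw", "..."), ("th", "...")]. Also returns a Counter of the
    languages that were actually produced.
    """
    produced, correct, total = Counter(), 0, 0
    for target_lang, text in generations:
        try:
            pred = detect(text)      # e.g. 'en', 'ja', 'zh-cn'
        except Exception:            # empty or unidentifiable output
            pred = "unk"
        produced[pred] += 1
        correct += int(pred.startswith(target_lang))
        total += 1
    return correct / max(total, 1), produced
```

Aggregating the predicted languages per target language in this way reproduces the kind of breakdown shown in Figure 6 and Appendix Table 10.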
In addition, we manually analyze outputs of mT5 S and mmT5 on XQuAD and find similar issues of source language hallucinations. We show examples in Figure 5. Although the task is extractive QA, i.e., the answer is a substring of the input, mT5 S tends to translate subwords into English (the source language). This does not happen to mmT5 when freezing parts of the decoder, partially explaining the large improvements of mmT5 over mT5 S on TyDiQA in Table 2." }, { "figure_ref": [ "fig_5" ], "heading": "Module Re-Use for Unseen Languages", "publication_ref": [ "b13" ], "table_ref": [ "tab_1" ], "text": "In the previous sections we have evaluated the cross-lingual performance of mmT5 on languages seen during pre-training. However, with more than 7000 languages spoken in the world (Joshi et al., 2020), mmT5 covers less than 1% of them. While extending the model to unseen languages is out of scope for this work, we evaluate the potential reusability of existing language modules for truly unseen languages with a case study on Tagalog. We utilize the base mmT5 model fine-tuned on the English MASSIVE training dataset (see Table 2). As a Tagalog language module does not exist within mmT5, we test all existing other language modules when evaluating on the Tagalog test set. In Figure 7, we report the Exact Match (EM) zero-shot accuracies for all languages. The module performing best corresponds to Javanese, which is the most closely related language to Tagalog as both belong to the Malayo-Polynesian subgroup of the Austronesian language family. This finding demonstrates the effectiveness of modular models: modular components encapsulate interpretable concepts that can be re-used in unseen scenarios. Additionally, they can be further fine-tuned or adapted to the target domain if training data is available." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We have proposed mmT5, a modular multilingual encoder-decoder model. During multilingual pre-training, the majority of parameters of mmT5 are shared across languages, but each language is provided with a small number of parameters only accessible to the respective language. We demonstrated that integrating modularity as an architectural inductive bias significantly improves training efficiency, where the same perplexity as an equivalent fully dense model is achieved at a quarter of the update steps. mmT5 considerably outperforms comparable models on a large number of tasks including Question Answering, Semantic Parsing, Summarization and Classification in both zero-shot as well as multilingual scenarios.
Finally, we show that by freezing parts of the decoder when fine-tuning mmT5 on a target task in a source language, the model consistently generates text in the target language. Consequently, modularity arguably solves source language hallucinations in cross-lingual transfer scenarios." }, { "figure_ref": [], "heading": "Future Work", "publication_ref": [ "b2", "b15", "b18", "b30", "b27", "b5", "b1", "b20", "b23", "b10" ], "table_ref": [], "text": "In this paper, we explored the use of modularity for multilingual language models.
We showed that modularity significantly improves cross-lingual performance on a number of generative tasks by mitigating hallucinations in the source language. However, there are still many avenues for future work. First, we did not consider placing the modules in different parts of the model. We only experimented with placing bottleneck layers after the feed-forward component of each transformer layer. Previous work has demonstrated that depending on the modality, different placements perform better (Pfeiffer et al., 2021b;Eichenberg et al., 2022).\nSecond, we only experimented with extending the vanilla transformer architecture with modular components. Future work might consider modularizing different parts of the transformer, such as the attention-components or entire feed-forward layers like in Kudugunta et al. (2021).\nThird, we performed fixed routing under the assumption that the language ID is easy to obtain. We chose this path, as learning-to-route has many difficulties such as training instabilities (Pfeiffer et al., 2023). However, this architecture design limits the sharing of information (e.g. domains) across languages. Consequently, a combination of fixed routing and learned routing would allow the model to learn how to share information across subsets of languages.\nFourth, we did not try using mmT5 for machine translation. Using a modular design for this type of task setup is quite natural, as modules from the encoder and decoder can be easily replaced with the source and target language components, respectively. The effectiveness of modular sequence-tosequence models for NMT has been investigated previously (Bapna and Firat, 2019;Philip et al., 2020;Chronopoulou et al., 2020;Le et al., 2021;Üstün et al., 2021;Stickland et al., 2021;Garcia et al., 2021;Dua et al., 2022).\nFinally, we did not consider extending the model to languages beyond those we pre-trained on. While our preliminary results (see § 6.5) suggest that there are benefits of reusing related language modules to learn unseen languages, this requires further experimentation. However, previous works have demonstrated that modular (Pfeiffer et al., 2022) as well as dense models can be adapted to new languages and scripts (Pfeiffer et al., 2020(Pfeiffer et al., , 2021c)). Alternatively, future work might consider using post-hoc adaptation techniques, such as LoRA (Hu et al., 2022), to adapt modules to new languages. " }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Freezing combinations", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "We show results with different freezing combinations in Table 5. We find that freezing the FFN component of the Decoder results in the biggest performance gains." }, { "figure_ref": [], "heading": "A.2 Language-ID prediction on Cross-lingual Summarization", "publication_ref": [], "table_ref": [ "tab_0", "tab_14", "tab_7", "tab_9", "tab_8" ], "text": "We report the languages predicted by the Language Detection model from the Google Cloud Translation API 9 (Caswell et al., 2020) for the XL-Sum ar,en,ja,zh task in Table 10. We find that mmT5 achieves near perfect performance for all target languages when freezing parts of the decoder (s7)-99% of the text is generated in the correct target language-significantly outperforming all other model variants. 
Interestingly, mmT5 hallucinates in the source language when the decoder is finetuned (s1), resulting in a drop down to only 2% in the correct target language. mT5 S also benefits slightly from freezing parts of the decoder, with an improvement from 7% to 18% target language generation, however, this is no where close to the performance of mmT5.\nA.3 Language-level Results XNLI. We report XNLI validation results in Table 11 and test results in Table 6.\nXQuAD. We report XQuAD validation results in Table 9 and test results in Table 7." }, { "figure_ref": [], "heading": "MASSIVE. We report MASSIVE validation results in", "publication_ref": [], "table_ref": [ "tab_25", "tab_27", "tab_28", "tab_19", "tab_18", "tab_16", "tab_24", "tab_15" ], "text": "Table 17 and test results in Table 8.\nTyDiQA. We report TyDiQA validation results in Table 23.\nMultilingual XL-Sum We report XL-Sum validation results in Tables 18,19,20, 21, and 22 and test results in Table 15.\nZeroshot XL-Sum en We report XL-Sum validation results in Table 14 and test results in Table 13.\nZeroshot XL-Sum ar,en,ja,zh We report XL-Sum validation results in Table 16 and test results in Table 12.\n9 https://cloud.google.com/translate/ docs/basic/detecting-language" }, { "figure_ref": [], "heading": "A.4 Language-level Pre-training Perplexities", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "We report the language-level perplexities of the different model variants and sizes in Figures 8,9,10,11,12,13, 1.0 0.0 0.0 1.0 0.0 0.00 0.01 0.04 0.50 0.43 0.00 0.05 0.12 0.13 0.65 0.00 0.12 0.24 0.04 0.55 0.00 0.24 0.03 0.30 0.27 mmT5 s7 1.0 0.0 0.0 1.0 0.0 1.00 0.00 0.00 0.00 0.00 0.99 0.00 0.00 0.00 0.00 0.96 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 mT5 S s1 1.0 0.0 0.0 1.0 0.0 0.01 0.00 0.00 0.99 0.00 0. s7 0.99 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.0 0.0 1.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 mT5 S s1 0.00 0.15 0.00 0.67 0.14 0.05 0.24 0.04 0.48 0.16 0.02 0.96 0.00 0.0 0.0 0.02 0.00 0.97 0.00 0.00 0.07 0.53 0.28 0.01 0.04 mT5 S s7 0.01 0.02 0.03 0.82 0.09 0.18 0.12 0.19 0.40 0.03 0.12 0.82 0.02 0.0 0.0 0.07 0.00 0.91 0.00 0.00 0.27 0.05 0.54 0.00 0.00 s7 1.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.95 0.000 0.010 0.000 0.000 1.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 mT5 S s1 0.02 0.48 0.01 0.40 0.02 0.04 0.65 0.25 0.00 0.00 0.01 0.375 0.525 0.005 0.005 0.01 0.84 0.13 0.00 0.00 0.07 0.09 0.02 0.68 0.07 mT5 S s7 0.07 0.63 0.10 0.11 0.00 0.06 0.71 0.15 0.00 0.00 0.02 0.260 0.545 0.000 0.000 0.02 0.87 0.09 0.00 0.00 0. s7 1.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 mT5 s7 0.99 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 1.0 0.0 0.0 0.0 1.0 0.99 0.00 0.00 0.00 0.00 mT5 S s1 0.00 0.36 0.01 0.45 0.11 0.08 0.24 0.49 0.00 0.16 0.22 0.07 0.66 0.00 0.01 1.0 0.0 0.0 0.0 1.0 0.07 0.29 0.29 0.24 0.07 mT5 S s7 0.02 0.08 0.19 0.46 0.03 0.31 0.04 0.61 0.00 0.00 0.44 0.00 0.50 0.00 0.00 1.0 0.0 0.0 0.0 1.0 0.18 0.19 0.31 0.19 0.03\ntgt\nTable 10: Language prediction results on the XL-Sum ar,en,ja,zh task setup. The generated summarization text is passed into the language prediction model. We report the percentage of text which the model predicts to be in the correct target language, as well as each of the 4 source languages. It is possible that another language was predicted, the numbers therefore do not need to sum up to 1.0. 
.4 / 28.5 / 32.4 37.9 / 29.5 / 32.0 32.5 / 26.5 / 31.2 39.4 / 30.7 / 35.6 36.9 / 29.6 / 34.9 38.5 / 29.3 / 33.8 35.9 / 28.3 / 32.9 s14 36.7 / 28.7 / 33.3 38.3 / 29.5 / 33.1 32.8 / 26.7 / 31.5 40.4 / 31.4 / 36.6 37.8 / 30.2 / 35.7 41.3 / 31.2 / 34.9 36.3 / 28.6 35.4 / 27.7 / 30.6 36.8 / 28.6 / 30.5 31.8 / 25.9 / 29.3 37.2 / 28.9 / 32.6 36.1 / 28.8 / 32.9 35.6 / 27.4 / 32.4 34.3 / 27.0 / 30.9 s14 36.7 / 28.5 / 31.0 37.6 / 28.9 / 30.8 32.3 / 26.0 / 29.5 38.7 / 30.1 / 33.6 37.4 / 29.8 / 33.6 38.4 / 28.9 / 32.5 .5 / 30.7 30.8 / 19.5 66.8 / 55.7 51.2 / 35.5 50.8 / 35.9 28.3 / 20.3 55.6 / 38.1 43.1 / 29.5 19.9 / 14.9 43.7 / 31.1 s14 50.4 / 33.7 26.6 / 15.9 69.3 / 58.6 55.5 / 37.2 56.6 / 39.5 28.3 / 18.8 59.9 / 40.8 36.2 / 23.0 17.3 / 12.4 \n/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /\n/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /\n/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /\n/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /\n/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /\n/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /\n/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /\n/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Andrea Gesmundo, Marc'Aurelio Ranzato and Srini Narayanan for helpful feedback on a draft of this paper." } ]
Multilingual sequence-to-sequence models perform poorly with increased language coverage and fail to consistently generate text in the correct target language in few-shot settings. To address these challenges, we propose mmT5, a modular multilingual sequence-to-sequence model. mmT5 utilizes language-specific modules during pre-training, which disentangle language-specific information from languageagnostic information. We identify representation drift during fine-tuning as a key limitation of modular generative models and develop strategies that enable effective zero-shot transfer. Our model outperforms mT5 at the same parameter sizes by a large margin on representative natural language understanding and generation tasks in 40+ languages. Compared to mT5, mmT5 raises the rate of generating text in the correct language under zero-shot settings from 7% to 99%, thereby greatly alleviating the source language hallucination problem.
mmT5: Modular Multilingual Pre-Training Solves Source Language Hallucinations
[ { "figure_caption": "Figure 1 :1Figure 1: Architecture of mmT5. Language-specific bottleneck modules (dark blue and green components) are placed after the feed-forward component within each layer of the Transformer encoder-decoder model.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Perplexity (lower is better) of different model sizes during pre-training for mmT5 and mT5 S , averaged across languages.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Comparison of bottleneck sizes of base mmT5 for XQuAD (F1) and XNLI (Accuracy).", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Comparison of model sizes for XQuAD (F1) and XNLI (Accuracy).", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Detected languages of generated text of the development set of XL-Sum ar,en,ja,zh . All models have base size. *ft indicates that the decoder was finetuned, *froz indicates that the decoder was partially frozen.High numbers are desireable for the first set of plots (\"target language\"), low numbers are desireable for the remaining four sets of plots (\"ar\", \"en\", \"ja\", \"zh\"). We only include zero-shot cross-lingual results, therefore exclude the four source languages; all models achieve 100% accuracy for those. For more granular results see Appendix Table10.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Average of the top-5 zero-shot EM accuracies on the Tagalog MASSIVE development set by varying the input language ID. Tagalog was not seen during mmT5 pre-training.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Number of shared and modular parameters of baselines and our models.", "figure_data": "ModelVariant Shared Params.Mod. Params. per Lang.mBERTBase178M-X-ModBase270M7MXLM-RBase Large270M 550M--mT5Small Base300M 580M--mT5 SSmall Base300M + 4M 580M + 14M--mmT5Small Base300M 580M4M 14M", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "ar,en,ja,zh MASSIVE F1 / EM F1 / EM Acc RG 1 / RG 2 / RG L RG 1 / RG 2 / RG L EM Zero-shot cross-lingual transfer test results averaged over all languages. 
mBERT and XLM-R scores are from (Hu et al., 2020); XLM-R Base XNLI results are from (Conneau et al., 2020b); mT5 results are from (Xue et al., 2021); X-Mod results are from (Pfeiffer et al., 2022) ( * average is only on a subset of languages).", "figure_data": "EncoderbasemBERT 64.5 / 49.4 X-Mod 72.8 * / -XLM-R 70.6 / 55.559.7 / 43.9 -/ --/ -65.4 73.5 * 76.2---------large XLM-R 76.6 / 60.865.1 / 45.079.2---Encoder-decodersmall basemT5 mT5 S mmT5 mT5 mT5 S mmT558.1 / 42.5 61.9 / 46.2 66.5 / 50.4 67.0 / 49.0 68.7 / 51.5 76.3 / 60.335.2 / 23.2 44.5 / 31.1 50.8 / 36.3 59.1 / 42.4 64.0 / 47.8 69.0 / 53.267.5 63.2 68.5 75.4 75.1 77.8-15.5 / 2.2 / 14.2 16.7 / 4.6 / 14.4 -16.2 / 2.8 / 4.5 19.6 / 6.1 / 16.4-17.0 / 4.7 / 15.1 29.4 / 12.6 / 23.3 -18.6 / 6.0 / 16.7 34.5 / 16.1 / 26.8-21.7 27.7 34.7 39.9 46.0MultilingualXL-SumMASSIVERG 1 / RG 2 / RG LEMEnc-decsmall basemT5 S 36.4 / 17.9 / 28.5 mmT5 36.7 / 18.1 / 28.7 mT5 S 39.1 / 20.3 / 30.5 mmT5 41.6 / 22.8 / 33.060.7 65.6 64.6 66.7", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Multilingual training test results averaged over all languages.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results of different freezing configurations for mmT5 base on different tasks. Dev results for most. We always fine-tune Enc Att , and Enc F F N and always freeze Enc M od and Dec M od . ✗ indicates that this component is frozen during task-level fine-tuning.", "figure_data": "Zero-ShotMultilingualXQuADXNLI MASSIVEXL-Sumdev (en)testdevdevdev", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "XQuAD examples where mT5 S generates tokens with the correct meaning but in the wrong language. For the same examples, mmT5 is able to generate tokens in the correct language when freezing parts of the decoder.", "figure_data": "Südkalifornien besteht aus [. . . ] einer interna-[. . . ] Analysen [. . . ] waren irreführend, da estionalen Metropolregion und Großstadtgebi-mehrere Jahre dauert, bis die Auswirkungeneten. Die Region ist die Heimat von zwei er-zu Veränderungen des Wirtschaftswachstumsweiterten Metropolregionen mit jeweils mehrführen. [. . . ]als fünf Millionen Einwohnern. [. . .]Question: Wie lange dauert es, bis sichQuestion: Wie viele erweiterte Metropolre-die Auswirkungen als Veränderungen desgionen gibt es ?wirtschaftlichen Wachstums manifestieren?mmT5: zweimmT5: mehrere JahremT5 S : twomT5 S : more ere Jahre0.0 0.2 0.4 0.6 Figure 5: target language ar Percentage 0.8 1.0en Detected Languagejazh model mmt5 ft mmt5 froz mt5 ft mt5 froz", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "XinyiWang, Yulia Tsvetkov, Sebastian Ruder, and Graham Neubig. 2021b. Efficient test time adapter ensembling for low-resource language varieties. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 730-737, Punta Cana, Dominican Republic. Association for Computational Linguistics. Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 120-130, Online. Association for Computational Linguistics. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-totext transformer. 
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, Online. Association for Computational Linguistics.", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "14. Results with different freezing combinations of mmT5 base on different tasks. ✗ indicates that the component is frozen in the respective configuration Dev results for most. We always finetune the attention in the encoder (Enc Att ), the feed forward layer in the encoder (Enc F F N ), and always freeze the modules in the encoder (Enc M od ) and decoder (Dec M od ). We find that configurations s1-s4 strongly underperform the respective other configurations (s5-s14), suggesting that freezing the feed forward layer of the decoder is essential for good cross-lingual transfer performance.", "figure_data": "Zero-ShotMulti-SourceXQuADXNLIXL-Sum enXL-Sum ar,en,zh,ja MASSIVEXL-SumMASSIVEdev (en)testdevdevdevdevdevdevcfg Emb EncLN DecLN DecAtt DecCrossAtt DecFFNf1 / emf1 / emaccRg1 / Rg2 / RgL Rg1 / Rg2 / RgLEMRg1 / Rg2 / RgLEMs190.7 / 83.6 66.9 / 49.3 75.5 15.4 / 2.0 / 14.018.7 / 6.1 / 16.832.141.2 / 22.4 / 32.4s2✗✗90.7 / 83.4 65.6 / 48.0 75.0s3✗90.7 / 83.5 61.0 / 43.4 76.9s4✗✗✗90.9 / 83.6 64.6 / 47.1 77.5s5✗✗✗✗✗91.2 / 84.1 74.3 / 57.5 73.8s6✗✗✗✗91.9 / 85.1 75.8 / 59.5 75.643.241.2 / 22.4 / 32.6s7✗✗✗91.8 / 85.1 75.8 / 59.8 77.3 19.7 / 6.2 / 16.4 34.7 / 16.2 / 26.941.041.9 / 23.1 / 33.2s8✗✗✗✗91.2 / 84.5 75.0 / 59.3 73.3s9✗✗✗91.2 / 84.5 74.6 / 58.8 75.6s10✗✗✗✗✗92.1 / 85.5 76.3 / 60.3 76.145.440.8 / 22.1 / 32.366.78s11✗✗✗90.9 / 84.0 74.8 / 58.8 73.1s12✗✗✗91.2 / 84.5 75.0 / 59.3 75.6s13✗✗91.3 / 84.5 74.9 / 58.9 76.3s14✗✗✗✗91.8 / 85.1 75.0 / 59.2 77.739.941.8 / 23.0 / 33.1modelarbgdeelenesfrhiruswthtrurvizh avgsmall mmT5 65.3 71.9 70.2 70.5 81.8 74.7 73.4 62.7 70.1 63.8 67.4 64.2 59.1 66.7 66.3 68.5small mT5 S 63.8 69.1 67.6 68.1 80.0 71.6 69.3 60.4 68.7 53.2 64.1 59.1 58.4 63.9 64.4 65.5base mmT5 75.0 81.2 80.3 79.5 86.9 82.6 80.9 73.4 78.3 74.0 74.9 76.3 69.9 76.7 77.2 77.8base mT5 S 72.7 78.7 77.6 77.3 85.5 80.7 78.5 70.7 77.7 66.6 73.0 72.8 67.7 73.8 73.5 75.1", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "XNLI test results for all language. We select the checkpoint performing best on the validation set.", "figure_data": "modelardeeleneshiruthtrvizhavgF1 / EMF1 / EMF1 / EMF1 / EMF1 / EMF1 / EMF1 / EMF1 / EMF1 / EMF1 / EMF1 / EMF1 / EMsmallmmT5 60.6 / 44.6 71.0 / 53.6 64.9 / 47.1 82.5 / 70.3 74.1 / 56.1 59.2 / 43.9 69.5 / 50.8 58.9 / 47.0 62.4 / 43.4 64.3 / 45.4 64.2 / 52.4 66.5 / 50.4 mT5 S 53.5 / 37.1 67.2 / 48.7 59.5 / 41.1 81.7 / 69.7 69.8 / 53.7 54.7 / 40.8 62.8 / 44.5 50.3 / 37.9 57.2 / 39.3 59.7 / 40.9 64.7 / 54.0 61.9 / 46.2basemmT5 74.2 / 57.6 79.5 / 63.0 77.6 / 59.9 86.7 / 74.5 79.2 / 61.3 72.4 / 56.1 77.6 / 58.7 69.3 / 59.3 74.5 / 55.9 74.2 / 54.2 74.4 / 63.1 76.3 / 60.3 mT5 S 63.3 / 43.0 75.9 / 57.2 63.3 / 40.3 84.3 / 71.9 76.1 / 58.7 62.8 / 47.1 64.0 / 42.6 59.6 / 48.5 70.1 / 51.7 70.4 / 50.4 66.0 / 55.5 68.7 / 51.5", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "XQuAD test set results for all languages. 
We select the checkpoint performing best on the English development set.", "figure_data": "LanguageExact Match (EM)af_ZA57.5am_ET29.6ar_SA38.3az_AZ41.3bn_BD37.2cy_GB35.5da_DK60.5de_DE55.3el_GR49.6en_US es_ES fa_IR fi_FI fr_FR hi_IN hu_HU hy_AM id_ID is_IS it_IT ja_JP jv_ID ka_GE km_KH kn_IN ko_KR lv_LV ml_IN mn_MN ms_MY my_MM nb_NO nl_NL pl_PL pt_PT72.7 53.8 48.2 54.4 51.4 44.1 47.4 38.6 57.1 42.8 51.7 42.5 38.1 38.9 40.7 34.4 39.1 50.3 36.0 34.2 52.5 33.8 58.2 57.5 52.9 56.0Small BasemmT5 mT5 S mmT5en F1 / EM 85.6 / 77.6 87.2 / 79.4 87.4 / 79.4 s10 87.3 / 79.6 cfg s1 s6 s7 s14 87.4 / 79.2 s1 84.9 / 76.5 s6 85.8 / 77.6 s7 85.9 / 77.7 s10 86.1 / 77.9 s14 85.9 / 77.6 s1 90.7 / 83.6 s2 90.7 / 83.4 s3 90.7 / 83.5 s4 90.9 / 83.6 s5 91.2 / 84.1 s6 91.9 / 85.1 s7 91.8 / 85.1 s8 91.2 / 84.5 s9 91.2 / 84.5 s10 92.1 / 85.5 s11 90.9 / 84.0 s12 91.2 / 84.5 s13 91.3 / 84.5 s14 91.8 / 85.1ro_RO55.4s189.9 / 82.5ru_RU sl_SL sq_AL sv_SE50.9 50.3 48.3 58.9. mT5 Ss6 s7 s10 90.2 / 82.8 90.2 / 83.0 90.5 / 83.6 s14 90.4 / 83.5sw_KE43.0ta_IN37.1te_IN35.4th_TH50.1tr_TR47.9ur_PK39.6vi_VN44.9zh_CN30.0zh_TW28.2Average46.0Table 8: MASSIVE Exact Match (EM) test accuraciesof the best model (s10 modular) for all languages", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "XQuAD validation results for English across the different freezing configurations.", "figure_data": "tgt langamarazbncypred lang cfgamarenjazhararenjazhazarenjazhbnarenjazhcyarenjazhmmT5s1 0.00 0.50 0.07 0.02 0.361.01.00.00.00.0 0.000.180.290.060.29 0.000.4 0.10 0.05 0.41 0.06 0.05 0.840.0 0.01mmT5s7 0.99 0.00 0.00 0.00 0.001.01.00.00.00.0 1.000.000.000.000.00 1.000.0 0.00 0.00 0.00 0.91 0.00 0.080.0 0.00mT5 Ss1 0.03 0.96 0.00 0.00 0.011.01.00.00.00.0 0.080.120.310.270.09 0.020.3 0.02 0.43 0.19 0.13 0.10 0.760.0 0.00mT5 Ss7 0.02 0.95 0.00 0.00 0.001.01.00.00.00.0 0.360.030.350.050.02 0.130.1 0.13 0.47 0.11 0.22 0.01 0.760.0 0.00tgt langenesfafrgdpred lang cfgenarenjazhesarenjazhfaarenjazhfrarenjazhgdarenjazhmmT5s11.00.01.00.00.0 0.03 0.03 0.73 0.01 0.07 0.020.960.010.00.0 0.05 0.07 0.740.0 0.02 0.25 0.01 0.670.0 0.01mmT5s71.00.01.00.00.0 0.99 0.00 0.00 0.00 0.00 1.000.000.000.00.0 1.00 0.00 0.000.0 0.00 1.00 0.00 0.000.0 0.00mT5 Ss11.00.01.00.00.0 0.02 0.00 0.95 0.00 0.00 0.010.990.000.00.0 0.01 0.00 0.980.0 0.00 0.23 0.01 0.760.0 0.00mT5 Ss71.00.01.00.00.0 0.05 0.00 0.91 0.00 0.00 0.130.860.000.00.0 0.04 0.00 0.940.0 0.00 0.56 0.00 0.430.0 0.00tgt langguhahiidigpred lang cfgguarenjazhhaarenjazhhiarenjazhidarenjazhigarenjazhmmT5s1 0.00 0.24 0.18 0.10 0.41 0.03 0.17 0.62 0.01 0.01 0.000.330.260.040.30 0.01 0.13 0.62 0.03 0.13 0.10 0.25 0.33 0.02 0.17mmT5s7 1.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 1.000.000.000.000.00 0.98 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00mT5 Ss1 0.13 0.16 0.01 0.56 0.13 0.11 0.03 0.83 0.00 0.00 0.030.230.020.640.05 0.07 0.14 0.73 0.01 0.02 0.63 0.01 0.34 0.00 0.00mT5 Ss7 0.23 0.05 0.12 0.49 0.06 0.18 0.01 0.76 0.00 0.00 0.120.080.230.480.01 0.25 0.03 0.67 0.00 0.00 0.66 0.00 0.31 0.00 0.00tgt langjakokymrmypred lang cfgjaarenjazhkoarenjazhkyarenjazhmrarenjazhmyarenjazhmmT5s1", "figure_id": "tab_9", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "XNLI validation results for all languages. 
We report the results of different freezing configurations./ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L", "figure_data": "cfgarbgdeelenesfrhiruswthtrurvizh avgs163.7 68.4 68.4 66.7 77.6 71.6 69.3 60.8 66.3 60.2 62.2 62.5 56.2 64.5 63.4 65.4SmallmmT5s6 s7 s10 63.9 69.6 68.9 69.4 80.6 73.7 70.4 62.4 67.8 59.9 65.2 61.4 54.6 62.6 65.6 66.4 63.5 69.2 69.2 68.2 81.7 73.9 71.7 62.2 67.1 60.8 64.8 62.3 54.7 61.3 64.2 66.3 65.4 70.8 70.5 70.1 81.5 74.5 73.0 63.1 69.1 62.7 66.1 63.1 58.4 65.7 65.7 68.0 s14 64.7 70.7 70.4 69.5 81.6 74.5 72.6 63.2 68.8 60.9 65.3 62.4 58.0 65.1 65.9 67.6s163.2 68.6 67.8 67.3 80.2 70.8 70.1 59.8 67.6 52.9 62.5 58.2 56.9 63.5 63.2 64.8mT5 Ss6 s7 s10 57.5 64.2 66.8 63.6 77.6 68.2 66.3 56.4 64.4 51.0 59.9 54.8 51.8 59.0 61.3 61.5 58.8 64.5 64.2 62.4 77.2 67.3 65.9 55.0 64.0 48.2 60.3 53.1 50.9 58.6 61.2 60.8 62.2 66.9 66.6 65.6 77.9 70.8 69.3 59.4 67.1 53.2 60.6 57.6 56.1 61.7 63.5 63.9s14 61.8 66.9 67.7 66.3 77.6 71.0 69.3 59.4 67.6 53.3 61.6 58.0 56.3 61.1 62.9 64.1s173.5 77.9 77.6 77.7 84.3 79.2 77.8 72.2 75.3 71.7 72.9 73.5 68.9 75.0 74.6 75.5s273.3 77.1 77.2 77.1 84.1 79.8 76.9 71.2 75.2 70.7 72.8 72.6 68.7 74.2 74.9 75.0s374.4 79.9 79.7 78.2 85.8 81.3 79.5 73.6 76.7 72.9 75.0 74.4 69.2 76.2 77.0 76.9s475.9 79.8 79.5 79.6 86.2 81.9 80.1 73.5 78.0 73.5 74.3 75.2 70.1 77.0 77.4 77.5s572.4 75.5 75.4 76.1 83.3 79.2 76.8 70.9 73.4 71.5 69.9 70.7 67.1 71.8 72.8 73.8BasemmT5s6 s7 s8 s9 s10 74.1 77.7 78.0 77.5 84.3 81.4 78.3 72.9 75.9 73.2 73.5 73.7 69.4 75.1 76.2 76.1 73.5 77.6 78.2 77.2 84.6 81.0 79.0 72.3 76.2 71.8 72.4 72.2 68.7 73.8 74.7 75.6 75.3 80.1 79.8 79.1 86.3 82.2 79.4 73.5 77.8 73.0 75.1 74.7 70.0 75.8 77.5 77.3 72.2 76.4 73.7 74.8 83.2 77.8 73.2 70.2 73.5 69.7 71.7 70.7 67.1 71.7 73.5 73.3 74.1 77.7 77.8 76.9 84.7 80.2 78.3 71.6 75.8 71.4 73.2 73.7 69.6 74.0 75.3 75.6s11 71.3 75.5 73.9 74.7 82.2 77.8 75.2 70.0 73.3 69.6 71.1 70.0 66.9 70.8 73.4 73.1s12 74.1 78.0 77.5 77.4 84.6 79.4 78.0 71.3 75.7 71.8 73.4 73.5 69.3 74.7 75.7 75.6s13 75.2 78.1 78.3 78.2 84.4 80.1 78.4 73.1 75.9 73.0 73.9 73.8 69.7 75.8 76.5 76.3s14 76.0 80.6 81.0 79.0 86.9 82.4 79.9 73.2 77.6 74.0 74.9 75.7 70.0 76.3 77.6 77.7s172.0 76.0 76.4 76.3 84.4 78.0 78.2 69.8 74.7 66.1 71.3 71.1 67.8 73.0 72.4 73.8mT5 Ss6 s7 s10 66.7 70.8 71.3 69.9 78.9 73.3 71.8 64.5 69.2 60.8 66.9 64.9 61.8 66.1 68.7 68.4 71.2 75.0 75.0 75.7 84.3 78.7 78.0 68.3 74.0 61.6 71.0 69.6 64.9 70.9 71.9 72.7 72.5 77.6 77.5 76.6 85.0 79.2 79.5 70.6 75.2 65.5 72.6 72.1 68.2 73.5 73.6 74.6s14 69.2 73.9 73.7 74.0 81.9 75.6 75.3 67.2 72.5 63.2 68.9 68.2 64.7 70.1 71.2 71.3", "figure_id": "tab_14", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg LRg 1 / Rg 2 / Rg L XL-Sum ar,en,ja,zh test set results for all languages. 
We evaluate using the best performing model on the four source languages on the validation set./ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /Rg L / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L", "figure_data": "Rg L", "figure_id": "tab_15", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "XL-Sum en test set results for all languages. We evaluate using the best performing model on the English language of the validation set./ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L", "figure_data": "amharicarabicazerbaijanibengaliburmese chinese_simplified chinese_traditionalmmT5 s7 Rg 1 Small 12.5 / 1.8 / 11.9 17.2 / 3.3 / 15.7 14.0 / 4.0 / 12.1 mT5 S s1 13.6 / 1.2 / 13.0 15.3 / 0.6 / 15.0 14.0 / 1.9 / 12.78.7 / 2.2 / 7.9 17.8 / 3.6 / 16.2 9.5 / 0.3 / 9.4 18.1 / 0.5 / 17.83.9 / 1.2 / 3.6 5.6 / 1.2 / 5.45.4 / 2.0 / 4.9 5.8 / 1.6 / 5.6BasemmT5 mT5 Ss1 s7 s1 s712.6 / 0.2 / 12.4 15.0 / 0.3 / 14.7 13.9 / 1.4 / 12.5 16.3 / 5.0 / 14.2 19.8 / 4.7 / 17.4 17.7 / 5.4 / 14.7 13.8 / 1.6 / 13.0 16.0 / 1.3 / 15.3 15.4 / 2.9 / 13.6 13.7 / 1.4 / 13.1 16.7 / 1.3 / 16.0 16.7 / 3.6 / 14.79.5 / 0.2 / 9.3 17.8 / 0.2 / 17.6 15.0 / 5.0 / 12.5 22.0 / 4.8 / 19.5 9.9 / 0.6 / 9.5 17.8 / 1.7 / 16.9 9.9 / 0.5 / 9.6 18.2 / 1.4 / 17.45.3 / 0.5 / 5.1 9.1 / 2.7 / 8.1 6.6 / 1.8 / 6.3 6.4 / 1.6 / 6.25.0 / 0.5 / 4.9 8.1 / 2.5 / 7.3 6.8 / 2.2 / 6.5 6.0 / 1.6 / 5.8englishfrenchgujaratihausahindiigboindonesianmmT5 s7 41.6 / 17.0 / 32.6 27.8 / 8.7 / 22.5 12.0 / 2.9 / 11.1 Rg 1 Small mT5 S s1 42.3 / 17.9 / 33.2 22.1 / 3.6 / 18.9 10.6 / 0.4 / 10.419.3 / 6.6 / 17.2 13.6 / 3.0 / 12.6 18.3 / 4.3 / 16.3 13.1 / 0.9 / 12.828.1 / 9.7 / 22.9 25.4 / 5.0 / 21.822.5 / 6.0 / 19.1 18.1 / 2.8 / 16.2BasemmT5 mT5 Ss1 45.0 / 20.8 / 35.6 21.1 / 3.6 / 17.8 10.6 / 0.2 / 10.4 s7 46.1 / 21.8 / 36.6 27.9 / 8.4 / 22.2 15.9 / 5.6 / 13.6 27.5 / 11.1 / 22.0 16.6 / 4.4 / 14.7 20.1 / 5.5 / 17.7 12.4 / 0.2 / 12.1 s1 44.7 / 20.6 / 35.4 21.4 / 3.9 / 18.2 11.1 / 0.8 / 10.7 19.5 / 4.9 / 16.9 14.0 / 1.7 / 13.3 s7 43.3 / 18.7 / 34.1 22.7 / 4.7 / 19.3 11.3 / 0.7 / 11.0 21.7 / 6.3 / 18.7 14.2 / 1.5 / 13.624.5 / 4.0 / 21.4 25.4 / 9.3 / 20.6 26.2 / 5.4 / 22.0 27.1 / 6.5 / 22.818.4 / 2.9 / 16.2 26.1 / 8.0 / 21.4 19.2 / 3.5 / 16.8 20.9 / 4.3 / 18.2japanesekoreankyrgyzmarathinepalipashtopersianmmT5 s7 Rg 1 Small 3.7 / 0.8 / 3.4 16.7 / 4.5 / 14.8 mT5 S s1 4.0 / 1.0 / 3.9 13.6 / 0.8 / 13.3 12.5 / 0.5 / 12.1 10.3 / 1.6 / 9.68.9 / 1.6 / 8.4 13.0 / 3.8 / 11.6 11.2 / 1.0 / 10.8 10.7 / 0.5 / 10.517.7 / 3.2 / 16.1 14.3 / 0.3 / 14.118.2 / 4.0 / 16.7 16.4 / 0.7 / 16.1BasemmT5 mT5 Ss1 s7 s1 
s73.3 / 0.4 / 3.2 13.2 / 0.4 / 12.9 12.2 / 0.3 / 11.8 6.3 / 2.3 / 5.8 19.7 / 5.7 / 16.9 13.3 / 2.2 / 12.0 4.5 / 1.1 / 4.4 15.4 / 2.5 / 14.4 12.8 / 0.8 / 12.2 4.2 / 0.9 / 4.1 15.2 / 1.8 / 14.2 13.5 / 0.9 / 12.810.9 / 0.3 / 10.6 10.5 / 0.1 / 10.4 11.5 / 2.2 / 10.6 14.1 / 3.8 / 12.4 12.0 / 1.3 / 11.3 11.5 / 1.1 / 11.0 11.9 / 1.0 / 11.4 11.9 / 1.1 / 11.414.5 / 0.1 / 14.4 18.0 / 4.4 / 15.8 15.9 / 1.5 / 15.1 16.0 / 1.1 / 15.315.9 / 0.2 / 15.7 23.0 / 6.8 / 19.8 17.4 / 1.8 / 16.4 17.8 / 1.6 / 17.0portuguesepunjabirussianscottish_gaelic serbian_cyrillicserbian_latinsinhalaRg 1", "figure_id": "tab_16", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L", "figure_data": "Rg LSmallmmT5 s7 30.7 / 10.1 / 24.9 12.1 / 3.1 / 10.6 14.3 / 2.5 / 13.0 27.6 / 10.7 / 23.3 mT5 S s1 23.7 / 5.1 / 20.3 12.1 / 0.3 / 11.9 14.6 / 1.1 / 13.8 21.4 / 5.1 / 18.6 13.5 / 0.6 / 13.0 8.3 / 1.4 / 7.710.9 / 2.2 / 9.8 17.2 / 2.2 / 15.313.9 / 3.3 / 12.8 10.5 / 0.3 / 10.3BasemmT5 mT5 Ss1 s7 31.3 / 10.5 / 24.5 17.8 / 5.8 / 14.6 18.9 / 3.9 / 16.3 24.8 / 10.5 / 20.2 15.6 / 2.8 / 13.8 24.6 / 5.5 / 20.6 12.2 / 0.2 / 12.0 13.9 / 0.6 / 13.1 21.6 / 5.6 / 18.5 13.0 / 0.4 / 12.7 s1 23.7 / 5.6 / 19.8 12.5 / 0.7 / 12.0 14.8 / 1.3 / 13.7 22.3 / 6.3 / 18.8 13.9 / 0.9 / 13.2 s7 24.9 / 6.5 / 20.8 12.1 / 0.4 / 11.8 15.6 / 1.6 / 14.4 24.6 / 8.3 / 20.3 14.2 / 0.9 / 13.516.2 / 1.6 / 14.5 15.8 / 3.5 / 13.6 17.3 / 2.3 / 15.2 18.7 / 2.8 / 16.410.3 / 0.1 / 10.2 17.5 / 6.7 / 14.8 10.6 / 0.5 / 10.3 11.0 / 0.6 / 10.8somalispanishswahilitamilteluguthaiturkishmmT5 s7 Rg 1 Small 20.1 / 4.9 / 16.9 21.5 / 5.8 / 17.6 22.0 / 5.9 / 18.2 mT5 S s1 19.2 / 2.5 / 16.5 19.4 / 3.4 / 17.0 19.3 / 3.4 / 16.713.9 / 4.3 / 12.3 12.8 / 3.6 / 11.7 9.5 / 0.7 / 9.2 10.0 / 0.6 / 9.813.0 / 4.1 / 11.5 9.2 / 1.0 / 8.615.2 / 4.5 / 13.0 16.3 / 2.8 / 14.5BasemmT5 mT5 Ss1 s7 s1 s719.5 / 2.7 / 16.7 19.9 / 3.5 / 17.0 18.5 / 3.2 / 16.2 22.0 / 6.5 / 17.3 25.6 / 7.0 / 20.2 25.6 / 8.3 / 20.2 19.6 / 2.6 / 16.8 20.0 / 3.8 / 17.1 20.3 / 4.3 / 17.2 21.1 / 3.3 / 17.9 20.8 / 4.3 / 17.7 21.8 / 4.9 / 18.59.0 / 0.2 / 8.7 17.6 / 6.6 / 14.5 15.9 / 5.5 / 13.6 9.7 / 0.2 / 9.5 9.9 / 1.0 / 9.4 10.3 / 0.9 / 9.8 10.3 / 1.1 / 9.8 10.6 / 0.9 / 10.28.9 / 0.4 / 8.3 14.1 / 4.7 / 12.4 10.0 / 1.2 / 9.2 9.8 / 1.0 / 9.016.6 / 2.6 / 14.5 21.6 / 7.0 / 17.5 17.6 / 3.5 / 15.1 19.6 / 4.6 / 16.7ukrainianurduuzbekvietnamesewelshyorubaavgRg 1", "figure_id": "tab_17", "figure_label": "", "figure_type": "table" }, { "figure_caption": "XL-Sum en validation results for each of the languages. 
We report results of different combinations of freezing the model./ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /Rg L / 22.3 / 33.7 mT5 S 41.7 / 22.5 / 33.8 37.4 / 20.2 / 32.0 26.8 / 12.4 / 21.5 32.7 / 18.5 / 26.7 39.0 / 22.3 / 31.6 / Rg 2/ Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L", "figure_data": "Rg L", "figure_id": "tab_18", "figure_label": "14", "figure_type": "table" }, { "figure_caption": "Test set results for XL-Sum in the multisource setup. We evaluate using the model which performed best on all the languages in the validation set./ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L", "figure_data": "langamharicarabicazerbaijanibengaliburmese chinese_simplified chinese_traditionalmmT5 s7 Rg 1 Small 29.1 / 13.0 / 22.2 37.7 / 17.3 / 29.6 mT5 S s1 13.0 / 3.1 / 12.0 37.6 / 17.3 / 29.419.0 / 7.2 / 15.9 27.6 / 12.8 / 22.3 38.3 / 17.7 / 29.7 13.4 / 2.6 / 12.1 7.4 / 1.1 / 7.2 11.9 / 1.9 / 11.438.3 / 19.1 / 33.2 38.5 / 19.5 / 33.439.8 / 21.4 / 34.0 39.9 / 21.6 / 34.1BasemmT5 mT5 Ss1 s7 s1 s79.4 / 0.3 / 9.3 41.2 / 20.9 / 31.9 34.0 / 16.6 / 25.3 42.4 / 22.3 / 33.5 26.3 / 11.0 / 21.1 34.5 / 18.0 / 26.7 40.1 / 19.0 / 30.9 12.3 / 1.8 / 11.2 8.3 / 0.4 / 8.1 15.1 / 0.3 / 14.9 14.9 / 4.2 / 13.6 40.4 / 20.1 / 31.5 15.9 / 4.4 / 14.1 11.4 / 3.1 / 10.6 16.5 / 4.1 / 15.1 10.3 / 3.4 / 9.4 40.5 / 20.1 / 31.8 12.3 / 4.2 / 10.9 6.3 / 2.0 / 5.9 22.3 / 6.3 / 19.964.1 / 51.2 / 60.8 48.4 / 29.6 / 43.2 44.3 / 26.2 / 39.3 41.5 / 21.9 / 36.062.9 / 49.8 / 58.8 49.4 / 31.3 / 43.6 45.4 / 27.8 / 39.7 42.7 / 24.2 / 36.8langenglishfrenchgujaratihausahindiigboindonesianRg 1", "figure_id": "tab_19", "figure_label": "15", "figure_type": "table" }, { "figure_caption": "Rg L", "figure_data": "SmallmmT5 s7 mT5 S s129.3 / 11.6 / 22.7 16.8 / 2.7 / 15.128.5 / 8.2 / 22.3 30.0 / 10.9 / 22.9 25.3 / 12.0 / 20.6 23.4 / 10.6 / 19.2 18.8 / 3.7 / 16.8 14.6 / 2.9 / 13.5 8.0 / 1.8 / 7.6 8.5 / 1.3 / 8.326.3 / 12.8 / 20.3 7.1 / 2.2 / 6.624.8 / 9.9 / 20.5 16.4 / 3.8 / 14.7mmT5 s115.9 / 3.0 / 14.019.7 / 4.3 / 17.016.4 / 3.5 / 14.57.5 / 0.6 / 7.38.3 / 0.4 / 8.17.4 / 0.6 / 7.016.0 / 3.4 / 14.0BasemT5 Ss7 s134.0 / 15.0 / 25.1 32.6 / 10.4 / 24.0 36.0 / 15.0 / 26.4 29.6 / 14.8 / 23.2 26.8 / 12.8 / 21.1 18.6 / 3.8 / 16.4 21.2 / 4.7 / 18.3 16.8 / 4.1 / 15.2 10.6 / 3.5 / 9.7 10.8 / 2.9 / 10.230.7 / 15.7 / 23.3 10.9 / 3.9 / 9.931.2 / 13.8 / 25.1 18.4 / 5.4 / 16.0s712.6 / 2.5 / 11.519.8 / 4.9 / 17.313.8 / 3.5 / 12.710.0 / 3.9 / 9.18.7 / 2.7 / 8.011.1 / 4.7 / 9.817.8 / 6.7 / 15.4langukrainianurduuzbekvietnamesewelshyorubaavgRg 1", "figure_id": "tab_23", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Rg L XL-Sum ar,en,ja,zh validation set results for all languages. 
We report results for the different freezing configurations.", "figure_data": "SmallmmT5 s7 mT5 S s122.4 / 7.4 / 18.3 34.5 / 15.6 / 26.0 11.4 / 1.2 / 11.1 15.6 / 3.0 / 12.64.2 / 1.0 / 4.0 36.6 / 16.9 / 26.0 27.8 / 11.2 / 21.7 1.0 / 1.6 / 10.0 16.9 / 3.2 / 15.2 11.2 / 3.0 / 16.230.8 / 13.6 / 23.7 13.6 / 4.1 / 17.329.4 / 12.6 / 23.4 17.0 / 4.8 / 15.1BasemmT5 mT5 Ss1 s7 s1 s712.6 / 1.0 / 12.1 29.1 / 10.9 / 22.6 38.5 / 18.6 / 28.6 13.5 / 1.6 / 12.5 14.2 / 2.5 / 13.2 15.6 / 3.5 / 14.3 12.0 / 2.8 / 11.0 13.2 / 4.6 / 11.910.7 / 0.5 / 10.4 15.2 / 3.1 / 14.2 42.2 / 21.1 / 28.9 26.6 / 11.1 / 20.6 18.0 / 3.7 / 15.8 20.1 / 3.6 / 17.8 12.3 / 2.1 / 11.5 23.2 / 6.3 / 19.7 22.8 / 4.9 / 19.9 8.8 / 2.2 / 8.2 21.1 / 7.3 / 17.6 12.7 / 3.3 / 11.220.2 / 3.2 / 18.3 38.6 / 17.3 / 28.5 26.1 / 6.6 / 22.3 18.0 / 5.7 / 15.420.4 / 6.8 / 17.8 34.7 / 16.2 / 26.9 18.7 / 6.1 / 16.8 17.8 / 6.6 / 15.4", "figure_id": "tab_24", "figure_label": "16", "figure_type": "table" }, { "figure_caption": "MASSIVE Exact Match (EM) dev accuracies of all models and settings for all languages. / Rg2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L", "figure_data": "langenglishfrenchgujaratihausahindiigboindonesians1 Rg 1 Small 38.5 / 30.0 / 33.5 38.1 / 29.3 / 31.5 33.5 / 27.0 / 31.2 41.9 / 32.6 / 36.0 39.0 / 31.2 / 35.7 43.0 / 31.8 / 34.5 38.0 / 30.0 / 33.8 s6 36.1 / 28.4 / 32.7 38.0 / 29.4 / 32.4 32.0 / 26.0 / 31.9 39.4 / 30.9 / 36.1 36.8 / 29.5 / 35.3 38.2 / 28.8 / 34.6 35.6 / 28.1 / 33.2 s7 37.3 / 29.1 / 33.5 37.9 / 29.1 / 33.0 32.8 / 26.5 / 31.8 40.4 / 31.4 / 37.1 38.1 / 30.4 / 35.8 41.0 / 30.8 / 35.0 36.5 / 28.9 / 34.1 mmT5 s10 36", "figure_id": "tab_25", "figure_label": "17", "figure_type": "table" }, { "figure_caption": "XL-Sum results . Different configurations of freezing. / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L", "figure_data": "35.6 / 28.1 / 31.5", "figure_id": "tab_27", "figure_label": "18", "figure_type": "table" }, { "figure_caption": "Validation set results for XL-Sum in the multisource setup for languages English, French, Gujarati, Hausa, Hindi, Igbo, and Indonesian. 
We report results for the different freezing configurations./ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L", "figure_data": "langportuguesepunjabirussianscottish_gaelicserbian_cyrillicserbian_latinsinhalammT5 Rg 1 Small s1 40.6 / 30.4 / 33.5 39.7 / 29.0 / 33.3 33.1 / 25.5 / 28.8 37.1 / 28.1 / 30.9 30.5 / 23.1 / 25.6 30.4 / 23.0 / 26.1 36.4 / 29.2 / 31.9 s6 38.6 / 29.0 / 32.9 37.7 / 28.2 / 33.7 31.0 / 23.9 / 27.9 34.8 / 27.4 / 32.2 29.0 / 22.5 / 27.3 23.6 / 19.1 / 26.4 35.2 / 28.3 / 34.7 s7 39.5 / 29.3 / 33.7 38.8 / 28.8 / 33.9 31.7 / 24.4 / 28.8 36.3 / 28.2 / 32.1 29.4 / 22.7 / 27.9 26.1 / 20.5 / 27.0 35.5 / 28.7 / 35.0 s10 38.8 / 29.0 / 32.7 37.7 / 28.3 / 33.2 31.1 / 23.9 / 27.0 35.3 / 27.9 / 32.4 29.2 / 22.5 / 26.7 23.5 / 18.8 / 25.5 35.5 / 28.4 / 34.2 s14 39.4 / 29.3 / 33.5 38.3 / 28.3 / 33.5 31.0 / 23.9 / 28.7 35.3 / 27.6 / 32.1 29.2 / 22.6 / 27.9 24.4 / 19.3 / 26.9 35.0 / 28.1 / 34.3 s1 40.4 / 30.2 / 32.1 39.7 / 29.1 / 31.3 32.2 / 24.8 / 26.9 36.9 / 27.9 / 28.3 30.4 / 22.9 / 24.4 30.2 / 23.0 / 24.6 35.8 / 27.9 / 30.4mT5 Ss6 s7 s10 37.3 / 28.0 / 30.9 36.9 / 27.4 / 31.2 30.4 / 23.3 / 25.4 33.1 / 26.4 / 29.5 28.0 / 21.6 / 24.9 23.4 / 18.8 / 23.5 33.9 / 27.1 / 31.9 38.4 / 28.6 / 30.6 37.9 / 28.1 / 30.9 31.1 / 23.9 / 25.2 35.0 / 27.6 / 29.2 28.7 / 22.3 / 24.6 24.3 / 19.3 / 23.1 34.5 / 27.7 / 31.2 38.6 / 28.8 / 31.2 38.3 / 28.1 / 30.9 30.7 / 23.5 / 25.6 34.7 / 27.1 / 29.5 28.6 / 21.8 / 24.9 25.1 / 19.6 / 23.8 35.6 / 28.4 / 30.9s14 38.8 / 28.9 / 31.5 38.3 / 28.4 / 31.5 31.2 / 23.8 / 25.7 34.9 / 27.4 / 30.1 29.2 / 22.3 / 25.2 25.9 / 20.4 / 24.6 35.3 / 28.3 / 31.8s144.5 / 22.4 / 33.5 44.2 / 25.9 / 33.3 37.3 / 17.4 / 28.8 41.0 / 22.1 / 30.9 34.3 / 13.5 / 25.6 34.8 / 14.4 / 26.1 40.1 / 25.5 / 31.9mmT5s6 s7 s10 43.6 / 21.2 / 32.7 43.8 / 25.6 / 33.2 35.2 / 15.8 / 27.0 41.8 / 23.1 / 32.4 34.9 / 14.1 / 26.7 33.6 / 13.8 / 25.5 41.8 / 27.4 / 34.2 43.9 / 21.6 / 32.9 44.4 / 26.1 / 33.7 36.2 / 16.4 / 27.9 42.1 / 23.0 / 32.2 35.6 / 14.6 / 27.3 34.5 / 14.6 / 26.4 42.4 / 28.1 / 34.7 44.8 / 22.5 / 33.7 44.8 / 26.4 / 33.9 37.1 / 17.4 / 28.8 42.1 / 23.2 / 32.1 36.4 / 15.5 / 27.9 35.9 / 15.2 / 27.0 42.5 / 28.2 / 35.0Bases14 44.7 / 22.4 / 33.5 44.7 / 26.2 / 33.5 37.2 / 17.3 / 28.7 42.3 / 23.4 / 32.1 36.2 / 15.3 / 27.9 35.5 / 15.4 / 26.9 42.4 / 27.6 / 34.3 s1 42.9 / 20.7 / 32.1 42.5 / 23.2 / 31.3 35.0 / 15.6 / 26.9 38.2 / 19.2 / 28.3 32.5 / 12.0 / 24.4 33.0 / 12.8 / 24.6 38.6 / 24.0 / 30.4mT5 Ss6 s7 s10 41.7 / 19.1 / 30.9 41.5 / 23.0 / 31.2 33.1 / 14.0 / 25.4 38.6 / 19.7 / 29.5 32.7 / 12.2 / 24.9 30.6 / 11.7 / 23.5 39.3 / 25.1 / 31.9 41.3 / 18.7 / 30.6 41.1 / 22.5 / 30.9 32.9 / 13.8 / 25.2 37.5 / 18.8 / 29.2 32.3 / 12.0 / 24.6 30.1 / 11.3 / 23.1 38.5 / 24.4 / 31.2 41.9 / 19.5 / 31.2 41.7 / 22.6 / 30.9 33.5 / 14.1 / 25.6 38.9 / 19.7 / 29.5 32.7 / 12.3 / 24.9 31.4 / 12.1 / 23.8 38.7 / 24.2 / 30.9s14 42.3 / 19.8 / 31.5 42.5 / 23.6 / 31.5 33.6 / 14.4 / 25.7 39.5 / 20.5 / 30.1 33.2 / 12.5 / 25.2 32.7 / 12.8 / 24.6 39.4 / 25.2 / 31.8", "figure_id": "tab_28", "figure_label": "19", "figure_type": "table" }, { "figure_caption": "Validation set results for XL-Sum in the multisource setup for languages Portuguese, Punjabi, Russian, Scottish Gaelic, Serbian, and Sinhala. 
We report results for the different freezing configurations./ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L /28.0 / 29.0 31.7 / 23.9 / 25.6 38.5 / 29.7 / 31.7 31.9 / 26.7 / 28.7 28.5 / 23.2 / 25.3 31.1 / 24.4 / 26.2 32.9 / 27.5 / 30.2 s6 35.4 / 26.3 / 28.4 32.1 / 23.9 / 25.1 36.1 / 27.9 / 31.3 29.7 / 25.0 / 26.2 26.9 / 22.3 / 25.0 30.6 / 24.7 / 26.7 29.8 / 25.1 / 28.3 s7 35.8 / 26.4 / 28.7 30.9 / 23.1 / 25.2 36.6 / 28.1 / 31.3 29.9 / 25.1 / 28.5 26.9 / 21.9 / 25.1 30.9 / 24.5 / 26.9 30.6 / 25.8 / 29.2 s10 34.6 / 25.8 / 28.7 31.6 / 23.6 / 25.6 35.3 / 27.0 / 31.5 27.5 / 22.7 / 28.3 26.0 / 21.6 / 25.3 30.4 / 24.2 / 27.0 29.2 / 24.5 / 28.8 s14 36.1 / 26.2 / 29.0 32.2 / 23.9 / 25.6 37.1 / 28.3 / 32.1 30.2 / 25.3 / 29.1 27.1 / 22.2 / 25.8 30.5 / 24.3 / 27.7 30.7 / 25.8 / 29.7 / 21.7 / 31.2 36.5 / 15.1 / 27.6 44.2 / 24.0 / 34.7 37.9 / 23.4 / 32.2 34.4 / 20.7 / 28.4 37.4 / 22.0 / 29.8 38.3 / 21.6 / 32.4 s10 41.0 / 20.9 / 30.2 35.6 / 14.1 / 26.7 43.2 / 22.9 / 33.6 36.7 / 22.1 / 31.0 33.3 / 19.6 / 27.3 36.4 / 21.2 / 29.3 36.9 / 20.1 / 31.0 s14 41.7 / 21.7 / 31.0 36.3 / 14.9 / 27.5 43.8 / 23.6 / 34.3 37.9 / 23.1 / 32.1 34.4 / 20.6 / 28.2 37.2 / 21.7 / 29.6 38.1 / 21.3 / 32.1 mT5 S s1 39.6 / 19.4 / 29.0 34.3 / 13.3 / 25.6 41.3 / 21.1 / 31.7 34.7 / 20.3 / 28.7 31.3 / 17.6 / 25.3 33.6 / 18.7 / 26.2 36.2 / 19.5 / 30.2 s6 38.5 / 18.2 / 28.4 33.6 / 12.2 / 25.1 40.4 / 20.2 / 31.3 31.5 / 17.8 / 26.2 30.6 / 17.0 / 25.0 33.5 / 18.6 / 26.7 33.9 / 17.3 / 28.3 s7 39.1 / 19.0 / 28.7 33.7 / 12.6 / 25.2 40.7 / 20.5 / 31.3 34.1 / 19.9 / 28.5 30.8 / 17.3 / 25.1 33.9 / 19.1 / 26.9 34.9 / 18.3 / 29.2 s10 38.8 / 18.7 / 28.7 34.1 / 12.6 / 25.6 40.8 / 20.5 / 31.5 33.7 / 19.8 / 28.3 30.8 / 17.4 / 25.3 33.8 / 18.9 / 27.0 34.5 / 18.0 / 28.8 s14 39.3 / 19.1 / 29.0 34.2 / 12.8 / 25.6 41.2 / 20.9 / 32.1 34.7 / 20.6 / 29.1 31.5 / 17.9 / 25.8 34.7 / 19.6 / 27.7 35.4 / 18.8 / 29.7", "figure_data": "langsomalispanishswahilitamilteluguthaiturkishmmT5 Rg 1 Small s1 38.4 / 28.3 / 30.6 31.3 / 23.5 / 27.3 38.9 / 30.0 / 33.3 31.8 / 26.6 / 31.2 29.0 / 23.6 / 27.5 31.5 / 24.8 / 28.2 33.2 / 27.8 / 32.0 s6 37.0 / 27.4 / 30.8 32.2 / 24.2 / 27.1 37.7 / 28.8 / 33.7 28.9 / 24.2 / 31.3 26.8 / 22.1 / 27.9 31.0 / 24.8 / 29.3 30.3 / 25.3 / 31.5 s7 37.5 / 27.7 / 31.2 30.8 / 23.1 / 27.6 38.3 / 29.5 / 34.7 30.8 / 26.0 / 32.2 27.6 / 22.7 / 28.4 31.8 / 25.3 / 29.8 31.5 / 26.4 / 32.4 s10 36.8 / 27.2 / 30.2 31.9 / 23.9 / 26.7 37.9 / 28.9 / 33.6 28.8 / 24.0 / 31.0 26.9 / 22.2 / 27.3 31.5 / 25.0 / 29.3 30.5 / 25.6 / 31.0 s14 37.6 / 27.6 / 31.0 31.1 / 23.3 / 27.5 38.3 / 29.4 / 34.3 30.4 / 25.6 / 32.1 27.7 / 22.6 / 28.2 31.8 / 25.1 / 29.6 31.2 / 26.1 / 32.1 mT5 S s1 s1 41.7 / 21.4 / 30.6 36.2 / 14.8 / 27.3 42.7 / 22.6 / 33.3 37.0 / 22.4 / 31.2 33.8 / 20.0 / 27.5 35.5 / 20.3 / 28.2 38.0 / 21.3 / 32.0 38.0 Base s6 41.5 / 21.5 / 30.8 36.0 / 14.5 / 27.1 43.1 / 23.0 / 33.7 37.0 / 22.4 / 31.3 33.9 / 20.1 / 27.9 36.5 / 21.4 / 29.3 37.3 / 20.5 / 31.5 mmT5 s7 41.7", "figure_id": "tab_29", "figure_label": "20", "figure_type": "table" }, { "figure_caption": "Validation set results for XL-Sum in the multisource setup for languages Somali, Spanish, Swahili, Tamil, Telugu, Thai, and Turkish . We report results for the different freezing configurations. 
/ Rg2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L /25.3 / 29.2 42.3 / 33.0 / 37.3 28.4 / 23.5 / 26.6 44.8 / 31.0 / 34.4 41.1 / 31.0 / 34.0 45.2 / 33.2 / 35.6 36.8 / 18.3 / 28.8 s6 30.1 / 23.6 / 28.6 39.5 / 30.6 / 36.8 26.8 / 22.1 / 27.1 42.9 / 29.9 / 34.1 37.4 / 28.3 / 33.5 42.9 / 31.8 / 35.8 34.8 / 16.5 / 27.5 s7 30.5 / 23.7 / 29.2 41.1 / 32.0 / 37.4 27.3 / 22.4 / 27.5 43.8 / 30.3 / 34.8 39.5 / 29.8 / 34.0 44.0 / 32.6 / 36.6 35.8 / 17.4 / 28.2 s10 30.1 / 23.7 / 28.1 39.8 / 30.9 / 36.4 26.9 / 22.2 / 26.9 43.2 / 30.1 / 34.1 37.7 / 28.4 / 33.5 42.8 / 31.7 / 35.5 35.0 / 16.7 / 27.6 s14 30.5 / 23.8 / 29.0 40.5 / 31.5 / 37.3 26.8 / 22.0 / 27.4 43.7 / 30.3 / 34.4 39.3 / 29.9 / 34.1 43.5 / 32.3 / 36.2 35.6 / 17.2 / 28.0 mT5 S s1 31.7 / 25.0 / 27.3 41.6 / 32.4 / 35.3 28.2 / 23.1 / 24.4 44.8 / 30.8 / 33.0 40.4 / 30.4 / 31.8 44.3 / 32.7 / 34.2 36.6 / 18.0 / 28.6 s6 29.7 / 23.2 / 25.8 39.4 / 30.4 / 33.7 26.9 / 22.3 / 24.6 42.8 / 29.7 / 31.9 37.7 / 28.4 / 30.0 41.8 / 31.0 / 33.5 34.5 / 16.2 / 27.2 s7 29.9 / 23.3 / 26.0 39.9 / 30.9 / 34.4 27.7 / 23.0 / 24.5 43.1 / 29.6 / 32.1 37.8 / 28.4 / 30.5 43.2 / 31.6 / 33.3 34.9 / 16.5 / 27.4 s10 29.1 / 22.9 / 26.1 38.5 / 29.6 / 34.1 26.8 / 22.1 / 25.1 42.1 / 29.1 / 32.2 36.7 / 27.7 / 30.9 40.8 / 30.1 / 33.5 33.6 / 15.5 / 26.4 s14 29.9 / 23.3 / 26.5 40.2 / 31.1 / 34.9 27.4 / 22.4 / 25.2 43.2 / 29.9 / 32.7 37.7 / 28.6 / 30.7 42.9 / 31.5 / 33.5 35.0 / 16.6 / 27.5 / 18.0 / 29.2 47.0 / 27.8 / 37.3 33.0 / 16.5 / 26.6 49.1 / 27.6 / 34.4 44.9 / 24.8 / 34.0 48.0 / 25.4 / 35.6 41.2 / 22.4 / 32.4 s6 36.1 / 17.2 / 28.6 46.4 / 27.2 / 36.8 33.2 / 16.9 / 27.1 48.5 / 27.0 / 34.1 43.9 / 23.7 / 33.5 47.8 / 25.8 / 35.8 41.2 / 22.4 / 32.6 s7 36.8 / 17.9 / 29.2 46.9 / 27.8 / 37.4 33.9 / 17.4 / 27.5 49.1 / 27.8 / 34.8 44.8 / 24.5 / 34.0 48.3 / 26.2 / 36.6 41.9 / 23.1 / 33.2 s10 35.6 / 16.7 / 28.1 46.0 / 26.8 / 36.4 32.8 / 16.8 / 26.9 48.4 / 27.0 / 34.1 44.2 / 23.8 / 33.5 47.4 / 25.2 / 35.5 40.8 / 22.1 / 32.3 s14 36.7 / 17.7 / 29.0 46.9 / 27.8 / 37.3 34.0 / 17.4 / 27.4 49.0 / 27.6 / 34.4 45.0 / 24.6 / 34.1 48.5 / 26.4 / 36.2 41.8 / 23.0 / 33.1 / 14.6 / 26.1 43.6 / 24.2 / 34.1 30.5 / 15.0 / 25.1 46.3 / 24.6 / 32.2 41.2 / 20.4 / 30.9 45.1 / 23.2 / 33.5 38.4 / 19.8 / 30.2 s14 33.9 / 15.1 / 26.5 44.6 / 25.2 / 34.9 30.9 / 15.2 / 25.2 47.1 / 25.2 / 32.7 41.2 / 20.4 / 30.7 45.6 / 23.0 / 33.5 39.1 / 20.4 / 30.7", "figure_data": "langukrainianurduuzbekvietnamesewelshyorubaavgRg 1 Small mmT5 s1 32.2 Base mmT5 s1 s1 34.7 / 16.1 / 27.3 45.1 / 25.6 / 35.3 30.5 / 14.3 / 24.4 47.8 / 25.9 / 33.0 42.4 / 22.1 / 31.8 46.7 / 24.1 / 34.2 39.2 / 20.4 / 30.6 s6 32.9 / 14.3 / 25.8 43.1 / 23.7 / 33.7 29.9 / 14.4 / 24.6 46.1 / 24.2 / 31.9 40.2 / 19.4 / 30.0 45.2 / 22.9 / 33.5 37.9 / 19.2 / 29.8 s7 33.3 / 14.5 / 26.0 43.8 / 24.4 / 34.4 30.1 / 14.5 / 24.5 46.6 / 24.7 / 32.1 40.5 / 19.9 / 30.5 45.1 / 22.8 / 33.3 38.5 / 19.8 / 30.2 36.8 mT5 S s10 33.2", "figure_id": "tab_30", "figure_label": "21", "figure_type": "table" }, { "figure_caption": "Validation set results for XL-Sum in the multisource setup for languages Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, and Yoruba. We report results for the different freezing configurations. 
/ 34.9 30.4 / 16.8 62.7 / 51.4 46.9 / 32.1 49.9 / 33.6 26.6 / 19.6 46.3 / 28.6 34.4 / 23.8 31.5 / 22.1 42.3 / 29.2 s6 57.5 / 39.4 39.5 / 24.8 68.3 / 58.2 51.7 / 35.2 54.3 / 40.0 26.5 / 15.9 58.2 / 36.5 44.8 / 32.5 43.5 / 29.6 49.4 / 34.7 s7 63.2 / 46.0 40.6 / 26.5 70.7 / 60.5 59.3 / 41.9 57.9 / 43.4 30.7 / 18.5 60.3 / 39.9 36.4 / 25.7 38.1 / 23.9 50.8 / 36.3 s10 39.7 / 56.8 25.7 / 40.6 57.7 / 69.4 34.7 / 49.1 40.2 / 53.6 17.0 / 26.9 34.7 / 57.8 31.5 / 45.9 28.7 / 39.5 34.4 / 48.7 s14 63.1 / 47.6 40.5 / 25.7 71.0 / 60.2 58.1 / 40.3 56.1 / 41.1 30.4 / 18.1 59.3 / 37.9 40.9 / 29.9 33.9 / 22.9 50.4 / 36.0 mT5 S s1 45.6 / 30.2 25.5 / 14.2 61.7 / 50.0 45.1 / 29.9 49.7 / 33.8 27.1 / 20.3 50.8 / 35.0 33.1 / 21.8 18.5 / 13.6 39.7 / 27.6 s6 47.3 / 31.6 30.0 / 18.6 65.9 / 54.3 51.4 / 35.7 52.1 / 35.2 26.9 / 18.1 57.1 / 40.3 40.1 / 28.1 19.0 / 13.8 43.3 / 30.6 s7 51.4 / 31.4 31.3 / 18.6 68.4 / 57.0 52.4 / 33.5 54.4 / 36.1 27.7 / 19.2 54.7 / 35.5 33.4 / 23.6 20.1 / 14.6 43.8 / 30.0 s10 46", "figure_data": "arbnenfiidkoruswteavgcfgF1 / EMF1 / EMF1 / EMF1 / EMF1 / EMF1 / EMF1 / EMF1 / EMF1 / EMF1 / EMs151.6mmT5Small", "figure_id": "tab_31", "figure_label": "22", "figure_type": "table" }, { "figure_caption": "/ 40.7 41.4 / 25.7 72.5 / 61.1 64.1 / 48.0 70.3 / 56.5 42.1 / 30.1 58.4 / 36.5 58.2 / 41.3 48.7 / 39.2 57.8 / 42.1 s6 66.5 / 44.8 42.8 / 27.4 74.8 / 62.7 67.4 / 50.6 75.3 / 60.7 53.9 / 40.9 65.0 / 43.6 59.7 / 42.9 55.0 / 42.0 62.3 / 46.2 s7 70.2 / 47.7 52.7 / 32.7 74.8 / 64.3 68.5 / 52.9 74.3 / 58.9 49.8 / 39.9 63.6 / 39.7 58.4 / 40.1 56.8 / 46.2 63.2 / 46.9 s10 67.5 / 46.9 48.5 / 30.1 74.2 / 63.9 67.0 / 51.0 73.5 / 57.7 47.7 / 35.1 64.7 / 42.5 57.9 / 42.3 53.3 / 42.6 61.6 / 45.8 s14 68.3 / 45.8 55.4 / 38.1 75.7 / 64.1 67.8 / 52.2 75.3 / 60.4 52.8 / 40.9 63.7 / 40.6 61.4 / 43.1 55.8 / 44.7 64.0 / 47.8 Results for the validation set of TyDiQA. We report results for the different configurations of freezing.", "figure_data": "44.5 / 31.1", "figure_id": "tab_32", "figure_label": "23", "figure_type": "table" } ]
Jonas Pfeiffer; Francesco Piccinno; Massimo Nicosia; Xinyi Wang; Machel Reid; Sebastian Ruder
[ { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b0", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019-06-02" }, { "authors": "Dheeru Dua; Shruti Bhosale; Vedanuj Goswami; James Cross; Mike Lewis; Angela Fan", "journal": "", "ref_id": "b1", "title": "Tricks for training sparse translation models", "year": "2022" }, { "authors": "Constantin Eichenberg; Sidney Black; Samuel Weinbach; Letitia Parcalabescu; Anette Frank", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "MAGMAmultimodal augmentation of generative models through adapter-based finetuning", "year": "2022" }, { "authors": "William Fedus; Barret Zoph; Noam Shazeer", "journal": "", "ref_id": "b3", "title": "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity", "year": "2021" }, { "authors": "Jack Fitzgerald; Christopher Hench; Charith Peris; Scott Mackie; Kay Rottmann; Ana Sanchez; Aaron Nash; Liam Urbach; Vishesh Kakarala; Richa Singh", "journal": "", "ref_id": "b4", "title": "Massive: A 1m-example multilingual natural language understanding dataset with 51 typologically-diverse languages", "year": "2022" }, { "authors": "Xavier Garcia; Noah Constant; Ankur Parikh; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Towards continual learning for multilingual machine translation via vocabulary substitution", "year": "2021" }, { "authors": "Suchin Gururangan; Mike Lewis; Ari Holtzman; Noah A Smith; Luke Zettlemoyer", "journal": "", "ref_id": "b6", "title": "Demix layers: Disentangling domains for modular language modeling", "year": "2021" }, { "authors": "Tahmid Hasan; Abhik Bhattacharjee; Md Saiful Islam; Kazi Mubasshir; Yuan-Fang Li; Yong-Bin Kang; M Sohel Rahman; Rifat Shahriyar", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "XL-sum: Large-scale multilingual abstractive summarization for 44 languages", "year": "2021" }, { "authors": "Junxian He; Chunting Zhou; Xuezhe Ma; Taylor Berg-Kirkpatrick; Graham Neubig", "journal": "", "ref_id": "b8", "title": "Towards a Unified View of Parameter-Efficient Transfer Learning", "year": "2022" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzkebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "", "ref_id": "b9", "title": "Parameterefficient transfer learning for NLP", "year": "2019-06" }, { "authors": "Edward J Hu; Yelong Shen; Phillip Wallis; Zeyuan Allen-Zhu; Yuanzhi Li; Shean Wang; Lu Wang; Weizhu Chen", "journal": "", "ref_id": "b10", "title": "Lora: Low-rank adaptation of large language models", "year": "2022-04-25" }, { "authors": "Junjie Hu; Sebastian Ruder; Aditya Siddhant; Graham Neubig; Orhan Firat; Melvin Johnson", "journal": "", "ref_id": "b11", "title": "Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b12", "title": "", "year": "" }, { "authors": "Pratik Joshi; Sebastin Santy; Amar Budhiraja; Kalika Bali; Monojit Choudhury", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "The state and fate of linguistic diversity and inclusion in the NLP world", "year": "2020-07-05" }, { "authors": "Rabeeh Karimi Mahabadi; James Henderson; Sebastian Ruder", "journal": "Advances in Neural Information Processing 
Systems", "ref_id": "b14", "title": "Compacter: Efficient low-rank hypercomplex adapter layers", "year": "2021" }, { "authors": "Sneha Kudugunta; Yanping Huang; Ankur Bapna; Maxim Krikun; Dmitry Lepikhin; Minh-Thang Luong; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Beyond distillation: Task-level mixture-ofexperts for efficient inference", "year": "2021-11" }, { "authors": "Anne Lauscher; Olga Majewska; Leonardo F R Ribeiro; Iryna Gurevych; Nikolai Rozanov; Goran Glavaš", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Common sense or world knowledge? investigating adapterbased knowledge injection into pretrained transformers", "year": "2020" }, { "authors": "Anne Lauscher; Vinit Ravishankar; Ivan Vulić; Goran Glavaš", "journal": "", "ref_id": "b17", "title": "From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers", "year": "2020" }, { "authors": "Hang Le; Juan Miguel Pino; Changhan Wang; Jiatao Gu; Didier Schwab; Laurent Besacier", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Lightweight adapter tuning for multilingual speech translation", "year": "2021-08-01" }, { "authors": "Rabeeh Karimi Mahabadi; Sebastian Ruder; Mostafa Dehghani; James Henderson", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Parameter-efficient multi-task fine-tuning for transformers via shared hypernetworks", "year": "2021-08-01" }, { "authors": "Jonas Pfeiffer; Naman Goyal; Xi Lin; Xian Li; James Cross; Sebastian Riedel; Mikel Artetxe", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Lifting the curse of multilinguality by pre-training modular transformers", "year": "2022" }, { "authors": "Hakan Sylvestre-Alvise Rebuffi; Andrea Bilen; Vedaldi", "journal": "", "ref_id": "b21", "title": "Learning multiple visual domains with residual adapters", "year": "2017-04-09" }, { "authors": "Hakan Sylvestre-Alvise Rebuffi; Andrea Bilen; Vedaldi", "journal": "", "ref_id": "b22", "title": "Efficient parametrization of multi-domain deep neural networks", "year": "2018-06-18" }, { "authors": "Andreas Rücklé; Jonas Pfeiffer; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Multicqa: Zero-shot transfer of self-supervised text matching models on a massive scale", "year": "2020-11-16" }, { "authors": "Sebastian Ruder", "journal": "", "ref_id": "b24", "title": "An overview of multi-task learning in deep neural networks", "year": "2017" }, { "authors": "Sebastian Ruder; Noah Constant; Jan Botha; Aditya Siddhant; Orhan Firat; Jinlan Fu; Pengfei Liu; Junjie Hu; Dan Garrette; Graham Neubig; Melvin Johnson", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "XTREME-R: Towards more challenging and nuanced multilingual evaluation", "year": "2021" }, { "authors": "Noam Shazeer; Azalia Mirhoseini; Krzysztof Maziarz; Andy Davis; Quoc V Le; Geoffrey E Hinton; Jeff Dean", "journal": "", "ref_id": "b26", "title": "Outrageously large neural networks: The sparselygated mixture-of-experts layer", "year": "2017-04-24" }, { "authors": "Asa Cooper Stickland; Alexandre Berard; Vassilina Nikoulina", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Multilingual domain adaptation for NMT: decoupling language and domain information with adapters", "year": "2021-11-10" }, { "authors": "Asa ; Cooper 
Stickland; Iain Murray", "journal": "", "ref_id": "b28", "title": "BERT and pals: Projected attention layers for efficient adaptation in multitask learning", "year": "2019-06" }, { "authors": " Pmlr", "journal": "", "ref_id": "b29", "title": "", "year": "" }, { "authors": "Ahmet Üstün; Alexandre Berard; Laurent Besacier; Matthias Gallé", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Multilingual unsupervised neural machine translation with denoising adapters", "year": "2021-07-11" }, { "authors": "Ahmet Üstün; Arianna Bisazza; Gosse Bouma; Gertjan Van Noord", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "UDapter: Language adaptation for truly Universal Dependency parsing", "year": "2020" }, { "authors": "Marko Vidoni; Ivan Vulić; Goran Glavaš", "journal": "", "ref_id": "b32", "title": "Orthogonal language and task adapters in zero-shot cross-lingual transfer", "year": "2020" }, { "authors": "Tu Vu; Aditya Barua; Brian Lester; Daniel Cer; Mohit Iyyer; Noah Constant", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Overcoming catastrophic forgetting in zero-shot cross-lingual generation", "year": "2022" }, { "authors": "Ruize Wang; Duyu Tang; Nan Duan; Zhongyu Wei; Xuanjing Huang; Jianshu Ji; Guihong Cao; Daxin Jiang; Ming Zhou; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "K-adapter: Infusing knowledge into pre-trained models with adapters", "year": "2021-08-01" } ]
[ { "formula_coordinates": [ 8, 107.14, 207.83, 380.03, 20.89 ], "formula_id": "formula_0", "formula_text": "jv _I D en _U S te _I N el _G R cy _G B zh _C N zh _T W sq _A L ja _J P af _Z A ar _S A fi_ FI km _K H ru _R U hi _I N ur _P K fa _I R m y_ M M bn _B D kn _I N th _T H ko _K R am _E T ta _I N hy _A M da _D K ka _G E es _E S lv _L V is _I S sl _S L az _A Z nb _N O m l_ IN it_ IT sv _S E nl _N L m s_ M Y hu _H U vi _V N id _I D sw _K E fr _F R pl _P L tr _T R m n_ M N pt" }, { "formula_coordinates": [ 15, 84.09, 436.53, 5.83, 4.79 ], "formula_id": "formula_1", "formula_text": "tgt" }, { "formula_coordinates": [ 17, 142.08, 416.31, 360.67, 6.7 ], "formula_id": "formula_2", "formula_text": "/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /" }, { "formula_coordinates": [ 19, 147.08, 416.42, 355.6, 6.73 ], "formula_id": "formula_3", "formula_text": "/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /" }, { "formula_coordinates": [ 19, 147.08, 578.3, 355.6, 6.73 ], "formula_id": "formula_4", "formula_text": "/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /" }, { "formula_coordinates": [ 21, 150.84, 250.88, 352.36, 6.55 ], "formula_id": "formula_5", "formula_text": "/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /" }, { "formula_coordinates": [ 21, 150.84, 333.24, 352.36, 6.54 ], "formula_id": "formula_6", "formula_text": "/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /" }, { "formula_coordinates": [ 21, 150.84, 415.6, 352.36, 6.54 ], "formula_id": "formula_7", "formula_text": "/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /" }, { "formula_coordinates": [ 21, 150.84, 497.96, 352.36, 6.55 ], "formula_id": "formula_8", "formula_text": "/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /" }, { "formula_coordinates": [ 21, 150.84, 581.05, 352.36, 6.55 ], "formula_id": "formula_9", "formula_text": "/ Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 / Rg L Rg 1 / Rg 2 /" } ]
10.18653/v1/2020.acl-main.478
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b27", "b29", "b14", "b7" ], "table_ref": [], "text": "Detecting texts that misrepresent information within reference texts is crucial for combating misinformation. Previous research has primarily tackled this issue in the context of fact-checking (Thorne et al., 2018;Wadden et al., 2020), where the goal is to debunk unsupported claims using relevant passages, and in summarization (Kryscinski et al., 2020;Fabbri et al., 2022), where the focus is on assessing the faithfulness of generated summaries to the reference articles. However, none of COVID-19 is not contagious at all! None of my friends have every got it 🙄" }, { "figure_ref": [], "heading": "Tweet Manipulating Information", "publication_ref": [], "table_ref": [], "text": "The novel COVID-19 is highly contagious and is transmitted mostly through respiratory droplets. But, whether its transmission can be forwarded by touching a surface (i.e., a fomite) is uncertain...." }, { "figure_ref": [], "heading": "Reference Article", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Tweet Expressing Opinions", "publication_ref": [ "b24" ], "table_ref": [], "text": "I'll continue practicing good hand hygiene and cleanliness. Stay safe, everyone! 💪🌟 Figure 1: Two illustrative examples that highlight the challenge of identifying manipulation of news on social media. For the first example, while the associated article does not explicitly discuss the importance of maintaining good hand hygiene, the tweet does not distort the information within the article. Conversely, in the second example, a tweet falsely asserts that COVID-19 is non-contagious, directly contradicting the content of the reference article. Hence, the second tweet misrepresents the information contained in the reference article.\nthe previous work has specifically addressed the identification of social media posts that manipulate information within the reference articles. This poses a significant challenge due to the prevalence of personal opinions in social media posts. To effectively tackle this problem, models must be able to discern between personal opinions and sentences that distort information in social media posts. Examples of tweets that only express personal opinions and tweets that manipulate information can be found in Figure 1.\nIn this paper, we introduce a new task called identifying manipulations of news on social media. To explore this problem, we repurposed news articles from FakeNewsNet (Shu et al., 2020) and constructed a fully-annotated dataset, MANITWEET, consisting of 3.6K tweets accompanied by their corresponding news outlets. One main challenge of collecting data for this task is the imbalanced tweet distributions, with the majority of tweets not manipulating the associated article. Randomly sampling human-written tweets for annotation will result in poor annotation cost-efficiency. To ensure efficient annotation, we propose a two-round annotation scheme. In the first round, human annotators are assigned the task of validating tweets generated by large language models (LLMs). The data collected from these rounds is subsequently utilized to train a sequence-to-sequence model for identifying manipulation within tweets authored by humans. In the second round of annotation, these human-authored tweets are labeled accordingly. The 0.5K human-written tweets annotated in the second round are used as the test set for evaluation. 
Conversely, the 3.1K machine-generated tweets collected in the first round are used for our training and development set.\nOur study aims to address three main research questions. First, we investigate the comparison between the fine-tuning paradigm and the in-context learning paradigm for this task. Using our curated dataset, we evaluate the performance of the finetuned sequence-to-sequence model discussed earlier in comparison to state-of-the-art LLMs. Surprisingly, we discover that our much smaller finetuned model outperforms LLMs prompted with zero-shot or two-shot exemplars on the proposed task. In fact, we find that LLMs do not achieve satisfactory performance on our task when only provided with a few exemplars. Second, we explore the impact of various attributes of a news article on its susceptibility to manipulation. To conduct this analysis, we employ the previously described sequence-to-sequence model to analyze a vast collection of over 1M tweets and their associated articles. Our findings reveal a higher likelihood of manipulation in social media posts when the associated news articles exhibit low trustworthiness or pertain to political topics. Finally, we investigate the role of manipulated sentences within a news outlet. To address this question, we perform discourse analysis on the test set of MANITWEET. Through this analysis, we uncover that manipulated sentences within a news article often encompass the primary narrative or consequential aspects of the news outlet.\nOur contributions can be summarized as follows:\n• We introduce and define the new task of identifying manipulations of news on social media. • We propose a novel annotation scheme for this task. Using this scheme, we construct a dataset consisting of 3.6K samples, carefully annotated by human experts. Each sample comprises a Twitter post paired with a reference news article.\n• Through our analysis, we have demonstrated that this dataset serves as a rigorous testbed for tackling the identification of manipulation in social media. Specifically, we showcased the inadequate performance of large language models in effectively addressing this challenge. • We also unveiled that low-trustworthiness and political news are more likely to be manipulated and that manipulated sentences are more likely to contain the main story or consequence of a news outlet." }, { "figure_ref": [], "heading": "Identifying Manipulations of News on Social Media", "publication_ref": [], "table_ref": [], "text": "The goal of our task is to identify whether a social media post is manipulated and what information is being manipulated given the associated reference article. The models are tasked to understand whether a tweet manipulates the reference article ( §2.1), which newly introduced information in the tweet is used for manipulation ( §2.2), and which original information in the reference article is being manipulated ( §2.3). In the following subsections, we provide detailed task formulation for each sub-task." }, { "figure_ref": [], "heading": "Sub-task 1: Tweet Manipulation Detection", "publication_ref": [], "table_ref": [], "text": "Given a tweet and its associated news article, the first subtask is to classify the manipulation label l of this tweet, where l ∈ {MANI, NOMANI}. A tweet is considered MANI as long as there is at least one sentence that comments on the content of the associated article, but this sentence contains manipulated or inserted information. Otherwise, this tweet is NOMANI." 
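Taken together with the two span-localization sub-tasks defined next, each example in this task can be represented as a small record holding the tweet, the reference article, the manipulation label, and the two spans. The sketch below is purely illustrative: the field names are ours and are not part of any released schema, and the values are taken from the manipulating tweet in Figure 1.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ManipulationExample:
    """One (tweet, article) pair with the labels used by the three sub-tasks."""
    tweet: str                                # social media post to be checked
    article: str                              # associated reference news article
    label: str                                # sub-task 1: "MANI" or "NOMANI"
    manipulating_span: Optional[str] = None   # sub-task 2: new information introduced in the tweet
    pristine_span: Optional[str] = None       # sub-task 3: article span being manipulated ("" if merely inserted)

# The manipulating tweet from Figure 1 contradicts the article, so it is labeled MANI.
example = ManipulationExample(
    tweet="COVID-19 is not contagious at all! None of my friends have ever got it",
    article="The novel COVID-19 is highly contagious and is transmitted mostly through respiratory droplets...",
    label="MANI",
    manipulating_span="COVID-19 is not contagious at all!",
    pristine_span="The novel COVID-19 is highly contagious",
)
```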
}, { "figure_ref": [], "heading": "Sub-task 2: Manipulating Span Localization", "publication_ref": [], "table_ref": [], "text": "Once a tweet is classified as MANI, the next step is determining which information in the reference article was manipulated in the tweet. We refer to the information being manipulated as the pristine span, and the newly introduced information as the manipulating span. Both pristine span and manipulating span are represented as a text span in the reference article and the tweet, respectively. Identifying both information can help provide interpretability on model outputs and enable finer-grained analysis that provides more insights, as demonstrated in §6.\nUsing Figure 1 as an example, the manipulating span is COVID-19 is not contagious at all!." }, { "figure_ref": [], "heading": "Sub-task 3: Pristine Span Localization", "publication_ref": [], "table_ref": [], "text": "Similar to the second task, in this task, the model should output the pristine span that is being manipulated. In cases where the manipulating span is simply inserted, and no pristine span is manipulated, models should output a null span or an empty string. Using Figure 1 as an example, the pristine span is The novel COVID-19 is highly contagious." }, { "figure_ref": [], "heading": "The MANITWEET Dataset", "publication_ref": [], "table_ref": [], "text": "Our dataset consists of 3,636 tweets associated with 2,688 news articles. Each sample is annotated with (1) whether the tweet manipulates information presented in the associated news article, (2) which new information is being introduced, and (3) which information is being manipulated. We refer to this dataset as the MANITWEET dataset. The following sections describe our corpus collection and annotation process." }, { "figure_ref": [], "heading": "News Article Source", "publication_ref": [ "b24" ], "table_ref": [], "text": "To facilitate the analysis of human-written tweets, we created MANITWEET by repurposing from a fake news detection dataset, FAKENEWSNET (Shu et al., 2020). FAKENEWSNET contains news articles from two fact-checking websites, POLITI-FACT1 and GOSSIPCOP2 , where each news article is annotated with a factuality label. In addition, for each news article, FAKENEWSNET also consists of user engagement data, such as tweets, retweets, and likes, on Twitter. We reused the news content, the factuality label, and the associated tweets from FAKENEWSNET for our MANITWEET dataset. During the early stage of the experiment, we observe that some news articles in FAKENEWSNET are inappropriate for our study due to insufficient textual context. For example, some articles only contain a news title, a video, and a caption. To avoid such content, we remove news pieces containing less than 300 tokens." }, { "figure_ref": [], "heading": "Tweet Collection", "publication_ref": [], "table_ref": [], "text": "A naive approach to collecting manipulating tweets is to request annotators to annotate a subset of human-written tweets from FAKENEWSNET. However, this approach suffers from imbalanced tweet distributions, with the majority of tweets not manipulating the associated article, resulting in poor annotation cost-efficiency. Moreover, such an annotation task is challenging as annotators must verify every information unit between the news article and the tweet, leading to an inefficient annotation process. To address these issues, we have devised a three-stage data collection pipeline. 
In the initial two rounds of annotation, we utilize ChatGPT3 to generate both MANI and NOMANI tweets in a controllable manner. Human annotators are then tasked with validating the generated tweets for their validity. In the third round of annotation, we train a model on the data collected from the previous two rounds and employ this model to identify MANI human-written tweets for human annotation. This approach ensures that annotators are not overwhelmed with a large number of NOMANI tweets, resulting in significant improvements in time and cost efficiency compared to the aforementioned naive method." }, { "figure_ref": [], "heading": "Tweet Generation", "publication_ref": [], "table_ref": [], "text": "We first used Stanza4 to extract LOCATION, PEO- Here, PRISTINE_SPAN is a span randomly sampled from the spans of all named entities belonging to NEWS_ARTICLE , whereas NEW_SPAN is another span sampled from S with the same entity type as PRISTINE_SPAN. We have also experimented with other prompt templates. While the overall generation quality does not differ much, these prompt templates most effectively prevent ChatGPT from generating undesirable sequences such as \"As an AI language model, I cannot ...\".\nIn addition to generating MANI tweets where new information is manipulated from the original information contained in the associated article, we also produce MANI tweets where new information is simply inserted into the tweet using the following prompt:\nThis is a news article: NEWS_ARTICLE.\nSummarize the article into a tweet and comment about it. Include NEW_SPAN in your summarization but do not include NEW_SPAN in the hashtag5 ." }, { "figure_ref": [], "heading": "Keep it within 280 characters:", "publication_ref": [ "b33" ], "table_ref": [], "text": "To further improve data quality and reduce costs in human validation, we only keep NOMANI tweets that contain at least one sentence inferrable from the corresponding article. Concretely, we use Doc-NLI (Yin et al., 2021), a document-level entailment model, to determine the entailment probability between the reference article and each tweet sentence. A valid consistent tweet must have at least one sentence with an entailment probability greater than 50%. Additionally, we remove MANI tweets that do not contain the corresponding NEW_SPAN specified in the corresponding prompts." }, { "figure_ref": [], "heading": "Our Proposed Annotation Process", "publication_ref": [ "b5" ], "table_ref": [], "text": "We use Amazon's Mechanical Turk (AMT) to conduct annotation. Annotators were provided with a reference article and a corresponding generated tweet, along with labels indicating whether the tweet manipulates the article, and whether the predicted NEW_SPAN and PRISTINE_SPAN are accurate. In the first two rounds of annotation, annotators were presented with tweets generated by ChatGPT. The labels for these tweets were naively derived from the data generation process, where we determined the manipulation label, NEW_SPAN, and PRISTINE_SPAN before prompting ChatGPT to generate a tweet. In the final round of annotation, human-written tweets were annotated, and the predicted labels for these tweets were obtained from a model (see below paragraphs) trained on the data collected in the previous two annotation rounds. For detailed information regarding annotation guidelines and the user interface, please refer to Appendix A. 
The following paragraphs provide an overview of our annotation process.\nFirst Round The first round of annotation is for curating machine-generated tweets, which are used as our training set and development set. Initially, for annotator qualification, three annotators worked on each of our HITs. We used the first 100 HITs to train annotators by instructing them where their annotations were incorrect. Then, the next 100 HITs were used to compute the inter-annotator agreement (IAA). At this stage, we did not provide further instructions to the annotators. Using Fleiss' κ (Fleiss, 1971), we obtain an average IAA of 90.4% across all tasks, indicating a high level of agreement. Finally, we selected the top 15 performers as qualified annotators. These annotators were chosen based on how closely their annotations matched the majority vote for each HIT.\nSince the annotators already achieved a reasonably high IAA, we assigned each HIT to a single annotator to improve annotation efficiency for the remainder of the machine-generated tweets. In addition to being annotated by an MTurk worker, each annotation is also re-validated by a graduate student. The average agreement between the graduate student and the MTurk worker is 93.1% per Cohen's κ (Cohen, 1960), implying a high agreement. We only keep samples where the validation done by the graduate student agrees with the annotation done by the worker. After two rounds of annotations, we collected 3,116 human-validated samples.\nSecond Round Using the 3K examples we collected, we train a sequence-to-sequence model that learns to tackle all three tasks jointly. Concretely, we split the collected data into 2,316: 800 for training and validation. " }, { "figure_ref": [], "heading": "No manipulation", "publication_ref": [ "b19" ], "table_ref": [], "text": "Otherwise, the model should output the following:\nManipulating span: NEW_SPAN \\ Pristine span: PRISTINE_SPAN\nFor cases where NEW_SPAN is merely inserted into the tweet, the model will output \"None\" for PRISTINE_SPAN. Using this formulation, our model is learned to optimize the maximum likelihood estimation loss. We set identical weights for all tokens in the outputs.\nTo learn the model, we use a learning rate of 5e-5. The maximum input and output sequence length are 1024 and 32 tokens, respectively. The model is optimized using the AdamW optimizer (Loshchilov and Hutter, 2019) with a batch size of 4 and a gradient accumulation of 8. During inference time, we use beam search as the decoding method with a beam width of 4." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b22", "b18" ], "table_ref": [], "text": "Subtask 1 involves a binary classification problem, and thus, the Macro F1 score serves as the evaluation metric. For subtasks 2 and 3, in addition to Exact Match, we use Macro Overlap F1 score (Rajpurkar et al., 2016) and ROUGE-L (Lin, 2004) as the metrics to more accurately assess model performance by allowing models to receive partial credit for correctly identifying some parts of the information, even if they fail to output the entire text span." 
}, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b20", "b28", "b26" ], "table_ref": [], "text": "In addition to the model discussed in §3.2.2, we also recently tested various released large language models (LLMs), including Vicuna7 (vic, 2023) and ChatGPT, which have demonstrated superior language understanding and reasoning capabilities.\nChatGPT is an improved version of InstructGPT (Ouyang et al., 2022) that was optimized for generating conversational responses. On the other hand, Vicuna is a LLaMA model (Touvron et al., 2023) fine-tuned on ShareGPT8 data, and has exhibited advantages compared to other open-source LLMs, such as LLaMA and Alpaca (Taori et al., 2023).\nWe tested both the zero-shot and two-shot performance of ChatGPT, where the in-context exemplars are randomly chosen from our training set for the two-shot experiment. For Vicuna, we only evaluated its zero-shot ability as we found that it often outputs undesirable texts when exemplars are provided. The details of our prompts for these LLMs can be found in Appendix B.\n5 Performance Analysis" }, { "figure_ref": [], "heading": "Performance on MANITWEET", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Table 2 presents a summary of the main findings from our evaluation on the MANITWEET test set.\nWe have made several interesting observations: First, all LLMs we tested performed poorly across the three proposed tasks. This indicates that simply prompting LLMs, whether with or without exemplars, is not sufficient to effectively address problem of identifying manipulation of news on social media. Second, despite its simplicity and smaller size compared to the LLMs, our proposed method outperformed them significantly in identifying social media manipulation across all three tasks. This outcome highlights the value and importance of our training data and suggests that a fine-tuned smaller model can outshine larger models when tackling challenging tasks. Finally, it was surprising to discover that the two-shot exemplars decrease Chat-GPT's ability to identify manipulating information. One possible explanation for this unexpected result is that the in-context exemplars we sampled from the training set do not represent the test set well, especially on Task 2. Further investigation into the robustness of different exemplars will be left for future research." }, { "figure_ref": [ "fig_1" ], "heading": "Remaining Challenges", "publication_ref": [], "table_ref": [], "text": "Although our proposed model exhibits significant improvements in identifying manipulation within social media posts, there is still room for further enhancement. To gain insights into the additional modeling and reasoning capabilities required for effectively addressing the task of social media manipulation, we manually compare 50 errors made by our model with ground-truth labels and analyze the sources of errors. The distribution of errors is illustrated in Figure 3. Notably, the most prevalent error arises from the model's inability to extract the correct pristine span from the reference article that underwent manipulation. Among the 18 erroneous predictions in this category, 16 cases result from the model outputting an empty string. This indicates that the model considers the manipulating information to be inserted when, in reality, it is manipulated from the information present in the reference articles. 
This could be attributed to the presence of 368 instances where the original information is an empty string, while the alternative answers for the original information only occur 1-2 times in other instances. This can be solved by scaling down the loss for these samples with an empty string as the label for original information. Additionally, another common type of error involves the model's failure to identify opinions expressed in the tweet. In these instances, the model considers the tweet to be manipulating information from the article, whereas the tweet primarily expresses opinions. Examples of these errors are presented in Appendix C. " }, { "figure_ref": [ "fig_0", "fig_2" ], "heading": "Exploratory Analysis", "publication_ref": [ "b25" ], "table_ref": [], "text": "The learned model enables us to perform a largescale study of manipulation on the MANITWEET test set and the 1M human-authored tweets associated with the news articles from the FakeNewsNet dataset. In this section, we explore how an article is MANI and how different properties of a news article, such as domain and factuality affect manipulation.\nInsight 1: Low-trustworthiness and political news are more likely to be manipulated. Figure 2 shows the percentage of the 1M humanwritten tweets that are manipulated across different domains and factuality. 9 We first observe that tweets associated with False news are more likely to be manipulated. One possible explanation is that audience of low-trustworthy news media may pay less attention to facts. Hence, they are more likely to manipulate information from the reference article accidentally when posting tweets. In addition, we also see that tweets associated with Politics news are more frequently manipulated than those with Entertainment articles. This could be explained by the fact that people have a stronger incentive to manipulate information for political tweets due to elections or campaigns.\nInsight 2: Manipulated sentences are more likely to contain the main story or consequence of a news outlet. To discover the role of the sentence being manipulated in the reference article, we conducted discourse analysis on these sentences. We only conducted the analysis on our test set instead of the entire 1M human-written tweets since Concretely, we formulate the discourse classification task as a sequence-to-sequence problem and train a LED-based model on the NEWSDISCOURSE dataset (Choubey et al., 2020) using a similar strategy discussed in §3.2.2. The learned discourse classification model achieves a Micro F1 score of 67.7%, which is on par with the state-of-the-art method (Spangher et al., 2021). Upon the discourse classification model being trained, we applied it to all the sentences in the reference article to analyze the discourse distribution. As shown in Figure 4, compared to other sentences, sentences that were manipulated are much more likely to contain Main or Cause discourse, which corresponds to the primary topic being discussed and the underlying factor that led to a particular situation, respectively. Examples of the manipulated sentences with a Main or Cause discourse can be found in Appendix D. " }, { "figure_ref": [], "heading": "Manipulated", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "This proposed task is mostly relevant to faithfulness-related tasks. In the following sections, we discuss important prior work in faithfulness and other related tasks." 
}, { "figure_ref": [], "heading": "Faithfulness", "publication_ref": [ "b14", "b33", "b15", "b30", "b6", "b7", "b14" ], "table_ref": [], "text": "Faithfulness is often referred to as the factual consistency between the inputs and outputs. This topic has mainly been studied in the field of summarization. Prior work on faithfulness can be divided into two categories: evaluation and enhancement, the former of which is more relevant to our study. One line of faithfulness evaluation work developed entailment-based metrics by training document-sentence entailment models on synthetic data (Kryscinski et al., 2020;Yin et al., 2021) or using traditional natural language inference (NLI) models at the sentence level (Laban et al., 2022). Another line of studies evaluates faithfulness by comparing information units extracted from the summaries and input sources using question answering (QA) (Wang et al., 2020;Deutsch et al., 2021). Recently, a study combined QA with entailment by employing the outputs of QA as features for an entailment model (Fabbri et al., 2022).\nOur task differs from faithfulness evaluation in two key ways. Firstly, for our task to be completed effectively, models must possess the additional capability of distinguishing tweet sentences that relate to the reference article from those that simply express opinions. In contrast, models evaluating faithfulness only need to identify whether each sentence in the output is inferable from the input. Sec-ondly, we require models to not only identify which original information is being manipulated by the new information, but also to provide interpretability as to why a tweet has been manipulated. While Kryscinski et al. (2020) trained a span extractor to highlight mistakes in the generated summaries, they did not identify which part of the input contained the correct information. Furthermore, this task has not been included in any prior faithfulness evaluation work." }, { "figure_ref": [], "heading": "Citation Contextualization", "publication_ref": [ "b4", "b13" ], "table_ref": [], "text": "The task of citation contextualization (Cohan et al., 2015;Jaidka et al., 2017) involves identifying sections or excerpts in a cited document that are pertinent to a specific citation in a citing document. This task is most similar to our third subtask §2.3, recognizing the original information in the input article. Our second subtask poses a more difficult challenge as the reference article and the tweet typically do not share similar semantics due to manipulation." }, { "figure_ref": [], "heading": "Fact-checking", "publication_ref": [ "b27", "b29", "b34", "b21", "b31", "b9", "b16", "b9", "b12" ], "table_ref": [], "text": "Fact-checking is a task that determines the veracity of input claim based on some evidence passages. Some work assumes the evidence candidates are provided, such as in the FEVER dataset (Thorne et al., 2018) and the SCIFACT dataset (Wadden et al., 2020). Approaches for this category of factchecking tasks often involve a retrieval module to retrieve relevant evidence from the given candidate pool, followed by a reasoning component that determines the compatibility between a piece of evidence and the input claim (Yin and Roth, 2018;Pradeep et al., 2021). Other work focuses on the open-retrieval setting, where evidence candidates are not provided, such as in the LIAR dataset (Wang, 2017) and the X-FACT dataset (Gupta and Srikumar, 2021). 
For this task formulation, one of the main challenges is to determine where and how to retrieve evidence. Some approaches determine the veracity of a claim based solely on the claim itself and the information learned by language models during the pre-training stage (Lee et al., 2021), other methods leverage a retrieval module to look for evidence on the internet (Gupta and Srikumar, 2021) or a set of trustworthy sources (Huang et al., 2022). Similar to the faithfulness task, the key distinction between fact-checking and our proposed task lies in the additional requirement for models to possess the capability of discerning between tweet sentences that pertain to the reference article and those that merely express opinions." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this study, we have introduced and defined a novel task called identifying manipulation of news on social media, which aims to determine whether and how a social media post manipulates the associated news article. To address this challenge, we meticulously collected a dataset named MANITWEET, composed of both humanwritten and machine-generated tweets. Our analysis revealed that existing large language models (LLMs) prompted with zero-shot and two-shot exemplars do not yield satisfactory performance on our dataset, highlighting avenues for future research. We believe that the resources presented in this paper can serve as valuable assets in combating the dissemination of false information on social media, particularly in tackling the issue of manipulation." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b32" ], "table_ref": [], "text": "There are two main limitations in our work. Firstly, despite our best efforts to minimize the gap between the training set and test set of MANITWEET, some discrepancies remain due to the training set being generated by machines and the test set being produced by humans. This limitation is primarily attributed to budget constraints. In the future, with additional resources, we aim to create an additional training set consisting entirely of human-written tweets. By comparing the performance of models trained on this human-written training set with those trained on the machine-generated training set, we can gain further insights. Secondly, in our experiments involving prompting LLMs, we only explored up to one-shot in-context exemplars, and our prompts were not meticulously optimized. There is a possibility that LLMs can achieve better performance when provided with more in-context exemplars and when prompted in a more refined manner, such as employing Chain-of-Thought reasoning (Wei et al., 2022) ." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "The primary ethical consideration in our work pertains to the presence of false information in two aspects: tweets that manipulate the associated news articles and the inclusion of false news from the FakeNewsNet dataset. As with other fact-checking and fake news detection research, it is important to acknowledge the dual-use concerns associated with the resources presented in this work. While our resources can contribute to combating false information, they also possess the potential for misuse. For instance, there is a risk that malicious users could utilize the manipulating tweets or fake news articles to train a text generator for creating deceptive content. 
We highlight appropriate and inappropriate uses of our dataset in various scenarios:\n• Appropriate: Researchers can use framework to study the manipulation issue on social media and develop stronger models for identifying social media posts that manipulate information.\n• Inappropriate: The fake news and manipulating tweets in MANITWEET cannot be used to train text generators for malicious purposes.\n• Inappropriate: Use the manipulation prompts discussed in this paper to generate tweets and spread false information.\n• Inappropriate: The fake news in MAN-ITWEET should not be used as evidence for fact-checking claims.\nFurthermore, the privacy of tweet users is another aspect that warrants consideration, given that we are releasing human-written tweets. However, we assure that the dataset does not pose significant privacy concerns. The tweets in our dataset are anonymized, and it is important to note that all the associated news articles were already publicly available. Therefore, the release of this dataset should not have adverse implications for privacy." }, { "figure_ref": [], "heading": "A Annotation Details", "publication_ref": [ "b23" ], "table_ref": [], "text": "In this section, we describe the details of our annotation process. For better control of the annotation quality, we required that all annotators be from the U.S. and have completed at least 10,000 HITs with 99% acceptance on previous HITs. The reward for each HIT is $1, complying with the ethical research standards outlined by AMT (Salehi et al., 2015). Annotation interfaces are shown below." }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "A.1 User Interface", "publication_ref": [], "table_ref": [], "text": "Figure 5 and Figure 6 display the annotation interface for the first two rounds and the third round of annotation, respectively. The only difference is that for the third round of annotation, we asked annotators to correct errors made by our basic model discussed in §3.2.2. Samples that do not receive \"yes\" on all three questions for the first two rounds of annotation will be discarded. The rationale behind this design stems from three key reasons: Firstly, the data generated for the initial two rounds of annotation is automatically generated, enabling a relatively cost-effective approach to discard invalid samples and generate new ones, as opposed to requesting annotators to correct errors. Secondly, the data generated in these two rounds is predominantly valid, which eliminates the need for annotators to rectify errors and consequently accelerates the annotation process. Lastly, in the third round of annotation, by instructing annotators to identify er-rors made by our model, we can effectively identify the challenges faced by the model." }, { "figure_ref": [], "heading": "B Prompts for LLMs", "publication_ref": [], "table_ref": [], "text": "The zero-shot and two-shot prompt template to LLMs for the experiments discussed in §4.2 is shown in Table 3. The in-context exemplars for the two-shot experiments are randomly sampled from the training set of MANITWEET." }, { "figure_ref": [], "heading": "C Additional Qualitative Examples", "publication_ref": [], "table_ref": [], "text": "Table 4 presents two instances where our baseline model makes errors. In the first example, our model was not able to identify that \"Inspired Our Next Trip To The Salon\" is an expression of opinion, resulting in the model incorrectly classifying this sample as MANI. 
In the second example, although our model accurately predicts the example as MANI and extracts the correct manipulating span, it fails to extract the pristine text span correctly, likely due to the nature of the training set, as discussed in §5.2." }, { "figure_ref": [], "heading": "D Discourse Analysis Examples", "publication_ref": [], "table_ref": [], "text": "Table 5 shows examples of manipulated sentences associated with a Main or Cause discourse. A main discourse implies that the sentence conveys the main story of an article, whereas a cause discourse indicates that the sentences discuss the consequential aspect of the main story. " }, { "figure_ref": [], "heading": "Role Utterance", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "User", "publication_ref": [], "table_ref": [], "text": "You are tackling a social manipulation problem. You will be giving a tweet and an article, and your task is to identify which information from the article is misrepresented by which information in the tweet. You should answer in the following format \"Manipulating span: manipulating_span Pristine span: pristine_span\" in a single line. Here, {manipulating_span} is the new information introduced in the tweet and original_concept is the original information in the article. If the tweet simply inserts information, {original_concept} should be \"None\". If the tweet does not manipulate the article, answer \"No manipulation\". You do not need to output other information such as an explanation. You don't need to provide code. In the following utterances, you will be presented a pair of tweet and news article. .. In the address, Boehner notes that this is a new approach that hasn't been tried in Washingtonby either party -it is at the core of the Pledge to America, a governing agenda Republicans built by listening to the people. Leader Boehner recorded the weekly address earlier this week from Ohio, where he ran a small business and saw first-hand how Washington can make it harder for employers and entrepreneurs to meet a payroll and create jobs. Following is a transcript ... " } ]
to tackle the misrepresentation of information derived from reference articles in the domains of fact-checking and faithful summarization. However, an unaddressed aspect remains-the identification of social media posts that manipulate information within associated news articles. This task presents a significant challenge, primarily due to the prevalence of personal opinions in such posts. We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information. To study this task, we have proposed a data collection schema and curated a dataset called MANITWEET, consisting of 3.6K pairs of tweets and corresponding articles. Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance. Additionally, we have developed a simple yet effective basic model that outperforms LLMs significantly on the MANITWEET dataset. Finally, we have conducted an exploratory analysis of human-written tweets, unveiling intriguing connections between manipulation and the domain and factuality of news articles, as well as revealing that manipulated sentences are more likely to encapsulate the main story or consequences of a news outlet.
MANITWEET: A New Benchmark for Identifying Manipulation of News on Social Media
[ { "figure_caption": "Figure 2 :2Figure 2: The percentage of tweets that manipulate the associated articles across different factuality and domains.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Distributions of errors.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Results of discourse analysis. Manipulated sentences within news articles tend to encompass the main story (Main) or convey the consequential aspects (Cause) of the corresponding news outlet.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: MTurk user interface for the initial two rounds of data annotation.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: MTurk user interface for the third round of data annotation.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Model details are described in Statistics of our MANITWEET dataset.", "figure_data": "Split # MANI # NOMANI # Doc Tweet AuthorTrain1,4658511,963MachineDev482318753MachineTest294226299Human", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance (%) of different models on the MANITWEET test set. EM denotes Exact Match, and RL denotes ROUGE-L. Statistical significance over best-performing LLMs computed with the paired bootstrap procedure(Berg-Kirkpatrick et al., 2012) are indicated with next paragraph. Once the model was trained, we applied it to identify manipulation in the humanwritten tweets that are associated with the articles in FakeNewsNet. Then, we randomly sampled from predicted MANI and NOMANI examples to be further validated by MTurk workers. The interannotator agreement between the graduate student and the MTurk worker is 73.0% perCohen's κ (Cohen, 1960). While the agreement is moderately high, it is much lower than that in the previous round. This suggests that manipulation in humanwritten tweets is more challenging to identify. The user interface of each round of annotation is shown in Appendix A.1. Finally, we have curated the MANITWEET dataset. The dataset statistics are shown in Table1.", "figure_data": "Model In this paragraph, we describe the modelwe used to facilitate the second round of annota-tion. Motivated by the advantages of generativemodels over sequence-tagging models (Li et al.,2021; Huang et al., 2021; Hsu et al., 2022), wetrained a sequence-to-sequence model based onLongFormer-Encoder-Decoder (LED) 6 (Beltagyet al., 2020) that learns to solve the three tasksjointly. Concretely, the input to our model is aconcatenation of a tweet and a reference article:Tweet: TWEET \\Reference article: REF_ARTICLEIf the article is NOMANI, the model should output:", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
Kung-Hsiang Huang; Hou Pong Chan; Kathleen Mckeown; Heng Ji
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Vicuna: An open-source chatbot impressing gpt-4 with 90% chatgpt quality", "year": "2023" }, { "authors": "Iz Beltagy; Matthew E Peters; Arman Cohan", "journal": "", "ref_id": "b1", "title": "Longformer: The long-document transformer", "year": "2020" }, { "authors": "Taylor Berg-Kirkpatrick; David Burkett; Dan Klein", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "An empirical investigation of statistical significance in NLP", "year": "2012" }, { "authors": "Prafulla Kumar Choubey; Aaron Lee; Ruihong Huang; Lu Wang", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Discourse as a function of event: Profiling discourse structure in news articles around the main event", "year": "2020" }, { "authors": "Arman Cohan; Luca Soldaini; Nazli Goharian", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Matching citation text and cited spans in biomedical literature: a search-oriented approach", "year": "2015" }, { "authors": "Jacob Cohen", "journal": "Educational and psychological measurement", "ref_id": "b5", "title": "A coefficient of agreement for nominal scales", "year": "1960" }, { "authors": "Daniel Deutsch; Tania Bedrax-Weiss; Dan Roth", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b6", "title": "Towards question-answering as an automatic metric for evaluating the content quality of a summary", "year": "2021" }, { "authors": "Alexander Fabbri; Chien-Sheng Wu; Wenhao Liu; Caiming Xiong", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "QAFactEval: Improved QAbased factual consistency evaluation for summarization", "year": "2022" }, { "authors": "L Joseph; Fleiss", "journal": "Psychological bulletin", "ref_id": "b8", "title": "Measuring nominal scale agreement among many raters", "year": "1971" }, { "authors": "Ashim Gupta; Vivek Srikumar", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "X-fact: A new benchmark dataset for multilingual fact checking", "year": "2021" }, { "authors": "I-Hung Hsu; Kuan-Hao Huang; Elizabeth Boschee; Scott Miller; Prem Natarajan; Kai-Wei Chang; Nanyun Peng", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "DEGREE: A data-efficient generation-based event extraction model", "year": "2022" }, { "authors": "Kung-Hsiang Huang; Sam Tang; Nanyun Peng", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Document-level entity-based extraction as template generation", "year": "2021" }, { "authors": "Kung-Hsiang Huang; Chengxiang Zhai; Heng Ji", "journal": "International Committee on Computational Linguistics", "ref_id": "b12", "title": "CONCRETE: Improving cross-lingual factchecking with cross-lingual retrieval", "year": "2022" }, { "authors": "Kokil Jaidka; Muthu Kumar Chandrasekaran; Min-Yen Kan", "journal": "", "ref_id": "b13", "title": "Proceedings of the Computational Linguistics Scientific Summarization Shared Task (CL-SciSumm 2017) organized as a part of the 2nd Joint Workshop on Bibliometric-enhanced In", "year": "2002" }, { "authors": "Wojciech Kryscinski; Bryan Mccann; Caiming Xiong; Richard Socher", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Evaluating the factual consistency of abstractive text summarization", "year": "2020" }, { "authors": "Philippe Laban; Tobias Schnabel; Paul N Bennett; Marti 
A Hearst", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b15", "title": "SummaC: Re-visiting NLIbased models for inconsistency detection in summarization", "year": "2022" }, { "authors": "Nayeon Lee; Yejin Bang; Andrea Madotto; Pascale Fung", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Towards few-shot fact-checking via perplexity", "year": "2021" }, { "authors": "Sha Li; Ji Heng; Jiawei Han", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Document-level event argument extraction by conditional generation", "year": "2021" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b19", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b20", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Ronak Pradeep; Xueguang Ma; Rodrigo Nogueira; Jimmy Lin", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Scientific claim verification with VerT5erini", "year": "2021" }, { "authors": "Pranav Rajpurkar; Jian Zhang; Konstantin Lopyrev; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "year": "2016" }, { "authors": "Niloufar Salehi; Lilly C Irani; Michael S Bernstein; Ali Alkhatib; Eva Ogbe; Kristy Milland; Clickhappier ", "journal": "Association for Computing Machinery", "ref_id": "b23", "title": "We are dynamo: Overcoming stalling and friction in collective action for crowd workers", "year": "2015" }, { "authors": "Kai Shu; Deepak Mahudeswaran; Suhang Wang; Dongwon Lee; Huan Liu", "journal": "Big data", "ref_id": "b24", "title": "Fakenewsnet: A data repository with news content, social context, and spatiotemporal information for studying fake news on social media", "year": "2020" }, { "authors": "Alexander Spangher; Jonathan May; Sz-Rung Shiang; Lingjia Deng", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Multitask semi-supervised learning for class-imbalanced discourse classification", "year": "2021" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b26", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "James Thorne; Andreas Vlachos; Christos Christodoulopoulos; Arpit Mittal", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "FEVER: a large-scale dataset for fact extraction and VERification", "year": "2018" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b28", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "David Wadden; Shanchuan Lin; Kyle Lo; Lucy Lu Wang; Madeleine Van Zuylen; Arman Cohan; Hannaneh 
Hajishirzi", "journal": "", "ref_id": "b29", "title": "Fact or fiction: Verifying scientific claims", "year": "2020" }, { "authors": "Alex Wang; Kyunghyun Cho; Mike Lewis", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Asking and answering questions to evaluate the factual consistency of summaries", "year": "2020" }, { "authors": "William Yang; Wang ", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "liar, liar pants on fire\": A new benchmark dataset for fake news detection", "year": "2017" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Fei Xia; Ed H Chi; Denny Quoc V Le; Zhou", "journal": "", "ref_id": "b32", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Wenpeng Yin; Dragomir Radev; Caiming Xiong", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "DocNLI: A large-scale dataset for documentlevel natural language inference", "year": "2021" }, { "authors": "Wenpeng Yin; Dan Roth", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "TwoWingOS: A two-wing optimization strategy for evidential claim verification", "year": "2018" } ]
[ { "formula_coordinates": [ 4, 92.41, 277.01, 177.63, 22.32 ], "formula_id": "formula_0", "formula_text": "This is a news article: NEWS_ARTICLE." } ]
10.18653/v1/N19-1388
2024-03-26
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b13", "b16", "b18", "b46", "b1", "b2", "b1", "b47", "b7", "b46", "b35", "b40", "b2", "b11" ], "table_ref": [], "text": "Recent advances in multilingual machine translation have led to better parameter efficiency and language transfer by simultaneously modeling multiple language pairs (Firat et al., 2016;Ha et al., 2016). Some work has even proven the viability of performing zero-shot translation between language pairs for which there may be very little to no bitext (Johnson et al., 2017;Zhang et al., 2020). However, multilingual translation systems with complete parameter sharing can suffer from interference, or reduced performance for some language pairs versus a comparable bilingual baseline (Aharoni et al., 2019;Arivazhagan et al., 2019).\nPrevious work has hypothesized that limited modeling capacity is a major contributor to reduced performance in multilingual models (Aharoni et al., 2019;Zhu et al., 2021;Conneau et al., 2020). Some prior work shows this bottleneck phenomenon empirically by evaluating bilingual versus multilingual model performance across different model and data sizes (Zhang et al., 2020;Shaham et al., 2023). Besides capacity, the direction of translation can also dictate how much interference occurs in multilingual models; one-to-many translation systems suffer more from interference compared to multilingual translation model types (Wang et al., 2018;Arivazhagan et al., 2019;Fernandes et al., 2023). Therefore, in this work, we focus on one-to-many multilingual translation systems.\nDespite trends pointing towards performance dif- " }, { "figure_ref": [], "heading": "s(2) t(2) t(n) s(n)", "publication_ref": [ "b14", "b32" ], "table_ref": [], "text": "Figure 1: Schematic of our hidden space utilization comparisons. We extract final layer representations from both a bilingual model and a multilingual model on the same set of parallel sentences. We compute the isotropy of these representations (Iso), and compare the two models. ferences between bilingual and multilingual translation systems, especially in those with a multilingual decoder, it still unclear how these systems may be performing differently. To this end, we systematically compare the behavior of one-to-many translation models to their bilingual counterparts. Specifically, we examine the geometry of model representations from both types of models and compare them directly. We ask the following: (1) How does the ambient space utilization of model representations differ between bilingual models and one-to-many models? (2) If space utilization differs, what might be driving these differences?\nWe measure space utilization using IsoScore and intrinsic dimensionality (ID), which are two metrics that determine how uniformly a point cloud utilizes the dimensions of its underlying vector space, or its isotropy (Fukunaga and Olsen, 1971;Rudman et al., 2022).\nWe compute the isotropy of representations on the same set of sentence pairs across model types so that their scores are directly comparable, and summarize our method in Figure 1. We observe the following in our comparison:\n• Across different data resource levels and different source-target language pairs, the isotropy of one-to-many decoder representations for a given source-target pair is reduced as contrasted with decoder representations in a comparable bilingual model. • Source-side representation capacity improves slightly in one-to-many models over bilingual models. 
However, the extent of this encoder capacity improvement is smaller than the extent of the decoder capacity reduction. • With further analysis, we find that reduced space utilization in multilingual decoder representations seems driven by language-specific information occupying much of the available representation space. Single language decoders, however, do not have to distinguish this language-specific information.\nWhile most previous work has observed empirical differences between bilingual and multilingual models and some of its potential causes, our work characterizes the differences between bilingual and multilingual models in terms of their internal model representations. Our results could inform alternative approaches on current multilingual modeling design, especially in models that cover multiple target languages." }, { "figure_ref": [], "heading": "Analysis of Model Representations", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model Representation Space Utilization", "publication_ref": [], "table_ref": [], "text": "In this work, we investigate the difference between our model types via the geometry of final and intermediate layer representations. Specifically, we are interested in how well these representations utilize the dimensions of the vector space they lie in. If a set of representations has very high variance across a few dimensions, and little to no variance spread across the remaining dimensions, this set is said to have low isotropy, or anisotropy. Because a one-to-many model has to accommodate multiple languages in its decoder, we hypothesize that our multilingual models have less representational capacity than bilingual models for a given language pair. Therefore, we turn to examining the isotropy of representations produced from both a bilingual model and a multilingual model on a set of parallel sentences. Since our experiments keep the hidden dimension fixed across all models, and the representations are computed from the same data, these two sets of hidden vectors are directly comparable. In this setting, if one set of representations uses more ambient vector space compared to the other set, we can say that the first set is using more of its representational capacity." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Computing Isotropy", "publication_ref": [ "b24", "b23", "b14", "b32", "b5", "b26", "b9", "b32", "b32" ], "table_ref": [], "text": "In computing the space utilization of model representations, we first compute the sequence of hidden states across tokens. For a given source target pair (x, y), a forward pass through the encoder gives h enc (x) = (v 1 , v 2 , . . . , v |x| ), and through the decoder gives h dec (x, y) = (w 1 , w 2 , . . . , w |y| )\nWe compute the isotropy of these model representations at a sentence level. For converting encoder and decoder hidden state sequences into single vectors, we mean pool all non-padding tokens over the token dimension (Li et al., 2020;Kudugunta et al., 2019). Isotropy, formally, is a measure of how uniformly the variance of a dataset is spread across its vector dimensions.\nThe isotropy metrics used in this work are intrinsic dimensionality (ID) as computed by the PCA Fukunaga-Olsen algorithm (Fukunaga and Olsen, 1971) and IsoScore (Rudman et al., 2022). PCA Fukunaga-Olsen is a straightforward method to estimate the ID of a dataset based on a linear PCA decomposition of the data. 
This method is simple, robust to large samples, and handles high dimensionality, which is important for our hidden vector setting (Bac et al., 2021). The PCA-FO ID algorithm computes the following, for threshold D e ∈ [0, 1) and original dimensionality n:\n1. Compute PCA of the dataset X ⊆ R n : cov(X) = V ΛV T 2. Compute normalized eigenvalues λ i = λ i /λ 1 3. return count(λ i > D e )/n\nIn this work, we use D e = 0.05.\nIsoScore is a similar metric that uses the diagonal of the covariance matrix of PCA-transformed points in order to measure how many dimensions are used and how uniformly the dimensions are used. Previous works on representation isotropy have used other metrics, like average cosine similarity or partition scores (Mu and Viswanath, 2018;Ethayarajh, 2019), but Rudman et al. (2022) found that these methods do not stand up to thorough validity testing, like mean agnosticism or rotational invariance.\nMore formally, IsoScore computes the following:\n1. Reorient dataset X ⊆ R n with PCA: X PCA 2. Compute the diagonal covariance matrix of X PCA ∈ R n , denoted as Σ D . 1 3. Normalize the variance diagonal to be:\nΣD := √ n Σ D ∥Σ D ∥\n4. Compute the distance between the covariance diagonal and the identity matrix, which reflects ideal isotropy: δ(X)\n:= ∥ ΣD -1∥ √ 2(n- √ n)\n5. Use δ(X) to compute the percentage of dimensions isotropically utilized.\nϕ(X) = (n -δ(X) 2 (n - √ n)) 2 /n 2\nThe final range of ϕ(X) is linearly rescaled to span the interval [0, 1], resulting in the IsoScore. More details and motivation behind the metric can be found in the original paper (Rudman et al., 2022). We detail an example of point clouds and their respective IsoScores and IDs in Figure 2. The main difference between IsoScore and ID is that IsoScore accounts for evenness of variance spread among the dimensions, whereas ID only computes a variance threshold. In our Figure 2 example, the ID of these point clouds is both 1.0, meaning that all dimensions are utilized, but the IsoScore captures more fine-grained detail about how the dimensions are being used.\nIn our work, we compute IsoScores and the ID of several sets of model representations for comparison. We begin with a multilingual model that translates language pairs s → {t 1 , t 2 , ..., t n }, 1 PCA guarantees no off-diagonal covariance elements.\na bilingual model that translates only s → t k , and a set of sentences {s(i), t k (i)}. For both models, we compute the isotropy using one of our metrics, of X enc = {h enc (s(i)) : ∀i} and X dec = {h dec (s(i), t k (i)) : ∀i}. These values are labelled Iso(X multi enc (s, t k )), Iso(X multi dec (s, t k )) and Iso(X bi enc (s, t k )), Iso(X bi dec (s, t k )). Additionally, to observe the overall behavior of our multilingual models, we compute the isotropy of hidden states from all covered language pairs, resulting in Iso(X multi enc (s, j t j )), Iso(X multi dec (s, j t j ))." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Trilingual Models", "publication_ref": [ "b43", "b11", "b35" ], "table_ref": [], "text": "In order to control for the effects of language similarity, we experiment with trilingual models that translate from English to two languages, keeping one of the target languages fixed (Xin et al., 2022;Fernandes et al., 2023;Shaham et al., 2023). Specifically, we look at trilingual models with English as a source language, and 2 target languages. 
We use Russian (ru) as a fixed target, and vary the 3 other target languages: Chinese (zh), German (de), and Ukrainian (uk). These three additional languages have differing degrees of language similarity with Russian; Ukrainian and Russian share a close language family and script, German and Russian share a distant language family and do not share a script, and Russian and Chinese do not share a language family or script. In summary, we experiment with en-{ru,zh}, en-{ru,de}, and en-{ru,uk} models." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b8" ], "table_ref": [ "tab_0" ], "text": "Our main experiments use data from previous WMT competitions on general translation. We use training and development data from the 2022 WMT General Machine Translation task, and describe our WMT data preparation pipeline in Appendix A.\nFor validation on our en-{ru,uk} multilingual models, we subsample from the WMT22 Russian development set in order to match the size of the Ukrainian set for evenness. However, we perform our analysis on the whole development set. We additionally use bitext from the Multitarget TED talks, which allow us to investigate the role of multiparallel data in MT representations (Duh, 2018). We filter the Multitarget TED talk training sets to be strictly multiparallel, like their dev and test sets, and henceforth refer to the dataset as multiparallel TED talks. To measure the effect of data availability as well as multiparallelism, we subsample our WMT data to match the size of the Multiparallel TED talks. This way, our small WMT 1." }, { "figure_ref": [], "heading": "Training Details", "publication_ref": [ "b38", "b28", "b31", "b17", "b42", "b34", "b22", "b29", "b30" ], "table_ref": [], "text": "For our bilingual and multilingual translation models, we use the Transformer architecture as implemented by fairseq (Vaswani et al., 2017;Ott et al., 2019). For TED and WMT-small experiments, we use the transformer_iwslt_de_en configuration, and for WMT-large experiments, we use a transformer base configuration. We use weight tying between decoder input and output embeddings (Press and Wolf, 2017;Inan et al., 2016). For multilingual models, we incorporate target language id tokens prepended to the source sentence (Wicks and Duh, 2022). For all bilingual experiments, we use a joint source-target SentencePiece vocabulary of 16K tokens (Sennrich et al., 2016;Kudo and Richardson, 2018). For all multilingual experiments, we use a joint source-target vocabulary of 32K tokens. These vocabularies have high token overlap, where each multilingual vocabulary contains at least 93% of the bilingual vocabulary across all languages and datasets. This overlap leads to very similar tokenizations of the sentences in our comparisons.\nFor TED and WMT-small experiments, we select the best model checkpoint using validation on BLEU after training for up to 80 epochs. For WMT large experiments, we use average validation loss for selection after training up to 240k updates with a batch size of 32k tokens. All outputs are computed using a beam size of 5. We report BLEU scores on our dev sets computed with sacrebleu (Papineni et al., 2002;Post, 2018)." 
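For concreteness, the two isotropy metrics described in Section 2.2 can be sketched in a few lines of NumPy, taking as input a matrix X of mean-pooled sentence representations (one row per sentence, one column per hidden dimension). This is a minimal sketch: the function and variable names are our own, and the closed-form rescaling in the last line is one plausible way to realize the "[0, 1]" normalization of ϕ(X) mentioned above, not a reproduction of the released IsoScore or scikit-dimension code.

```python
import numpy as np

def pca_fo_id(X, threshold=0.05):
    """PCA Fukunaga-Olsen intrinsic dimensionality: fraction of
    normalized covariance eigenvalues above `threshold`."""
    n = X.shape[1]
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # descending
    normalized = eigvals / eigvals[0]                             # lambda_i / lambda_1
    return np.sum(normalized > threshold) / n

def isoscore(X):
    """IsoScore following the five steps listed in Section 2.2."""
    n = X.shape[1]
    Xc = X - X.mean(axis=0)
    # Step 1: reorient with PCA so the covariance matrix becomes diagonal.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    X_pca = Xc @ Vt.T
    # Step 2: diagonal of the covariance of the reoriented points.
    sigma = X_pca.var(axis=0)
    # Step 3: normalize the variance diagonal.
    sigma_hat = np.sqrt(n) * sigma / np.linalg.norm(sigma)
    # Step 4: distance from the isotropic ideal (identity diagonal).
    delta = np.linalg.norm(sigma_hat - 1.0) / np.sqrt(2.0 * (n - np.sqrt(n)))
    # Step 5: fraction of dimensions isotropically utilized.
    phi = (n - delta ** 2 * (n - np.sqrt(n))) ** 2 / n ** 2
    # Linear rescaling of phi (which spans [1/n, 1]) onto [0, 1].
    return (n * phi - 1.0) / (n - 1.0)
```

Applied to the mean-pooled hidden states of a bilingual model and a multilingual model on the same sentence pairs, these two functions produce directly comparable scores of the kind reported in the following section.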
}, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "Multilingual decoder capacity reduction", "publication_ref": [], "table_ref": [], "text": "We find that across our language pair settings and across our dataset sizes, representations from bilingual model decoders are more isotropic than multilingual model decoder representations. In Table 2, we see that for all trilingual settings, and for both WMT-small and WMT-large, bilingual decoder isotropy scores are larger than those of multilingual models for the same language pair. For example, in the WMT-large en-{ru,zh} dataset, the IsoScore of multilingual decoder representations (iso-dec) for Russian is 0.164 and Chinese is 0.106, but in their respective bilingual models, these values jump to 0.192 for Russian and 0.142 for Chinese.\nAdditionally, we plot the singular values from the singular value decomposition (SVD) of the hidden states of one of our multilingual model decoders and its corresponding two bilingual model decoders in Figure 3. We see that the spectra of the bilingual model decoder hidden states are more balanced than those of from the multilingual model, as they do not drop off in value as quickly as the multilingual singular values. This additionally demonstrates that the bilingual decoder hidden states have better distribution of variance across its dimensions.\nBecause these representations are computed from the same set of source-target sentences, and only the model types differ, the multilinguality of the one-to-many decoder must be contributing its reduced representational capacity for the source-target pair. In this case, modeling languagespecific information in each decoder pass may be occupying much of the multilingual decoder state space. We explore this hypothesis further in Section 4.5.\nWMT-large WMT-small dataset langs type BLEU iso-enc ID-enc iso-dec ID-dec BLEU iso-enc ID-enc iso-dec ID-dec Table 2: Main isotropy results for models trained on WMT data. We report BLEU scores of each model on the appropriate validation set, and IsoScores and intrinsic dimensionalities (ID) for both encoder and decoder sentence representations. We report scores for both language pairs, and in both types of models, bilingual (bi) and multilingual (multi). We bold the higher IsoScore/ID value between each multilingual/bilingual comparison. We additionally report the IsoScore of multilingual model spaces on the entire development set, not separating by language pair (both). " }, { "figure_ref": [], "heading": "Multilingual encoder capacity increase", "publication_ref": [], "table_ref": [], "text": "In encoder representation spaces, we see an opposite effect, although less pronounced. In both en-{ru,zh} and en-{ru,de} models, across small and large data availability, multilingual encoders tend to have greater isotropy among representations than bilingual model encoders. However, the one exception is the WMT-small en-{ru-uk} model. Re-sults comparing this increase in encoder capacity to the decrease in decoder capacity in multilingual models, compared to their bilingual counterparts, are summarized in Figure 4.\nComparing multilingual encoder isotropy separated by language versus the isotropy of the whole multilingual encoder space (Table 2), we see that the difference in scores is not very large. 
This could indicate that the multilingual encoder space is benefiting from sharing across the English sources from both language pairs in our multilingual dataset. Figure 4: ∆IsoScore values comparing the extent of the observed encoder isotropy increase (Iso(X multi enc ) -Iso(X bi enc )) to the extent of the observed isotropy decrease (Iso(X bi dec ) -Iso(X multi dec )) in our multilingual models, compared to their bilingual counterparts. Overall, the extent of the decoder isotropy decrease is larger than that of the encoder increase." }, { "figure_ref": [], "heading": "Effects of training scale", "publication_ref": [ "b15", "b35" ], "table_ref": [], "text": "In comparing IsoScore results on WMT-small vs WMT-large setups, we see that in a larger scale, there is consistently less space utilization in both multilingual and bilingual models. This occurs consistently in the decoder space, and in almost all settings in the encoder space. Both models have the same hidden dimension d = 512, and differ only in their feed-forward dimension and attention heads. Even among the overall multilingual isotropy scores (setting labeled 'both' in Table 2), WMT-large representations have smaller isotropy values than WMT-small representations in almost all language settings. The observed increase in anisotropy with larger training scale is closely related to the representation degeneration problem reported in previous literature (Gao et al., 2019). This phenomenon describes a tendency towards anisotropy of the final softmax layer W in natural language generation models, due to a frequency bias affecting output token embedding updates. With more training updates, this frequency bias causes output token embeddings to become more anisotropic. In our case, we see a similar degeneration with final hidden states, which are closely related to the softmax layer given the output distribution computation y = softmax(h T W ) where h is our final hidden vector.\nIn terms of performance, we note that only the WMT-large BLEU scores see a reduction or no improvement in the multilingual case; it is known that measurable interference does not generally occur much at a smaller data scale (Shaham et al., 2023)." }, { "figure_ref": [], "heading": "Multi-way parallelism", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We report results on the Multiparallel TED Talks in Table 3. In this setting, we find that our results on increased isotropy of multilingual source-side representations still holds in a majority of cases, even though the source-side sentences are identical across our two language pairs in the trilingual model. This is a strong indication that in one-tomany models, source-side representations benefit from a shared source embedding space, and do not separate much based on target language.\nOn the other hand, our results on decreased decoder capacity do not hold in all language settings in our multiparallel model. An isotropy increase occurs over bilingual models to a small extent for our en-{ru,de} model, and a larger one for our en-{ru,uk} model, where the target languages share a script. However, the isotropy of our entire decoder multilingual space is still relatively low. This indicates that although there is still separation in the decoder space by language, each language's representation cluster in the decoder space is still more locally isotropic than its bilingual counterpart.\nWe test our TED model on our WMT test sets for direct comparability to our other models. 
Full results can be found in Appendix B. We see that results are mostly consistent for multilingual encoder isotropy improvement. For multilingual decoder isotropy, we see similar results with respect to language relatedness -bilingual decoder representations are more anisotropic than their multilingual counterparts for en-{ru,zh}, similar for en-{ru,de}, and the opposite for en-{ru,uk}, where the target languages are most related." }, { "figure_ref": [], "heading": "Decoder language separation", "publication_ref": [], "table_ref": [], "text": "Across all three language settings, and in all of our data settings, we see that the isotropy of the overall multilingual decoder hidden space is much lower than either of the specific language portions of the multilingual space. What this suggests, according to our metrics, is that there are some dimensions whose variance is heavily dictated by language information. When separating out these representations by language, the variance is reduced. This, however, is not the case when considering encoder language separation. We summarize this phenomenon in Figure 5.\nIn our multiparallel setting, tested on both our TED and WMT datasets, we see that this difference is smallest for en-{ru,uk}. We hypothesize that this difference is due to vocabulary sharing. Because Russian and Ukrainian share a script and subword units, shared output embedding vocabulary items would lead to closer hidden states. Their close typological relatedness could be contributing to their decoders state closeness as well. However, since Russian and German or Russian and Chinese share very few vocabulary units, their hidden states are further in the multilingual decoder space, as also seen in Figure 5." }, { "figure_ref": [], "heading": "Layerwise decoder behavior", "publication_ref": [ "b33", "b21", "b27" ], "table_ref": [], "text": "We further investigate our claim that multilingual decoders use significant representational capacity to model language-specific information by observing how isotropy changes in multilingual decoder states across decoder layers. We show layerwise isotropy results for multilingual decoder states in Figure 6. We obtain hidden states according method described in Section 2.2, but instead at each layer boundary.\nWe find that throughout decoder layers, the over- : ∆IsoScore values between languagespecific multilingual representations separated by language and overall multilingual representations, for both the encoder and decoder (Iso(X multi (s, t k ) -Iso(X multi (s, ∪ j t j )). Large ∆IsoScores between language-specific multilingual reps. and overall multilingual reps. indicate heavy encoding of language specificity in the decoder space. all isotropy of the entire set of decoder hidden states remains constant or decreases. However, for language-specific decoder states, we see that isotropy increases throughout the layers. Together, this implies that throughout the decoder layers, representations become more language specific. This suggests that earlier layers in the decoder benefit from some sharing, whereas later layers handle Figure 6: Layerwise IsoScores on our WMT-large models. The divergence between the overall decoder isotropy and language-specific isotropy shows that hidden states become more language-specific throughout the decoder. greater language specificity.\nIn summary, these results seem to suggest that decoders in multilingual translation models seem to separate out languages among the dimensions available in their hidden states. 
This finding could motivate the design and use of multilingual architectures that do not use complete sharing in their decoder parameters. Some prior work has already examined this approach (Sachan and Neubig, 2018;Kong et al., 2021;NLLB Team et al., 2022)." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Multilingual model capacity", "publication_ref": [ "b33", "b37", "b25", "b45", "b41", "b23" ], "table_ref": [], "text": "Prior work has also examined the bottleneck phenomenon in multilingual machine translation. Much of this work observes the phenomenon empirically, and proposes methods to try to alleviate the parity. Sachan and Neubig (2018) also focus on oneto-many translation models, and propose partial sharing between language decoders in order to reduce the observed interference during full sharing. Tan et al. (2019) propose a knowledge distillation method to reduce the parity between bilingual and multilingual translation models by using bilingual models as multiple teachers and the multilingual model as a student. Other methods propose using a mix of language-specific and language-agnostic parameters, (Lin et al., 2021) and even automatically learning where to and where not to share across language pairs (Zhang et al., 2021). Wang et al. (2021) approach interference from a gradient viewpoint, and find that in En→ Any models, gradients become less similar in decoders, and hypothesize that this is due to the difference in decoder label spaces. Kudugunta et al. (2019), like us, also investigate hidden representations to understand sharing in multilingual translation models. However, they focus on an in-house many-to-many translation model, and focus on representational similarities between languages, rather than representational capacity for language pairs. Shaham et al. ( 2023) take an empirical approach to understanding interference in multilingual translation models, by investigate how scale and multilingual dataset ratios affect performance. They propose to both scale up models and adjust temperature sampling to reduce interference for simple models. However, this approach is largely empirical, and does not account for smaller scales and balanced datasets." }, { "figure_ref": [], "heading": "Isotropy of Representations", "publication_ref": [ "b9", "b32", "b15", "b44" ], "table_ref": [], "text": "Recently, studies analyzing the geometry of Transformer representations have shown that they do not uniformly occupy many of the dimensions of the underlying space in which they lie. Ethayarajh (2019) show that many pretrained language models are anisotropic, where any two representations have very high cosine similarity. In addition to proposing a new metric, Rudman et al. (2022) also find that in their revised analysis, representations from language models use even fewer dimensions than previously reported. In the translation setting, Gao et al. (2019) show that embeddings from generation models, including MT models, tend to degenerate into an anisotropic distribution due to frequency bias. Yu et al. (2022) find a similar degeneration in generation models, and propose a gradient gating method that helps reduce the frequency bias causing embedding isotropy. They report improved MT results when controlling for anisotropy." 
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "While previous work has empirically demonstrated performance differences in multilingual and bilingual models, in this work, we systematically compare the geometry of model representations in bilin-gual and multilingual translation models in order to determine what might drive these differences. Using one-to-many models which are most prone to interference, we experiment with varying data sizes and source-target combinations.\nWe find for a given language pair, there is a consistent reduction in representational capacity in multilingual decoders versus comparable bilingual decoders. We additionally find a small increase in representational capacity for multilingual encoder spaces given the one-to-many task. Representational capacity decreases in a larger model and data paradigm, and results on multiparallel data show a strong improvement in multilingual encoder representational capacity and some improvement in multilingual decoder representational capacity. Finally, we find that reduced capacity in multilingual decoders can be attributed to language information occupying a significant portion of the available representation space." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b33", "b40", "b35" ], "table_ref": [], "text": "Our models cover at most 3 language families for the sake of controlled analysis when modern multilingual translation models cover many more. We think it is worthwhile to analyze models with larger coverage as future work. We focus on one-to-many models as they tend to fall behind other multilingual model types (Sachan and Neubig, 2018;Wang et al., 2018;Shaham et al., 2023). However, manyto-many models still have multilingual decoders but may have different behavior given their multilingual encoder state space.\nAdditionally, our conclusions focus on encoderdecoder models, but there is growing interest in decoder-only translation models whose isotropic behavior may differ.\nFinally, our work focuses only on the characterization of representational capacity differences between model types, and not on the improvement of representational capacity of one-to-many models. However, we hope this work provides insight into the development of future modeling techniques for models with multilingual decoders. " }, { "figure_ref": [], "heading": "B. TED Models on WMT dev set", "publication_ref": [], "table_ref": [], "text": "" } ]
Multilingual machine translation has proven immensely useful for both parameter efficiency and overall performance across many language pairs via complete multilingual parameter sharing. However, some language pairs in multilingual models can see worse performance than in bilingual models, especially in the one-to-many translation setting. Motivated by their empirical differences, we examine the geometric differences in representations from bilingual models versus those from one-to-many multilingual models. Specifically, we compute the isotropy of these representations using intrinsic dimensionality and IsoScore, in order to measure how the representations utilize the dimensions in their underlying vector space. Using the same evaluation data in both models, we find that for a given language pair, its multilingual model decoder representations are consistently less isotropic and occupy fewer dimensions than comparable bilingual model decoder representations. Additionally, we show that much of the anisotropy in multilingual decoder representations can be attributed to modeling language-specific information, therefore limiting remaining representational capacity.
Exploring Geometric Representational Disparities Between Multilingual and Bilingual Translation Models
[ { "figure_caption": "Figure 2 :2Figure 2: Depictions of 2D point clouds, their principal components, and their computed IsoScores and IDs. The left point cloud has high IsoScore due to even variance spread across principal components, but the right has lower IsoScore due to uneven variance spread. Both clouds have an ID of 1.0 as ID is less sensitive to variance spread.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Semi-log plots of normalized singular values from SVD of bilingual decoder hidden states and multilingual decoder hidden states for the WMT-large en-{ru,zh} model. The spectra of bilingual decoder hidden states are better balanced than those of multilingual decoder hidden states. We use a semi-log scale for visibility.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure5: ∆IsoScore values between languagespecific multilingual representations separated by language and overall multilingual representations, for both the encoder and decoder (Iso(X multi (s, t k ) -Iso(X multi (s, ∪ j t j )).Large ∆IsoScores between language-specific multilingual reps. and overall multilingual reps. indicate heavy encoding of language specificity in the decoder space.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure7: ∆ IsoScore values between languagespecific multilingual representations separated by language and overall multilingual representations, for both the encoder and decoder.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Total sentences in each bitext used in our work. We train trilingual models that translate from English into two other languages. We force the WMT-small training split to be the same size as Multiparallel TED for comparability. set and TED talks can help us study multiparallelism, and our small WMT set and large WMT set can help show the effect of scale on representational capacity. 
Statistics on our datasets are in Table", "figure_data": "WMT-largeWMT-smallMultiparallel TEDdatasetlangtraindevtraindevtraindeven-{ru,de}en-ru 98.2M 2993 149k 2993 149k en-de 98.2M 2203 149k 2203 149k1958 1958en-{ru,uk}en-ru 31.5M 2993 67k 2993 67k en-uk 31.5M 997 67k 997 67k1958 1958en-{ru,zh}en-ru 41.1M 2993 161k 2993 161k en-zh 41.1M 3418 161k 3418 161k1958 1958", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Isotropy results on the encoder and decoder sentence representations from our Multiparallel TED model, tested on the Multiparallel TED development set.", "figure_data": "Multiparallel TEDlangs type BLEU iso-enc ID-enc iso-dec ID-decen-rumulti bi16.0 15.50.135 0.1300.133 0.1190.253 0.2840.313 0.348en-zhmulti bi19.3 18.80.122 0.1250.113 0.1250.244 0.2770.305 0.338both-0.1380.1370.1040.063en-rumulti bi15.8 15.30.108 0.0970.094 0.0980.261 0.2500.326 0.309en-demulti bi26.1 25.20.104 0.0730.088 0.0660.258 0.2470.320 0.287both-0.1080.0940.1160.072en-rumulti bi13.5 12.20.127 0.1240.139 0.1450.248 0.2220.305 0.260en-ukmulti bi16.8 15.60.128 0.1240.141 0.1520.244 0.2010.299 0.238both-0.1300.1430.1730.168", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Isotropy results on our multiparallel TED model, tested on the WMT development set for direct comparison with our other models.", "figure_data": "TEDlangs type BLEU iso-enc iso-decen-rumulti bi10.6 10.30.102 0.1070.227 0.264en-zhmulti bi16.4 15.30.084 0.0340.166 0.194multi-0.0920.056en-rumulti bi11.0 10.20.097 0.0760.243 0.235en-demulti bi16.5 15.20.073 0.0400.223 0.227multi-0.0790.085en-rumulti bi7.7 7.10.125 0.1030.228 0.213en-ukmulti bi11.2 10.00.143 0.1310.202 0.188multi-0.1300.174", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" } ]
Neha Verma; Kenton Murray; Kevin Duh
[ { "authors": "", "journal": "Bibliographical References", "ref_id": "b0", "title": "", "year": "" }, { "authors": "Roee Aharoni; Melvin Johnson; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Massively multilingual neural machine translation", "year": "2019" }, { "authors": "Naveen Arivazhagan; Ankur Bapna; Orhan Firat; Dmitry Lepikhin; Melvin Johnson; Maxim Krikun; Mia Xu Chen; Yuan Cao; George Foster; Colin Cherry", "journal": "", "ref_id": "b2", "title": "Massively multilingual neural machine translation in the wild: Findings and challenges", "year": "2019" }, { "authors": "Mikel Artetxe; Holger Schwenk", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Marginbased parallel corpus mining with multilingual sentence embeddings", "year": "2019" }, { "authors": "Mikko Aulamo; Sami Virpioja; Jörg Tiedemann", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "OpusFilter: A configurable parallel corpus filtering toolbox", "year": "2020" }, { "authors": "Jonathan Bac; M Evgeny; Mirkes; Ivan Alexander N Gorban; Andrei Tyukin; Zinovyev", "journal": "Entropy", "ref_id": "b5", "title": "Scikit-dimension: a python package for intrinsic dimension estimation", "year": "2021" }, { "authors": "Kyunghyun Cho", "journal": "", "ref_id": "b6", "title": "Noisy parallel approximate decoding for conditional recurrent language model", "year": "2016" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Kevin Duh", "journal": "", "ref_id": "b8", "title": "The multitarget ted talks task", "year": "2018" }, { "authors": "Kawin Ethayarajh", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings", "year": "2019" }, { "authors": "Angela Fan; Shruti Bhosale; Holger Schwenk; Zhiyi Ma; Ahmed El-Kishky; Siddharth Goyal; Mandeep Baines; Onur Celebi; Guillaume Wenzek; Vishrav Chaudhary; Naman Goyal; Tom Birch; Vitaliy Liptchinsky; Sergey Edunov; Edouard Grave; Michael Auli; Armand Joulin", "journal": "J. Mach. Learn. 
Res", "ref_id": "b10", "title": "Beyond english-centric multilingual machine translation", "year": "2021" }, { "authors": "Patrick Fernandes; Behrooz Ghorbani; Xavier Garcia; Markus Freitag; Orhan Firat", "journal": "", "ref_id": "b11", "title": "Scaling laws for multilingual neural machine translation", "year": "2023" }, { "authors": " Pmlr", "journal": "", "ref_id": "b12", "title": "", "year": "" }, { "authors": "Orhan Firat; Kyunghyun Cho; Yoshua Bengio", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Multi-way, multilingual neural machine translation with a shared attention mechanism", "year": "2016" }, { "authors": "K Fukunaga; D R Olsen", "journal": "IEEE Transactions on Computers, C", "ref_id": "b14", "title": "An algorithm for finding intrinsic dimensionality of data", "year": "1971" }, { "authors": "Jun Gao; Di He; Xu Tan; Tao Qin; Liwei Wang; Tieyan Liu", "journal": "", "ref_id": "b15", "title": "Representation degeneration problem in training natural language generation models", "year": "2019" }, { "authors": "Thanh-Le Ha; Jan Niehues; Alex Waibel", "journal": "", "ref_id": "b16", "title": "Toward multilingual neural machine translation with universal encoder and decoder", "year": "2016" }, { "authors": "Hakan Inan; Khashayar Khosravi; Richard Socher", "journal": "", "ref_id": "b17", "title": "Tying word vectors and word classifiers: A loss framework for language modeling", "year": "2016" }, { "authors": "Melvin Johnson; Mike Schuster; Quoc V Le; Maxim Krikun; Yonghui Wu; Zhifeng Chen; Nikhil Thorat; Fernanda Viégas; Martin Wattenberg; Greg Corrado; Macduff Hughes; Jeffrey Dean", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b18", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "year": "2017" }, { "authors": "Armand Joulin; Edouard Grave; Piotr Bojanowski; Matthijs Douze; Hérve Jégou; Tomas Mikolov", "journal": "", "ref_id": "b19", "title": "Fasttext.zip: Compressing text classification models", "year": "2016" }, { "authors": "Armand Joulin; Edouard Grave; Piotr Bojanowski; Tomas Mikolov", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Bag of tricks for efficient text classification", "year": "2017" }, { "authors": "Xiang Kong; Adithya Renduchintala; James Cross; Yuqing Tang; Jiatao Gu; Xian Li", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Multilingual neural machine translation with deep encoder and multiple shallow decoders", "year": "2021" }, { "authors": "Taku Kudo; John Richardson", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Sentence-Piece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018" }, { "authors": "Sneha Kudugunta; Ankur Bapna; Isaac Caswell; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Investigating multilingual NMT representations at scale", "year": "2019" }, { "authors": "Bohan Li; Hao Zhou; Junxian He; Mingxuan Wang; Yiming Yang; Lei Li", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "On the sentence embeddings from pre-trained language models", "year": "2020" }, { "authors": "Zehui Lin; Liwei Wu; Mingxuan Wang; Lei Li", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Learning language specific sub-network for multilingual machine 
translation", "year": "2021" }, { "authors": "Jiaqi Mu; Pramod Viswanath", "journal": "", "ref_id": "b26", "title": "All-butthe-top: Simple and effective post-processing for word representations", "year": "2018" }, { "authors": "Marta R Nllb Team; James Costa-Jussà; Onur Cross; Maha Çelebi; Kenneth Elbayad; Kevin Heafield; Elahe Heffernan; Janice Kalbassi; Daniel Lam; Jean Licht; Anna Maillard; Skyler Sun; Guillaume Wang; Al Wenzek; Bapi Youngblood; Loic Akula; Gabriel Mejia Barrault; Prangthip Gonzalez; John Hansanti; Semarley Hoffman; Jarrett; Ram Kaushik; Dirk Sadagopan; Shannon Rowe; Chau Spruit; Pierre Tran; Necip Andrews; Shruti Fazil Ayan; Sergey Bhosale; Angela Edunov; Cynthia Fan; Vedanuj Gao; Francisco Goswami; Philipp Guzmán; Alexandre Koehn; Christophe Mourachko; Safiyyah Ropers; Holger Saleem; Jeff Schwenk; Wang", "journal": "", "ref_id": "b27", "title": "No language left behind: Scaling human-centered machine translation", "year": "2022" }, { "authors": "Myle Ott; Sergey Edunov; Alexei Baevski; Angela Fan; Sam Gross; Nathan Ng; David Grangier; Michael Auli", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "year": "2019" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b29", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "A call for clarity in reporting BLEU scores", "year": "2018" }, { "authors": "Ofir Press; Lior Wolf", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Using the output embedding to improve language models", "year": "2017" }, { "authors": "William Rudman; Nate Gillman; Taylor Rayne; Carsten Eickhoff", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "IsoScore: Measuring the uniformity of embedding space utilization", "year": "2022" }, { "authors": "Devendra Sachan; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Parameter sharing methods for multilingual selfattentional translation models", "year": "2018" }, { "authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Neural machine translation of rare words with subword units", "year": "2016" }, { "authors": "Uri Shaham; Maha Elbayad; Vedanuj Goswami; Omer Levy; Shruti Bhosale", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Causes and cures for interference in multilingual translation", "year": "2023" }, { "authors": "Jianlin Su; Jiarun Cao; Weijie Liu; Yangyiwen Ou", "journal": "", "ref_id": "b36", "title": "Whitening sentence representations for better semantics and faster retrieval", "year": "2021" }, { "authors": "Xu Tan; Yi Ren; Di He; Tao Qin; Zhou Zhao; Tie-Yan Liu", "journal": "", "ref_id": "b37", "title": "Multilingual neural machine translation with knowledge distillation", "year": "2019" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b38", "title": "Attention is all you need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b39", "title": "", "year": "" }, { "authors": "Yining Wang; Jiajun Zhang; Feifei Zhai; Jingfang 
Xu; Chengqing Zong", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Three strategies to improve one-to-many multilingual translation", "year": "2018" }, { "authors": "Zirui Wang; Yulia Tsvetkov; Orhan Firat; Yuan Cao", "journal": "", "ref_id": "b41", "title": "Gradient vaccine: Investigating and improving multi-task optimization in massively multilingual models", "year": "2021" }, { "authors": "Rachel Wicks; Kevin Duh", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "The effects of language token prefixing for multilingual machine translation", "year": "2022" }, { "authors": "Derrick Xin; Behrooz Ghorbani; Justin Gilmer; Ankush Garg; Orhan Firat", "journal": "", "ref_id": "b43", "title": "Do current multi-task optimization methods in deep learning even help?", "year": "2022" }, { "authors": "Sangwon Yu; Jongyoon Song; Heeseung Kim; Seongmin Lee; Woo-Jong Ryu; Sungroh Yoon", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Rare tokens degenerate all tokens: Improving neural text generation via adaptive gradient gating for rare token embeddings", "year": "2022" }, { "authors": "Biao Zhang; Ankur Bapna; Rico Sennrich; Orhan Firat", "journal": "", "ref_id": "b45", "title": "Share or not? learning to schedule language-specific capacity for multilingual translation", "year": "2021" }, { "authors": "Biao Zhang; Philip Williams; Ivan Titov; Rico Sennrich", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "Improving massively multilingual neural machine translation and zero-shot translation", "year": "2020" }, { "authors": "Yaoming Zhu; Jiangtao Feng; Chengqi Zhao; Mingxuan Wang; Lei Li", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Counterinterference adapter for multilingual machine translation", "year": "2021" }, { "authors": "A Wmt", "journal": "", "ref_id": "b48", "title": "Data Preprocessing We preprocess and filter the WMT training data in order to ensure a set of high quality bitext from the original crawled data provided by organizers", "year": "2021" }, { "authors": "", "journal": "", "ref_id": "b49", "title": "Length ratio cleaning with ratio=3, and remove sentences with > 250 subwords", "year": "" }, { "authors": "", "journal": "", "ref_id": "b50", "title": "Language identification filter such that both the source and target language ID must be correct", "year": "2016" }, { "authors": "", "journal": "Artetxe and Schwenk", "ref_id": "b51", "title": "Bitext filtering using LASER Embeddings as implemented by the OpusFilter toolkit", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 313.91, 616.72, 213.02, 47.88 ], "formula_id": "formula_0", "formula_text": "1. Compute PCA of the dataset X ⊆ R n : cov(X) = V ΛV T 2. Compute normalized eigenvalues λ i = λ i /λ 1 3. return count(λ i > D e )/n" }, { "formula_coordinates": [ 3, 91.93, 166.12, 198.34, 26.68 ], "formula_id": "formula_1", "formula_text": "ΣD := √ n Σ D ∥Σ D ∥" }, { "formula_coordinates": [ 3, 180.87, 216.65, 55.6, 18.59 ], "formula_id": "formula_2", "formula_text": ":= ∥ ΣD -1∥ √ 2(n- √ n)" }, { "formula_coordinates": [ 3, 91.93, 255.39, 147.56, 15.91 ], "formula_id": "formula_3", "formula_text": "ϕ(X) = (n -δ(X) 2 (n - √ n)) 2 /n 2" } ]
2023-05-27
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b36", "b56", "b18", "b28", "b46", "b48", "b4", "b9", "b19", "b60", "b61", "b61", "b33" ], "table_ref": [], "text": "High-fidelity clothes digitization plays an essential role in various human-related vision applications such as virtual shopping, film, and gaming. In our daily life, humans are always in a moving status, driving their clothes to move together. To realize this very common scenario, it is indispensable to gain dynamic garments in real applications. Thanks to the rapid development of mobile devices in terms of digital cameras, processors, and storage, shooting a monocular video in the wild becomes highly convenient and accessible for general customers. In this paper, Figure 1. Can we extract dynamic 3D garments from monocular videos? The answer is Yes! By jointly optimizing the dynamic feature curves and garment surface followed by non-rigid template registration, our method can reconstruct high-fidelity and temporally consistent garment meshes with open boundaries. our goal is definite -extracting dynamic 3D garments from monocular videos, which is significantly meaningful and valuable for practical applications, but is yet an uncultivated land with many challenges.\nWe attempt to seek a new solution to this open problem and start by revisiting existing works from two mainstreams. i) Leveraging the success of neural rendering methods [35,37,57], several works are able to reconstruct dynamic clothed humans from monocular videos [8, 19,29,47,49], by representing the body surface with an implicit function in the canonical space and apply skinning based deformation for motion modeling. One naive way to achieve our goal is: first to get the clothed human through these methods and separate the garments from human bodies. However, such a separation job requires laborious and non-trivial processing by professional artists, which is neither straightforward nor feasible for general application scenarios. ii) As for garment reconstruction, many methods [5,10,20,61,62] make it possible to reconstruct high-quality garment meshes from single-view images in the wild. Specifically, ReEF [62] estimates 3D fea-ture curves * and an implicit surface field [34] for non-rigid garment template registration. Nonetheless, these methods struggle to produce temporally consistent surfaces when taking videos as inputs.\nThe above discussion motivates us to combine the merits of both the dynamic surface modeling in recent neural rendering methods and the explicit curve representation for garment modeling. To this end, we try to delineate a new path towards our goal: optimizing dynamic explicit feature curves and implicit garment surface from monocular videos, to extract temporally consistent garment meshes with open boundaries. We represent the explicit curves and implicit surface in the canonical space with skinning-based motion modeling, and optimize them by 2D supervision automatically extracted from the video (e.g., image intensities, garment masks, and visible feature curves). After that, the open garment meshes can be extracted by a garment template registration in the canonical space (see Fig. 1).\nWe strive to probe this path as follows: (1) As a feature curve is a point set whose deformation has a high degree of freedom, directly optimizing the per-point offsets often leads to undesired self-intersection and spike artifacts. 
To better regularize the deformation of curves, we introduce an intersection-free curve deformation method to maintain the order of feature curves. (2) We optimize the 3D feature curves using 2D projection loss measured by the estimated 2D visible curves, where the key challenge is to accurately compute the visibility of curves. To address this problem, we propose a surface-aware curve visibility estimation method based on the implicit garment surface and z-buffer. (3) To ensure the accuracy of curve visibility estimation during the optimization process, the curves should always be right on the garment surface. We therefore introduce a progressive curve and surface evolution strategy to jointly update the curves and surface while imposing the on-surface regularization for curves.\nTo summarize, the main contributions of this work are:\n• We introduce REC-MV, to our best knowledge, the first method to reconstruct dynamic and open loose garments from the monocular video. • We propose a new approach for joint optimization of explicit feature curves and implicit garment surface from monocular video, based on carefully designed intersection-free curve deformation, surfaceaware curve visibility estimation, and progressive curve and surface evolution methods. • Extensive evaluations on casually captured monocular videos demonstrate that our method outperforms existing methods." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b31", "b3", "b22", "b23", "b2", "b25", "b42", "b33", "b37", "b44", "b36", "b56", "b30", "b40", "b54", "b59", "b18", "b46", "b48", "b20", "b48", "b46", "b28", "b6", "b21", "b24", "b41", "b47", "b50", "b4", "b9", "b19", "b35", "b60", "b61", "b4", "b19", "b8", "b9", "b60", "b33", "b61", "b12", "b13", "b10", "b39" ], "table_ref": [], "text": "Human Reconstruction from Single-view Image. Traditional methods for human reconstruction often adopt a parametric human model (e.g., SMPL [32] or SCAPE [4]) and can only recover a naked 3D body [23,24]. To increase the surface details, free-form deformations can be applied to the mesh vertices to model small geometry variations caused by the clothing [2, 3,26,43,52].\nRecent methods propose to utilize implicit surface representations [34,38] to reconstruct 3D clothed human with an arbitrary topology. Specifically, PIFu and PIFuhd [44,45] extract pixel-aligned spatial features from images as the input for implicit surface function for occupancy prediction. Follow-up methods then integrates 3D-aligned features to improve the results [6, 15-18, 56, 58]. As these methods only consider single-image reconstruction, they cannot produce temporally consistent results for video input. Human Reconstruction from Monocular Video. Inspired by the success of neural rendering methods [35,37,57] in scene reconstruction, many methods have been proposed to reconstruct 3D human from sparse-view [31,41,53,55,60] or monocular [19,47,49] videos.\nAnim-NeRF [8], Neuman [21] and HumanNeRF [49] introduce methods based on neural radiance field (NeRF) [35] to reconstruct an animatable avatar from monocular video. These methods transform a 3D point in the observation space to the canonical space by inverse-skinning, and then perform volume rendering in the canonical space. A-NeRF [47] additionally adopt a skeleton-relative encoding strategy. AvatarCap [29] proposes a monocular human volumetric capture method, but requires reconstructing an avatar from multiple 3D scans in advance. Garment Reconstruction from Images. 
Reconstructing garment mesh from images enables many applications like virtual try-on and content creation. Existing methods reconstruct the clothing as a separate layer on top of the body [7,22,25,42,48,51]. Among them, several methods address the challenging problem of garment reconstruction from single-view image [5,10,20,36,61,62]. MGN [5] learns a per-category parametric model from a large-scale clothing dataset. BCNet [20] first reconstructs a coarse template and then refines the surface details with a displacement network. AnchorUDF [59] adopts the unsigned distance field (UDF) [9] to represent the open surface mesh. SMPlicit [10] proposes a generative model to reconstruct layered garments from a single image. Deep-Fasion3D [61] reconstructs the surface with occupancy network [34] and applys non-rigid ICP to register the clothing template. ReEF [62] registers explicit clothing template to the implicit field learned from pixel-aligned implicit function. However, as these single-image methods do not consider clothing motion, they are not suitable for dynamic gar- ment reconstruction.\nAmong methods related to garment reconstruction from videos, Li et al. [28] introduce a method to learn physicsaware clothing deformation from monocular videos, but assumes the template scans for the body and clothing are provided [13]. Garment Avatar [14] proposes a multi-view patterned cloth tracking algorithm, requiring the subject to wear clothing with specific patterns. SCARF represents the layered clothing using radiance field [11] on top of the SMPL-X model [40] from monocular video. In contrast, our method first reconstructs the explicit 3D garment curves and surfaces, and then extracts the garment mesh via template registration." }, { "figure_ref": [ "fig_0" ], "heading": "Method", "publication_ref": [ "b60", "b18", "b31", "b2" ], "table_ref": [], "text": "Given a monocular video with N i frames depicting a moving person {I t |t = 1, . . . , N i }, REC-MV aims to reconstruct high-fidelity and space-time coherent open garment meshes. This is a challenging problem as it requires a method to simultaneously capture the shape contours, local surface details, and the motion of the garment.\nObserving that feature curves (e.g., necklines, hemlines) provide critical cues for determining the shape contours of garment [61] and implicit signed distance function (SDF) can well represent a detailed closed surface [19], we propose to first optimize the explicit 3D feature curves and implicit garment surfaces from the video, and then apply non-rigid clothing template registration to extract the open garment meshes (see Fig. 2). Preprocessing. We generate the initial shape parameter β, camera intrinsic π, and per-frame SMPL [32] pose param-eters {θ t |t = 1, . . . , N i } using Videoavatar [3]. To identify the garment regions in 2D images, we apply the existing garment parsing method [27] to estimate the garment masks. Our method also requires 2D visible curves ζ = {ζ l,t |l = 1, . . . N l , t = 1, . . . , N i } for 3D curve recovery, where N l denotes the number of curves. Note that the 2D visible curves can be automatically produced by parsing boundaries of the garment mask (more details in the supplementary material).\nOverview. To utilize the information that exists in the entire video for dynamic garment reconstruction, we represent the explicit feature curve and implicit garment surface in the canonical space (Sec. 3.1). 
For a specific time step, we adopt the skeleton-based skinning and non-rigid deformation modeling to map the canonical curves and surfaces to the camera view space (Sec. 3.2). As the given 2D curves only contain visible points, to optimize the 3D feature curves from 2D projection error, we propose a surfaceaware approach to compute the visibility of the 3D feature curve based on z-buffer (Sec. 3.3). In terms of implicit surface optimization, we minimize the photometric loss between the rendered and input image based on the differentiable surface rendering technique (Sec. 3.4). Then the adopted loss functions for joint optimization of curves and surfaces are described (Sec. 3.5). Last, the open garment meshes can be extracted by registering an explicit garment template to the recovered curves and implicit surfaces in the canonical space (more details in the supplementary material). Then the garment meshes can be deformed based on the SMPL poses." }, { "figure_ref": [ "fig_1" ], "heading": "Feature Curve and Surface Representation", "publication_ref": [ "b60", "b8", "b18" ], "table_ref": [], "text": "Explicit Surface Template.\nFollowing DeepFash-ion3D [61], we employ several surface templates, each contains a pre-defined set of 3D feature curves L = {L i |i = 1, . . . , N l } extracted from the garment boundaries, where N l is the number of feature curves (see our supplementary materials for more details). The surface templates will be used for garment surface initialization and the pre-defined feature curves will be used for curve initialization † . Intersection-free Curve Deformation. A straightforward idea is to represent a feature curve as a discrete point set, and directly estimate the 3D deformation offset for each point during optimization. However, this unstructured curve representation struggles to maintain the order of the points and often generate spike artifacts due to the high degree of freedom of the deformation.\nTo address this issue, we introduce a novel intersectionfree curve deformation method, in which the point's deformation at each step is controlled by the curve center and two orthogonal directions (see Fig. 3 for illustration). Formally, given a curve C of N p points with center p c , the updated position of i-th point C(i) is defined as\nC ′ (i) = p c + S d i n d i + S c i n c ,(1)\nwhere n d i is the direction from the curve center to the current point C(i), and\nn c = 1 Np-1 Np i=1 (n d i × n d i-1\n) is the direction perpendicular to the current feature curve plane. S d i ∈ R and S c i ∈ R are learnable parameters specifying the step size of the deformation.\nThe proposed intersection-free curve deformation can well preserve the order of points in the curve, which largely reduced the difficulty of optimization compared to the direct offset estimation approach. Implicit SDF in Canonical Space. Unsigned distance field (UDF) [9] is an implicit function that can represent an open surface. However, as UDF is not differentiable at points close to the surface, it is non-trivial to integrate UDF with differentiable surface rendering to take advantage of supervision from 2D photometric loss. We therefore adopt the SDF to represent a closed garment surface for surface geometry recovery, followed by garment template registration to extract the open surface.\nIt is common to represent the whole surface with a single SDF for human reconstruction [19]. 
However, as our goal is to reconstruct separate clothes, using a single SDF to represent both the upper clothes and bottom clothes (e.g., skirt) increases the difficulty of template registration (i.e., splitting the upper and bottom clothes requires highly accurate waist curves). † including templates for uppers, dresses, coats, pants, and skirts. To enable better template registration, we consider three different surface types (i.e., upper-clothing, bottomclothing, and upper-bottom) according to the garment types, and represent each surface type as the zero-isosurface of an independent SDF in the canonical space. The SDF is expressed by an MLP f with learnable weights η :\nS(η) = {p ∈ R 3 |f (p; η) = 0}.\nFor the sake of simplicity and without loss of generality, we illustrate our method in reconstructing a single surface type later in this section." }, { "figure_ref": [], "heading": "Skinning Based Motion Modeling", "publication_ref": [ "b31", "b29", "b15", "b18" ], "table_ref": [], "text": "We model large body motions by linear blend skinning (LBS) transformation based on the SMPL [32] model, and utilize a non-rigid deformation field to account for finegrained deformations. Skinning Transformation. Given a SMPL body with shape parameter β and a pose parameter θ i in i-th frame, a point p on the body surface in canonical space with skinning weights w(p) can be warped to camera view space via skinning transformation W.\nNotably, the skinning weights w(p) are only defined for points on the SMPL surface. To warp arbitrary points in the canonical space to camera view, we use the diffused skinning strategy [30] to propagate the skinning weights of SMPL body vertices to the entire canonical space, and store the weights in a voxel grid of size 256×256×256. Then we can obtain the skinning weights by trilinear interpolation. Non-rigid Deformation. Skinning deformation enables the garment surface to deform in a way consistent with the body's large-scale motion [16]. However, the motion of details and garment parts that are far away from body cannot be fully represented by skinning transformation [19]. Hence, a non-rigid deformation MLP is used to model these fine-grained changes. Specifically, we design an MLP D with learnable parameters ϕ to model garment surface's non-rigid deformation:\np ′ = D(p, h, E(p); ϕ),(2)\nwhere p ′ is the deformed point of the input point p in the canonical space, h is the latent code of the current frame, and E(p) of p is the position encoding [35] to represent the high-frequency information of spatial points. Finally, combining D with skinning transformation field W, we could define a deformation field Φ(•) = W(D(•)) to warp any points in the canonical space to the camera view." }, { "figure_ref": [], "heading": "3D Feature Curves from 2D Projections", "publication_ref": [ "b32" ], "table_ref": [], "text": "The 3D feature curve will be optimized by minimizing the distance between its 2D projection on the image plane and the provided 2D visible curves. The key challenge here is how to compute the visibility of the 3D curves in the camera view. We first introduce a curve initialization strategy based on rigid transformation, and then propose a surfaceaware curve visibility estimation method to support accurate non-rigid curves optimization. Feature Curve Initialization. We start from the predefined feature curve sets L = {L i |i = 1, . . . , N l } provided in the garment template. 
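Before turning to how these template curves are initialized, the deformation machinery introduced above can be made concrete. The sketch below is a minimal PyTorch-style illustration of the intersection-free curve update of Eq. (1) and of the canonical-to-view warp Φ(·) = W(D(·)) built from diffused skinning weights and the non-rigid MLP of Eq. (2). All module, function, and tensor names (IntersectionFreeCurve, lbs_warp, the grid layout, hidden sizes) are assumptions made for illustration, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntersectionFreeCurve(nn.Module):
    """Feature curve of N_p points; Eq. (1): C'(i) = p_c + S^d_i * n^d_i + S^c_i * n_c."""

    def __init__(self, init_points):  # init_points: (N_p, 3) template curve in canonical space
        super().__init__()
        self.register_buffer("points", init_points.clone())
        n_pts = init_points.shape[0]
        self.step_d = nn.Parameter(torch.zeros(n_pts))  # S^d_i: steps along center-to-point dirs
        self.step_c = nn.Parameter(torch.zeros(n_pts))  # S^c_i: steps along the curve-plane normal

    def forward(self):
        center = self.points.mean(dim=0, keepdim=True)          # curve center p_c
        n_d = F.normalize(self.points - center, dim=-1)         # n^d_i: center -> point directions
        # n_c: averaged cross product of neighboring radial directions (curve-plane normal).
        cross = torch.cross(n_d, torch.roll(n_d, 1, dims=0), dim=-1)
        n_c = F.normalize(cross.mean(dim=0, keepdim=True), dim=-1)
        return center + self.step_d[:, None] * n_d + self.step_c[:, None] * n_c


class NonRigidDeformation(nn.Module):
    """MLP D of Eq. (2): small per-point offset conditioned on a per-frame latent code h."""

    def __init__(self, pos_enc_dim=63, latent_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim + pos_enc_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 3),
        )

    def forward(self, p, h, p_enc):  # p: (N, 3), h: (1, latent_dim), p_enc: (N, pos_enc_dim)
        return p + self.mlp(torch.cat([p, h.expand(p.shape[0], -1), p_enc], dim=-1))


def lbs_warp(p, skin_weight_grid, joint_transforms, grid_min, grid_max):
    """Skinning transform W: trilinearly sample diffused skinning weights from a voxel grid
    (e.g., 256^3) and blend the per-joint rigid transforms of the current SMPL pose.
    The axis ordering of the weight grid is an assumption of this sketch."""
    # skin_weight_grid: (1, J, D, H, W); joint_transforms: (J, 4, 4); p: (N, 3) canonical points.
    coords = 2.0 * (p - grid_min) / (grid_max - grid_min) - 1.0       # normalize to [-1, 1]
    coords = coords.view(1, 1, 1, -1, 3)
    w = F.grid_sample(skin_weight_grid, coords, align_corners=True)   # (1, J, 1, 1, N)
    w = w.view(skin_weight_grid.shape[1], -1).t()                     # (N, J) per-point weights
    blended = torch.einsum("nj,jab->nab", w, joint_transforms)        # per-point 4x4 transform
    p_h = torch.cat([p, torch.ones_like(p[:, :1])], dim=-1)           # homogeneous coordinates
    return torch.einsum("nab,nb->na", blended, p_h)[:, :3]            # points in camera view
```

With the template curves L in hand, their placement in canonical space is first solved by a rigid alignment, as described next.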
To reduce the difficulty of curve optimization, we perform a rigid curve initialization by directly minimizing the Chamfer Distance (CD) between the projected curves on the camera view space and the corresponding visible 2D curves ζ as\ns, t, R = arg min s,t,R CD Π W( Li ) , ζ i ,(3)\nLi = sR(L i ) + t,(4)\nwhere Π is the projection matrix, Li is the transformed feature curve. t ∈ R 3 , R ∈ SO(3), and s ∈ R are the optimized translation, rotation, and scaling parameters, respectively.\nIn our implementation, we execute 150 gradient descent iterations to solve the rigid transformation parameters. After rigid optimization, we set L as the initial position for the feature curve sets {C i |i = 1, . . . , N l } for later non-rigid optimization. Surface-aware Curve Visibility Estimation. As the 2D feature curve ζ only contains visible points, it is essential to identify the visible points of the 3D curve C in camera space. A naive solution is to consider a point C(i) as visible if the cosine similarity between the view direction v and n d i (i.e., the direction from curve center to the i-th point) in view-pose is less than 0. However, this approach will produce wrong judgments when a curve is occluded by other body parts.\nTo tackle this problem, a surface-aware curve visibility estimation method is proposed. Specifically, we generate an explicit mesh T s from implicit surface S(η) in canonical space via marching cube [33]. Next, we deform T s to camera view space via the deformation field Φ(T s ). Then, we can check if a feature curve point Φ(C(i)) is occluded by the explicit mesh in view space based on z-buffer:\nV C(i) = zbuffer test(Φ(C(i)), Φ(T s )).\n(5)\nHowever, we find that the 3D curve C might sometimes move outside or have a scale larger than the explicit mesh T s , there will be some errors if only depending on the zbuffer testing between C(i) and T s . We therefore make use of the SMPL surface to improve the visibility estimation, by checking if the nearest point of C(i) on the SMPL body is occluded in the camera view space in a similar way. Note that this is feasible as in our intersection-free curve deformation, the correspondences between C(i) and its nearest vertice in the SMPL body are almost unchanged during optimization. Then a curve point is considered as visible if it passes both visibility checks." }, { "figure_ref": [], "heading": "Progressive Curve and Surface Co-evolution.", "publication_ref": [ "b45", "b18" ], "table_ref": [], "text": "The surface of the garment is represented by the implicit SDF. As the feature curve visibility estimation depends on the garment surface, the curves and surface have to evolve consistently. To ensure the accuracy of curve visibility during the optimization process, we jointly optimize the curves and surface while imposing a regularization that the curves lie on the zero-isosurface of the SDF. The implicit surface is minimized by the photometric loss based on differentiable surface rendering. Curve-aware Surface Initialization. A good initialization for the implicit SDF S(η) can reduce the optimization difficulty and improve the performance, especially for the long skirt and dress. Thanks to our curve-aware garment representation, we can utilize the initialized feature curve L computed in Eq. ( 4) to enable a better shape initialization. Specifically, we apply a handle-based deformation [46] to deform a surface template such that its feature curves are aligned with L. 
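As a concrete illustration, the rigid curve initialization of Eqs. (3)-(4) can be written as a short gradient-descent loop over scale, rotation, and translation. The sketch below assumes a project_to_image callable standing in for Π(W(·)), i.e., skinning followed by camera projection; function names and hyper-parameters are illustrative rather than taken from the released code.

```python
import torch

def chamfer_2d(a, b):
    """Symmetric 2D Chamfer distance between point sets a: (N, 2) and b: (M, 2)."""
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def skew(k):
    """3x3 skew-symmetric matrix of a 3-vector, built so gradients flow through k."""
    zero = torch.zeros((), dtype=k.dtype, device=k.device)
    return torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])

def axis_angle_to_matrix(r):
    """Rodrigues' formula: axis-angle vector r (3,) -> rotation matrix (3, 3)."""
    theta = r.norm() + 1e-8
    K = skew(r / theta)
    eye = torch.eye(3, dtype=r.dtype, device=r.device)
    return eye + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)

def init_curve_rigid(template_curve, visible_2d, project_to_image, n_iters=150, lr=1e-2):
    """Eqs. (3)-(4): fit scale s, rotation R, and translation t so that the projected template
    curve matches the detected visible 2D curve under the Chamfer distance."""
    log_s = torch.zeros(1, requires_grad=True)   # optimize log-scale so that s stays positive
    r = torch.zeros(3, requires_grad=True)       # rotation as an axis-angle vector
    t = torch.zeros(3, requires_grad=True)       # translation
    opt = torch.optim.Adam([log_s, r, t], lr=lr)
    for _ in range(n_iters):                     # 150 gradient-descent iterations, as above
        opt.zero_grad()
        curve = log_s.exp() * (template_curve @ axis_angle_to_matrix(r).T) + t   # Eq. (4)
        loss = chamfer_2d(project_to_image(curve), visible_2d)                   # Eq. (3)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return log_s.exp() * (template_curve @ axis_angle_to_matrix(r).T) + t
```

The curve-aware surface initialization then continues from the template deformed toward these aligned curves.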
Then, we apply IGR [12] to initialize the implicit surface S(η) by fitting the deformed template. Differentiable Surface Rendering. To reconstruct highfidelity geometry, following the SelfRecon [19], we find the intersection points p on the surface and make them differentiable (more details can be found in supplementary).\nAfter obtaining the intersection points p, we compute its gradient n p = ∇f (p; η) and transform the camera view to canonical space as v p by the Jacobian matrix of the deformed point Φ(p) (more details can be found in supplementary). To better account for the changes in the illumination, we also take a per-frame latent code z as input to the color rendering network f c . Then, the surface color C p of point p can be computed as\nC p = f c (p, n p , v p , z, E(p); ψ).(6)" }, { "figure_ref": [], "heading": "Loss Function", "publication_ref": [], "table_ref": [], "text": "The overall loss function consists of two parts, one part is for the feature curves optimization and the other is for garment surfaces optimization." }, { "figure_ref": [], "heading": "Explicit Feature Curve Loss", "publication_ref": [ "b9" ], "table_ref": [], "text": "The optimization of feature curves relies on the 2D projection loss, a curve slope regularization loss, and an onsurface regularization loss that ensures the feature curves are on the garment surface. Feature Curve Projection Loss. Given SMPL pose parameter θ i and the camera projection matrix Π, we warp predicted feature curve C to camera view space via deformation field Φ, and compute project loss L proj measured by 2D visible curves ζ using Chamfer Distance (CD):\nL proj = CD(V C ⊗ Π(Φ(C)), ζ)(7)\nwhere V C is the visibility mask of curves, and symbol ⊗ indicates the mask selection operator.\nFeature Curve Slope Regularization. To maintain the curvature of 3D curve C, we design a slope loss L slop to regularize that the slope is consistent between adjacent points\nL slop = Np i=1 (1 -cos < s i+1 , s i >)(8)\nwhere s i = C(i + 1) -C(i), N p is the point number in the curve, and cos <> is the cosine similarity function.\nOn-surface Regularization. In addition, the feature curves are required to be on the corresponding garment surface. Hence, we introduce an as near as possible loss L anap as:\nL anap = Np i=1 |f (C(i); η)|(9)\nThe overall explicit feature curve loss can be written as: (10) where λ proj , λ slop and λ anap are loss weights.\nL curve = λ proj L proj + λ slop L slop + λ anap L anap" }, { "figure_ref": [ "fig_7" ], "heading": "Garment Surface Loss", "publication_ref": [ "b18", "b49", "b38", "b18", "b53" ], "table_ref": [], "text": "For a monocular video with N i frames, the learnable parameter in implicit surface reconstruction is denoted as Θ:\nΘ = {η, ϕ, ψ} ∪ {h i , z i |i = 1, . . . , N i }(11)\nSurface Rendering Loss. For a pixel within the garment mask, we compute the ray's intersection point p on the canonical surface S(η) and apply surface rendering network to predict the color C p (see Eq. ( 6)). Then the photometric loss can be computed as\nL RGB = 1 |R| p∈R |C p (Θ) -I p |, (12\n)\nwhere R is the sample point set, I p is the corresponding ground-truth pixel color from the input images. Mask-guided Implicit Consistency Loss. 
To better optimize implicit surface, following SelfRecon [19], we periodically extract explicit surface meshes T s in canonical space from SDF f and use a differentiable renderer [50] to iteratively optimize T s by a mask loss using the surface mask.\nThen the updated explicit surface Ts will be used to supervise the implicit SDF f as\nL mcons = 1 | Ts | p∈ Ts |f (p; η)|.(13)\nCurve-guided Implicit Consistency Loss. We find that the explicit mesh Ts updated by the mask loss might contain holes or even collapse in some surface areas, which will harm the learning of implicit surface (see in Fig. 9). To address this issue, we design an explicit curve and surface consistency loss. Specifically, for a specific feature curve C that belongs to two implicit surfaces (e.g., waist curve belongs to both the upper-clothing and bottom-clothing), we generate its closed surface T C and then sample N a points from T C to constrain the implicit SDF f as\nL ccons = 1 |T C | p∈T C |f (p; η)|.(14)\nCommon Implicit Loss. Eikonal loss L eik [12] is included to make the implicit function the signed distance function. To avoid distortion of non-rigid transformation, a rigid loss [39] L arap is computed to constrain the nonrigid deformation. We also compute normal loss L norm in canonical space to further refine the surface [19]. Moreover, we compute the skeleton smoothness loss [54] to reduce the high-frequency jitter of SMPL poses among frames (more details can be found in supplementary).\nThe overall implicit surface loss can be written as:\nL ims = L RGB + λ mcons L mcons + λ ccons L ccons λ arap L arap + λ eik L eik + λ norm L norm ,(15)\nwhere λ arap , λ mcons , λ ccons , λ eik , and λ norm are the loss weights." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b19", "b35", "b61" ], "table_ref": [], "text": "Since there is no existing method for open garment meshes reconstruction from monocular videos, we compare with three state-of-the-art single-image methods, namely BCNet [20], ClothWild [36], and ReEF [62]." }, { "figure_ref": [ "fig_2" ], "heading": "Evaluation on Synthetic Dataset", "publication_ref": [ "b18", "b19", "b35", "b61", "b0" ], "table_ref": [], "text": "Since there is no public real dataset for evaluating dynamic garment reconstruction, we adopt four video sequences from the synthetic data generated by SelfRecon [19] for quantitative evaluation. From left to right in each example: the ground-truth mesh, results of BCNet [20], ClothedWild [36], ReEF [62], and Ours. We first employ Blender [1] to extract the ground-truth garment mesh from the provided clothed human mesh of the first frame. To measure the accuracy of the reconstructed meshes, we compute the Chamfer distance (CD) between the ground-truth and estimated meshes. To evaluate the temporal consistency of the reconstructed meshes for the video sequence, we measure the consistency of corresponding vertices (CCV), which is the root mean square error of the corresponding vertices distances in adjacent frames.\nWe test our method and the baseline methods on these four video sequences. Table 1 shows that our method achieves the best results in the metrics of CD and CCV on all four videos, demonstrating the effectiveness of our method in reconstructing accurate and temporally consistent dynamic garment meshes. From the results of high errors in the CCV, we can clearly see that single-image methods fail to maintain the consistency of the reconstruction for the video input. 
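The two metrics used in this comparison can be sketched as follows. This is a literal NumPy reading of the definitions above (point-set Chamfer distance and root-mean-square displacement of corresponding vertices between adjacent frames); the exact surface sampling and normalization behind the reported numbers may differ.

```python
import numpy as np

def chamfer_distance(pred, gt):
    """Bidirectional point-to-point Chamfer distance between sampled point sets (N, 3) and (M, 3)."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)   # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def ccv(vertex_sequence):
    """Consistency of corresponding vertices (CCV): root mean square of the distances between
    corresponding vertices in adjacent frames. vertex_sequence is a list of (V, 3) arrays
    sharing one vertex ordering across all frames."""
    per_frame = [np.linalg.norm(cur - prev, axis=-1)                 # (V,) per-vertex distances
                 for prev, cur in zip(vertex_sequence[:-1], vertex_sequence[1:])]
    return float(np.sqrt(np.mean(np.square(np.concatenate(per_frame)))))
```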
Figure 4 compares the visual results, in which our method produces detailed and accurate garments that are mostly close to the ground-truth surfaces." }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "Evaluation on Real-world Videos", "publication_ref": [ "b2", "b19", "b35" ], "table_ref": [], "text": "We then qualitatively evaluate our method on the Peo-pleSnapshot [3] and a dataset captured by ourselves. These testing videos include a diverse variety of garments categories, including upper-cloth, dress, coats, pants, and skirts. Table 1. Quantitative results on four synthetic sequences. We compare the Chamfer distance (CD) between the ground-truth and reconstructed surfaces (in cm), as well as the consistency of corresponding vertices (CCV) between adjacent frames. Figure 5 shows the visual comparisons. The results of the baseline methods are predicted using a single image as input. Our method can faithfully reconstruct the layouts and surface details of the garments. In contrast, BCNet [20] and ClothWild [36] cannot accurately predict the garment layouts and produce over-smooth surfaces.\nWe also demonstrate our dynamic reconstruction results in Fig. 6. We can see that our method can produce spacetime coherent results for different garment types (including the challenging dresses) from monocular videos, which is difficult to achieve with single-image methods." }, { "figure_ref": [ "fig_5", "fig_7" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "We next conduct ablation study for different components of our method (more results in our supplementary material). Curve Visibility Estimation. As shown in Fig. 7, sim- ply using normal direction for visibility estimation leads to worse results, while using both the implicit SDF and SMPL surfaces for z-buffer testing produces the best result. Explicit Curve Losses. Curve-guided Consistency Loss. To improve the optimization of the implicit surface, we use curves to regularize the surface. Figure 9 shows that this regularization effectively improves the surface geometry, verifying that curves benefit the optimization of surfaces." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We have presented a new framework for dynamic garment reconstruction from monocular videos, by formulating this task as an optimization problem of dynamic 3D curves and surface recovery, followed by garment template registration. To solve this problem, we introduce a novel approach, called REC-MV, to jointly optimize the curves and surface from 2D supervision in a progressive co-evolution manner. Experimental results show that our method can reconstruct high-fidelity dynamic garments meshes with open boundaries, significantly outperforming existing methods. Limitations. Our method can only reconstruct common garment categories whose contours can be represented by feature curves. Additionally, our method requires the moving person to be observed from different angles." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement. The work was supported in part by NSFC with Grant No. 62293482, the Basic Research Project No.HZQB-KCZYZ-2021067 of Hetao Shenzhen-HK S&T Cooperation Zone. It was also partially supported by Shenzhen General Project with No.JCYJ20220530143604010, the National Key R&D Program of China with grant No.2018YFB1800800, by NSFC No. 
62202409, by Shenzhen Outstanding Talents Training Fund 202002, by Guangdong Research Projects No.2017ZT07X152 and No.2019CX01X104, by the Guangdong Provincial Key Laboratory of Future Networks of Intelligence (Grant No.2022B1212010001), and by Shenzhen Key Laboratory of Big Data and Artificial Intelligence (Grant No.ZDSYS201707251409055). It was also sponsored by CCF-Tencent Open Research Fund." } ]
Reconstructing dynamic 3D garment surfaces with open boundaries from monocular videos is an important problem as it provides a practical and low-cost solution for clothes digitization. Recent neural rendering methods achieve high-quality dynamic clothed human reconstruction results from monocular video, but these methods cannot separate the garment surface from the body. Moreover, despite existing garment reconstruction methods based on feature curve representation demonstrating impressive results for garment reconstruction from a single image, they struggle to generate temporally consistent surfaces for the video input. To address the above limitations, in this paper, we formulate this task as an optimization problem of 3D garment feature curves and surface reconstruction from monocular video. We introduce a novel approach, called REC-MV, to jointly optimize the explicit feature curves and the implicit signed distance field (SDF) of the garments. Then the open garment meshes can be extracted via garment template registration in the canonical space. Experiments on multiple casually captured datasets show that our approach outperforms existing methods and can produce high-quality dynamic garment surfaces.
REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos
[ { "figure_caption": "Figure 2 .2PhotometricLoss", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Illustration of the intersection-free curve deformation.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Qualitative comparison on the synthetic dataset.From left to right in each example: the ground-truth mesh, results of BCNet[20], ClothedWild[36], ReEF[62], and Ours.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Qualitative comparison on real datasets between BCNet[20], ClothWild[36], ReEF[62], and our method. Upper clothes are visualized in red color, while bottom clothes and dresses are visualized in blue color. Note that BCNet and ClothWild cannot model dresses.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Dynamic garment reconstruction results of our method. Each row shows the reconstruction of four frames in a monocular video.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Ablation study of curve visibility estimation method.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 (a) shows that without the curve slop loss L slop , the optimized curves will contain noise and artifacts. As shown in Fig. 8 (b), the proposed onsurface regularization (i.e., L anap ) can well constrain the curves to be on the surface and produce much more accurate fitting results, demonstrating the implicit surface helps the optimization of curves.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 8. Ablation study of the explicit curve losses.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" } ]
Lingteng Qiu; Guanying Chen; Jiapeng Zhou; Mutian Xu; Junle Wang; Xiaoguang Han
[ { "authors": " Blender", "journal": "", "ref_id": "b0", "title": "", "year": "" }, { "authors": "Thiemo Alldieck; Marcus Magnor; Bharat Lal Bhatnagar; Christian Theobalt; Gerard Pons-Moll", "journal": "", "ref_id": "b1", "title": "Learning to reconstruct people in clothing from a single RGB camera", "year": "2019" }, { "authors": "Thiemo Alldieck; Marcus Magnor; Weipeng Xu; Christian Theobalt; Gerard Pons-Moll", "journal": "", "ref_id": "b2", "title": "Video based reconstruction of 3d people models", "year": "2018" }, { "authors": "Dragomir Anguelov; Praveen Srinivasan; Daphne Koller; Sebastian Thrun; Jim Rodgers; James Davis", "journal": "TOG", "ref_id": "b3", "title": "SCAPE: shape completion and animation of people", "year": "2005" }, { "authors": "Bharat Lal Bhatnagar; Garvita Tiwari; Christian Theobalt; Gerard Pons-Moll", "journal": "", "ref_id": "b4", "title": "Multi-garment net: Learning to dress 3d people from images", "year": "2019" }, { "authors": "Yukang Cao; Guanying Chen; Kai Han; Wenqi Yang; Kwan-Yee K Wong", "journal": "", "ref_id": "b5", "title": "Jiff: Jointly-aligned implicit face function for high quality single view clothed human reconstruction", "year": "2022" }, { "authors": "Andrés Casado-Elvira; Marc Comino Trinidad; Dan Casas", "journal": "", "ref_id": "b6", "title": "Pergamo: Personalized 3d garments from monocular video", "year": "2022" }, { "authors": "Jianchuan Chen; Ying Zhang; Di Kang; Xuefei Zhe; Linchao Bao; Xu Jia; Huchuan Lu", "journal": "", "ref_id": "b7", "title": "Animatable neural radiance fields from monocular rgb videos", "year": "2021" }, { "authors": "Julian Chibane; Aymen Mir; Gerard Pons-Moll", "journal": "NeurIPS", "ref_id": "b8", "title": "Neural unsigned distance fields for implicit function learning", "year": "2020" }, { "authors": "Enric Corona; Albert Pumarola; Guillem Alenya; Gerard Pons-Moll; Francesc Moreno-Noguer", "journal": "", "ref_id": "b9", "title": "Smplicit: Topology-aware generative model for clothed people", "year": "2021" }, { "authors": "Yao Feng; Jinlong Yang; Marc Pollefeys; Michael J Black; Timo Bolkart", "journal": "", "ref_id": "b10", "title": "Capturing and animation of body and clothing from monocular video", "year": "2022" }, { "authors": "Amos Gropp; Lior Yariv; Niv Haim; Matan Atzmon; Yaron Lipman", "journal": "", "ref_id": "b11", "title": "Implicit geometric regularization for learning shapes", "year": "2020" }, { "authors": "Marc Habermann; Weipeng Xu; Michael Zollhofer; Gerard Pons-Moll; Christian Theobalt", "journal": "", "ref_id": "b12", "title": "Deepcap: Monocular human performance capture using weak supervision", "year": "2020" }, { "authors": "Oshri Halimi; Fabian Prada; Tuur Stuyck; Donglai Xiang; Timur Bagautdinov; He Wen; Ron Kimmel; Takaaki Shiratori; Chenglei Wu; Yaser Sheikh", "journal": "", "ref_id": "b13", "title": "Garment avatars: Realistic cloth driving using pattern registration", "year": "2022" }, { "authors": "Tong He; John Collomosse; Jin Hailin; Stefano Soatto", "journal": "NeurIPS", "ref_id": "b14", "title": "Geo-pifu: Geometry and pixel aligned implicit functions for single-view human reconstruction", "year": "2020" }, { "authors": "Tong He; Yuanlu Xu; Shunsuke Saito; Stefano Soatto; Tony Tung", "journal": "", "ref_id": "b15", "title": "Arch++: Animation-ready clothed human reconstruction revisited", "year": "2021" }, { "authors": "Yang Hong; Juyong Zhang; Boyi Jiang; Yudong Guo; Ligang Liu; Hujun Bao", "journal": "", "ref_id": "b16", "title": "Stereopifu: Depth aware clothed human 
digitization via stereo vision", "year": "2021" }, { "authors": "Zeng Huang; Yuanlu Xu; Christoph Lassner; Hao Li; Tony Tung", "journal": "", "ref_id": "b17", "title": "Arch: Animatable reconstruction of clothed humans", "year": "2020" }, { "authors": "Boyi Jiang; Yang Hong; Hujun Bao; Juyong Zhang", "journal": "CVPR", "ref_id": "b18", "title": "Selfrecon: Self reconstruction your digital avatar from monocular video", "year": "2022" }, { "authors": "Boyi Jiang; Juyong Zhang; Yang Hong; Jinhao Luo; Ligang Liu; Hujun Bao", "journal": "", "ref_id": "b19", "title": "Bcnet: Learning body and cloth shape from a single image", "year": "2020" }, { "authors": "Wei Jiang; Kwang Moo Yi; Golnoosh Samei; Oncel Tuzel; Anurag Ranjan", "journal": "", "ref_id": "b20", "title": "Neuman: Neural human radiance field from a single video", "year": "2022" }, { "authors": "N Jin; Y Zhu; Z Geng; R Fedkiw", "journal": "", "ref_id": "b21", "title": "A pixelbased framework for data-driven clothing", "year": "2020" }, { "authors": "Hanbyul Joo; Tomas Simon; Yaser Sheikh", "journal": "", "ref_id": "b22", "title": "Total capture: A 3d deformation model for tracking faces, hands, and bodies", "year": "2018" }, { "authors": "Angjoo Kanazawa; Michael J Black; David W Jacobs; Jitendra Malik", "journal": "", "ref_id": "b23", "title": "End-to-end recovery of human shape and pose", "year": "2018" }, { "authors": "Zorah Lahner; Daniel Cremers; Tony Tung", "journal": "", "ref_id": "b24", "title": "Deepwrinkles: Accurate and realistic clothing modeling", "year": "2018" }, { "authors": "Eldar Verica Lazova; Gerard Insafutdinov; Pons-Moll", "journal": "", "ref_id": "b25", "title": "360-degree textures of people in clothing from a single image", "year": "2019" }, { "authors": "Peike Li; Yunqiu Xu; Yunchao Wei; Yi Yang", "journal": "TPAMI", "ref_id": "b26", "title": "Selfcorrection for human parsing", "year": "2020" }, { "authors": "Yue Li; Marc Habermann; Bernhard Thomaszewski; Stelian Coros; Thabo Beeler; Christian Theobalt", "journal": "", "ref_id": "b27", "title": "Deep physicsaware inference of cloth deformation for monocular human performance capture", "year": "2021" }, { "authors": "Zhe Li; Zerong Zheng; Hongwen Zhang; Chaonan Ji; Yebin Liu", "journal": "", "ref_id": "b28", "title": "Avatarcap: Animatable avatar conditioned monocular human volumetric capture", "year": "2022" }, { "authors": "Siyou Lin; Hongwen Zhang; Zerong Zheng; Ruizhi Shao; Yebin Liu", "journal": "", "ref_id": "b29", "title": "Learning implicit templates for point-based clothed human modeling", "year": "2022" }, { "authors": "Lingjie Liu; Marc Habermann; Viktor Rudnev; Kripasindhu Sarkar; Jiatao Gu; Christian Theobalt", "journal": "TOG", "ref_id": "b30", "title": "Neural actor: Neural free-view synthesis of human actors with pose control", "year": "2021" }, { "authors": "Matthew Loper; Naureen Mahmood; Javier Romero; Gerard Pons-Moll; Michael J Black", "journal": "TOG", "ref_id": "b31", "title": "Smpl: A skinned multiperson linear model", "year": "2015" }, { "authors": "E William; Harvey E Lorensen; Cline", "journal": "SIG-GRAPH", "ref_id": "b32", "title": "Marching cubes: A high resolution 3d surface construction algorithm", "year": "1987" }, { "authors": "Lars Mescheder; Michael Oechsle; Michael Niemeyer; Sebastian Nowozin; Andreas Geiger", "journal": "", "ref_id": "b33", "title": "Occupancy networks: Learning 3d reconstruction in function space", "year": "2019" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; 
Ren Ramamoorthi; Ng", "journal": "", "ref_id": "b34", "title": "NeRF: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Gyeongsik Moon; Hyeongjin Nam; Takaaki Shiratori; Kyoung Mu; Lee ", "journal": "", "ref_id": "b35", "title": "3d clothed human reconstruction in the wild", "year": "2022" }, { "authors": "Michael Niemeyer; Lars Mescheder; Michael Oechsle; Andreas Geiger", "journal": "", "ref_id": "b36", "title": "Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision", "year": "2020" }, { "authors": "Jeong Joon Park; Peter Florence; Julian Straub; Richard Newcombe; Steven Lovegrove", "journal": "", "ref_id": "b37", "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "year": "2019" }, { "authors": "Keunhong Park; Utkarsh Sinha; Jonathan T Barron; Sofien Bouaziz; Dan B Goldman; Steven M Seitz; Ricardo Martin-Brualla", "journal": "", "ref_id": "b38", "title": "Nerfies: Deformable neural radiance fields", "year": "2021" }, { "authors": "Georgios Pavlakos; Vasileios Choutas; Nima Ghorbani; Timo Bolkart; Dimitrios Ahmed Aa Osman; Michael J Tzionas; Black", "journal": "", "ref_id": "b39", "title": "Expressive body capture: 3d hands, face, and body from a single image", "year": "2019" }, { "authors": "Sida Peng; Yuanqing Zhang; Yinghao Xu; Qianqian Wang; Qing Shuai; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b40", "title": "Neural body: Implicit neural representations with structured latent codes for novel view synthesis of dynamic humans", "year": "2021" }, { "authors": "Gerard Pons-Moll; Sergi Pujades; Sonny Hu; Michael Black", "journal": "TOG", "ref_id": "b41", "title": "ClothCap: Seamless 4D clothing capture and retargeting", "year": "2017" }, { "authors": "Ma Qianli; Yang Jinlong; Ranjan Anurag; Pujades Sergi; Pons-Moll Gerard; Tang Siyu; J Black Michael", "journal": "", "ref_id": "b42", "title": "Learning to dress 3d people in generative clothing", "year": "2020" }, { "authors": "Shunsuke Saito; Zeng Huang; Ryota Natsume; Shigeo Morishima; Angjoo Kanazawa; Hao Li", "journal": "", "ref_id": "b43", "title": "Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization", "year": "2019" }, { "authors": "Shunsuke Saito; Tomas Simon; Jason Saragih; Hanbyul Joo", "journal": "", "ref_id": "b44", "title": "Pifuhd: Multi-level pixel-aligned implicit function for high-resolution 3d human digitization", "year": "2020" }, { "authors": "H-P Seidel", "journal": "", "ref_id": "b45", "title": "Laplacian surface editing", "year": "2004" }, { "authors": "Shih-Yang Su; Frank Yu; Michael Zollhöfer; Helge Rhodin", "journal": "NeurIPS", "ref_id": "b46", "title": "A-nerf: Articulated neural radiance fields for learning human shape, appearance, and pose", "year": "2021" }, { "authors": "Garvita Tiwari; Bharat Lal Bhatnagar; Tony Tung; Gerard Pons-Moll", "journal": "", "ref_id": "b47", "title": "Sizer: A dataset and model for parsing 3d clothing and learning size sensitive 3d clothing", "year": "2020" }, { "authors": "Chung-Yi Weng; Brian Curless; P Pratul; Jonathan T Srinivasan; Ira Barron; Kemelmacher-Shlizerman", "journal": "", "ref_id": "b48", "title": "Humannerf: Free-viewpoint rendering of moving people from monocular video", "year": "2022" }, { "authors": "Olivia Wiles; Georgia Gkioxari; Richard Szeliski; Justin Johnson", "journal": "", "ref_id": "b49", "title": "Synsin: End-to-end view synthesis from a single image", "year": "2020" }, { "authors": 
"Donglai Xiang; Timur Bagautdinov; Tuur Stuyck; Fabian Prada; Javier Romero; Weipeng Xu; Shunsuke Saito; Jingfan Guo; Breannan Smith; Takaaki Shiratori", "journal": "", "ref_id": "b50", "title": "Dressing avatars: Deep photorealistic appearance for physically simulated clothing", "year": "2022" }, { "authors": "Donglai Xiang; Fabian Prada; Chenglei Wu; Jessica K Hodgins", "journal": "", "ref_id": "b51", "title": "Monoclothcap: Towards temporally coherent clothing capture from monocular RGB video", "year": "2020" }, { "authors": "Hongyi Xu; Thiemo Alldieck; Cristian Sminchisescu", "journal": "NeurIPS", "ref_id": "b52", "title": "H-nerf: Neural radiance fields for rendering and temporal reconstruction of humans in motion", "year": "2021" }, { "authors": "Weipeng Xu; Avishek Chatterjee; Michael Zollhöfer; Helge Rhodin; Dushyant Mehta; Hans-Peter Seidel; Christian Theobalt", "journal": "TOG", "ref_id": "b53", "title": "Monoperfcap: Human performance capture from monocular video", "year": "2018" }, { "authors": "Gengshan Yang; Minh Vo; Natalia Neverova; Deva Ramanan; Andrea Vedaldi; Hanbyul Joo", "journal": "", "ref_id": "b54", "title": "Banmo: Building animatable 3d neural models from many casual videos", "year": "2022" }, { "authors": "Ze Yang; Shenlong Wang; Sivabalan Manivasagam; Zeng Huang; Wei-Chiu Ma; Xinchen Yan; Ersin Yumer; Raquel Urtasun", "journal": "", "ref_id": "b55", "title": "S3: Neural shape, skeleton, and skinning fields for 3d human modeling", "year": "2021" }, { "authors": "Lior Yariv; Yoni Kasten; Dror Moran; Meirav Galun; Matan Atzmon; Ronen Basri; Yaron Lipman", "journal": "NeurIPS", "ref_id": "b56", "title": "Multiview neural surface reconstruction by disentangling geometry and appearance", "year": "2020" }, { "authors": "Zheng Zerong; Yu Tao; Dai Liu Yebin; Qionghai", "journal": "TPAMI", "ref_id": "b57", "title": "Pamir: Parametric model-conditioned implicit representation for image-based human reconstruction", "year": "2021" }, { "authors": "Fang Zhao; Wenhao Wang; Shengcai Liao; Ling Shao", "journal": "", "ref_id": "b58", "title": "Learning anchored unsigned distance functions with gradient direction alignment for single-view garment reconstruction", "year": "2021" }, { "authors": "Fuqiang Zhao; Wei Yang; Jiakai Zhang; Pei Lin; Yingliang Zhang; Jingyi Yu; Lan Xu", "journal": "", "ref_id": "b59", "title": "Humannerf: Efficiently generated human radiance field from sparse inputs", "year": "2021" }, { "authors": "Heming Zhu; Yu Cao; Hang Jin; Weikai Chen; Dong Du; Zhangye Wang; Shuguang Cui; Xiaoguang Han", "journal": "", "ref_id": "b60", "title": "Deep fashion3d: A dataset and benchmark for 3d garment reconstruction from single images", "year": "2020" }, { "authors": "Heming Zhu; Lingteng Qiu; Yuda Qiu; Xiaoguang Han", "journal": "", "ref_id": "b61", "title": "Registering explicit to implicit: Towards high-fidelity garment mesh reconstruction from single images", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 105.3, 360.9, 181.06, 12.71 ], "formula_id": "formula_0", "formula_text": "C ′ (i) = p c + S d i n d i + S c i n c ,(1)" }, { "formula_coordinates": [ 4, 132.86, 398.5, 123.95, 14.95 ], "formula_id": "formula_1", "formula_text": "n c = 1 Np-1 Np i=1 (n d i × n d i-1" }, { "formula_coordinates": [ 4, 363.49, 284.59, 126.99, 11.37 ], "formula_id": "formula_2", "formula_text": "S(η) = {p ∈ R 3 |f (p; η) = 0}." }, { "formula_coordinates": [ 4, 374.55, 671.93, 170.56, 11.13 ], "formula_id": "formula_3", "formula_text": "p ′ = D(p, h, E(p); ϕ),(2)" }, { "formula_coordinates": [ 5, 86.01, 348.13, 200.35, 18.98 ], "formula_id": "formula_4", "formula_text": "s, t, R = arg min s,t,R CD Π W( Li ) , ζ i ,(3)" }, { "formula_coordinates": [ 5, 102.96, 371.22, 183.41, 12.09 ], "formula_id": "formula_5", "formula_text": "Li = sR(L i ) + t,(4)" }, { "formula_coordinates": [ 5, 89.98, 704.1, 156.52, 9.96 ], "formula_id": "formula_6", "formula_text": "V C(i) = zbuffer test(Φ(C(i)), Φ(T s ))." }, { "formula_coordinates": [ 5, 356.9, 636.77, 188.21, 9.72 ], "formula_id": "formula_7", "formula_text": "C p = f c (p, n p , v p , z, E(p); ψ).(6)" }, { "formula_coordinates": [ 6, 104.47, 214.62, 181.89, 9.65 ], "formula_id": "formula_8", "formula_text": "L proj = CD(V C ⊗ Π(Φ(C)), ζ)(7)" }, { "formula_coordinates": [ 6, 97.18, 308.53, 189.18, 31.4 ], "formula_id": "formula_9", "formula_text": "L slop = Np i=1 (1 -cos < s i+1 , s i >)(8)" }, { "formula_coordinates": [ 6, 117.99, 423.88, 168.37, 31.4 ], "formula_id": "formula_10", "formula_text": "L anap = Np i=1 |f (C(i); η)|(9)" }, { "formula_coordinates": [ 6, 59.13, 490.15, 201.12, 9.65 ], "formula_id": "formula_11", "formula_text": "L curve = λ proj L proj + λ slop L slop + λ anap L anap" }, { "formula_coordinates": [ 6, 87.88, 595.53, 198.48, 9.68 ], "formula_id": "formula_12", "formula_text": "Θ = {η, ϕ, ψ} ∪ {h i , z i |i = 1, . . . , N i }(11)" }, { "formula_coordinates": [ 6, 100.33, 688.18, 181.88, 26.8 ], "formula_id": "formula_13", "formula_text": "L RGB = 1 |R| p∈R |C p (Θ) -I p |, (12" }, { "formula_coordinates": [ 6, 282.21, 695.24, 4.15, 8.64 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 6, 364.2, 188.13, 180.91, 28.57 ], "formula_id": "formula_15", "formula_text": "L mcons = 1 | Ts | p∈ Ts |f (p; η)|.(13)" }, { "formula_coordinates": [ 6, 364.87, 349.44, 180.24, 27.44 ], "formula_id": "formula_16", "formula_text": "L ccons = 1 |T C | p∈T C |f (p; η)|.(14)" }, { "formula_coordinates": [ 6, 319.71, 510.96, 225.4, 24.59 ], "formula_id": "formula_17", "formula_text": "L ims = L RGB + λ mcons L mcons + λ ccons L ccons λ arap L arap + λ eik L eik + λ norm L norm ,(15)" } ]
10.18653/v1/2021.emnlp-main.532
2023-11-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b63", "b39", "b36", "b37", "b16", "b27", "b63", "b28", "b43", "b49", "b41", "b10", "b23" ], "table_ref": [], "text": "Recent studies (Liu et al., 2023b;Zhang et al., 2023) have discovered that large language models (LLMs), like GPT-3.5 (Ouyang et al., 2022), can generate summaries that are more preferred by human annotators when compared to reference summaries from widely used datasets, such as CNN/DailyMail (Nallapati et al., 2016) and XSum (Narayan et al., 2018), in a reference-free human evaluation setting. This quality issue of existing reference summaries effectively puts an upper bound on the performance of summarization models trained on them, which likely contributes to the performance gap between supervised summarization models and LLMs as observed by related work (Goyal et al., 2022;Liang et al., 2022;Liu et al., 2023b;Zhang et al., 2023).\nTherefore, we investigate a new learning setting for text summarization models, where LLMs are considered the reference or the gold-standard oracle for the summarization task. Such an LLMas-reference setting introduces interesting changes to the learning setting of text generation models in general with respect to both model training and evaluation, therefore we examine the standard practices that are aligned with this shift ( §2).\nSpecifically, the traditional learning setting of summarization models usually revolves around a single reference summary -in training, the standard training algorithm, Maximum Likelihood Estimation (MLE), requires the model to predict the reference summary tokens; in evaluation, automatic evaluation metrics like ROUGE (Lin, 2004) estimate the quality of system outputs by comparing them with the reference summary. In contrast, LLMs provide a target probability distribution or quality measurement over all possible candidate summaries. As a result, LLMs can assign quality scores to arbitrary candidates, which enables training techniques beyond MLE such as contrastive learning (Liu et al., 2022b) and reinforcement learning (Paulus et al., 2018;Stiennon et al., 2020;Pang and He, 2021), and provides an oracle to assess the model output quality for model evaluation.\nAdapting to this change, we investigate two ways of using LLMs for summary quality evaluation: (1) GPTScore (Fu et al., 2023), which treats the LLMpredicted probability of a candidate summary as its quality score; (2) GPTRank, a new method we propose that requires an LLM to provide a quality ranking of different summaries, inspired by recent work (Liu et al., 2023a) on LLM-based evaluation. With these two evaluation methods, we adopt contrastive learning methods for model training to effectively leverage the LLM-provided supervision signals. Using the proposed methods, we are able to train smaller summarization models, such as BART (Lewis et al., 2020), to match the LLM performance under LLM-based evaluation ( §3).\nHaving studied the new LLM-as-reference setting, we then perform a meta-analysis on this setting itself ( §4). Specifically, we conduct human evaluation on the LLMs and LLM-guided smaller models using both crowd and expert annotators and use the evaluation results to assess the reliability of LLM-based evaluation methods. Our analysis reveals both the benefits and risks of the LLMas-reference setting. On the one hand, smaller models can indeed benefit from LLM's guidance and the contrastive learning method. 
On the other hand, LLM-based evaluation fails to align with human evaluation since human evaluation still prefers LLMs over smaller models, while they achieve similar performance under LLM-based evaluation.\nOur main contributions are two-fold: (1) We empirically demonstrate that the performance of smaller models like BART can be improved when trained using better references (LLMs) and learning methods (contrastive learning) with a small budget. 1 (2) Our meta-analysis highlights the limitations of LLM-based training and evaluation methods. It reveals that smaller summarization models can not yet match the LLM's performance under human evaluation, which calls for further examination and improvement of this new learning setting. 22 Summarize as Large Language Models" }, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [], "table_ref": [], "text": "A neural abstractive summarization model g aims to generate a text sequence S that summarizes the information of a source document D: S ← g(D). When g is an auto-regressive generation model, it factorizes the probability of a candidate summary S given the source document D as\npg(S|D) = l S i=1 pg(si|S<i, D),(1)\nwhere s i is the i-th token in S and S 0 is a special begin-of-sequence (BOS) token, S <i is the prefixstring of S before S i , l S is the length of S (without the BOS token), and p g is a probability distribution parameterized by the summarization model g.\nThe standard training algorithm for g is Maximum Likelihood Estimation (MLE) with a single reference (gold standard) summary S * . With Eq. 1, the MLE optimization on this example is equivalent to minimizing the following cross-entropy loss:\nLxent(θ) = -log pg(S * |D; θ),(2)\nwhere θ are the learnable parameters of g." }, { "figure_ref": [], "heading": "Large Language Models as References", "publication_ref": [ "b20" ], "table_ref": [], "text": "Similar to Eq. 1, an auto-regressive LLM h defines a target distribution for text summarization:\np h (S|D) = l S i=1 p h (si|S<i, D),(3)\nwhich is different from the point-mass distribution defined by a single reference summary. Consequently, the cross-entropy loss becomes\nL (h) xent (θ) = - S∈S p h (S|D) log pg(S|D; θ), (4\n)\nwhere S is the set of possible outputs (candidate summaries). This setting is coined as sequencelevel knowledge distillation by Kim and Rush (2016). In practice, computing Eq. 4 is intractable because S is infinite. Therefore, we investigate three types of methods to approximate the optimization process of Eq. 4.\nMLE with Quasi-Reference Summary Our baseline method treats the greedy decoding results of the LLM h as the quasi-reference summaries and optimizes the summarization model g using MLE. Specifically, the loss function becomes\nL(h) xent (θ) = -log pg( Ŝ|D; θ),(5)\nwhere Ŝ is the greedy decoding result of h:\nŝi = arg max s p h (s| Ŝ<i, D),(6)\nwhere s denotes a token in the vocabulary." }, { "figure_ref": [], "heading": "Learning from LLM-based Evaluation", "publication_ref": [ "b10" ], "table_ref": [], "text": "Apart from the quasi-reference summaries, the reference LLMs can provide richer supervision signals for model training since they can be used to evaluate the quality of any candidate summary. Consequently, we adopt a contrastive learning method, BRIO (Liu et al., 2022b), that can leverage the LLM guidance in model training, and explore two LLM-based evaluation methods, the recently introduced GPTScore (Fu et al., 2023), and a new method we will introduce later, GPTRank." 
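As a concrete illustration of the quasi-reference baseline of Eqs. (5)-(6), the sketch below greedily decodes a summary from a reference LLM and fine-tunes a smaller student model with token-level cross-entropy on it. Hugging Face-style APIs are assumed, and the model names (a BART student, GPT-2 standing in for the reference LLM), the prompt wording, and the truncation lengths are placeholders rather than the exact training setup used here.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, AutoModelForCausalLM

student_tok = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
student = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

# Any autoregressive reference LLM h; a small local model stands in for an API model here.
ref_tok = AutoTokenizer.from_pretrained("gpt2")
ref_lm = AutoModelForCausalLM.from_pretrained("gpt2")

def quasi_reference(document, max_new_tokens=128):
    """Eq. (6): greedy decoding from the reference LLM given a summarization prompt."""
    prompt = f"{document}\n\nSummarize the above article in three sentences. Summary:"
    ids = ref_tok(prompt, return_tensors="pt", truncation=True, max_length=1024).input_ids
    out = ref_lm.generate(ids, do_sample=False, num_beams=1, max_new_tokens=max_new_tokens)
    return ref_tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True)

def mle_step(document, optimizer):
    """Eq. (5): cross-entropy of the student on the LLM-generated quasi-reference summary."""
    summary = quasi_reference(document)
    enc = student_tok(document, return_tensors="pt", truncation=True, max_length=1024)
    labels = student_tok(text_target=summary, return_tensors="pt", truncation=True).input_ids
    loss = student(**enc, labels=labels).loss            # token-level cross-entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```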
}, { "figure_ref": [], "heading": "Article:", "publication_ref": [], "table_ref": [], "text": "The biggest pension reforms in a century have been met with confusion as customers as young as 23 try to cash in their retirement savings. Pension firms said Britons remained baffled about how the radical changes worked, with many unaware of age restrictions or tax implications…" }, { "figure_ref": [], "heading": "Input Article", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Prompt", "publication_ref": [], "table_ref": [], "text": "Summarize the above article in three sentences. Summary:" }, { "figure_ref": [], "heading": "Candidate Summary", "publication_ref": [], "table_ref": [], "text": "The new pension reforms in the UK have caused confusion among customers, with many unaware of age restrictions or tax implications…" }, { "figure_ref": [], "heading": "GPTScore", "publication_ref": [], "table_ref": [], "text": "Figure 1: Illustration of GPTScore. The LLM-predicted probability of a candidate summary given the input context is interpreted as the quality score of the candidate.\nContrastive Learning We adopt a contrastive loss (Liu et al., 2022b) to better leverage the LLM supervision signal, which sets the following objective: given two candidate summaries S 1 , S 2 , if S 1 receives a higher quality score from the LLMbased evaluation method, the summarization model g should also assign S 1 a higher probability (Eq. 1).\nIn more detail, this loss is defined with a set of candidate summaries S c , which is sorted by the LLM-assigned quality scores, and the summarization model g is tasked with assigning a probability that is at least twice as large to a better candidate:\npg(Si|D) pg(Sj|D) > 2(j -i), ∀i, j, i < j,(7)\nwhich corresponds to the following margin loss:\nLctr(θ) = S i ,S j ∈Sc,i<j max(0, log pg(Sj|D; θ) -log pg(Si|D; θ) + log 2(j -i)).(8)\nIn practice, we observe that the magnitude of the log-probability in Eq. 8 is highly dependent on the length of the candidate summaries. Therefore, we introduce a modification to Eq. 8 based on the length-normalized log-probability pg :\npg(S|D) = l S i=1 log pg(si|S<i, D) lS ,(9)\nand Eq. 8 is changed to\nLctr(θ) = S i ,S j ∈Sc,i<j max(0, pg(Sj|D; θ) -pg(Si|D; θ) + 1 λ log 2(j -i)),(10)\nwhere λ is a scaling factor approximating the average summary length. Following Liu et al. (2022b)," }, { "figure_ref": [], "heading": "GPTRank", "publication_ref": [], "table_ref": [], "text": "You will be given a news article along with a list of summaries numbered as follows: 1. Summary 1, 2. Summary 2, and so on. Please evaluate and rank the summaries in descending order of their quality. First you will give an explanation of your ranking, then you will provide the ranking itself. Please refer to the example below for the format of your response.\nExample Response: Explanation: \"Your explanation of the ranking\" Ranking: \"The ranking, e.g., 4,2,7,3,5,6,8 we combine the cross-entropy loss (Eq. 5) with the contrastive loss as a multi-task loss:\nL mul (θ) = L(h) xent (θ) + α Lctr(θ),(11)\nwhere α is the weight of the contrastive loss." }, { "figure_ref": [], "heading": "GPTScore for Summary Quality Evaluation", "publication_ref": [ "b10" ], "table_ref": [], "text": "The contrastive learning objective (Eq. 10) requires access to ground-truth candidate summary quality scores from the reference LLM. Therefore, we first adopt GPTScore (Fu et al., 2023) for the summary quality evaluation. 
Specifically, GPTScore interprets the length-normalized conditional logprobability of a candidate summary predicted by the reference LLM h as its quality score, i.e.,\nph (S|D) = l S i=1 log p h (si|S<i, D) lS .(12)\nConsequently, the set of candidate summaries S c used in Eq. 10 is sorted based on the (normalized) target distribution (Eq. 3), such that for any S i , S j ∈ S c , i < j, ph (S i |D) > ph (S j |D). We provide an illustration of GPTScore in Fig. 1." }, { "figure_ref": [], "heading": "GPTRank for Summary Quality Evaluation", "publication_ref": [], "table_ref": [], "text": "Instead of leveraging the LLM predicted probability, recent work, e.g., G-Eval (Liu et al., 2023a), formulates the automatic evaluation as a text completion or infilling task for the LLMs. For example, given the source article and a summary, the LLM can be asked to provide a numerical quality score for the summary. However, as Liu et al. (2023a) has found that LLM predicted scores are not diverse enough and different candidate summaries are likely to receive the same score, we propose a ranking task to the LLM. The proposed evaluation method, GPTRank, requires the LLM to provide a ranking to a list of different candidate summaries for the same source article. Moreover, since recent work (Liu et al., 2022a(Liu et al., , 2023a) ) has found that language generation models can benefit from a selfexplaining stage for an evaluation task, we prompt the LLM to first generate an explanation before providing the actual ranking. The ranking is then used in contrastive training (Eq. 10). We provide an example of using GPTRank in Fig. 2." }, { "figure_ref": [], "heading": "Learning with LLMs as References", "publication_ref": [], "table_ref": [], "text": "We conduct experiments with several LLMs as the reference of smaller models and compare the performance of different training methods. Training Details The model training is started with a BART 4 checkpoint fine-tuned on the original CNNDM dataset. We choose BART because it is widely used and is relatively small-sized. The fine-tuning process includes three steps:" }, { "figure_ref": [], "heading": "Experimental", "publication_ref": [], "table_ref": [], "text": "(1) Warm-starting. We use ChatGPT 5 to generate 10K summaries for fine-tuning and 1K summaries for validation, and fine-tune the original BART checkpoint with MLE training (Eq. 5).\n(2) MLE Training. Using the fine-tuned checkpoint from Step (1), we continue fine-tuning the model using MLE training on the quasi-reference summaries generated by different LLMs.\n(3) Contrastive Training. Continuing from Step (2), we keep fine-tuning the model using the multi-task, contrastive learning objective (Eq. 11). The candidate summaries for Eq. 10 are generated from the checkpoint trained in Step (2), and diverse beam 3 Further information regarding the prompts and the process of generating LLM summaries can be found in Appendix A.1.\n4 https://huggingface.co./facebook/ bart-large-cnn. It contains around 350M parameters. 5 We used the checkpoint gpt-3.5-turbo-0301 at https: //platform.openai.com/docs/models/gpt-3-5." }, { "figure_ref": [], "heading": "Model", "publication_ref": [], "table_ref": [], "text": "LP GS R1 R2 Len. LP is the log-probability predicted by GPT3D3 (text-davinci-003). GS is the GPTScore based on GPT3D3, i.e., the length-normalized log-probability. R1 and R2 are the ROUGE1/2 F1 scores respectively. Len. is the average summary length. 
BART.ChatGPT is fine-tuned with MLE training and ChatGPT as the reference, BART.GPT3D3 is fine-tuned with MLE training and GPT3D3, while BRIO.GPT3D3 is fine-tuned with contrastive learning (BRIO)." }, { "figure_ref": [], "heading": "GPT3D3", "publication_ref": [ "b53", "b10" ], "table_ref": [], "text": "search (Vijayakumar et al., 2018) is used to generate 8 candidates for each data point. 6 We note that, for a fairer comparison, in the following sections we compare the performance of checkpoints from Step (2) and Step (3) that are trained with similar amounts of data in terms of the budget. Regarding checkpoint selection, for MLE training we use the cross-entropy loss on the validation set as the criterion, while for contrastive training we use the contrastive loss (Eq. 10). Automatic Evaluation For reference-based evaluation, we report the ROUGE-1/2 F1 scores between the system outputs and the (quasi-)reference summaries generated by the reference LLM. For reference-free evaluation, we use either GPTScore (Fu et al., 2023) or GPTRank (Fig. 2). In particular, for GPTScore we report both the un-normalized and normalized sums of log-probability." }, { "figure_ref": [], "heading": "Learning with GPTScore", "publication_ref": [], "table_ref": [], "text": "We first investigate learning with GPTScore. The reference LLM we choose is OpenAI's text-davinci-003 (GPT3D3), since its API provides access to the predicted log-probability. With GPT3D3, around 2K summaries are generated for MLE training and 200 data points are generated for contrastive learning. We report the model performance on the test set in Tab. 1. The following models' performance is compared: (1) GPT3D3, (2) the BART checkpoint fine-tuned on the original CNNDM dataset, (3) GPT3D2 (OpenAI's text-davinci-002), (4) a 7B Alpaca checkpoint,7 (5) ChatGPT. We make the following observations:
(1) Compared with the original BART checkpoint, MLE training on quasi-reference summaries from LLMs can effectively improve the model performance as measured by either GPTScore or ROUGE. This shows that training with better reference summaries can reduce the performance gap between smaller summarization models and LLMs.
(2) The model that results from contrastive learning (BRIO.GPT3D3) achieves a significantly better GPTScore than the model fine-tuned with MLE training (BART.GPT3D3), demonstrating the effectiveness of contrastive learning for approximating the target distribution of the reference LLM.
(3) BRIO.GPT3D3 can already achieve a GPTScore similar to that of the reference LLM (GPT3D3) itself while being trained on only 100 examples with contrastive learning, showing a promising path to further close the performance gap." }, { "figure_ref": [], "heading": "Learning with GPTRank", "publication_ref": [], "table_ref": [], "text": "We now conduct experiments using GPTRank for model training and evaluation. The reference LLMs we choose are ChatGPT8 and GPT4 (OpenAI, 2023), since they have shown state-of-the-art performance on summarization evaluation (Liu et al., 2023a). 9 For contrastive learning, 500 or 1000 data points are used for model training with ChatGPT and GPT4 as the reference LLM respectively, and 100 data points are used for validation.
To enable a more accurate evaluation, we choose ChatGPT as the baseline model and use the LLMs to conduct a pair-wise comparison between different systems and ChatGPT. In addition, we allow the LLM to predict a tie between two summaries. 10 The results with ChatGPT as the reference LLM are reported in Tab. 2. 
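For concreteness, a single pair-wise GPTRank query of the kind used in this evaluation could be issued as sketched below. The prompt paraphrases the pair-wise template in Appendix A.3; the model name, the pre-v1 `openai` client calls, and the answer parsing are assumptions for illustration rather than the exact evaluation script.

```python
# Hedged sketch of one pair-wise GPTRank comparison (ties allowed), temperature 0.
import openai

PAIRWISE_PROMPT = (
    "You will be given a news article along with two summaries. Please compare the "
    "quality of these two summaries and pick the one that is better (there can be a tie). "
    "First you will give an explanation of your decision then you will provide your "
    "decision in the format of 1 or 2 or tie.\n\n"
    'Response format:\nExplanation: "Your explanation here".\nDecision: 1 or 2 or tie.\n\n'
    "Here's the article: {article}\n\nSummary 1: {summary_1}\n\nSummary 2: {summary_2}"
)

def gptrank_pairwise(article, summary_1, summary_2, model="gpt-4"):
    # model name is an assumed placeholder; the paper uses ChatGPT or GPT4 as the backbone
    response = openai.ChatCompletion.create(
        model=model,
        temperature=0,  # deterministic behavior for evaluation
        messages=[{
            "role": "user",
            "content": PAIRWISE_PROMPT.format(
                article=article, summary_1=summary_1, summary_2=summary_2),
        }],
    )
    text = response["choices"][0]["message"]["content"]
    # naive parsing of the final decision line: "1", "2", or "tie"
    return text.split("Decision:")[-1].strip().lower()
```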
The findings are similar to what we observed in §3.2:\n(1) Training with better references can help improve the summarization model performance.\n(2) Contrastive learning is more effective than the standard MLE training since the model trained with contrastive learning (BRIO.ChatGPT) can outperform its counterpart (BART.ChatGPT).\n(3) BRIO.ChatGPT wins more than half of the comparisons against the baseline model, Chat-GPT, under the evaluation of ChatGPT itself, showing that contrastive learning can efficiently optimize the summarization model with respect to a specific evaluation metric (i.e., GPTRank).\nApart from using ChatGPT as the reference LLM, we also conduct experiments with GPT4 as the backbone model of GPTRank. We report the results in Tab. 3, and note the following:\n(1) The evaluation results of GPTRank differ when different LLMs are used. For example, while BRIO.ChatGPT outperforms ChatGPT under the ChatGPT's evaluation in Tab. 2, GPTRank with GPT4 still prefers ChatGPT.\n(2) The model checkpoint (BRIO.GPT4) trained using contrastive learning and GPT4 as the reference LLM is able to outperform ChatGPT under GPT4's evaluation, which also suggests that BRIO.GPT4 can outperform BRIO.ChatGPT. It shows the importance of choosing the appropriate evaluation method used for contrastive training.\n(3) BRIO.ChatGPT can outperform BART.GPT4 despite the fact that BRIO.ChatGPT is trained with a reference LLM that is supposedly weaker, which indicates the advantage of contrastive learning and the importance of using a better training method." }, { "figure_ref": [], "heading": "Comparative Study", "publication_ref": [], "table_ref": [], "text": "We investigate the generalization ability of our training method regarding the choice of the backbone model and the data format. " }, { "figure_ref": [], "heading": "Human Evaluation and Meta-Analysis", "publication_ref": [], "table_ref": [], "text": "In §3 we have demonstrated that smaller summarization models that are trained with contrastive learning can achieve on-par or even better performance than LLMs under LLM-based evaluation.\nHowever, the alignment between LLM and human evaluation still requires examination. Therefore, we first conduct a human evaluation comparing the model performance in §3, then perform a metaanalysis regarding the LLM-human alignment." }, { "figure_ref": [], "heading": "Human Evaluation Collection", "publication_ref": [], "table_ref": [], "text": "Evaluation Design To enable a more direct and robust comparison and reduce task difficulty, we formulate the human evaluation as a pair-wise comparison task between two different systems.\nThe summary pairs are compared on three aspects:\n(1) salience, (2) coherence, and (3) overall preference/quality, where the annotators are required to choose which summary is better (ties are allowed).\nThe detailed aspect definitions are in Appendix B.1." }, { "figure_ref": [], "heading": "Crowd-Annotation Collection", "publication_ref": [ "b42", "b16", "b16" ], "table_ref": [], "text": "We use Amazon Mechanical Turk 12 (MTurk) for the crowdannotation collection. Each data example is annotated by three annotators who are given two minutes for one task and compensated accordingly. The participated crowd-annotators need to pass related qualification tests and have previous experience in evaluating summary quality. 
We choose three system pairs for the collection on 100 test examples, where ChatGPT is the baseline LLM, and three BART checkpoints from §3.3 are compared against ChatGPT: BART, BART.GPT4, and BRIO.GPT4. To check the inter-annotator agreement, we calculate the Krippendorff's alpha (Krippendorff, 2011) with MASI distance (Passonneau, 2006) following Goyal et al. (2022). We found the average agreement to be 0.064, close to the agreement (0.05) reported by Goyal et al. (2022) for similar evaluation settings." }, { "figure_ref": [], "heading": "Expert Evaluation", "publication_ref": [ "b16", "b63", "b16", "b63", "b44" ], "table_ref": [], "text": "The low agreement of crowdannotation raises concerns about annotation qual-12 https://www.mturk.com/ ity. Related work (Goyal et al., 2022;Zhang et al., 2023) has observed similar phenomena and suggested that the low agreement can result from the inherent subjectivity of human evaluation and the small performance gap of different systems. However, the low agreement makes it very difficult to verify the annotation quality. 13 Therefore, three of the co-authors 14 conducted a careful expert evaluation to better understand this phenomenon and provide more trustworthy evaluation results.\nWe select 50 test examples to perform a pairwise comparison on three crowd-evaluated system groups and an additional group between BART.GPT4 and BRIO.GPT4. We found the average agreement to be 0.044 among the expert annotators after a careful annotation, which re-confirms the hypotheses made in the related work (Goyal et al., 2022;Zhang et al., 2023) regarding the inherent subjectivity of summarization evaluation. Besides, the experts agree with each other 58% of the time, similar to the agreement level (65%) in recent work (Rafailov et al., 2023). We provide further analysis in Appendix B.2, which shows two main scenarios: (1) cases where the annotators unanimously favor LLM summaries; (2) cases where both LLM and smaller LM have good performance, resulting in different annotator preferences. While higher agreement might be achieved with a more constrained evaluation protocol, we believe such a higher agreement can be \"artificial\" and cannot reflect the diverse distribution of human preferences." }, { "figure_ref": [], "heading": "Result Analysis", "publication_ref": [], "table_ref": [], "text": "The crowd-annotation and expert-evaluation results are in Tab. 6 and Tab. 7 respectively. We note:\n(1) The models (BART.GPT4 and BRIO.GPT4) trained with the LLM as the reference can outperform the BART checkpoint trained on the original CNNDM dataset by a large margin, showing the importance of training with a better reference.\n(2) When under a direct comparison in expert evaluation, BRIO.GPT4 can outperform BART.GPT4 on three aspects, which demonstrates the effectiveness of contrastive learning with LLM feedback.\n(3) Both BART.GPT4 and BRIO.GPT4 cannot outperform ChatGPT under human evaluation, despite the fact that they are favored by the evaluation methods based on either ChatGPT (Tab. 2) or GPT4 13 We note that a high agreement also does not automatically entail good annotation quality (Zhang et al., 2022a).\n14 All of the expert annotators receive their undergraduate education from universities in the United States. (Tab. 3). This result highlights the discrepancy between human and LLM-based evaluation, which we further investigate in the following section." 
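As a sketch of the inter-annotator agreement computation referenced earlier in this section (Krippendorff's alpha with MASI distance over the pairwise preference labels), one plausible implementation uses NLTK's agreement utilities; encoding a tie as a two-element label set and the toy data below are assumptions, not the authors' exact setup.

```python
# Hedged sketch: Krippendorff's alpha with MASI distance for pairwise preference labels.
from nltk.metrics import masi_distance
from nltk.metrics.agreement import AnnotationTask

# (annotator_id, item_id, label) triples; a tie is encoded as preferring both systems.
data = [
    ("annotator_1", "item_1", frozenset({"ChatGPT"})),
    ("annotator_2", "item_1", frozenset({"ChatGPT"})),
    ("annotator_3", "item_1", frozenset({"ChatGPT", "BRIO.GPT4"})),  # tie
    ("annotator_1", "item_2", frozenset({"BRIO.GPT4"})),
    ("annotator_2", "item_2", frozenset({"ChatGPT"})),
    ("annotator_3", "item_2", frozenset({"BRIO.GPT4"})),
]

task = AnnotationTask(data=data, distance=masi_distance)
print(f"Krippendorff's alpha (MASI): {task.alpha():.3f}")
```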
}, { "figure_ref": [], "heading": "Meta-Analysis of LLM-based Evaluation", "publication_ref": [], "table_ref": [], "text": "We use the expert evaluation results to evaluate the performance of LLM-based evaluation as well as the crowd-annotation, by computing their agreements with the majority vote of expert evaluation.\nApart from GPTScore and GPTRank, we also compare the performance of G-Eval (Liu et al., 2023a " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b57", "b1", "b46", "b46", "b0", "b24", "b43", "b26", "b49", "b41", "b47", "b56", "b58", "b8", "b17", "b59", "b40", "b3", "b32", "b51", "b66", "b60", "b65", "b49", "b39" ], "table_ref": [], "text": "Training Methods of Text Generation Models The standard MLE training of text generation models has two major limitations: (1) a discrepancy between the training objective, i.e., the cross-entropy loss, and the evaluation criteria (e.g., ROUGE); (2) a discrepancy between the teacherforcing (Williams and Zipser, 1989) training manner and auto-regressive generation behavior during evaluation, which is known as the exposure bias (Bengio et al., 2015;Ranzato et al., 2016). As a result, training methods beyond MLE have been proposed to address these two limitations. Among them a family of methods is based on reinforcement learning (RL), which can optimize the text generation model toward a specific reward function (Ranzato et al., 2016;Bahdanau et al., 2016;Li et al., 2016;Paulus et al., 2018;Li et al., 2019;Stiennon et al., 2020;Pang and He, 2021). Apart from RL, training methods based on supervised learning have also been developed, such as Minimum Risk Training (Shen et al., 2016;Wieting et al., 2019), targeting a sequence-level optimization with various reward signals (Wiseman and Rush, 2016;Edunov et al., 2018). More recently, contrastive learning (Hadsell et al., 2006) has also been adopted, which enhances the model ability by requiring the model to differentiate positive (good) and negative (bad) examples (Yang et al., 2019;Pan et al., 2021;Cao and Wang, 2021;Liu and Liu, 2021;Sun and Li, 2021;Liu et al., 2022b;Zhao et al., 2022;Zhang et al., 2022b). The latest work along this path has explored using contrastive learning to align LLMs with human feedback (Yuan et al., 2023;Zhao et al., 2023), as an alternative to reinforcement learning with human feedback (Stiennon et al., 2020;Ouyang et al., 2022)." }, { "figure_ref": [], "heading": "LLM-based Automatic Evaluation", "publication_ref": [ "b10", "b5", "b11", "b18", "b54", "b34", "b10", "b11", "b18", "b54", "b9", "b62", "b15", "b22" ], "table_ref": [], "text": "Recent work has explored using LLMs for automatic NLP evaluation. GPTScore (Fu et al., 2023) leverages the LLM-predicted probability of text sequences as the quality score. On the other hand, a line of work (Chiang and yi Lee, 2023;Gao et al., 2023;Chen et al., 2023;Wang et al., 2023;Luo et al., 2023). e.g., G-Eval (Liu et al., 2023a), proposes evaluation methods that use LLMs to perform text completion tasks, such as predicting the answer of a Likert scale evaluation or pairwise comparison. Notably, several of these studies (Fu et al., 2023;Liu et al., 2023a;Gao et al., 2023;Chen et al., 2023;Wang et al., 2023) all evaluate the LLM-based evaluation methods on SummEval (Fabbri et al., 2021), the summarization human evaluation benchmark, and found that LLM-based evaluation has a higher correlation with human judgments than previous methods such as ROUGE or BERTScore (Zhang* et al., 2020). 
Apart from summarization evaluation, LLM-based evaluation has also been used in text classification tasks (Gilardi et al., 2023) and for reward design for RL agents (Kwon et al., 2023)." }, { "figure_ref": [], "heading": "LLM Distillation and LLM-based Data Augmentation", "publication_ref": [ "b55", "b7", "b19", "b18", "b55", "b14", "b35" ], "table_ref": [], "text": "To improve the performance of smaller NLP models, related work has proposed methods of distilling LLMs and using LLMs for data augmentation (Wang et al., 2021;Ding et al., 2022;Kang et al., 2023). Specifically, a line of work (Shrid-har et al., 2022;LI et al., 2022;Hsieh et al., 2023) uses LLMs to generate both final answers and taskrelated descriptions for training smaller models on reasoning tasks. As for work related to text summarization, Wang et al. (2021) introduces using GPT-3 (Brown et al., 2020) to generate reference summaries while Gekhman et al. (2023) proposes using LLMs to annotate the summary factual consistency (Maynez et al., 2020) for the training of smaller factual consistency evaluation models." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we study a new learning setting of text summarization models where the LLMs are set to be the reference. For this setting, we leverage the LLM-based evaluation methods to guide the model training through contrastive learning and empirically demonstrate the efficiency and effectiveness of our methods. Furthermore, we conduct human evaluation and meta-analysis regarding the reliability of LLM-based evaluation, which reveals its benefits as better training references and its limitations in terms of the alignment with human evaluation. We believe our findings shed light on the direction of reliably applying the LLMs to the entire development loop (i.e., training-validation-evaluation) of smaller, task-specific NLP models." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b52", "b63" ], "table_ref": [], "text": "The LLM-based evaluation results we reported are from OpenAI's APIs, which are subject to change. Therefore, the reproducibility of our experiments is limited. To mitigate this problem, we will release the training data, model outputs, and LLM and human evaluation results to facilitate future work. Both the LLM-based and human evaluations we conducted can be resource-intensive, requiring substantial time and budget. As a result, we try to find a balance between the reliability of the evaluation result and the constraints of time and budget when selecting the sample size we used for evaluation. An evaluation at a larger scale is likely to yield more reliable results, which we leave for more dedicated future work in this direction.\nWe chose not to include summary factual consistency as an individual quality aspect in human evaluation and the meta-analysis of LLM-based evaluation. Related work (Tang et al., 2022;Zhang et al., 2023) has found that the factual error rate is low on CNNDM dataset, especially for LLM summaries. During our expert evaluation, the authors also did not observe significant flaws in factual consistency. As a result, it would require a much larger sample size for an evaluation of factual consistency in order to understand the error patterns, which is out of the scope of this work. 
However, we believe that such an evaluation is important for better understanding the summary quality of LLMs and LLM-guided models, and we hope that the outcome of this work (e.g., the system outputs) can be a helpful resource for future work on this topic." }, { "figure_ref": [], "heading": "B.2 Expert Evaluation Examples", "publication_ref": [], "table_ref": [], "text": "We present expert-annotated examples in Tab. 9.\nFor those examples on which the annotators have different preferences for the overall summary quality, we provide their explanations written after the evaluation below, as a case study of the inherent subjectivity of summarization human evaluation. Example 3 Annotator 1: I selected the BRIO.GPT4 summary because it conveys the same information as the ChatGPT summary more concisely. In the sentence about the nation being split on whether Charles should become king, it felt a little repetitive for the ChatGPT summary to use \"become king\" and \"ascend to the throne\" in the same sentence. Annotator 2: The summaries are nearly identical. Both summaries capture almost the same level of important information. However, I prefer Chat-GPT's summary because it reiterates the fact that public opinion is expressed through a poll, which adds grounding and enhances objectivity to the statements. Annotator 3: These two summaries essentially convey the same information and are almost equivalent in clarity and brevity. I chose the first summary because I personally preferred the way it started with (\"A poll conducted by . . . \") which gives me the source of information that I value more. Example 5\nAnnotator 1: I selected the ChatGPT summary because I found it slightly easier to follow along. The first two sentences both start with statements from Sheriff Hodgson, creating a clear structure and line of reasoning. The last sentence of the BRIO.GPT4 summary ends with \"Hodgson said\", which makes sense but does not contextualize the statement until the very end of the summary. Annotator 2: Both summaries are of good quality, making it a difficult decision for me. Despite its lower coherence and fluency, I lean towards preferring the summary generated by BRIO.GPT-4 due to its conciseness. The summary from ChatGPT includes additional details such as the mention of \"maximum-security Souza-Baranowski state prison\" and provides extra descriptions regarding Hernandez's charm, which I personally find redundant. Annotator 3: Both summaries of good quality, in terms of salience and coherence. The first one provides additional context regarding the final outcome of Aaron Hernandez's sentence, which I found to be more informative than the second summary. Example 6 Annotator 1: I selected the ChatGPT summary because its first and last sentences were slightly more cohesive. The first sentence mentions that Nike has \"faced criticism\" and the last sentence mentions that Nike's vice president \"defended the decision\" in a statement -a direct response to the criticism. On the other hand, the BRIO.GPT4 summary starts by stating that Nike has \"defended their new kits\" but does not include any comments or defense from Nike. Annotator 2: I find it challenging to determine a clear winner between the two summaries as they both possess merits and weaknesses. The summary generated by ChatGPT mentions the key figure, Vice President Charlie Brooks, who defended the design of the kits, but it overlooks any feedback from the team. 
On the other hand, the summary generated by BRIO.GPT4 fails to mention Charlie Brooks but includes the players' reactions, although it does so in a slightly redundant manner by quoting the midfielder Tobin Heath. In my opinion, the advantages and disadvantages of each summary are relatively balanced, leading me to consider them on equal footing. Annotator 3: The second summary contains more balanced perspectives from both the critics and the national team itself. It also follows an organized structure from introducing the criticism to the reaction of the team. However, the first summary appears to be more straightforward and neutral, without individual responses and words such as \"proud\" which could create certain ambiguity for me. As such, I chose tie because they both match well with the purpose of a summary. Example 7 Annotator 1: I selected the BRIO.GPT4 summary because the last sentence provides specific, key information about Liana Barrientos' legal charges that are not mentioned in the ChatGPT summary, which provides important context for the case's current status. Annotator 2: The quality of both summaries is high. ChatGPT mentions that all of Barrientos's marriages took place in New York State starting from 1999, which is a detail not mentioned in BRIO.GPT4. While BRIO.GPT4 does mention the crucial fact that some of Barrientos's partners could potentially pose threats to homeland security, I found the last sentence to be somewhat grammatically awkward. Therefore, I ultimately gave the edge to ChatGPT. Annotator 3: I prefer the second summary because it provides more essential details about the charges that Liana faces, including \"filing a false instrument\" and \"faces two counts of felony fraud charges\". Compared with the first summary, which essentially reiterates that Liana is a \"serial bride\", the second summary gives more emphasis to the legal aspect and the potential implications of her case." }, { "figure_ref": [], "heading": "B.3 LLM-based Evaluation Setting", "publication_ref": [], "table_ref": [], "text": "In §4.3, we compare the performance of different LLM-based evaluation methods. Specifically, for G-Eval (Liu et al., 2023a) and GPTRank, we use different prompts for different quality aspects, and specify the aspect definition in the prompt, as we defined in Appendix B.1. The prompt templates we used for GPTRank are similar to the one shown in Appendix A.3, with specific quality aspect definitions. As for G-Eval, the prompt is as the following (using the overall quality aspect as an example):\nYou will be given one summary written for a news article.\nYour task is to rate the overall quality of the summary with a score from 1 to 5, where 1 is the lowest and 5 is the highest.\nPlease make sure you read and understand these instructions carefully. Please keep this document open while reviewing, and refer to it as needed.\nEvaluation Steps:\n1. Read the news article carefully and identify the main topic and key points.\n2. Read the summary and compare it to the news article. Check if the summary covers the main topic and key points of the news article, and if it presents them in a clear and logical order.\n3. Assign a score for the summary quality on a scale of 1 to 5, where 1 is the lowest and 5 is the highest. -Overall Quality (1-5):\nFor both GPTRank and G-Eval, we set the sampling temperature to 0 to enable more deterministic behaviors. We note G-Eval tends to generate the same scores for different summaries as discussed in Liu et al. 
(2023a), which likely results in its low agreement with human evaluation. ChatGPT BRIO.GPT4 ChatGPT 6 Nike has faced criticism for the new kits they have designed for the US women's national team, which do not feature the American flag. The kit is all white with hints of black and yellow, and the aqua colour cleats are a tribute to Canada, who are hosting the tournament. Nike's vice president of North America communications, Charlie Brooks, has defended the decision, stating that not all national teams have to represent the flag.\nNike has defended their new kits for the US women's national team following criticism that they do not represent the American flag. The latest design features an all white strip with hints of black and yellow, while the aqua colour cleats are a tribute to Canada who will be hosting the tournament. While patriotic supporters are up in arms, feedback from the team itself has been more positive, with midfielder Tobin Heath saying she is \"proud\" of the new designs.\nChatGPT Tie Tie 7 Liana Barrientos, an alleged \"serial bride\" accused of running a wife-for-hire immigration scheme, was arrested for evading a subway fare just after leaving court on Friday. Barrientos pleaded not guilty to charges that she married 10 men over 11 years and charged a fee for her \"services\". She has been accused of accepting money in at least one of the marriages and all of her marriages took place in New York state, allegedly starting in 1999.\nLiana Barrientos, a woman accused of running a wife-for-hire immigration scheme, was arrested for evading the fare at a Bronx subway station after leaving court. She is accused of marrying 10 men over 11 years and charging a fee for her services, some of whom could pose a threat to American safety, according to investigators. She pleaded not guilty to two felony charges of filing a false instrument, involving marriage licences, and faces two counts of felony fraud charges." }, { "figure_ref": [], "heading": "BRIO.GPT4", "publication_ref": [], "table_ref": [], "text": "ChatGPT BRIO.GPT4\nTable 9: Expert annotation examples of the pairwise comparison between ChatGPT and BRIO.GPT4. We show the three expert annotators' ratings regarding the overall summary quality." }, { "figure_ref": [], "heading": "A Experimental Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 LLM Summary Generation", "publication_ref": [], "table_ref": [], "text": "We use the following prompt to generate the LLM summaries:\nArticle: {{Article}} Summarize the above article in three sentences." }, { "figure_ref": [], "heading": "Summary:", "publication_ref": [], "table_ref": [], "text": "Since text summarization is a conditional generation task that requires high accuracy, we set the sampling temperature to 0 to ensure a more accurate and deterministic behavior of the LLMs." }, { "figure_ref": [], "heading": "A.2 Candidate Generation for Contrastive Learning", "publication_ref": [], "table_ref": [], "text": "The contrastive training (Eq. 10) requires a list of candidate summaries. " }, { "figure_ref": [], "heading": "A.3 Prompt Templates for GPTRank", "publication_ref": [], "table_ref": [], "text": "We use the following prompt template for GP-TRank with list-wise comparison that is used for contrastive learning:\nYou will be given a news article along with a list of summaries numbered as follows: 1. Summary 1, 2. Summary 2, and so on. Please evaluate and rank the summaries in descending order of their quality. 
First you will give an explanation of your ranking, then you will provide the ranking itself. Please refer to the example below for the format of your response.\nExample Response:\nExplanation: \"Your explanation of the ranking\"\nRanking: \"The ranking, e.g., 4, 2, 7, 3, 5, 6, 8, 1\"\nHere are the actual article and summaries:\nArticle: {{Article}} Summaries:\n1. {{Summary 1}}" }, { "figure_ref": [], "heading": "{{Summary 2}}", "publication_ref": [], "table_ref": [], "text": "3. {{Summary 3}}" }, { "figure_ref": [], "heading": "{{Summary 4}}", "publication_ref": [], "table_ref": [], "text": "For pair-wise comparison that is used for model evaluation, the prompt template is as follows:\nYou will be given a news article along with two summaries. Please compare the quality of these two summaries and pick the one that is better (there can be a tie). First you will give an explanation of your decision then you will provide your decision in the format of 1 or 2 or tie." }, { "figure_ref": [], "heading": "Response format:", "publication_ref": [ "b45" ], "table_ref": [], "text": "Explanation: \"Your explanation here\".\nDecision: 1 or 2 or tie.\nHere's the article: We found that the FLAN-T5 checkpoint fine-tuned with contrastive learning, T5BRIO.GPT4, tends to generate longer summaries. We tried to control the summary length by adjusting the length penalty used during beam search, but found that the length difference still presents. On the other hand, we are able to control the summary length of BRIO.GPT4. We hypothesize this is because FLAN-T5 can learn the preference of LLM-based evaluation more efficiently, which exhibits a preference for longer outputs (Rajani et al., 2023). However, we note that the length preference is not the only factor affecting the LLM-based evaluation, since we only found a moderate Spearman's correlation (0.2366) between the summary length and the ranking of GPTRank. Moreover, out of 20 summary pairs where the ChatGPT summary is longer than the T5BRIO.GPT4 summary, T5BRIO.GPT4 still wins 9 times as evaluated by GPTRank based on GPT4." }, { "figure_ref": [], "heading": "A.5 Experimental Setting on XSum", "publication_ref": [ "b9", "b12", "b13" ], "table_ref": [], "text": "The experimental setting on XSum is similar to the setting on CNNDM ( §3.1). Specifically, at the warm-start stage we generate around 10K summaries using ChatGPT to fine-tune the BART checkpoint pre-trained on the original XSum dataset (https://huggingface.co./ facebook/bart-large-xsum). Then, we generate 1K summaries using GPT4 and continue finetuning the checkpoint with MLE training, resulting in the checkpoint named BART.GPT4. As for contrastive learning, we use GPTRank with GPT4 to generate 500 examples, and the checkpoint from the warm-start stage is fine-tuned to a new checkpoint, BRIO.GPT4. We adopt the definition of the different quality aspects from the previous work (Fabbri et al., 2021;Gehrmann et al., 2021Gehrmann et al., , 2022) ) as the following:" }, { "figure_ref": [], "heading": "B Human and Meta Evaluation Details", "publication_ref": [], "table_ref": [], "text": "(1) Salience: \"This rating measures how well the summary captures the key points of the news article. Consider whether all and only the important information are included in the summary.\" (2) Coherence: \"This rating measures whether the summary is presented in a clear, well-structured, logical, and meaningful way.\"\n(3) Overall Preference/Quality: \"This rating measures how much you like the summary.\"" } ]
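As a concrete illustration of the length analysis in Appendix A.4, the rank correlation between summary length and GPTRank position can be computed as sketched below. The toy inputs and variable names are placeholders; the reported value of 0.2366 was obtained on the actual system outputs.

```python
# Hedged sketch: Spearman's correlation between summary length and GPTRank position.
from scipy.stats import spearmanr

summary_lengths = [92, 79, 108, 85, 93]   # e.g., number of words per candidate summary
gptrank_ranks = [2, 5, 1, 4, 3]           # rank assigned by GPTRank (1 = best)

corr, p_value = spearmanr(summary_lengths, gptrank_ranks)
print(f"Spearman correlation: {corr:.4f} (p={p_value:.3f})")
```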
Recent studies have found that summaries generated by large language models (LLMs) are favored by human annotators over the original reference summaries in commonly used summarization datasets. Therefore, we investigate a new learning setting for text summarization models that treats the LLMs as the reference, or gold-standard oracle, on these datasets. To examine standard practices aligned with this new learning setting, we investigate two LLM-based summary quality evaluation methods for model training and adopt a contrastive learning method to leverage the LLM-guided learning signals. Our experiments on the CNN/DailyMail and XSum datasets demonstrate that smaller summarization models can achieve performance similar to LLMs under LLM-based evaluation. However, we find that the smaller models cannot yet reach LLM-level performance under human evaluation, despite promising improvements brought by our proposed training methods. Meanwhile, we perform a meta-analysis of this new learning setting that reveals a discrepancy between human and LLM-based evaluation, highlighting the benefits and risks of the LLM-as-reference setting we investigated.
On Learning to Summarize with Large Language Models as References
[ { "figure_caption": "Results with GPTScore.", "figure_data": "-22.62 -0.271 100.0 100.0 85.4BART-59.55 -0.789 46.85 24.38 79.0GPT3D2-41.21 -0.547 55.40 33.72 78.7Alpaca-44.82 -0.567 51.53 30.18 81.8ChatGPT-45.12 -0.498 58.14 37.46 92.0BART.ChatGPT -41.08 -0.446 54.26 33.98 93.7BART.GPT3D3 -36.13 -0.420 59.50 40.70 85.6BRIO.GPT3D3 -26.20 -0.318 56.21 36.47 83.7", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results with GPTRank and ChatGPT as the reference LLM. Win and Lose are the numbers of times the compared model wins or loses against ChatGPT as evaluated by GPTRank (ties ignored). R1 and R2 are the ROUGE1/2 F1 scores respectively. Len. is the average summary length. BART.ChatGPT and BART.GPT4 are fine-tuned with MLE training and ChatGPT/GPT4 as the reference, BRIO.ChatGPT and BRIO.GPT4 are fine-tuned with contrastive learning (BRIO).", "figure_data": "ModelWin LoseR1R2Len.ChatGPT--100.0 100.0 92.0BART118850.54 29.31 79.0GPT3D2217755.34 33.31 78.7GPT3D3346658.14 37.46 85.4Alpaca237653.41 31.48 81.8BART.ChatGPT366362.04 43.76 94.1BRIO.ChatGPT514961.40 40.74 93.1BART.GPT4435662.08 43.55 91.8BRIO.GPT4574262.79 43.65 92.8", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "). 9 For contrastive learning, 500 or 1000 data points are used for model training with Results with GPTRank and GPT4 as the reference LLM. Win and Lose are the numbers of times the compared model wins or loses against ChatGPT as evaluated by GPTRank (ties ignored). R1 and R2 are the ROUGE1/2 F1 scores respectively. Len. is the average summary length. BART.ChatGPT and BART.GPT4 are fine-tuned with MLE training and ChatGPT/GPT4 as the reference, BRIO.ChatGPT and BRIO.GPT4 are fine-tuned with contrastive learning (BRIO).", "figure_data": "ModelWin LoseR1R2Len.ChatGPT--63.43 44.09 92.0GPT44749100.0 100.0 90.0BART118650.83 29.47 79.0GPT3D2227755.17 33.23 78.7GPT3D3475156.12 34.72 85.4Alpaca158354.77 33.23 81.8BART.ChatGPT316659.52 40.45 94.1BRIO.ChatGPT415757.56 35.74 93.1BART.GPT4356263.22 44.70 91.8BRIO.GPT4514658.65 37.57 92.8", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Performance comparison of FLAN-T5 and BART as backbone models on CNNDM. GPT4 is the reference LLM and the backbone model of GPTRank. Win and Lose are the numbers of times the compared model wins or loses against ChatGPT as evaluated by GPTRank (ties ignored). R1 and R2 are the ROUGE1/2 F1 scores respectively. Len. is the average summary length. BART.GPT4 and T5.GPT4 are fine-tuned with MLE training while BRIO.GPT4 and T5BRIO.GPT4 are fine-tuned with contrastive learning.", "figure_data": "ModelWin LoseR1R2Len.ChatGPT--63.43 44.09 92.0GPT44749100.0 100.0 90.0BART.GPT4356263.22 44.70 91.8BRIO.GPT4514658.65 37.57 92.8T5.GPT4277262.99 44.31 93.9T5BRIO.GPT4653458.44 36.69 108.4", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results on XSum dataset. GPT4 is the reference LLM and the backbone model of GPTRank. Win and Lose are the numbers of times the compared model wins or loses against ChatGPT as evaluated by GP-TRank (ties ignored). R1 and R2 are the ROUGE1/2 F1 scores respectively. Len. is the average summary length. BART.GPT4 are fine-tuned with MLE training while BRIO.GPT4 are fine-tuned with contrastive learning. uation(Liu et al., 2023b;Rajani et al., 2023). Further discussion is in Appendix A.4. 
Experiments on XSum We now conduct experiments on XSum(Narayan et al., 2018), another commonly used dataset. We follow the original XSum data format by having the models generate one-sentence summaries. The experimental settings are similar to those in §3.1 & §3.3 and more details are in Appendix A.5. The results in Tab. 5 show a similar trend in that training with better references helps to improve model performance. We note that the gain of contrastive learning is marginal on XSum (BART.GPT4 v.s. BRIO.GPT4), which is likely due to the smaller performance gap between BART.GPT4 and ChatGPT compared with CNNDM, which restricts the improvement space.", "figure_data": "Experiments with FLAN-T5 We repeat the ex-periment in §3.3 but use a three billion FLAN-T5 (Chung et al., 2022) model 11 as the backbonemodel. Results in Tab. 4 suggest that the training al-gorithm can be more important than the model sizefor model performance, as BRIO.GPT4 can outper-form T5.GPT4. The FLAN-T5 checkpoint trainedwith contrastive learning, T5BRIO.GPT4, achievesa strong performance. However, we note that itssummaries are significantly longer than those ofother systems, which makes the result more diffi-cult to interpret as recent work has found a strongcorrelation between the summary rating and lengthin both human and LLM-based summarization eval-", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Crowd-annotations conducted on 3 groups of system pairs on 100 examples. The number of times each system wins is reported (we count both systems as winners when there is a tie).", "figure_data": "Group SystemSalience Coherence Overall1ChatGPT BART83 2684 3487 202ChatGPT BART.GPT468 4568 6362 413ChatGPT BRIO.GPT460 5065 5661 39Group SystemSalience Coherence Overall1ChatGPT BART44 1049 449 22ChatGPT BART.GPT440 2235 2435 183ChatGPT BRIO.GPT432 2939 2433 214BART.GPT4 BRIO.GPT422 4126 3617 39", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "conducted by the Daily Mail has revealed that the majority of the British public do not want Camilla to become queen if Prince Charles succeeds to the throne. The poll also showed that the nation is split on whether Charles should become king at all, with only 43% believing he should ascend to the throne compared to an almost identical number saying he should stand aside in favour of his elder son, Prince William. William and Prince Harry are the most popular members of the Royal Family, closely followed by the Queen and the Duchess of Cambridge.A majority of the British public do not want Camilla to become queen if Prince Charles succeeds to the throne, according to a poll for the Daily Mail. The nation is completely split on whether Charles should become king at all, with only 43% believing he should, compared to an almost identical number saying he should stand aside in favour of his elder son Prince William. William and Prince Harry are the most popular members of the Royal Family, closely followed by the Queen and the Duchess of Cambridge. Russian warship and two bombers were intercepted by the UK's Royal Navy and RAF after they strayed close to UK territory, prompting suspicions that Russia was using access to a Nato war games exercise in Scotland to test UK reaction times. The incidents are the latest in a series of apparent tests by Moscow as tensions between Britain and Russia rise. 
Sir Michael Graydon, former head of the RAF, called for higher defence spending to defeat a \"resurgent\" Russia, saying that the Kremlin was using Cold War tactics to \"snoop\" on the exercise.Russian warships with missiles, anti-aircraft guns and torpedoes entered the English Channel, forcing the UK to send a heavily-armed frigate to monitor the vessels. Two Russian long-range Bear bombers, capable of carrying nuclear missiles, hugged British airspace, prompting the RAF to deploy two Typhoon fighter jets, raising suspicions that Vladimir Putin was using access to the site to test UK reaction times. The incursions are the latest in a long line of apparent tests by Moscow as tensions between Britain and Russia rise. Bristol County Sheriff Thomas Hodgson has said that Aaron Hernandez, the former New England Patriots player who was sentenced to life for murder, is a master manipulator and will probably do fine in prison. Hodgson said Hernandez knows how to use his charm and manipulate better than anyone he has ever seen, adding that the former football star is generally affable and polite and would try to use those qualities to get what he wanted at the Bristol County House of Corrections. Hernandez will ultimately end up at the maximum-security Souza-Baranowski state prison.Former New England Patriots player Aaron Hernandez has been sentenced to life in prison for the 2013 killing of Odin Lloyd, who was dating the sister of Hernandez's fiancee. Bristol County Sheriff Thomas Hodgson, who had Hernandez in custody for over 18 months, said that Hernandez is a master manipulator and will probably do fine in prison. Hernandez is generally affable and polite and would try to use those qualities to get what he wanted at the Bristol County House of Corrections, Hodgson said.", "figure_data": "ID ChatGPTBRIO.GPT4Annotator 1 Annotator 2 Annotator 31 A giant turnip weighing 33lbs and measuring 4ft longA giant turnip weighing 33lbs and measuring 4ft longChatGPTChatGPTChatGPThas been grown in China's Yunnan Province. Thehas been grown in China's Yunnan Province. Thevegetable was grown naturally without the help ofmonster vegetable, which weighs more than a three-any fertilizers and has become a big attraction, withyear-old child, has been nicknamed the 'fat little girl'pictures of it going viral. This is not the first timein the village where it was grown and pictures of itsuch outsized vegetables have turned up in China, buthave gone viral. The specimen was grown naturallyusually they are grown by farmers using commercialwithout the help of any fertilizers and was so large itfertilizers.hangs over the flower bed.2 Wales midfielder Aaron Ramsey has warned his Ar-Arsenal midfielder Aaron Ramsey has warned hisChatGPTChatGPTChatGPTsenal team-mates that Wales could overtake EnglandEnglish team-mates to beware of Wales overtakingin the FIFA rankings. Wales are currently in 22ndthem in the FIFA rankings, as Wales climbed to theirplace, their highest-ever position, and are unbeatenhighest-ever position in football's world order in thein Euro 2016 qualifying. Ramsey admits that thereApril rankings to move within eight places of Eng-is already banter between himself and his Englishland. 
Chris Coleman's side are unbeaten in Euroteam-mates at Arsenal, and that Wales are catching2016 qualifying and would be within touching dis-up in the world rankings.tance of the finals in France should they beat Belgiumin June, and Ramsey admits the banter with the likesof Theo Walcott, Jack Wilshere and Danny Welbeckis already flying on the Arsenal training ground.3 A poll BRIO.GPT4ChatGPTChatGPT4 A ChatGPTChatGPTChatGPT5", "figure_id": "tab_11", "figure_label": "", "figure_type": "table" } ]
Yixin Liu; Kejian Shi; Katherine S He; Longtian Ye; Alexander R Fabbri; Pengfei Liu; Dragomir Radev; Arman Cohan
[ { "authors": "Dzmitry Bahdanau; Philemon Brakel; Kelvin Xu; Anirudh Goyal; Ryan Lowe; Joelle Pineau; Aaron C Courville; Yoshua Bengio", "journal": "", "ref_id": "b0", "title": "An actorcritic algorithm for sequence prediction", "year": "2016" }, { "authors": "Samy Bengio; Oriol Vinyals; Navdeep Jaitly; Noam Shazeer", "journal": "MIT Press", "ref_id": "b1", "title": "Scheduled sampling for sequence prediction with recurrent neural networks", "year": "2015" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b2", "title": "Language models are few-shot learners", "year": "2020-12-06" }, { "authors": "Shuyang Cao; Lu Wang", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "CLIFF: Contrastive learning for improving faithfulness and factuality in abstractive summarization", "year": "2021" }, { "authors": "Yi Chen; Rui Wang; Haiyun Jiang; Shuming Shi; Rui-Lan Xu", "journal": "", "ref_id": "b4", "title": "Exploring the use of large language models for reference-free text quality evaluation: A preliminary empirical study", "year": "2023" }, { "authors": "Cheng-Han Chiang; Hung Yi; Lee ", "journal": "", "ref_id": "b5", "title": "Can large language models be an alternative to human evaluations?", "year": "2023" }, { "authors": "Chung Hyung Won; Le Hou; S Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Dasha Chowdhery; Sharan Valter; Gaurav Narang; Adams Wei Mishra; Vincent Yu; Yanping Zhao; Andrew M Huang; Hongkun Dai; Slav Yu; Ed Petrov; Jeff Huai Hsin Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b6", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Bosheng Ding; Chengwei Qin; Linlin Liu; Lidong Bing; R Shafiq; Boyang Joty; Li", "journal": "", "ref_id": "b7", "title": "Is gpt-3 a good data annotator?", "year": "2022" }, { "authors": "Sergey Edunov; Myle Ott; Michael Auli; David Grangier; Marc'aurelio Ranzato", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Classical structured prediction losses for sequence to sequence learning", "year": "2018" }, { "authors": "Alexander R Fabbri; Wojciech Kryściński; Bryan Mc-Cann; Caiming Xiong; Richard Socher; Dragomir Radev", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b9", "title": "SummEval: Re-evaluating summarization evaluation", "year": "2021" }, { "authors": "Jinlan Fu; See-Kiong Ng; Zhengbao Jiang; Pengfei Liu", "journal": "", "ref_id": "b10", "title": "Gptscore: Evaluate as you desire", "year": "2023" }, { "authors": "Mingqi Gao; Jie Ruan; Renliang Sun; Xunjian Yin; Shiping Yang; Xiaojun Wan", "journal": "", "ref_id": "b11", "title": "Humanlike summarization evaluation with chatgpt", "year": "2023" }, { "authors": "Sebastian Gehrmann; Tosin Adewumi; Karmanya Aggarwal; Pawan Sasanka Ammanamanchi; Anuoluwapo Aremu; Antoine Bosselut; Raghavi Khyathi; 
Miruna-Adriana Chandu; Dipanjan Clinciu; Kaustubh Das; Wanyu Dhole; Esin Du; Ondřej Durmus; Chris Dušek; Varun Chinenye Emezue; Cristina Gangal; Tatsunori Garbacea; Yufang Hashimoto; Yacine Hou; Harsh Jernite; Yangfeng Jhamtani; Shailza Ji; Mihir Jolly; Dhruv Kale; Faisal Kumar; Aman Ladhak; Mounica Madaan; Khyati Maddela; Saad Mahajan; Mahamood; Prasad Bodhisattwa; Pedro Henrique Majumder; Angelina Martins; Simon Mcmillan-Major; Mille; Moin Emiel Van Miltenburg; Shashi Nadeem; Vitaly Narayan; Andre Nikolaev; Salomey Niyongabo Rubungo; Ankur Osei; Laura Parikh; Niranjan Perez-Beltrachini; Ramesh Rao; Vikas Raunak; Juan ; Diego Rodriguez; Sashank Santhanam; João Sedoc; Thibault Sellam; Samira Shaikh; Anastasia Shimorina; Marco Antonio Sobrevilla; Hendrik Cabezudo; Nishant Strobelt; Wei Subramani; Diyi Xu; Akhila Yang; Jiawei Yerukola; Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "The GEM benchmark: Natural language generation, its evaluation and metrics", "year": "2021" }, { "authors": "Sebastian Gehrmann; Abhik Bhattacharjee; Abinaya Mahendiran; Alex Wang; Alexandros Papangelis; Aman Madaan; Angelina Mcmillan-Major; Anna Shvets; Ashish Upadhyay; Bingsheng Yao; Bryan Wilie; Chandra Bhagavatula; Chaobin You; Craig Thomson; Cristina Garbacea; Dakuo Wang; Daniel Deutsch; Deyi Xiong; Di Jin; Dimitra Gkatzia; R Dragomir; Elizabeth Radev; Esin Clark; Faisal Durmus; Filip Ladhak; Genta Ginter; Hendrik Indra Winata; Hiroaki Strobelt; Jekaterina Hayashi; Jenna Novikova; Jenny Kanerva; Jiawei Chim; Jordan Zhou; Joshua Clive; João Maynez; Juraj Sedoc; Juraska; D Kaustubh; Dhole; Raghavi Khyathi; Leonardo F R Chandu; Lewis Ribeiro; Li Tunstall; Mahima Zhang; Mathias Pushkarna; Michael Creutz; Mihir White; Moussa Kale; Nico Kamal Eddine; Nishant Daheim; Ondrej Subramani; Paul Pu Dusek; Pawan Liang; Qinqin Sasanka Ammanamanchi; Ratish Zhu; Reno Puduppully; Rifat Kriz; Ronald Shahriyar; Saad Cardenas; Salomey Mahamood; Samuel Osei; Cahyawijaya; Sébastien Sanja Vstajner; Montella; Shailza Shailza; Simon Jolly; Tahmid Mille; Tianhao Hasan; Tosin P Shen; Vikas Adewumi; Vipul Raunak; Vitaly Raheja; Vivian Nikolaev; Yacine Tsai; Yi Jernite; Yisi Xu; Yixin Sang; Yufang Liu; Hou", "journal": "", "ref_id": "b13", "title": "Gemv2: Multilingual nlg benchmarking in a single line of code", "year": "2022" }, { "authors": "Zorik Gekhman; Jonathan Herzig; Roee Aharoni; Chen Elkind; Idan Szpektor", "journal": "", "ref_id": "b14", "title": "Trueteacher: Learning factual consistency evaluation with large language models", "year": "2023" }, { "authors": "Fabrizio Gilardi; Meysam Alizadeh; Maël Kubli", "journal": "", "ref_id": "b15", "title": "Chatgpt outperforms crowd-workers for textannotation tasks", "year": "2023" }, { "authors": "Tanya Goyal; Junyi ; Jessy Li; Greg Durrett", "journal": "", "ref_id": "b16", "title": "News summarization and evaluation in the era of gpt-3", "year": "2022" }, { "authors": "Raia Hadsell; Sumit Chopra; Yann Lecun", "journal": "IEEE Computer Society", "ref_id": "b17", "title": "Dimensionality reduction by learning an invariant mapping", "year": "2006" }, { "authors": "Cheng-Yu Hsieh; Chun-Liang Li; Chih-Kuan Yeh; Hootan Nakhost; Yasuhisa Fujii; Alexander J Ratner; Ranjay Krishna; Chen-Yu Lee; Tomas Pfister", "journal": "", "ref_id": "b18", "title": "Distilling step-by-step! 
outperforming larger language models with less training data and smaller model sizes", "year": "2023" }, { "authors": "Junmo Kang; Wei Xu; Alan Ritter", "journal": "", "ref_id": "b19", "title": "Distill or annotate? cost-efficient fine-tuning of compact models", "year": "2023" }, { "authors": "Yoon Kim; Alexander M Rush", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Sequencelevel knowledge distillation", "year": "2016" }, { "authors": "Klaus Krippendorff", "journal": "", "ref_id": "b21", "title": "Computing krippendorff's alpha-reliability", "year": "2011" }, { "authors": "Minae Kwon; Sang Michael Xie; Kalesha Bullard; Dorsa Sadigh", "journal": "", "ref_id": "b22", "title": "Reward design with language models", "year": "2023" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Jiwei Li; Will Monroe; Alan Ritter; Dan Jurafsky; Michel Galley; Jianfeng Gao", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Deep reinforcement learning for dialogue generation", "year": "2016" }, { "authors": "L I Shiyang; Jianshu Chen; Yelong Shen; Zhiyu Chen; Xinlu Zhang; Zekun Li; Hong Wang; Jingu Qian; Baolin Peng; Yi Mao; Wenhu Chen; Xifeng Yan", "journal": "", "ref_id": "b25", "title": "Explanations from large language models make small reasoners better", "year": "2022" }, { "authors": "Siyao Li; Deren Lei; Pengda Qin; William Yang; Wang ", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Deep reinforcement learning with distributional semantic rewards for abstractive summarization", "year": "2019" }, { "authors": "Percy Liang; Rishi Bommasani; Tony Lee; Dimitris Tsipras; Dilara Soylu; Michihiro Yasunaga; Yian Zhang; Deepak Narayanan; Yuhuai Wu; Ananya Kumar; Benjamin Newman; Binhang Yuan; Bobby Yan; Ce Zhang; Christian Cosgrove; D ; Christopher; Diana Christopher R'e; Drew A Acosta-Navas; E Hudson; Esin Zelikman; Faisal Durmus; Frieda Ladhak; Hongyu Rong; Huaxiu Ren; Jue Yao; Keshav Wang; Laurel J Santhanam; Lucia Orr; Mert Zheng; Mirac Yuksekgonul; Nathan S Suzgun; Neel Kim; Niladri S Guha; O Chatterji; Peter Khattab; Qian Henderson; Ryan Huang; Sang Chi; Shibani Michael Xie; Surya Santurkar; Tatsunori Ganguli; Thomas F Hashimoto; Tianyi Icard; Vishrav Zhang; William Chaudhary; Xuechen Wang; Yifan Li; Yuhui Mai; Yuta Zhang; Koreeda", "journal": "", "ref_id": "b27", "title": "Holistic evaluation of language models", "year": "2022" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Yang Liu; Dan Iter; Yichong Xu; Shuo Wang; Ruochen Xu; Chenguang Zhu", "journal": "", "ref_id": "b29", "title": "G-eval: Nlg evaluation using gpt-4 with better human alignment", "year": "2023" }, { "authors": "Yixin Liu; Budhaditya Deb; Milagro Teruel; Aaron L Halfaker; Dragomir R Radev; Ahmed Hassan; Awadallah ", "journal": "", "ref_id": "b30", "title": "On improving summarization factual consistency from natural language feedback", "year": "2022" }, { "authors": "Yixin Liu; Alexander R Fabbri; Pengfei Liu; Yilun Zhao; Linyong Nan; Ruilin Han; Simeng Han; 
R Shafiq; Chien-Sheng Joty; Caiming Wu; Dragomir R Xiong; Radev", "journal": "", "ref_id": "b31", "title": "Revisiting the gold standard: Grounding summarization evaluation with robust human evaluation", "year": "2023" }, { "authors": "Yixin Liu; Pengfei Liu", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "SimCLS: A simple framework for contrastive learning of abstractive summarization", "year": "2021" }, { "authors": "Yixin Liu; Pengfei Liu; Dragomir Radev; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "BRIO: Bringing order to abstractive summarization", "year": "2022" }, { "authors": "Zheheng Luo; Qianqian Xie; Sophia Ananiadou", "journal": "", "ref_id": "b34", "title": "Chatgpt as a factual inconsistency evaluator for text summarization", "year": "2023" }, { "authors": "Joshua Maynez; Shashi Narayan; Bernd Bohnet; Ryan Mcdonald", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "On faithfulness and factuality in abstractive summarization", "year": "2020" }, { "authors": "Ramesh Nallapati; Bowen Zhou; Caglar Cicero Dos Santos; Bing Gulcehre; Xiang", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "Abstractive text summarization using sequence-to-sequence RNNs and beyond", "year": "2016" }, { "authors": "Shashi Narayan; Shay B Cohen; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization", "year": "2018" }, { "authors": " Openai", "journal": "", "ref_id": "b38", "title": "Gpt-4 technical report", "year": "2023" }, { "authors": "Long Ouyang; Jeff Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray; John Schulman; Jacob Hilton; Fraser Kelton; Luke E Miller; Maddie Simens; Amanda Askell; Peter Welinder; Paul Francis Christiano; Jan Leike; Ryan J Lowe", "journal": "", "ref_id": "b39", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Xiao Pan; Mingxuan Wang; Liwei Wu; Lei Li", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Contrastive learning for many-to-many multilingual neural machine translation", "year": "2021" }, { "authors": "Richard Yuanzhe; Pang ; He He", "journal": "", "ref_id": "b41", "title": "Text generation by learning from demonstrations", "year": "2021" }, { "authors": "Rebecca Passonneau", "journal": "European Language Resources Association (ELRA", "ref_id": "b42", "title": "Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation", "year": "2006" }, { "authors": "Romain Paulus; Caiming Xiong; Richard Socher", "journal": "", "ref_id": "b43", "title": "A deep reinforced model for abstractive summarization", "year": "2018" }, { "authors": "Rafael Rafailov; Archit Sharma; Eric Mitchell; Stefano Ermon; Christopher D Manning; Chelsea Finn", "journal": "", "ref_id": "b44", "title": "Direct preference optimization: Your language model is secretly a reward model", "year": "2023" }, { "authors": "Nazneen Rajani; Nathan Lambert; Sheon Han; Jean Wang; Osvald Nitski; Edward Beeching; Lewis Tunstall", "journal": "", "ref_id": "b45", "title": "Can foundation models label data like humans? 
Hugging Face Blog", "year": "2023" }, { "authors": "Aurelio Marc; Sumit Ranzato; Michael Chopra; Wojciech Auli; Zaremba", "journal": "", "ref_id": "b46", "title": "Sequence level training with recurrent neural networks", "year": "2016-05-02" }, { "authors": "Shiqi Shen; Yong Cheng; Zhongjun He; Wei He; Hua Wu; Maosong Sun; Yang Liu", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Minimum risk training for neural machine translation", "year": "2016" }, { "authors": "Kumar Shridhar; Alessandro Stolfo; Mrinmaya Sachan", "journal": "", "ref_id": "b48", "title": "Distilling reasoning capabilities into smaller language models", "year": "2022" }, { "authors": "Nisan Stiennon; Long Ouyang; Jeffrey Wu; Daniel Ziegler; Ryan Lowe; Chelsea Voss; Alec Radford; Dario Amodei; Paul F Christiano", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b49", "title": "Learning to summarize with human feedback", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b50", "title": "", "year": "" }, { "authors": "Shichao Sun; Wenjie Li", "journal": "", "ref_id": "b51", "title": "Alleviating exposure bias via contrastive learning for abstractive text summarization", "year": "2021" }, { "authors": "Liyan Tang; Tanya Goyal; Alexander R Fabbri; Philippe Laban; Jiacheng Xu; Semih Yahvuz; Wojciech Kryscinski; Justin F Rousseau; Greg Durrett", "journal": "", "ref_id": "b52", "title": "Understanding factual errors in summarization: Errors, summarizers, datasets, error detectors", "year": "2022" }, { "authors": "Ashwin Vijayakumar; Michael Cogswell; Ramprasaath Selvaraju; Qing Sun; Stefan Lee; David Crandall; Dhruv Batra", "journal": "", "ref_id": "b53", "title": "Diverse beam search for improved description of complex scenes", "year": "2018" }, { "authors": "Jiaan Wang; Yunlong Liang; Fandong Meng; Haoxiang Shi; Zhixu Li; Jinan Xu; Jianfeng Qu; Jie Zhou", "journal": "", "ref_id": "b54", "title": "Is chatgpt a good nlg evaluator? a preliminary study", "year": "2023" }, { "authors": "Shuohang Wang; Yang Liu; Yichong Xu; Chenguang Zhu; Michael Zeng", "journal": "Association for Computational Linguistics", "ref_id": "b55", "title": "Want to reduce labeling cost? 
GPT-3 can help", "year": "2021" }, { "authors": "John Wieting; Taylor Berg-Kirkpatrick; Kevin Gimpel; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b56", "title": "Beyond BLEU:training neural machine translation with semantic similarity", "year": "2019" }, { "authors": "Ronald J Williams; David Zipser", "journal": "Neural Comput", "ref_id": "b57", "title": "A learning algorithm for continually running fully recurrent neural networks", "year": "1989" }, { "authors": "Sam Wiseman; Alexander M Rush", "journal": "Association for Computational Linguistics", "ref_id": "b58", "title": "Sequenceto-sequence learning as beam-search optimization", "year": "2016" }, { "authors": "Zonghan Yang; Yong Cheng; Yang Liu; Maosong Sun", "journal": "Association for Computational Linguistics", "ref_id": "b59", "title": "Reducing word omission errors in neural machine translation: A contrastive learning approach", "year": "2019" }, { "authors": "Zheng Yuan; Hongyi Yuan; Chuanqi Tan; Wei Wang; Songfang Huang; Feiran Huang", "journal": "", "ref_id": "b60", "title": "Rrhf: Rank responses to align language models with human feedback without tears", "year": "2023" }, { "authors": "Lining Zhang; Simon Mille; Yufang Hou; Daniel Deutsch; Elizabeth Clark; Yixin Liu; Saad Mahamood; Sebastian Gehrmann; Miruna Clinciu; Khyathi Raghavi Chandu; João Sedoc", "journal": "", "ref_id": "b61", "title": "haystack: An analysis of high-agreement workers on mturk for summarization", "year": "2022" }, { "authors": "Tianyi Zhang; * ; Varsha Kishore; * ; Felix Wu; * ; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b62", "title": "Bertscore: Evaluating text generation with bert", "year": "2020" }, { "authors": "Tianyi Zhang; Faisal Ladhak; Esin Durmus; Percy Kathleen Mckeown; Tatsunori Hashimoto", "journal": "", "ref_id": "b63", "title": "Benchmarking large language models for news summarization", "year": "2023" }, { "authors": "Xingxing Zhang; Yiran Liu; Xun Wang; Pengcheng He; Yang Yu; Si-Qing Chen; Wayne Xiong; Furu Wei", "journal": "", "ref_id": "b64", "title": "Momentum calibration for text generation", "year": "2022" }, { "authors": "Yao Zhao; Rishabh Joshi; Tianqi Liu; Misha Khalman; Mohammad Saleh; Peter J Liu", "journal": "", "ref_id": "b65", "title": "Slic-hf: Sequence likelihood calibration with human feedback", "year": "2023" }, { "authors": "Yao Zhao; Misha Khalman; Rishabh Joshi; Shashi Narayan; Mohammad Saleh; Peter J Liu", "journal": "", "ref_id": "b66", "title": "Calibrating sequence likelihood improves conditional language generation", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 123.18, 618.41, 166.55, 27.38 ], "formula_id": "formula_0", "formula_text": "pg(S|D) = l S i=1 pg(si|S<i, D),(1)" }, { "formula_coordinates": [ 2, 357.08, 145.3, 167.93, 10.33 ], "formula_id": "formula_1", "formula_text": "Lxent(θ) = -log pg(S * |D; θ),(2)" }, { "formula_coordinates": [ 2, 357.97, 243.44, 167.04, 27.39 ], "formula_id": "formula_2", "formula_text": "p h (S|D) = l S i=1 p h (si|S<i, D),(3)" }, { "formula_coordinates": [ 2, 333.71, 333.55, 187.81, 20.34 ], "formula_id": "formula_3", "formula_text": "L (h) xent (θ) = - S∈S p h (S|D) log pg(S|D; θ), (4" }, { "formula_coordinates": [ 2, 521.52, 336.38, 3.48, 7.77 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 2, 361.39, 538.02, 163.62, 11.65 ], "formula_id": "formula_5", "formula_text": "L(h) xent (θ) = -log pg( Ŝ|D; θ),(5)" }, { "formula_coordinates": [ 2, 362.9, 578.84, 162.11, 14.98 ], "formula_id": "formula_6", "formula_text": "ŝi = arg max s p h (s| Ŝ<i, D),(6)" }, { "formula_coordinates": [ 3, 117.53, 456.93, 172.21, 19.75 ], "formula_id": "formula_7", "formula_text": "pg(Si|D) pg(Sj|D) > 2(j -i), ∀i, j, i < j,(7)" }, { "formula_coordinates": [ 3, 92.28, 514.45, 197.46, 32.56 ], "formula_id": "formula_8", "formula_text": "Lctr(θ) = S i ,S j ∈Sc,i<j max(0, log pg(Sj|D; θ) -log pg(Si|D; θ) + log 2(j -i)).(8)" }, { "formula_coordinates": [ 3, 110.93, 637.31, 178.8, 22.47 ], "formula_id": "formula_9", "formula_text": "pg(S|D) = l S i=1 log pg(si|S<i, D) lS ,(9)" }, { "formula_coordinates": [ 3, 101.15, 694.02, 188.59, 45.51 ], "formula_id": "formula_10", "formula_text": "Lctr(θ) = S i ,S j ∈Sc,i<j max(0, pg(Sj|D; θ) -pg(Si|D; θ) + 1 λ log 2(j -i)),(10)" }, { "formula_coordinates": [ 3, 352.22, 358.87, 172.79, 11.65 ], "formula_id": "formula_11", "formula_text": "L mul (θ) = L(h) xent (θ) + α Lctr(θ),(11)" }, { "formula_coordinates": [ 3, 345.72, 533.75, 179.29, 22.47 ], "formula_id": "formula_12", "formula_text": "ph (S|D) = l S i=1 log p h (si|S<i, D) lS .(12)" } ]
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b7", "b6", "b8", "b9", "b6", "b10", "b5" ], "table_ref": [], "text": "Climate change is a complex problem with significant and far-reaching impacts on natural ecosystems and human societies [1,2]. Raising temperatures, sea-level rise, extreme weather events, changes in precipitation patterns, and ocean acidification are just some of the consequences [3,4]. These effects seriously affect food security, water resources, public health, and the economy. Therefore, reliable and efficient weather forecasting has great economic, scientific, and social significance.\nMeteorological factors such as temperature, precipitation, and humidity can provide analysis support to determine weather variation tendencies. Unlike conventional Numerical Weather Prediction (NWP) [5], which utilizes physical models to simulate meteorological dynamics in the atmosphere To address the above issues, this paper proposes Spatialtemporal Prompt Learning for Federated Weather Forecasting (also called FedWing) that leverages lightweight prompts to share meaningful representation and structural knowledge among participants. Specifically, Adaptive prompts (APs) are adopted to represent each participant's temporal dynamics and encode spatial information from the local dataset. Sharing prompts allows better knowledge sharing across heterogeneous clients, which has been proved in vision [8], time-series [7], and language [9] under the FL setting. To enhance the personalized representational ability for every client with a distinct location characteristics pattern, we regard APs as knowledge representers and perform multi-level APs-based communication during local updating. This means additionally sharing APs across clients, instead of sharing specific information performing only the server-clients communication by specific information carriers [10,7,11]. In the server, we introduce Dynamic Graph Modelling to establish spatial-temporal correlations among clients based on APs and latitude and longitude information uploaded by each participant. The proposed method can both represent nonlinear dynamics of multiple meteorological factors and establish the spatial-temporal correlation among clients to achieve high forecasting accuracy while maintaining high communication efficiency. Using the pre-trained FM as the fixed encoder can efficiently reduce the costs because neither complicated backward propagation computation nor large-scale parameters transmission between the server and clients is needed during the training stage. In addition to a globally shared large model at the server, our proposed method enables each participant to acquire a personalized model that is highly customized to tackle climate changes in a specific geographic area. As shown in Table 1, compared to training a model from scratch using FedAvg [6], a much smaller number of parameters are communicated per round when using pre-trained FM. Besides, higher performance can be achieved when using our proposed framework after the same communication rounds.\nWe quantitatively evaluate the performance of FedWing and state-of-the-art FL algorithms under the adaptive-prompts-based and fine-tune-based frameworks based on publicly available multi-sensor weather multivariate time-series datasets. We also perform extensive ablation studies to validate the effectiveness of FedWing. 
The main contributions of this work are summarized as follows:\n• We propose incorporating a simple prompting mechanism to establish a lightweight framework for federated weather forecasting. This allows each client to acquire a personalized model that is highly customized to tackle climate changes in a specific geographic area while maintaining high communication efficiency.\n• We propose Spatial-temporal Prompt Learning for Federated Weather Forecasting (FedWing), which employs lightweight prompts to represent highly nonlinear temporal dynamics and encode spatial information on the local dataset, and then establishes dynamic spatial-temporal correlations among participants to achieve better personalized updating.\n• We conduct extensive experiments on three real-world spatial-temporal multivariate time-series weather datasets from the National Aeronautics and Space Administration (NASA) to demonstrate FedWing's effectiveness and superiority. The results indicate that the proposed FedWing outperforms popular FL algorithms on both multivariate-to-univariate and multivariate-to-multivariate forecasting tasks." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b4", "b11", "b12", "b13", "b14", "b15", "b16", "b18", "b19", "b20", "b5", "b21", "b22", "b23", "b9", "b24", "b25", "b26", "b27", "b6", "b28", "b29", "b30", "b31", "b32", "b6", "b10", "b33", "b34", "b35", "b36", "b37", "b38", "b6", "b39", "b40", "b8", "b6", "b7", "b40", "b8", "b6", "b7", "b6" ], "table_ref": [], "text": "Weather forecasting. Weather forecasting is a crucial tool of meteorology that analyzes the variations in weather patterns. Traditional Numerical Weather Prediction (NWP) has been used to simulate weather processes through physical models [5]. Recently, weather forecasting has made significant strides by incorporating data-driven approaches [12][13][14]. However, these shallow models face difficulties in comprehending highly nonlinear dynamics. RNN-based models have shown promise in weather forecasting [15,16]. Besides, Transformer-based models [17,19,20] can capture non-stationary changes, which has contributed to their widespread use in weather analysis. However, the intricate spatial-temporal correlation challenges these methods. To overcome this challenge, spatial-temporal modeling methods such as ST-GCN [21] can be an effective solution for weather forecasting. Nevertheless, these models overlook that practical forecasting tasks rely on multi-sensor data, and exposure concerns persist across different regions.\nPersonalized Federated Learning. Multi-sensor weather forecasting presents significant data security concerns across regions. Federated learning (FL) is a learning paradigm that facilitates the collaborative training of models without exposing data from each participant, such as meteorological sensors. Vanilla FL suffers from the heterogeneity of client-side private data [6]. Personalized FL (PFL) aims to train a personalized model for each client. Existing PFL methods are based on various techniques. Refs. [22][23][24] add a regularization term that helps decouple personalized model optimization from global model learning. Refs. [10,25] share part of the model and keep personalized layers private to achieve personalization. Ref. [26] enables more flexible personalization via adaptively weighted aggregation. Ref. [27] studies PFL from a Model-Agnostic Meta-Learning perspective, where a meta-model is learned to generate the initialized local model for each client. In addition, Refs. 
[28,7] utilize structure information to explore the topological relations among clients.\nPre-trained Foundation Model. Pre-trained foundation models (FMs) provide highly efficient solutions for scenario-specific tasks because they can capture transferable representations for various downstream tasks using much less data, empowered by their huge number of parameters and the large-scale data available for pre-training. Nowadays, pre-trained FMs have achieved great success in natural language processing (NLP) [29] and vision, such as ViT [30], BERT [31], DETR [32], and CLIP [33]. How to maximize the representation capability of pre-trained foundation models on low-resource devices at the lowest possible cost has become a focus of attention for different real-world applications [7,11].\nPrompt Learning. Prompt learning is widely used in NLP and shows promise in improving the efficiency of language models [34,35]; it guides a model to generate more relevant output by providing it with context in the form of prompts. Because it requires few parameters and is more adaptive than fine-tuning, it has been widely applied in vision [36][37][38][39] and time-series [7,40] applications. Some works have introduced prompt techniques into FL [41,9,7,8] to reduce the computation cost [41,9] and achieve personalization [7,8]. However, these methods overlook the spatial-temporal correlation among clients with distinct geographical locations. Among them, Ref. [7] considers multiple variables within a client as individual nodes within a specific space and explores spatial associations between variables rather than geographic location patterns. This paper introduces a spatial-temporal prompt learning mechanism for federated weather forecasting, which utilizes lightweight prompts to represent temporal dynamics and encode spatial information while incorporating geographic information to enhance personalization for each client." }, { "figure_ref": [ "fig_1" ], "heading": "Problem Formulation", "publication_ref": [ "b5", "b41" ], "table_ref": [], "text": "Weather Forecasting. Consider a ground weather station that possesses multivariate time-series data denoted by X_i ∈ R^{m×n}, where m and n indicate the length of the time series and the number of variables, respectively. The sample at each time step is represented by x_t ∈ R^{1×n}. Forecasting multivariate spatial-temporal weather data requires an understanding of both the individual station's time series and the complex temporal-spatial pattern of the entire region. Based on the forecasting objective of a single station, we categorize the forecasting task into two classes. Task1: multivariate-to-univariate forecasting, i.e., predicting a specific variable in the future Q periods from all variables in the past P periods. Task2: multivariate-to-multivariate forecasting, i.e., predicting all variables in the future Q periods from all variables in the past P periods. These can be defined as follows:\nTask1: [x_{t-P}, x_{t-P+1}, ..., x_t] −f→ [x^{T1}_{t+1}, x^{T1}_{t+2}, ..., x^{T1}_{t+Q}], Task2: [x_{t-P}, x_{t-P+1}, ..., x_t] −f→ [x^{T2}_{t+1}, x^{T2}_{t+2}, ..., x^{T2}_{t+Q}], (1)\nwhere f denotes the learning system on the station, x^{T1}_t ∈ R^{1×1} is the value of the target variable at the t-th step, and x^{T2}_t ∈ R^{1×n} is the vector of all forecast variables at the t-th step. 
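To make these two task definitions concrete, the sketch below constructs (past P hours → future Q hours) training windows from a single station's array X of shape (m, n). It is an illustrative NumPy sketch rather than the paper's released code; the function name, argument names, and default P = Q = 12 (matching the 12-hour-in, 12-hour-out setting used later) are our own choices.

import numpy as np

def make_windows(X, P=12, Q=12, target_var=None):
    # Slice a station's multivariate series X (m, n) into supervised samples following Eq. 1.
    # The input window spans x_{t-P}..x_t (P+1 observed steps, as written in Eq. 1).
    # target_var=None yields Task2 (multivariate-to-multivariate) labels;
    # an integer column index yields Task1 (multivariate-to-univariate) labels.
    inputs, labels = [], []
    for t in range(P, X.shape[0] - Q):
        inputs.append(X[t - P:t + 1])                     # [x_{t-P}, ..., x_t]
        future = X[t + 1:t + Q + 1]                       # [x_{t+1}, ..., x_{t+Q}]
        labels.append(future if target_var is None else future[:, target_var:target_var + 1])
    return np.stack(inputs), np.stack(labels)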
Typical Federated Learning. In typical FL, a server manages N clients to train a uniform model collectively [6]. In each communication round t, the server selects a fraction C of all clients to participate in the training and broadcasts the global model w to these clients, who train it on their respective private datasets D_k. Each selected client obtains its local model w_k through local updates w_k ← w − η∇ℓ(w; x_i, y_i), (x_i, y_i) ∈ D_k, and uploads w_k to the server, which aggregates them to update the global model as w = Σ_k (n_k/n) w_k. The server aims to minimize the average loss of the global model over all clients' local datasets:\nF(w) := arg min_w Σ_{k=1}^{N} (n_k/n) F_k(w_k), (2)\nwhere n_k is the number of samples held by the k-th client, n is the number of samples held by all clients, and F_k(w_k) denotes the local objective of the k-th client, which can be formulated as L_k(w_k) = ℓ_k(w_k; (x_i, y_i)), where ℓ_k is the local loss function.\nSpatial-temporal Federated Weather Forecasting. For the task of spatial-temporal federated weather forecasting, each client holds a distinct dataset due to complex location-characteristic patterns, causing statistical heterogeneity. This makes typical FL unsuitable, and the task becomes a PFL problem that solves the bi-level optimization below:\nF(v; w) := arg min_{{v_1, v_2, ..., v_N}} Σ_{k=1}^{N} [ (n_k/n) F_k(v_k) + λ R(v_k, w) ], s.t. w ∈ arg min_w G(F_1(w), F_2(w), ..., F_N(w)), (3)\ni.e., R(v_k, w) := L_p(v_k, w),\nwhere each client holds a personalized model parameterized by v_k, w denotes the global model, and R(•) is a regularization term. Most existing methods struggle to handle the heterogeneity of geographic settings and ignore that the spatial-temporal correlation is influenced by factors beyond geographic location. Therefore, regularization terms based solely on distance or similarity struggle to optimize the personalized model parameters for each client. In addition, the need for real-time weather forecasting and extreme weather warnings emphasizes the importance of efficient knowledge sharing between the clients and the server. To address these issues, we propose spatial-temporal prompt learning, which explores potential correlations among clients using lightweight personalized parameters.\n4 Spatial-temporal Prompt Learning for Federated Weather Forecasting\nIn this section, we elaborate on the proposed Spatial-temporal Prompt Learning for Federated Weather Forecasting (FedWing), whose structure is shown in Figure 1. Each client possesses a pre-trained FM and utilizes adaptive prompts (APs) to encode the spatial-temporal pattern of the weather data from latent multivariate spaces. We employ a prompt-based communication strategy that transmits APs containing spatial-temporal information, facilitating the sharing of data patterns among clients and the server and enabling the update of personalized models for each client based on structural information. By utilizing prompts provided by the server, we perform local optimization using a well-crafted prompt-wise loss function. This loss function captures the spatial-temporal representation along with geographic information, taking into account potential correlations with neighboring nodes. Detailed illustrations of each procedure are provided in the remainder of this section.\nAdaptive Prompts as Knowledge Representers. Our approach proposes the utilization of APs for communication between clients and the server as well as among clients. This strategy offers three advantages over transmitting full parameters solely between clients and the server. Firstly, AP-based communication is lightweight, effectively reducing communication overheads, and is particularly suitable for low-resourced devices. Secondly, AP-based communication enables each client to learn a highly customized model by capturing distinct location-specific patterns. Lastly, AP-based inter-client communication provides sufficient pattern information instead of sharing raw data, allowing for reliable personalized updates without privacy concerns.\nWe introduce APs as knowledge representers. 
Specifically, to represent the complex nonlinear associations across time steps and across variables, we present temporal prompts (P_T) and inter-variables prompts (P_V) in parallel, which are defined as sets of learnable parameters attached to the data fed to the fixed FM. An iterative learning process of P_T and P_V is adopted to achieve more accurate multivariate time-series modeling, as shown in Algorithm 1.\nAlgorithm 1 Learning process of temporal prompts (P_T) and inter-variables prompts (P_V).\nInitialize the temporal prompt updating step m_t, the inter-variables prompt updating step n_t, the total forecast length of the time series m, the number of variables n, and the foundation model FM\nfor local updating epoch e = 1, 2, ... do\n  for time forecasting step q = 1, 2, ... do\n    X_temp = FM(∥X_ipt, P_T∥_T), P_T ∈ R^{q m_t × n}   ▷ ∥·∥_T: concatenation along the temporal dimension\n    P_T ← ∥P_T, P'_T ∈ R^{m_t × n}∥_T   ▷ P'_T: the next temporal prompt block\n  end for\n  for variable forecasting step p = 1, 2, ... do\n    X_ivar = FM(∥X_ipt, P_V∥_V), P_V ∈ R^{m × p n_t}   ▷ ∥·∥_V: concatenation along the variable dimension\n    P_V ← ∥P_V, P'_V ∈ R^{m × n_t}∥_V   ▷ P'_V: the next inter-variable prompt block\n  end for\nend for\nAfter the above learning of P_T and P_V, we use two independent weight matrices W_bt and W_bv to weight them and obtain a more comprehensive data representation, X = P_T ⊙ W_bt + P_V ⊙ W_bv. To encode the location pattern of a client, we adopt a spatial prompt P_S as in Eq. 4, which updates P_V and P_T with the geographic location pattern while encoding the client's spatial information and representing its specific location pattern. The final output of the local update is Head(F(X_ipt + X)), where Head(•) is a fine-tuned layer.\nP_S, X ← Norm(∥P_ipt, ϕ, λ∥, ∥P_X, P_S∥). (4)\nLocal Training. To represent temporal dynamics and location patterns and to explore potential associations among neighboring clients, we propose an adaptive-prompt-wise loss that includes multi-level regularization terms, i.e., global, local, and neighboring prompt terms. The adaptive-prompts-based local loss function can be formulated as follows:\nL_ap = (1/(m·n)) Σ_{i=1}^{m} Σ_{j=1}^{n} (y_{i,j} − ŷ_{i,j})^2 + R({P_i}; {P_j}_l; {P_i}_l; {P}^*), (5)\nwhere m and n denote the temporal and variable dimensions of the local weather time series, respectively, y and ŷ are the ground truth and predictions, respectively, and R({P_i}; {P_j}_l; {P_i}_l; {P}^*) is the regularization term used to measure the distance between the local prompts {P_i}, the corresponding personalized prompts {P_i}_l, the neighboring prompts {P_j}_l, and the global adaptive prompts {P}^*. The local loss function can be formulated as\nL_ap = (1/(m·n)) Σ_{i=1}^{m} Σ_{j=1}^{n} (y_{i,j} − ŷ_{i,j})^2 + (1/λ^2) L_2({P_i}, {P}^*) + (1/λ^2) L_2({P_i}, {P_i}_l) + (1/τ^2) · (1/((|N|/S_G) − 1)) Σ_{j∈N} L_2({P_i}, {P_j}_l) + 4{log_2(λ) + log_2(τ)}. (6)\nHere, λ and τ are importance coefficients that obey λ, τ ∈ (0, 1), and L_2 is an L2-style regularization term (e.g., Euclidean distance, cosine similarity, etc.). S_G represents the subgraph step used to adjust the scope of interaction between clients. The inter-client regularization term (1/τ^2) · (1/((|N|/S_G) − 1)) Σ_{j∈N} L_2({P_i}, {P_j}_l) forces local models to move closer to clients with similar patterns and away from those with significantly different patterns, thereby enabling more comprehensive personalized updates on clients.
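To ground Eqs. 5-6 in code, the following is a minimal PyTorch-style sketch of the client-side objective. It assumes cosine distance as the L_2 term, treats each client's APs as a single flattened tensor, and uses our own function and argument names (prompt_wise_loss, neighbor_prompts, etc.) rather than the released implementation; the default coefficients mirror those reported in the experimental setup.

import torch
import torch.nn.functional as F

def l2_term(p, q):
    # One admissible choice for L_2(·,·) in Eq. 6: cosine distance between flattened prompts.
    return 1.0 - F.cosine_similarity(p.flatten(), q.flatten(), dim=0)

def prompt_wise_loss(preds, targets, local_prompts, global_prompts,
                     personalized_prompts, neighbor_prompts, lam=0.7, tau=0.3, s_g=1):
    # preds/targets: (m, n) forecast window and ground truth.
    # local_prompts: this client's current APs; global/personalized/neighbor prompts follow Eq. 6.
    mse = torch.mean((targets - preds) ** 2)                        # (1/(m·n)) Σ (y − ŷ)²
    reg = (1.0 / lam ** 2) * (l2_term(local_prompts, global_prompts)
                              + l2_term(local_prompts, personalized_prompts))
    if neighbor_prompts:
        scale = max(len(neighbor_prompts) / s_g - 1.0, 1.0)         # (|N|/S_G) − 1, clamped for safety
        reg = reg + (1.0 / tau ** 2) * sum(l2_term(local_prompts, p) for p in neighbor_prompts) / scale
    reg = reg + 4.0 * (torch.log2(torch.tensor(lam)) + torch.log2(torch.tensor(tau)))
    return mse + reg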
Graph-based Server Aggregation. We introduce Dynamic Graph Modelling (DGM) in the server's aggregation process to construct the spatial-temporal correlation among clients for better personalization. DGM receives the APs uploaded by participants and geographic information (e.g., latitude and longitude coordinates) to generate graphs that reflect the potential associations between clients. Specifically, the APs {P_i}_{i=1}^{N} are categorized into three classes before being fed into DGM: (1) temporal and inter-variables prompts {P_{T,i}, P_{V,i}}_{i=1}^{N}; (2) spatial prompts {P_{S,i}}_{i=1}^{N}; (3) full adaptive prompts {P_i}_{i=1}^{N}.\nWe first construct three distinct static graphs corresponding to the above three classes, A_TV, A_S, and A, based on cosine similarity. In addition, the server generates a static graph from the geographic information via the Haversine formula [42]:\nA^{Geo}_{i,j} = 2R · tan^{-1}( sqrt( [sin^2(Δϕ/2) + cos(ϕ_i) · cos(ϕ_j) · sin^2(Δλ/2)] / [1 − (sin^2(Δϕ/2) + cos(ϕ_i) · cos(ϕ_j) · sin^2(Δλ/2))] ) ), i, j ∈ N, i ≠ j, (7)\nwhere ϕ_i and ϕ_j are the latitude coordinates of clients i and j, respectively, Δϕ = ϕ_i − ϕ_j is the difference in latitude between the two points in radians, Δλ = λ_i − λ_j is the difference in longitude between client i and client j, and R is the radius of the Earth (6536.9088 km).\nTo capture the dynamic spatial correlation among clients, we apply linear transformations parameterized by two learnable matrices W_i and W_j to the two clients when constructing the dynamic graph. The importance of the i-th client to the j-th client, with feature vectors Z_i and Z_j respectively, can be expressed as e_{i,j} = α(W_i Z_i, W_j Z_j). Then we use an additional matrix W to compute the edge weight and build the adjacency matrix as\nA_{i,j} = e_{i,j} / (1 + e^{−W[W_i Z_i − W_j Z_j]}). (8)\nFor the three different AP classes, four adjacency matrices are constructed according to Eqs. 7 and 8, namely A_Geo, A_S, A_TV, and A. We then aggregate them with an attention mechanism to achieve a more accurate correlation representation and reconstruct the APs according to these adjacency matrices as\nA' ← Attention(A_Geo, A_S, A_TV, A) = softmax( ((A_Geo − A_S) A_TV^⊤) / √d_k ) A, {P_i}_{i=1}^{N} ← α A {P_i}_{i=1}^{N} + (1 − α) A' {P_i}_{i=1}^{N}, (9)\nwhere d_k is the dimension of the adjacency matrix and α is an importance coefficient. The term A_Geo − A_S highlights the discrepancy between the actual geographic correlation and the encoded spatial correlation, enabling the dynamic adjustment of the spatial-temporal correlation among clients to achieve a more precise graph modeling.\nOptimization for FedWing. The optimization objective of FedWing is to solve a bi-level optimization problem on federated weather forecasting. FedWing applies an APs-based communication strategy, which allows the local model to be updated on top of the fixed FM while keeping the number of parameters exchanged with the server low, so as to minimize the sum of losses over all clients. The optimization objective of FedWing can be formulated below." 
}, { "figure_ref": [], "heading": "arg min", "publication_ref": [ "b5", "b42", "b21", "b26", "b43", "b44", "b27" ], "table_ref": [], "text": "{Pi};A N i=1 [ n i n F i ({P i }; D i ) + R({P i }; {P j } l ; {P i } l ; {P } * )] + τ G(A),(10)\ns.t. {P } * ∈ arg min {P1},...,{P N } N i=1 n i n F i ({P i }), {P } l ∈ arg min {Pi} l j∈N A j,i S({P i } l , {P j } l )\nwhere {P } denotes APs that include P T , P V , and P G , {P } * is the global APs, the local model was parameterized by {P } after receiving the pre-trained FM. The {P j } l is personalized local models from other clients that achieve by the additional regularization term G(•). The learned graph with the adjacent matrix A (computed by A ′ , A) is expected to be sparse and able to preserve proximity relationships among clients. The implementation of FedWing is presented in Algorithm 2. 1 , named Average Precipitation (AvePRE), Surface Temperature (SurTEMP), and Surface Upstream (SurUPS) collected by 88, 525, and 238 ground weather devices, respectively. All three datasets cover the hour-by-hour variability of 12 weather-related variables, and detailed information about datasets and setting can be found at Appendix A. 1.\nBaselines and Implementation. We compare our proposed method with popular FL algorithms, including FedAvg [6], FedProx [43], pFedMe [22], Per-FedAvg [27], FedATT [44], APFL [45], FedAMP [? ], and SFL [28], while keeping the FM is consistent. The introduction about baselines, the hyper-parameters of the foundation model, and the pre-training strategy can be found in Appendix A. 2, Appendix A. 3, and Appendix A. 4, respectively. For all baselines, we have two implementations for the local model: Conventional Fine-tuning: FM with fully connected layers as the fine-tune head (# of trainable parameters: 215,089); The proposed APs-based tuning: FM with the proposed adaptive prompts (# of trainable parameters: 159,649).\nWe use a batch size of 256, and AdamW optimizer with weigh decay 1e -4 and initial learning rate 0.01. For three datasets, the participant rate C = 0.3 by default, respectively, and the importance coefficients are γ = 0.7 and τ = 0.3. For the graph training on the server aggregation, the epoch is 40. The optimizer is SGD, with a learning rate is 0.001. The α = 0.99 during aggregation. The objective of our study is to forecast the next 12 hours using the data from the previous 12 hours. Then parameters of temporal prompt updating step m t , inter-variables prompts updating step n t , and subgraph step S G are set to 1 (other setting can be found in Appendix C). Main experiments are conducted in 25 local training epoch within 50 federated communication round. Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) are utilized as evaluation metrics. We implement all the models using PyTorch and conduct experiments on one NVIDIA RTX 3090 GPU." }, { "figure_ref": [], "heading": "Performance Comparison", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Table 2 reports the result of our method and baselines with different tuning strategy (TS) under Task1 and Task2. 
The results suggest that: (1) compared with conventional fine-tuning (TS is HDs), our proposed APs-based tuning (TS is APs) leads to higher forecasting performance while using fewer parameters (∼74%) under both Task1 and Task2; (2) when the proposed APs are fixed as the TS, our method outperforms the baselines on all three datasets and on both forecasting tasks;\n(3) compared with the graph-based personalized FL strategy SFL, our method achieves higher performance when the communicated parameters are kept consistent and the same APs setting is adopted, which demonstrates that FedWing can provide more meaningful knowledge representations and focuses on the spatial-temporal correlations instead of considering the clients' parameters only. In summary, the APs-based tuning strategy proposed in this study demonstrates superior performance compared to conventional fine-tuning for two distinct weather forecasting tasks. It achieves this with fewer parameters exchanged between the client and server, suggesting that the proposed method improves communication efficiency. Additionally, the proposed method demonstrates superior performance across the two tasks while utilizing the same TS as the baselines. These results validate the effectiveness and superiority of the proposed FedWing." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "This section presents the results of ablation experiments to demonstrate the effectiveness of the adaptive prompts, the graph-based aggregation, and the local loss function. All ablation experiments are conducted on the AvePRE dataset and keep the same settings as the former experiments.\nWe evaluate the performance of APs in seven different forms, i.e., different combinations of P_V, P_T, W_bv, W_bt, and P_S being kept or removed, as listed in Table 3. " }, { "figure_ref": [], "heading": "Conclusion and Limitations", "publication_ref": [], "table_ref": [], "text": "To tackle the heterogeneous data challenges resulting from variations in geographic areas in federated weather forecasting, this paper models the spatial-temporal weather data within a federated prompt learning framework that utilizes lightweight prompts to facilitate the sharing of spatial-temporal representations and structural knowledge. By utilizing prompt-based communication, the server can establish the topological relationships among participants, explore potential spatial-temporal correlations, and mitigate communication overhead without transmitting private data. Furthermore, our proposed method allows each participant to obtain a personalized model specifically tailored to address climate changes in a particular geographic area, in addition to the globally shared model residing at the server. Extensive experiments conducted on real-world multivariate weather time-series datasets demonstrate the superior performance and effectiveness of the proposed method.\nLimitations. Firstly, our experiments are conducted on real datasets comprising hundreds of ground-based weather stations. However, the number of ground-based devices in a given region (state) far exceeds this amount. As a result, we currently lack sufficient computational power to apply our approach to scenarios involving thousands or even tens of thousands of devices. Secondly, our approach assumes that the central server has access to the location information of each weather station within the region. However, in a cross-country or global-scale forecast system, specific latitude and longitude information may not be available due to the relevant protocols. 
Nevertheless, our method still has significant potential for weather forecasting at the global scale. " }, { "figure_ref": [], "heading": "B Discussion about Privacy", "publication_ref": [], "table_ref": [ "tab_2", "tab_7" ], "text": "Federated Learning is vulnerable to data leakage, although data is not directly exposed to other clients during collaborative training of a global model. An attacker can reverse-engineer raw data using the gradient update transmitted from a client, especially when the chosen batch size and local training step in the local training phase are small. In our setting, the same privacy leakage concerns may exist when transmitting other participants' adaptive prompts (e.g., P V , P T , P S , etc.) parameters during client-specific training. In the proposed FedWing framework, each participant holds their own local model, including a pre-trained foundation model, adaptive prompts, and some forecasting heads used for fine-tuning with private data. Among them, the adaptive prompts and forecasting heads are trainable, but the heads are not involved in the parameter-sharing process between clients and the server. This means that it is difficult to reverse-engineer the original data using gradients because not all gradients will be exposed. Additionally, we include coefficients on the local loss function, which makes it difficult for an attacker to perform inference attacks to infer the original data, even when the training reaches an infinite number of rounds: e → ∞.\nexperimental setting discussed in the main manuscript (refer to Table 2), when the updating steps of P T and P S are both set to 1, the model achieves optimal performance on Task2. However, to optimize performance on Task1, both updating steps need to be set to 6. (2) When the updating steps of P T and P S are set to 2, 3, 4, or 12, the performance cannot be optimized, despite the periodicity observed in weather variations. This section focuses only on the model performance with two consistent update steps and does not consider more varied combinations, as these would require extensive meteorological expert knowledge due to the non-stationary nature of the weather process and its irregular periodicity. We plan to explore this further in future work. Question: Why did we select the combination {1, 1} for our method?\nAnswer: The combination {1, 1} demonstrates the best result on Task2, although it is not as effective as other combinations on Task1. Given the significant uncertainty in the weather process, characterized by its non-periodic nature and environmental disturbance factors, we choose the {1, 1} combination to flexibly handle different weather processes. The other combinations are well-crafted for specific weather processes. Moreover, although the {1, 1} combination requires more iterative updates (e.g., 24 iterations to predict the weather for the next 24 hours), the time consumption is not significantly different from using the {24, 24} combination, as the difference in prompt shape due to the updating step is also present. In terms of computational resources, the multiple iterations do not result in a sharp increase because a fixed FM is used to update the adaptive prompts. Step of Subgraph Step. To examine the impact of S G on model performance, a series of experiments were conducted using various combinations of parameters. 
We selected values for S G from within the range {1, 2, 4, 6, 8, 10}, while the inter-client communication range and subgraph size considered in the local loss function and graph-based aggregation process were chosen from their corresponding ranges {88, 44, 22, 15, 11, 9}. We present the experimental results in Table 8. The results demonstrate that S G = 1 results in the model achieve suboptimal performance for Task1 and Task2, while optimal results are achieved for Task1 and Task2 when S G = 10 and S G = 6, respectively. Generally, as S G decreases, more clients will be involved in local loss functions and graph-based aggregation. In our experimental setup, not all clients participate in every round of communication for training because of the significant overhead it incurs. Thus, when S G is large, including initialized clients while ignoring those involved in training may negatively impact performance. In our setting, S G = 1 is the default configuration, which considers all clients and provides flexibility for special cases. Since only adaptive prompts P T , P S , P V with few parameters need consideration, S G = 1 does not result in significant communication overhead." }, { "figure_ref": [], "heading": "D Algorithm Analysis D.1 Optimization Objective", "publication_ref": [], "table_ref": [], "text": "As Eq. 10 mentioned in the main manuscript, arg min\n{Pi};A N i=1 [ n i n F i ({P i }; D i ) + R({P i }; {P j } l ; {P i } l ; {P } * )] + τ G(A),(13)\nThe main optimization objective of our proposed algorithm is to optimize the adaptive prompts {P }, which include P T , P V , P S . The parameters of the low-parameterized forecasting head, which only exist during Task1 and not Task2, will be excluded from the subsequent analysis. \nwe can obtain that with probability at least 1 -δ, the following holds for specific adaptive prompts, \nwhere H is the hypothesis set of head h, d is the VC-dimension of H. The a follow from the definition of Rademacher complexity\nR n (F) = E σ sup f ∈F 1 n n i=1 σ i f (x i ) ,(24)\nwhere σ 1 , σ 2 , . . . , σ n are independent Rademacher random variables that take values in {-1, 1} with equal probability, E σ denotes the expectation over the Rademacher variables, " }, { "figure_ref": [], "heading": "A Foundation Model, Dataset, and Baseline", "publication_ref": [], "table_ref": [], "text": "This section provides missing information in the main manuscript about the structure, implementation, datasets, and baselines." }, { "figure_ref": [], "heading": "A.1 Detailed Information of Datasets", "publication_ref": [], "table_ref": [], "text": "All three meteorological datasets based on multivariate time series proposed in our work are collected by NASA data website. The detailed information of these datasets in presented in Table 4.\nAvePRE. The dataset was collected by 88 meteorological satellites spanning a latitude and longitude range of (38.41055, -91.08764) to (34.75988, -86.7999). The dataset contains 12 different meteorological variables designed for forecasting surface precipitation to prevent the negative impacts of extreme rainfall on human lives and properties. The dataset includes all data monitored by these sensing devices from April 1, 2012, to February 28, 2016.\nSurTEMP. The dataset was collected by 525 meteorological satellites and observatories spanning a latitude and longitude range of (33.90689, 84.55078) to (30.63791, -79.56200). 
The dataset contains 12 different meteorological variables designed for forecasting surface temperature to prevent surface drought, which can cause sea level rise and ice melting. The dataset includes all data monitored by these devices from January 3, 2019, to May 2, 2022.\nSurUPS. The dataset was collected by 238 meteorological satellites, observatories, and solar radiation monitors spanning a latitude and longitude range of (38.84179, 81.22352) to (37.03761, -76.90420). The dataset contains 12 different meteorological variables designed for forecasting upstream longwave flux to help protect regions from abnormal thunderstorm activity. The dataset includes all data monitored by these devices from January 2, 2019, to July 29, 2022. All these datasets are recorded at hourly resolution; missing data spanning more than 12 consecutive hours are padded with zeros, while gaps of up to 2 consecutive hours are filled by interpolation.\nIn the training process, we partition the three datasets as follows: for the pre-trained foundation model, we use the first 50% of the dataset for training and validation, where the first 40% is the training set and the portion from the first 40% to the first 50% is the validation set. For the conventional fine-tuning and the proposed APs-based fine-tuning, we use the last 50% of the complete dataset for the experiments and divide the training, validation, and test sets in the ratio of 6:2:2." }, { "figure_ref": [], "heading": "A.2 Baselines", "publication_ref": [], "table_ref": [], "text": "We compare our proposed FedWing with popular FL algorithms, including FedAvg, FedProx, pFedMe, Per-FedAvg, FedATT, APFL, FedAMP, and SFL.\nFedAvg. Aggregates locally trained models to obtain a globally representative model via an averaging strategy, while preserving the privacy of each individual's data.\nFedProx. An extension of FedAvg that adds a proximal term to the objective function to encourage closer alignment with the global model 2 .\npFedMe. A PFL approach that adapts the global model to each user's local data distribution while taking into account the similarity between users to improve model generalization 3 .\nPer-FedAvg. A variation of the FedAvg algorithm that allows for personalized model updates for each client by adding client-specific parameters to the global model and optimizing them in a decentralized manner during training 4 .\nFedATT. An FL algorithm that uses attention techniques to address the heterogeneity of local data distributions (the code comes from this repository 5 )." }, { "figure_ref": [], "heading": "APFL.", "publication_ref": [], "table_ref": [], "text": "A variant of federated learning that enables asynchronous communication among the clients, allowing them to perform local updates at their own pace and reducing the overall communication cost of the system (the code comes from this repository 6 ).\nFedAMP. An FL algorithm that aims to improve the convergence speed and communication efficiency of federated optimization (the code comes from this repository 7 ).\nSFL. A PFL algorithm that uses graph structure information to build a more personalized model through client-wise personalization 8 ." }, { "figure_ref": [], "heading": "A.3 Hyper-parameters of The Foundation Model", "publication_ref": [], "table_ref": [], "text": "The foundation model employed in this study is an encoder-only Transformer. Detailed information regarding the model's hyperparameter settings is presented in Table 5. 
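To give a concrete picture of this backbone, a minimal PyTorch sketch of an encoder-only Transformer for (batch, time, variables) inputs is shown below. It is only an illustrative stand-in: the layer sizes are placeholders rather than the exact values listed in Table 5, and positional encoding and other details are omitted for brevity.

import torch.nn as nn

class EncoderOnlyFM(nn.Module):
    # Minimal encoder-only Transformer backbone for multivariate weather series (hypothetical sizes).
    def __init__(self, n_vars=12, d_model=64, n_heads=4, n_layers=3, dropout=0.1):
        super().__init__()
        self.input_proj = nn.Linear(n_vars, d_model)          # embed the 12 variables at each hour
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                           dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.output_proj = nn.Linear(d_model, n_vars)         # map hidden states back to variable space

    def forward(self, x):                                     # x: (batch, time, n_vars)
        return self.output_proj(self.encoder(self.input_proj(x)))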
" }, { "figure_ref": [], "heading": "A.4 Pre-Training Strategy for Foundation Model", "publication_ref": [ "b5" ], "table_ref": [], "text": "The pre-training strategy employed in our work for the Transformer foundation model on multivariate time series. In this approach, a binary noise mask, denoted by M , is independently created for each training sample and epoch, which is then applied to the input, denoted by X, resulting in the masked input X = M ⊙ X. For multivariate time series data, each variable is masked using an independent mask vector of length w, which alternates between segments of 0 and 1. The state transition probabilities are modeled as having a length that follows a geometric distribution with a mean of l m . This is then followed by an unmasked segment of mean length l u = 1-r r l m , where r is the masking probability. The mask rate r in our work is set to 0.15, and mean masked length l m is set to 3. The objective function for the pre-training process is formulated as follows:\nHere, X and X represent the ground truth and forecasting value, respectively. However, the objective function differs from the MSE loss function in that it considers only the prediction values on the masked locations, instead of all the elements in the multivariate time series data. It is important to note that we perform FL-based pre-training, where the epoch of local training is set to 20 within a communication round of 20. The participation rate C is 0.5, and the aggregation strategy is set to FedAvg [6] by default. Pre-trained foundation models can be found in the Supplementary file." }, { "figure_ref": [], "heading": "A.5 Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) are utilized to evaluate the performance of our proposed FedWing and baseline, which can be formulated as\nwhere n is the number of time series, T is the number of forecasting periods, y i,j is the actual value of the j-th period of the i-th time series, and ŷi,j is the predicted value of the j-th period of the i-th time series. Smaller MAE and RMSE means better model prediction performance." }, { "figure_ref": [], "heading": "C More Experiments Results", "publication_ref": [], "table_ref": [], "text": "This section presents additional experimental results regarding differential privacy and explores the major impact factors in our proposed FedWing framework." }, { "figure_ref": [], "heading": "C.1 FedWing with Differential Privacy", "publication_ref": [], "table_ref": [], "text": "To protect the privacy of each client, we introduce noise to the gradient during the server's graphbased aggregation and prompt-based inter-client communication. We compare the performance of FedWing with and without the addition of noise. This noise is multiplied by a factor and added to the shared parameters, including adaptive prompts P T , P V , P S . For this experiment, we set the factor τ = 0.01 to implement differential privacy. Table 6 presents the results of these approaches on three datasets, showing a reduced performance of FedWing for forecasting after the addition of noise. However, as indicated in Table 2 (refer to the main manuscript), it still outperforms other baselines. Moreover, since FedWing only utilizes the adaptive prompts on the server-side to generate the graph that constructs the spatial-temporal correlation among clients, adding noise solely to adaptive prompts is sufficient to ensure privacy protection. 
This results in a mitigated decline in performance due to differential privacy, compared to adding noise to all trainable parameters. " }, { "figure_ref": [], "heading": "C.2 Study of Major Impact Factors", "publication_ref": [], "table_ref": [], "text": "In this section, we perform experiments to investigate the impact of key factors of our proposed FedWing framework based on the AvePRE dataset. These factors include the updating steps of Adaptive Prompts P T and P V (see Algorithm 1), as well as the step of subgraph S G (refer to the local loss function, Eq. 6 ). The experimental settings for this section are as follows: the local updating epoch is set to 5, the communication round is set to 10, and the remaining settings are the same as those in the main manuscript's experiments." }, { "figure_ref": [], "heading": "Updating", "publication_ref": [], "table_ref": [], "text": "Step of Adaptive Prompts. We investigated six different combinations of adaptive prompt updating steps, namely {1, 2, 3, 4, 6, 12}, to explore the influence of these steps during local updating. The impact of the adaptive prompt updating steps on model performance during the local model updating process is presented in Table 7. The shapes of the prompts (P T and P V ) are affected by the number of steps, as described in Algorithm 1. The results indicate the following: (1) In the" } ]
Federated weather forecasting is a promising collaborative learning framework for analyzing meteorological data across participants from different countries and regions, thus embodying a global-scale real-time weather data predictive analytics platform to tackle climate change. This paper models meteorological data in a federated setting where many distributed low-resourced sensors are deployed in different locations. Specifically, we model the spatial-temporal weather data within a federated prompt learning framework that leverages lightweight prompts to share meaningful representation and structural knowledge among participants. Prompt-based communication allows the server to establish the structural topology relationships among participants and further explore the complex spatial-temporal correlations without transmitting private data while mitigating communication overhead. Moreover, in addition to a globally shared large model at the server, our proposed method enables each participant to acquire a personalized model that is highly customized to tackle climate changes in a specific geographic area. We demonstrate the effectiveness of our method on classical weather forecasting tasks by utilizing three spatial-temporal multivariate time-series weather datasets.
Spatial-temporal Prompt Learning for Federated Weather Forecasting
[ { "figure_caption": "nC of all clients to participate in the training. The server broadcasts the global model w to these clients, who then obtain it on their respective private datasets D k . Each selected client obtains their local model w k by using the global model in their local training process: w k ← w -η∇ℓ(w; x i , y i ), (x i , y i ) ∈ D k . The k-th client uploads their trained local model w k to the server, which aggregates them to update the global model as w = w k . The aim of the server is to minimize the average loss of the global model w on all clients' local datasets. F (w): = arg min w1,w2,...,w N N k=1", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Schematic diagram of the basic architecture of the proposed framework, the mentioned Adaptive Prompts (APs) comprise the Spatial Prompt, Temporal Prompt, and Inter-variables Prompt. Prompt-based Interclient communication (indicated by the red arrow) exchanges APs information among clients. Communication based on prompts (indicated by the green arrow) enables the exchange of APs between clients and the server.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "D. 2 2 n i=1 c 2 i222Proof of Generalization Bound Theorem D.1 Consider a federated weather forecasting system with m clients. Let D 1 , D 2 , ..., D m be the true data distribution and D1 , D2 , ..., Dm be the empirical data distribution. Denote the head h as the hypothesis from H and d be the VC-dimension of H. The total number of samples over all clients is N . Then with probability at least 1 -δ: max ({P1},{P2},...,{Pm}) We start from the McDiarmid's inequality asP[g(X 1 , ..., X n ) -E[g(X 1 , ..., X n )] ≥ ϵ] ≤ exp (-|g(x 1 , x 2 , ..., x n ) -g(x 1 , x 2 , ..., x n )| ≤ c i(16)Eq. 15 equals toP[g(•) -E[g(•)] ≤ ϵ] ≥ 1 -) -E[g(•)] ≤ ϵ(18)Let δ = exp (-2ϵ), the above can be rewritten as with the adaptive prompts at least 1 -δ, g(•) -E[g(•)]", "figure_data": "", "figure_id": "fig_2", "figure_label": "222", "figure_type": "figure" }, { "figure_caption": "Considering there are (m + 1)|{P }| prompts in total ({P } including P T , P V , P S ), by using Boole's inequality, with probability at least 1 -δ, the following holds, max({P1},{P2},...,{Pm}) 1)|{P }| δ (22)where N is the total number of samples over all clients.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "x 1 ,1x 2 , . . . , x n are the input data points, and the b follows from Jensen's inequality, so max ({P1},{P2},...,{Pm})", "figure_data": "", "figure_id": "fig_4", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": ") Initialized learning rate η, private dataset {Di} N i=1 , fixed foundation model FM , and adaptive prompts {Pi} N i=1 , participation rate C, the frozen layer of the FM {F M,layer,i } N i=1 . 2: Server executes: 3: Initialized adaptive prompts {PT,i, PV,i, W bt,i , W bs,i , PS,i} N i=1 as {Pi} N i=1 . 4: for each communication round T = 1, 2, ... do", "figure_data": "Algorithm 2 FedWing5:for each client i = 1, 2, ..., N in parallel do6:{Pi} ← LocalUpdate(FM , Di, {Pi} N i=1 )7:", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Receives FM , {F M,layer,i } N i=1 and global and personalized adaptive prompts {P } l , {P } * 20: for each local epoch e = 1, 2, ... 
do", "figure_data": "Update fixed FM layer15: 16:end for {Pi} * ← n n kN i=1 P s , wr ← n n kN i=1 wr,i▷ Update global APs and remaining parameters17: end for18: LocalUpdate(FM , {Di} N i=1 , {Pi} N i=1 , {F layer,i } N i=1 ):19: 21:Update {F M,layer,i } N i=122:Temporal and Inter-variables Prompts Updating (According to Algorithm 1)23:Compute local loss by Eq. 6.24:Update and upload Adaptive Prompts {P } N i=125: end for", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Performance comparison of the proposed FedWing with FL baselines based on the pre-trained foundation model with different tuning strategy (TS), including HDs (Conventional Fine-tuning) and APs (The proposed APs-based tuning), under Task1 and Task2, evaluation metrics for each item are presented in the format of MAE/RMSE, the Bold and Underline denote the best and second best results respectively, all results are in units of 100 times the original result for a clearer comparison. Three weather multivariate time series datasets from the National Aeronautics and Space Administration (NASA)", "figure_data": "TSAlgorithmAvePRE Task1 Task2SurTEMP Task1 Task2SurUPS Task1 Task2FedAvg [6]34.6/44.8 56.0/90.1 47.6/64.4 56.5/78.3 53.5/74.2 54.1/74.6FedProx [43]31.7/42.1 54.4/87.2 44.4/62.7 52.9/76.4 51.2/69.5 52.3/72.4Per-FedAvg [27] 30.9/40.7 54.3/71.5 41.4/60.9 51.8/73.3 50.2/69.7 51.7/71.8HDsAPFL [45] FedAMP [? ]32.5/43.8 56.1/84.9 46.2/63.1 59.4/77.3 54.3/73.7 53.8/73.4 31.9/41.3 54.7/84.2 43.8/62.9 52.3/73.7 51.5/70.0 53.2/73.4FedATT [44]34.5/44.7 63.2/89.8 48.7/63.1 61.0/79.4 58.8/73.6 64.6/82/0pFedMe [22]32.2/42.7 64.0/85.2 42.9/61.8 50.7/74.6 51.7/70.1 52.5/72.0SFL [28]30.0/40.2 53.1/81.2 39.9/62.6 51.7/76.1 48.0/69.1 51.0/70.4FedAvg [6]32.4/42.8 51.0/76.3 41.2/61.7 54.4/76.8 52.1/72.2 53.2/73.8FedProx [43]27.1/38.0 47.1/70.2 39.7/61.5 51.7/75.2 48.1/67.1 51.0/67.6Per-FedAvg [27] 29.3/37.9 45.3/67.4 37.8/60.0 51.3/72.2 47.6/68.2 50.1/69.5APFL [45]29.5/38.7 46.0/67.7 38.6/64.2 55.7/75.7 56.2/67.1 59.7/68.2APsFedAMP [? 
]27.1/37.4 46.7/69.7 39.2/61.0 51.2/73.1 51.5/67.9 52.1/69.3FedATT [44]30.5/40.8 58.7/79.7 38.4/63.7 52.4/79.1 50.9/70.0 53.5/72.6pFedMe [22]28.2/39.7 47.5/69.9 38.5/61.4 50.5/74.1 48.4/66.9 51.2/68.8SFL [28]31.1/39.2 46.4/68.8 37.6/59.3 54.2/73.7 47.2/66.0 49.8/67.2FedWing (Ours) 23.7/32.9 44.3/65.5 35.7/55.0 51.4/71.2 43.9/62.5 45.2/63.95 ExperimentsDatasets.", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results of ablation studies about adaptive prompts setting and the proposed local loss function, evaluation metrics for each item are presented in the format of MAE/RMSE, the Bold denotes the optimal results, note that all results displayed are the original results ×100.", "figure_data": "PVPTWbv Wbt PSAggregation StrategyLocal LossTask1Task2w/ow--w{Pi} N i=1 ← AT {Pi} N i=1 + (1 -α)AS{Pi} N i=1Our loss MSE29.9/40.4 53.7/78.4 31.7/42.4 54.4/80.0ww/o--w{Pi} N i=1 ← AS{Pi} N i=1 + (1 -α)AS{Pi} N i=1Our loss MSE28.2/37.2 57.1/85.0 29.2/39.0 58.2/85.9w/o w/o--w{Pi} N i=1 ← AS{Pi} N i=1Our loss MSE30.8/41.2 52.0/77.7 31.8/42.4 54.8/78.9wwwww/o{Pi} N i=1 ← AST {Pi} N i=1Our loss MSE30.1/40.9 48.7/74.7 31.6/42.1 50.9/76.0ww/o--w/o{Pi} N i=1 ← AS{Pi} N i=1Our loss MSE29.4/39.8 56.2/84.7 31.1/40.8 59.0/87.8w/ow--w/o{Pi} N i=1 ← A{Pi} N i=1Our loss MSE30.1/40.6 53.7/79.0 31.7/43.5 54.2/80.5wwwww{Pi} N i=1 ← αA{Pi} N i=1 + (1 -α)A ′ {Pi} N i=1Our loss MSE23.7/32.8 44.3/65.5 25.0/34.4 47.7/68.0", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "FullAdaptive Prompts. Note that the dependency among P V , P T , W bv , W bt , e.g., if P V exits but P T is not exist, W bs , W bt will also not exist. In addition, based on the above different APs forms, we perform ablation experiments on different local loss functions: (1) Conventional MSE without any regularization terms; (4) Our proposed (Eq. 6). These results as shown in Table3indicate that: (1) Our proposed local loss function outperforms the MSE loss in different tasks under all ablation settings regarding APs; (2) when keeping the loss function consistent, any of the prompts (P T , P V , P S ) in APs can boost the model's performance. 
Overall, the proposed APs can effectively represent nonlinear temporal dynamics and potential spatial information on clients, and the effectiveness and superiority of our proposed APs-based communication strategy have been demonstrated.", "figure_data": "", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Information of three weather forecasting datasets, which the bold is the forecasting weatherrelated variable in each dataset in the multivariate to unvariate forecasting task.", "figure_data": "DatasetPeriodDevicesFeatures", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Impact of P T , P V updating steps, the Bold and Underline denote the best and the second best result respectively, all results are 100× the original result for a clearer comparison.", "figure_data": "Updating step of PT Updating step of PVTask Class MAE ↓ RMSE ↓11Task1 Task239.9 51.550.2 79.522Task1 Task238.1 53.748.8 85.033Task1 Task237.1 52.947.9 80.444Task1 Task138.6 53.247.7 80.166Task1 Task235.7 52.646.1 80.71212Task1 Task239.3 53.750.3 84.8", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Impact of subgraph step, the Bold and Underline denote the best and the second best result respectively, all results are 100× the original result for a clearer comparison.Step of subgraph SG Range of Loss & Grap-based Agg. Task Class MAE ↓ RMSE ↓", "figure_data": "188Task1 Task236.9 51.747.0 79.4244Task1 Task239.1 51.549.9 79.0422Task1 Task238.1 51.849.7 78.8615Task1 Task238.8 54.049.1 81.9811Task1 Task239.3 54.449.7 81.8109Task1 Task235.9 52.445.6 79.6", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" } ]
Shengchao Chen; Guodong Long; Tao Shen; Tianyi Zhou; Jing Jiang
[ { "authors": "Jerry M Thomas R Karl; Thomas C Melillo; Peterson", "journal": "Cambridge University Press", "ref_id": "b0", "title": "Global climate change impacts in the United States: a state of knowledge report from the US Global Change Research Program", "year": "2009" }, { "authors": "Tord Kjellstrom; David Briggs; Chris Freyberg; Bruno Lemke; Matthias Otto; Olivia Hyatt", "journal": "Annual review of public health", "ref_id": "b1", "title": "Heat, human performance, and occupational health: a key issue for the assessment of global climate change impacts", "year": "2016" }, { "authors": "Stefan Hagemann; Cui Chen; Douglas B Clark; Sonja Folwell; Simon N Gosling; Ingjerd Haddeland; Naota Hanasaki; Jens Heinke; Fulco Ludwig; Frank Voss", "journal": "Earth System Dynamics", "ref_id": "b2", "title": "Climate change impact on available water resources obtained using multiple global climate and hydrology models", "year": "2013" }, { "authors": "Stéphane Hallegatte; Nicola Ranger; Olivier Mestre; Patrice Dumas; Jan Corfee-Morlot; Celine Herweijer; Robert Muir; Wood ", "journal": "Climatic change", "ref_id": "b3", "title": "Assessing climate change impacts, sea level rise and storm surge risk in port cities: a case study on copenhagen", "year": "2011" }, { "authors": "Peter Bauer; Alan Thorpe; Gilbert Brunet", "journal": "Nature", "ref_id": "b4", "title": "The quiet revolution of numerical weather prediction", "year": "2015" }, { "authors": "Brendan Mcmahan; Eider Moore; Daniel Ramage; Seth Hampson; Blaise Aguera Y Arcas", "journal": "PMLR", "ref_id": "b5", "title": "Communication-efficient learning of deep networks from decentralized data", "year": "2017" }, { "authors": "Shengchao Chen; Guodong Long; Tao Shen; Jing Jiang", "journal": "", "ref_id": "b6", "title": "Prompt federated learning for weather forecasting: Toward foundation models on meteorological data", "year": "2023" }, { "authors": "Guanghao Li; Wansen Wu; Yan Sun; Li Shen; Baoyuan Wu; Dacheng Tao", "journal": "", "ref_id": "b7", "title": "Visual prompt based personalized federated learning", "year": "2023" }, { "authors": "Haodong Zhao; Wei Du; Fangqi Li; Peixuan Li; Gongshen Liu", "journal": "", "ref_id": "b8", "title": "Reduce communication costs and preserve privacy: Prompt tuning method in federated learning", "year": "2022" }, { "authors": "Xiaoxiao Li; Meirui Jiang; Xiaofei Zhang; Michael Kamp; Qi Dou", "journal": "", "ref_id": "b9", "title": "Fedbn: Federated learning on non-iid features via local batch normalization", "year": "2021" }, { "authors": "Yue Tan; Guodong Long; Jie Ma; Lu Liu; Tianyi Zhou; Jing Jiang", "journal": "", "ref_id": "b10", "title": "Federated learning from pre-trained models: A contrastive learning approach", "year": "2022" }, { "authors": "Ling Chen; Xu Lai", "journal": "IEEE", "ref_id": "b11", "title": "Comparison between arima and ann models used in short-term wind speed forecasting", "year": "2011" }, { "authors": "I Nicholas; Ravi Sapankevych; Sankar", "journal": "IEEE computational intelligence magazine", "ref_id": "b12", "title": "Time series prediction using support vector machines: a survey", "year": "2009" }, { "authors": "Cyril Voyant; Marc Muselli; Christophe Paoli; Marie-Laure Nivet", "journal": "Energy", "ref_id": "b13", "title": "Numerical weather prediction (nwp) and hybrid arma/ann model to predict global radiation", "year": "2012" }, { "authors": "Xingjian Shi; Zhourong Chen; Hao Wang; Dit-Yan Yeung; Wai-Kin Wong; Wang-Chun Woo", "journal": "Advances in neural information 
processing systems", "ref_id": "b14", "title": "Convolutional lstm network: A machine learning approach for precipitation nowcasting", "year": "2015" }, { "authors": "Aditya Grover; Ashish Kapoor; Eric Horvitz", "journal": "", "ref_id": "b15", "title": "A deep hybrid model for weather forecasting", "year": "2015" }, { "authors": "Haoyi Zhou; Shanghang Zhang; Jieqi Peng; Shuai Zhang; Jianxin Li; Hui Xiong; Wancai Zhang", "journal": "", "ref_id": "b16", "title": "Informer: Beyond efficient transformer for long sequence time-series forecasting", "year": "2021" }, { "authors": "Tian Zhou; Ziqing Ma; Qingsong Wen; Xue Wang; Liang Sun; Rong Jin", "journal": "", "ref_id": "b17", "title": "Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting", "year": "2022" }, { "authors": "Haixu Wu; Jiehui Xu; Jianmin Wang; Mingsheng Long", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b18", "title": "Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting", "year": "2021" }, { "authors": "Shengchao Chen; Ting Shu; Huan Zhao; Guo Zhong; Xunlai Chen", "journal": "", "ref_id": "b19", "title": "Tempee: Temporalspatial parallel transformer for radar echo extrapolation beyond auto-regression", "year": "2023" }, { "authors": "Bing Yu; Haoteng Yin; Zhanxing Zhu", "journal": "", "ref_id": "b20", "title": "Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting", "year": "2017" }, { "authors": "Nguyen Canh T Dinh; Josh Tran; Nguyen", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Personalized federated learning with moreau envelopes", "year": "2020" }, { "authors": "Filip Hanzely; Slavomír Hanzely; Samuel Horváth; Peter Richtárik", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Lower bounds and optimal algorithms for personalized federated learning", "year": "2020" }, { "authors": "Tian Li; Shengyuan Hu; Ahmad Beirami; Virginia Smith", "journal": "PMLR", "ref_id": "b23", "title": "Ditto: Fair and robust federated learning through personalization", "year": "2021" }, { "authors": "Liam Collins; Hamed Hassani; Aryan Mokhtari; Sanjay Shakkottai", "journal": "PMLR", "ref_id": "b24", "title": "Exploiting shared representations for personalized federated learning", "year": "2021" }, { "authors": "Michael Zhang; Karan Sapra; Sanja Fidler; Serena Yeung; Jose M Alvarez", "journal": "", "ref_id": "b25", "title": "Personalized federated learning with first order model optimization", "year": "2020" }, { "authors": "Alireza Fallah; Aryan Mokhtari; Asuman Ozdaglar", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b26", "title": "Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach", "year": "2020" }, { "authors": "Fengwen Chen; Guodong Long; Zonghan Wu; Tianyi Zhou; Jing Jiang", "journal": "", "ref_id": "b27", "title": "Personalized federated learning with graph", "year": "2022" }, { "authors": "Rishi Bommasani; Drew A Hudson; Ehsan Adeli; Russ Altman; Simran Arora; Sydney Von Arx; Jeannette Michael S Bernstein; Antoine Bohg; Emma Bosselut; Brunskill", "journal": "", "ref_id": "b28", "title": "On the opportunities and risks of foundation models", "year": "2021" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; 
Sylvain Gelly", "journal": "", "ref_id": "b29", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b30", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "Springer", "ref_id": "b31", "title": "End-to-end object detection with transformers", "year": "2020" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b32", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Taylor Shin; Yasaman Razeghi; Robert L Logan; I V ; Eric Wallace; Sameer Singh", "journal": "", "ref_id": "b33", "title": "Autoprompt: Eliciting knowledge from language models with automatically generated prompts", "year": "2020" }, { "authors": "Timo Schick; Hinrich Schütze", "journal": "", "ref_id": "b34", "title": "Exploiting cloze questions for few shot text classification and natural language inference", "year": "2020" }, { "authors": "Yuan Yao; Ao Zhang; Zhengyan Zhang; Zhiyuan Liu; Tat-Seng Chua; Maosong Sun", "journal": "", "ref_id": "b35", "title": "Cpt: Colorful prompt tuning for pre-trained vision-language models", "year": "2021" }, { "authors": "Yuhang Zang; Wei Li; Kaiyang Zhou; Chen Huang; Chen Change Loy", "journal": "", "ref_id": "b36", "title": "Unified vision and language prompt learning", "year": "2022" }, { "authors": "Shangchen Zhou; Kelvin Chan; Chongyi Li; Chen Change Loy", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b37", "title": "Towards robust blind face restoration with codebook lookup transformer", "year": "2022" }, { "authors": "Menglin Jia; Luming Tang; Bor-Chun Chen; Claire Cardie; Serge Belongie; Bharath Hariharan; Ser-Nam Lim", "journal": "Springer", "ref_id": "b38", "title": "Visual prompt tuning", "year": "2022" }, { "authors": "Hao Xue; Flora D Salim", "journal": "", "ref_id": "b39", "title": "Prompt-based time series forecasting: A new task and dataset", "year": "2022" }, { "authors": "Tao Guo; Song Guo; Junxiao Wang; Wenchao Xu", "journal": "", "ref_id": "b40", "title": "Promptfl: Let federated participants cooperatively learn prompts instead of models-federated learning in age of foundation model", "year": "2022" }, { "authors": "Robusto Carl", "journal": "The American Mathematical Monthly", "ref_id": "b41", "title": "The cosine-haversine formula", "year": "1957" }, { "authors": "Tian Li; Anit Kumar Sahu; Manzil Zaheer; Maziar Sanjabi; Ameet Talwalkar; Virginia Smith", "journal": "Proceedings of Machine learning and systems", "ref_id": "b42", "title": "Federated optimization in heterogeneous networks", "year": "2020" }, { "authors": "Jing Jiang; Shaoxiong Ji; Guodong Long", "journal": "World Wide Web", "ref_id": "b43", "title": "Decentralized knowledge acquisition for mobile internet applications", "year": "2020" }, { "authors": "Yuyang Deng; Mohammad Mahdi Kamani; Mehrdad Mahdavi", "journal": "", "ref_id": "b44", "title": "Adaptive personalized federated learning", "year": "2020" }, { "authors": "Yutao Huang; Lingyang Chu; Zirui Zhou; Lanjun Wang; Jiangchuan Liu; Jian Pei; Yong Zhang", "journal": "", "ref_id": 
"b45", "title": "Personalized cross-silo federated learning on non-iid data", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 173.91, 254.73, 330.76, 34.46 ], "formula_id": "formula_0", "formula_text": "[x t-P , x t-P +1 , • • • , x t ] f -→ x T 1 t+1 , x T 1 t+2 , • • • , x T 1 t+Q , Task2: [x t-P , x t-P +1 , • • • , x t ] f -→ x T 2 t+1 , x T 2 t+2 , • • • , x T 2 t+Q ,(1)" }, { "formula_coordinates": [ 4, 334.86, 431.78, 169.8, 22.31 ], "formula_id": "formula_1", "formula_text": "n k n F k (w k ),(2)" }, { "formula_coordinates": [ 4, 108, 488.55, 106.57, 9.65 ], "formula_id": "formula_2", "formula_text": "L k (w k ) = ℓ k (w k ; (x i , y i ))" }, { "formula_coordinates": [ 4, 198.26, 563.06, 306.41, 51.08 ], "formula_id": "formula_3", "formula_text": "F (v; w): = arg min {v1,v2,...,v N } N k=1 n k n F k (v k ) + λR(v k , w), s.t. w ∈ arg min w G(F 1 (w), F 2 (w), ..., F N (w)),(3)" }, { "formula_coordinates": [ 4, 276.87, 616.53, 53.98, 11.72 ], "formula_id": "formula_4", "formula_text": "= L p (v k , w)," }, { "formula_coordinates": [ 5, 143.87, 635.64, 360.14, 22.02 ], "formula_id": "formula_5", "formula_text": "Xtemp = FM (∥Xipt, PT ∥ T ), PT ∈ R qm t ×n ▷ ∥.∥ T : concat along temporal dimension PT ← ∥PT , P ′ T ∈ R m t ×n ∥ T ▷ P ′ T :" }, { "formula_coordinates": [ 5, 143.87, 676.71, 360.14, 22.02 ], "formula_id": "formula_6", "formula_text": "Xivar = FM (∥Xipt, PV ∥ S ), PV ∈ R m×pn t ▷ ∥.∥ V : concat along variable dimension PV ← ∥PV , P ′ V ∈ R m×n t ∥ V ▷ P ′ V :" }, { "formula_coordinates": [ 6, 218.08, 144.95, 286.59, 9.68 ], "formula_id": "formula_7", "formula_text": "P S , X ← N orm(∥P ipt , ϕ, λ∥, ∥P X , P S ∥).(4)" }, { "formula_coordinates": [ 6, 168.69, 221.88, 335.98, 30.32 ], "formula_id": "formula_8", "formula_text": "L ap = 1 m • n m i=1 n j=1 (y i,j -ŷi,j ) 2 + R({P i }; {P j } l ; {P i } l ; {P } * ),(5)" }, { "formula_coordinates": [ 6, 144.59, 312.75, 360.08, 61.73 ], "formula_id": "formula_9", "formula_text": "L ap = 1 m • n m i=1 n j=1 (y i,j -ŷi,j ) 2 + 1 λ 2 L 2 ({P i }, {P } * ) + 1 λ 2 L 2 ({P i }, {P i } l ) + 1 τ 2 • 1 (|N |/S G ) -1 j∈N L 2 ({P i }, {P j } l ) + 4{log 2 (λ) + log 2 (τ )}.(6)" }, { "formula_coordinates": [ 6, 109.2, 412.98, 59.33, 14.1 ], "formula_id": "formula_10", "formula_text": "1 τ 2 • 1 (|N |/S G )-1" }, { "formula_coordinates": [ 6, 108, 515.38, 396, 24.55 ], "formula_id": "formula_11", "formula_text": "V,i } N i=1 ; (2) Spatial Prompts {P S,i } N i=1 ; (3) Full Adaptive Prompts {P i } N i=1 ." 
}, { "formula_coordinates": [ 6, 115.44, 571.61, 389.23, 29.51 ], "formula_id": "formula_12", "formula_text": "A Geo i,j = 2R • tan -1 sin 2 ( ∆ϕ 2 ) + cos(ϕ i ) • cos(ϕ j ) • sin 2 ( ∆λ 2 ) 1 -(sin 2 ( ∆ϕ 2 ) + cos(ϕ i ) • cos(ϕ j ) • sin 2 ( ∆λ 2 ))) , i, j ∈ N , i ̸ = j,(7)" }, { "formula_coordinates": [ 6, 241.53, 701.77, 259.26, 22.65 ], "formula_id": "formula_13", "formula_text": "A i,j = e i,j 1 + e -W [WiZi-Wj Zj ] .(8" }, { "formula_coordinates": [ 7, 108, 218.88, 396, 21.85 ], "formula_id": "formula_14", "formula_text": "{Pi} N i=1 ← αA{Pi} N i=1 + (1 -α)A ′ {Pi} N i=1 ▷ Update personalized APs 14: {F M,layer,i } N i=1 ← αA{F M,layer,i } N i=1 + (1 -α)A ′ {F M,layer,i } N i=1 ▷" }, { "formula_coordinates": [ 7, 151.43, 422.73, 353.24, 41.26 ], "formula_id": "formula_15", "formula_text": "A ′ ← Attention(A Geo , A S , A T V , A) = softmax (A Geo -A S )A ⊤ T V √ d k A, {P i } N i=1 ← αA{P i } N i=1 + (1 -α)A ′ {P i } N i=1 ,(9)" }, { "formula_coordinates": [ 7, 135.13, 591.76, 369.54, 30.32 ], "formula_id": "formula_16", "formula_text": "{Pi};A N i=1 [ n i n F i ({P i }; D i ) + R({P i }; {P j } l ; {P i } l ; {P } * )] + τ G(A),(10)" }, { "formula_coordinates": [ 7, 130.32, 626.71, 351.36, 30.48 ], "formula_id": "formula_17", "formula_text": "s.t. {P } * ∈ arg min {P1},...,{P N } N i=1 n i n F i ({P i }), {P } l ∈ arg min {Pi} l j∈N A j,i S({P i } l , {P j } l )" }, { "formula_coordinates": [ 18, 156.03, 285.45, 348.64, 30.32 ], "formula_id": "formula_18", "formula_text": "{Pi};A N i=1 [ n i n F i ({P i }; D i ) + R({P i }; {P j } l ; {P i } l ; {P } * )] + τ G(A),(13)" }, { "formula_coordinates": [ 19, 231.66, 574.22, 273, 30.32 ], "formula_id": "formula_21", "formula_text": "R n (F) = E σ sup f ∈F 1 n n i=1 σ i f (x i ) ,(24)" } ]
10.1016/j.socscimed.2022.114870
2023-12-06
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b14", "b43", "b66", "b13", "b10", "b30", "b31", "b55", "b11", "b5", "b52", "b25", "b44", "b32", "b71", "b2", "b68", "b34", "b52" ], "table_ref": [], "text": "Through personal experience sharing, humans are able to feel the sting of another person's pain and the warmth of another person's joy. This process of empathy is foundational in the ability to connect with others, develop emotional resilience, and take prosocial actions in the world (Coke et al., 1978;Morelli et al., 2015;Vinayak and Judge, 2018;Cho and Jeon, 2019). Today, there is more visibility into the lives of others than ever before, yet loneliness and apathy are widespread (Buecker et al., 2021; Figure 1: Examples of empathically similar and dissimilar stories. Highlighted are the features of our empathic similarity framework (main event, emotion, and moral/takeaway). Narrator A and B are more likely to empathize with one another over their shared feelings of isolation. Konrath, 2013;Konrath et al., 2011). While these challenges cannot be solved with technology alone, AI systems can be developed to bolster emotional support, empathy, and truly meaningful connections through fostering personal experience sharing (Sagayaraj et al., 2022;Chaturvedi et al., 2023;Berridge et al., 2023). In order to do so, these systems must be able to reason about complex social and emotional phenomena between people.\nIn this work, we introduce the task of modeling empathic similarity, which we define as people's perceived similarity and resonance to others' experiences. For example, in Figure 1, empathic similarity aims to capture that Narrator A, who feels lonely in their small town, is likely to empathize with Narrator B, who is feeling isolated at their new job. Crucially, empathic similarity differs from traditional notions of textual similarity that have been the main focus of NLP work (e.g., semantic similarity; Reimers and Gurevych, 2019); Narrator A will likely not empathize with Narrator C, despite both stories having higher semantic similarity.\nWe operationalize empathic similarity around alignment in three features of a personal story (highlighted in Figure 1): its main event, its emo-tional reaction, and its overall moral or story takeaway (Hodges et al., 2010;Morelli et al., 2017;Krebs, 1976;Wondra and Ellsworth, 2015;Bal and Veltkamp, 2013;Walker and Lombrozo, 2017;Labov and Waletzky, 1997), as motivated by social psychology and narratology literature. From our definition, empathic similarity arises from the interplay of the main events, emotions, and morals in story, where some components or all components must be similar in order for two narrators to resonate with one another. For example, Narrator A and B both experience loneliness, even though their actual situations are different (living in a small town versus working at a company).\nTo enable machines to model empathic similarity, we introduce EMPATHICSTORIES,1 a corpus of 1,500 personal stories, with crowdsourced annontations of the free-text summaries of the main event, emotion, and moral of the stories, as well as an empathic similarity score between 2,000 pairs of stories. 
We find that finetuning on our paired stories dataset to predict empathic similarity improves performance on automatic metrics as compared to off-the-shelf semantic similarity methods.
While automatic evaluation is a valuable signal of model quality, it is crucial to showcase the real-world impact of our task on improving empathy towards people's stories. As such, we conducted a full user study with 150 participants who wrote their own personal journal entries and were presented stories retrieved by our model (and by a semantic similarity baseline). Our results show that users empathize significantly more with stories retrieved by our finetuned empathic similarity model compared to those from a semantic similarity baseline (SBERT; Reimers and Gurevych, 2019). Our findings highlight the applicability of our framework, dataset, and model towards fostering meaningful human-human connections by enabling NLP systems to reason about complex interpersonal social-emotional phenomena." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b57", "b15", "b16", "b37", "b12", "b41", "b18", "b46", "b56", "b34", "b17", "b52", "b50", "b6", "b69", "b70", "b45", "b33", "b7", "b74", "b4", "b6", "b27", "b61" ], "table_ref": [], "text": "Document similarity is a well-defined task in NLP (Salton et al., 1997;Damashek, 1995;Deerwester et al., 1990;Landauer and Dumais, 1997), but few have applied this work to matching personal narratives based on shared emotional experiences (Chaturvedi et al., 2018;Lin et al., 2014). One study used Latent Dirichlet Allocation (LDA) to cluster cyberbullying stories and match these stories based on similarity in theme (Dinakar et al., 2012), but discovered that only 58.3% found the matched story to be helpful if provided to the narrator of the original story.
Other work has explored ways to bridge the features of a story and human-perceived similarity of stories (Nguyen et al., 2014). Saldias and Roy (2020) found that people use Labov's action (series of events) and evaluation (narrator's needs and desires) clauses to identify similarity in personal narratives (Labov and Waletzky, 1997). Their findings support our decision to focus on modeling events, emotions, and morals within stories.
Most relevant to our work are recent advances in social and emotional commonsense reasoning using language models. Specifically, prior methods have used finetuning of language models such as BERT (Devlin et al., 2019;Reimers and Gurevych, 2019) and GPT-2 (Radford et al.) to model events and the emotional reactions caused by everyday events (Rashkin et al., 2019, 2018;Sap et al., 2019b;Bosselut et al., 2019;Wang et al., 2022;West et al., 2022;Mostafazadeh et al., 2020) as well as predicting empathy, condolence, or prosocial outcomes (Lahnala et al., 2022a;Kumano et al., 2017;Boukricha et al., 2013;Zhou and Jurgens, 2020;Bao et al., 2021). Understanding the emotional reactions elicited by events is a challenging task for many NLP systems, as it requires commonsense knowledge and extrapolation of meanings beyond the text alone. Prior works use commonsense knowledge graphs to infer and automatically generate commonsense knowledge of emotional reactions and reasoning about social interactions (Sap et al., 2019c,b;Bosselut et al., 2019;Hwang et al., 2021).
However, there are still many under-explored challenges in developing systems that have social intelligence and the ability to infer states between people (Sap et al., 2022).\nIn contrast to previous works, we present a task for reasoning between pairs of stories, beyond predicting social commonsense features of texts alone. Our work builds on top of prior work by developing a framework around empathic resonance in personal narratives in addition to assessing the human effect of AI-retrieved stories on empathic response beyond automatic metrics. Unlike previous works, our human evaluation is a full user study to see how the model performs given a story that the users told themselves, which is much more aligned with real-world impact." }, { "figure_ref": [], "heading": "Empathic Aspects of Personal Stories", "publication_ref": [], "table_ref": [], "text": "Modeling empathic similarity of stories requires reasoning beyond their simple lexical similarities (see Figure 1). In this section, we briefly discuss how social science scholars have conceptualized empathy ( §3.1) and draw on empathy definitions relevant for the NLP domain (Lahnala et al., 2022b). Then, we introduce our framework for modeling empathic similarity of stories and its three defining features ( §3.2)." }, { "figure_ref": [], "heading": "Background on Empathy and Stories", "publication_ref": [ "b53", "b25", "b72", "b44", "b32", "b71", "b25", "b22", "b1", "b8", "b26", "b67" ], "table_ref": [], "text": "Empathy, broadly defined as the ability to feel or understand what a person is feeling, plays a crucial role in human-human connections. Many prior works in social psychology and narrative psychology find that the perceived similarity of a personal experience has effects on empathy (Roshanaei et al., 2019;Hodges et al., 2010;Wright, 2002;Morelli et al., 2017;Krebs, 1976;Wondra and Ellsworth, 2015). For example, Hodges et al. (2010) found that women who shared similar life events to speakers expressed greater empathic concern and reported greater understanding of the speaker.\nAs with these prior works, our work uses sharing of personal stories as a means to expressing similarity in shared experiences. Personal storytelling as a medium itself has the ability to reduce stress, shift attitudes, elicit empathy, and connect others (Green and Brock, 2000;Andrews et al., 2022;Brockington et al., 2021). In fact, some research has shown that when telling a story to a second listener, speakers and listeners couple their brain activity, indicating the neurological underpinnings of these interpersonal communications (Honey et al., 2012;Vodrahalli et al., 2018)." }, { "figure_ref": [], "heading": "Empathic Similarity in Personal Stories", "publication_ref": [ "b19", "b64", "b25", "b44", "b32", "b71", "b68", "b2" ], "table_ref": [], "text": "We define empathic similarity as a measure of how much the narrators of a pair of stories would empathize with one another. While there are many ways to express empathy, we focus specifically on situational empathy, which is empathy that occurs in response to a social context, conveyed through text-based personal narratives (Fabi et al., 2019).\nWe operationalize an empathic similarity framework grounded in research from social and narra-tive psychology discussed in §3.1. 
Our framework differs from prior work (Sharma et al., 2020) in that it is expanded to the relationship between two people's experiences, rather than how empathetically someone responds, and focuses on learning a continuous similarity signal as opposed to detecting the presence of empathy. This distinction is important, as someone may be able to express condolences to a personal experience, but not necessarily relate to the experience itself. The core features of empathic similarity we identify are explained below, and we show how these features contribute to empathic similarity in Appendix A.\n(1) Main event. Prior work demonstrates that people empathize more with experiences that are similar to their own (Hodges et al., 2010;Morelli et al., 2017;Krebs, 1976). We formalize this as the main event of the story expressed in a short phrase (e.g. \"living in a small town\").\n(2) Emotional Reaction. Although two people may relate over an experience, they may differ in how they emotionally respond to the experience (e.g. \"overwhelmed with fear of being all alone\" vs \"loneliness of not having a real connection\"). Prior work shows that people have a harder time empathizing with others if they felt that the emotional response to an event was inappropriate (Wondra and Ellsworth, 2015).\n(3) Moral. Readers are able to abstract a higherlevel meaning from the story, often referred to as the moral of the story (Walker and Lombrozo, 2017) (e.g. \"the importance of having people around\"). In studying fictional narratives, prior work has found that people can empathize with the takeaway of a story, despite its fictional nature (Bal and Veltkamp, 2013)." }, { "figure_ref": [ "fig_0" ], "heading": "EMPATHICSTORIES Dataset", "publication_ref": [], "table_ref": [], "text": "We introduce EMPATHICSTORIES, a corpus of personal stories containing 3,568 total annotations. Specifically, the corpus includes empathic similarity annotations of 2,000 story pairs, and the main events, emotions, morals, and empathy reason annotations for 1,568 individual stories. An overview of our data annotation pipeline is shown in Figure 2 and data preprocessing steps are included in Appendix D. In Appendix H, we show that using LLMs for human annotation is not viable for our task. " }, { "figure_ref": [], "heading": "Data Sources", "publication_ref": [ "b59", "b56" ], "table_ref": [], "text": "We collect a diverse set of stories from sources including social media sites, spoken narratives, and crowdsourced stories. We take approximately 500 stories from each of the following sources (for a full breakdown see Appendix F). These sources contain English-written stories revolving around deep emotional experiences and open-ended conversation starters.\n(1) Online Personal Stories.\nWe scrape stories from subreddits2 about personal experiences (r/offmychest, r/todayiamhappy, and r/casualconversation). We also include a small set of stories from a public college confessions forum.\n(2) Crowdsourced Personal Stories. We use a subset of autobiographical stories from the existing Hippocorpus dataset (Sap et al., 2020), which contains recalled and imagined diary-like personal stories obtained from crowdworkers.\n(3) Spoken Personal Narratives. We use stories from the Roadtrip Nation corpus (Saldias and Roy, 2020), which contains transcribed personal stories about people's career trajectories and life stories." 
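To make the structure of the collected annotations concrete, the following is a purely illustrative sketch of how a single annotated story and a paired-story rating could be represented in Python; the field names and example values are hypothetical assumptions for exposition, not the released schema.

```python
# Hypothetical record layout for illustration only; field names and values are assumptions.
story_record = {
    "story": "I moved to a new city for a job and spent my first months eating lunch alone...",
    "event": "starting over in a new city without knowing anyone",        # main event summary
    "emotion": "lonely and anxious at first, hopeful after reaching out",  # emotional reaction summary
    "moral": "meaningful connection takes deliberate effort",              # moral / takeaway summary
}

pair_record = {
    "story_a_id": 17,
    "story_b_id": 342,
    "empathy_sim": 3.5,   # averaged 1-4 Likert rating of overall empathic similarity
    "event_sim": 3.0,
    "emotion_sim": 4.0,
    "moral_sim": 3.5,
}
```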
}, { "figure_ref": [], "heading": "Individual Story Annotation", "publication_ref": [], "table_ref": [ "tab_0", "tab_2" ], "text": "Using these stories, we designed an annotation framework on Amazon Mechanical Turk (MTurk) that asks workers to label individual story features. Then, we asked for short free responses on (1) the main event of the story, (2) the main emotional state induced by the main event, and (3) moral(s) of the story. The story and annotated summary statistics are shown in Table 1. The themes from stories are shown in Table 2, and themes for annotated summaries as well as our topic modeling approach are presented in Appendix E." }, { "figure_ref": [], "heading": "Paired Story Annotation", "publication_ref": [ "b52", "b54", "b62", "b50" ], "table_ref": [ "tab_3" ], "text": "Sampling Empathic Story Pairs. We devise a sampling method to create a sample of balanced empathically similar and dissimilar story pairs, since random sampling across all possible pairs would likely result in an unbalanced dataset with more dissimilar stories than similar stories. First, we split the 1,568 stories into a train, dev, and test set using a 75/5/20 split. Using SBERT (Reimers and Gurevych, 2019), we compute a composite similarity score using average cosine similarity of the embeddings for the story and our 3 empathy features for every possible story pair within the dataset. We randomly sample stories from each bin such that bins with higher composite similarity scores are more likely to be chosen.\nAnnotation Procedure With the sampled story pairs, we released an annotation task on Amazon MTurk, asking workers to read pairs of stories and rate various aspects of empathic similarity between the stories. Two annotators rated each story pair.\nFrom early testing, we found that the task was difficult because of the large amount of text in the stories and the cognitive load of projecting into two narrator's mental states. To simplify the task, we used ChatGPT (gpt-3.5-turbo) to summarize all the stories before presenting the pairs to annotators. While summarization may remove specific details of the stories, we find that the main event, emotion, and moral takeaway are still present. 3At the beginning of the task, we first provide the annotator with 6 examples of empathically similar stories: one positive and one negative example for stories that are empathically similar/dissimilar based on each feature: main event, emotion, and moral of the story. After reading the two stories, we ask workers to provide explanations of whether and why the narrators would empathize with one another, to prime annotators to think about the empathic relationship between the stories. We then ask workers to provide four similarity ratings on a 4-point Likert scale (1 = strongly disagree, 4 = strongly agree): (1) overall empathic similarity (how likely the two narrators would empathize with each other), (2) similarity in the main events, (3) emotions, and (4) morals of the stories.\nAgreement We aggregate annotations by averaging between the 2 raters. Agreement scores for empa-thy, event, emotion, and moral similarity across the entire dataset are shown in Table 3. While these agreement scores are seemingly on the lower side, using a softer constraint, we see that most common disagreements are at most 1 likert point away (73% of points are at most 1 distance away). We are aiming for a more descriptive annotation paradigm and thus do not expect annotators to perfectly agree (Rottger et al., 2022). 
Furthermore, our agreement rates are in line with other inherently personal and affect-driven annotation tasks (Sap et al., 2017;Rashkin et al., 2018). Given the difficulty of our task (reading longer stories and projecting the mental states of 2 characters), our agreement is comparable to prior work, which achieves around 0.51-0.91 PPA and 0.29-0.34 KA." }, { "figure_ref": [], "heading": "Modeling Empathic Similarity", "publication_ref": [], "table_ref": [], "text": "To enable the retrieval and analysis of empathically similar stories, we design a task detailed below. In Appendix B, we also propose an auxiliary reasoning task to automatically extract event, emotion, and moral features from stories, which could be used in future work to quickly generate story annotations." }, { "figure_ref": [], "heading": "Task Formulation", "publication_ref": [], "table_ref": [], "text": "Our ultimate retrieval task is, given a query story Q, to select a story S_i from a set of N stories {S_1, S_2, ..., S_N} such that i = argmax_i sim(f_θ(S_i), f_θ(Q)). Here, sim(•, •) is a similarity metric (e.g., cosine similarity) between two story representations f_θ(S_i) and f_θ(Q) that are learned from human ratings of empathic similarity.
Empathic Similarity Prediction. The overall task is, given a story pair (S_1, S_2), to return a similarity score sim(f_θ(S_1), f_θ(S_2)) such that sim(•, •) is large for empathically similar stories and small for empathically dissimilar stories." }, { "figure_ref": [], "heading": "Models", "publication_ref": [ "b9", "b52", "b9", "b38", "b61", "b9" ], "table_ref": [], "text": "We propose finetuning LLMs to learn embeddings that capture empathic similarity using cosine distance, for efficient retrieval at test time. In contrast, a popular approach is to use few-shot prompting of very large language models (e.g., GPT-3 and ChatGPT), which have shown impressive performance across a variety of tasks (Brown et al., 2020). However, in a real deployment setting, retrieval through prompting every possible pair of stories is expensive and inefficient.
Table 4: Model performance for the empathic similarity prediction task across correlation, accuracy, and retrieval metrics. r = Pearson's correlation, ρ = Spearman's correlation, Acc = accuracy, P = precision, R = recall, P k=1 = precision at k where k is 1, τ_rank = Kendall's Tau of the ranking, and ρ_rank = Spearman's correlation of the ranking. Note that all scores are multiplied by 100 for easier comparison, and the maximum for each metric is 100. In bold is the best performing and underlined is the second-best performing condition for each metric.
Baseline Models. We compare performance to finetuning with SBERT (multi-qa-mpnet-base-dot-v1) (Reimers and Gurevych, 2019;Brown et al., 2020) and the BART model (bart-base) (Lewis et al., 2019). As a few-shot baseline, we evaluate GPT-3's (text-davinci-003) and ChatGPT's (gpt-3.5-turbo) ability to distinguish empathically similar stories by using a k-shot prompting setup as done in Sap et al. (2022);Brown et al. (2020). For the query story pair, we ask for an empathic similarity score from 1-4. We compare across k = 0 examples and k = 5 examples from the training set. We also evaluate these models' ability to generate humanlike main event, emotion description, and moral summaries for each story. Again, we use a k-shot prompting setup, comparing across k = 0 and k = 10 examples. See Appendix G and Appendix C for prompts used and finetuning details.
Empathic Similarity Prediction.
We propose a biencoder architecture finetuned with a mean-squared error (MSE) loss on the cosine similarity between story pairs, compared against the empathic similarity gold labels. For each of the encoders, we use a shared pretrained transformer-based model and further finetune on the 1,500 annotated story pairs in our training set. We obtain the final embedding using mean pooling of the encoder's last hidden state." }, { "figure_ref": [], "heading": "Automatic Evaluation", "publication_ref": [ "b0", "b63" ], "table_ref": [], "text": "To evaluate the quality of empathic similarity predictions, we first compare the Spearman's and Pearson's correlations between the cosine similarity of the sentence embeddings and the gold empathic similarity labels. Next, we bin scores into binary similar/dissimilar categories (> 2.5 and ≤ 2.5, respectively) and compute the accuracy, precision, recall, and F1 scores. Finally, we compute a series of retrieval-based metrics including precision at k = 1 (what proportion of the top-ranked stories by our model are the top-ranked story as rated by human annotators), Kendall's Tau (Abdi, 2007), and Spearman's correlation (Schober et al., 2018) for the ranking of the stories (how close the overall rankings are).
Shown in Table 4, our results indicate that finetuning SBERT and BART with EMPATHICSTORIES results in performance gains across all metrics. SBERT has relatively high off-the-shelf performance, as it is trained with 215M examples specifically for semantic similarity tasks. However, we see that finetuning with our dataset, which contains far fewer training examples relative to SBERT's pretraining corpus, improves performance (+5.35 ρ, +2 accuracy). BART, which is not specifically pre-trained for semantic similarity tasks, shows even greater gains across retrieval metrics when finetuned on our dataset (+22.89 ρ, +7.75 accuracy). We find that for BART models, the finetuning improvements over the baselines are statistically significant (p = 0.02 and p = 0.0006, respectively), as measured with McNemar's test on the accuracy scores and Fisher's transformation on the correlations.
While GPT-3 and ChatGPT have high performance on the precision at k retrieval metric, in practice, it is not feasible to prompt the models with every pair of stories in the retrieval corpus." }, { "figure_ref": [], "heading": "User Study", "publication_ref": [ "b74", "b4", "b64" ], "table_ref": [], "text": "Prior work's human evaluations (Zhou and Jurgens, 2020;Bao et al., 2021;Sharma et al., 2020) have humans verify or rank model outputs based on inputs from test data. This provides a valuable signal of model quality, but isn't representative of how a model could be used in real-world applications due to input distribution mismatch and lack of personal investment in the task. Our human evaluation is a full user study to see how the model performs in retrieving a story that is empathically similar to a story that the users told themselves. Through our user study, we demonstrate the applicability of the task to improve empathy towards retrieval of human stories, as well as how our dataset was used to develop the empathic similarity retrieval task and why the task matters in the real world. Our hypothesis is: Users will empathize more with stories retrieved by our model (BART finetuned on EMPATHICSTORIES) than stories retrieved by SBERT."
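As a concrete illustration of the bi-encoder finetuning and cosine-similarity retrieval setup evaluated above and deployed in this study, the following is a minimal sketch using the sentence-transformers library; the variables train_pairs, corpus_stories, and query_story, the rescaling of the 1-4 labels to [0, 1], and the hyperparameters are illustrative assumptions rather than the authors' released training code.

```python
# Minimal sketch (not the released code): finetune a bi-encoder on empathic similarity
# labels with an MSE loss on cosine similarity, then retrieve by argmax cosine similarity.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, util

model = SentenceTransformer("multi-qa-mpnet-base-dot-v1")  # SBERT baseline encoder

# train_pairs: assumed iterable of (story_a, story_b, score) with 1-4 Likert scores.
train_examples = [
    InputExample(texts=[a, b], label=(score - 1) / 3.0)  # rescale 1-4 ratings to [0, 1]
    for a, b, score in train_pairs
]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=8)
train_loss = losses.CosineSimilarityLoss(model)  # MSE between cosine similarity and gold label

model.fit(train_objectives=[(train_loader, train_loss)], epochs=30, warmup_steps=100)

# Retrieval: embed the corpus once, then pick the story with maximal cosine similarity.
corpus_emb = model.encode(corpus_stories, convert_to_tensor=True)
query_emb = model.encode(query_story, convert_to_tensor=True)
best_idx = util.cos_sim(query_emb, corpus_emb).argmax().item()
print(corpus_stories[best_idx])
```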
}, { "figure_ref": [], "heading": "Participants and Recruitment", "publication_ref": [], "table_ref": [], "text": "We recruited a pool of 150 participants from Prolific. Participants were primarily women (58%, 38% men, 3% non-binary, 1% undisclosed) and white (73%, 8% Black, 9% other or undisclosed, 4% Indian, 3% Asian, 2 % Hispanic, 1% Native American). The mean age for participants was 37 (s.d. 11.6), and participants on average said they would consider themselves empathetic people (mean 4.3, s.d. 0.81 for Likert scale from 1-5)." }, { "figure_ref": [], "heading": "Study Protocol", "publication_ref": [ "b42", "b65" ], "table_ref": [], "text": "Participants rated their mood, wrote a personal story, then rated their empathy towards the stories retrieved by the baseline and proposed models. They additionally answered questions about the story they wrote (main event, emotion, and moral of the story) and their demographic information (age, ethnicity, and gender).\nUser Interface. We designed a web interface similar to a guided journaling app and distributed the link to the interface during the study. The interface connects to a server run on a GPU machine Writing Prompts and Stories Retrieved. We carefully designed writing prompts to present to the participants to elicit highly personal stories, inspired by questions from the Life Story Interview (McAdams, 2007), an approach from social science to gather key moments from a person's life. Conditions. We used a within-subject study design, where each participant was exposed to 2 conditions presented in random order. In Condition 1, participants read a story retrieved by our best performing model on the empathic similarity task (BART + finetuning). In Condition 2, participants read a story retrieved by SBERT. For both models, we select the best response that minimizes cosine distance. Measures. To measure empathy towards each story, we used a shortened version of the State Empathy Survey (Shen, 2010), which contains 7 questions covering affective (sharing of others' feelings), cognitive (adopting another's point of view), and associative (identification with others) aspects of situational empathy. We also ask users to provide a free-text explanation of whether and why they found the retrieved story empathically resonant, to gain qualitative insights into their experience." }, { "figure_ref": [ "fig_1", "fig_3" ], "heading": "Effects on Empathy", "publication_ref": [], "table_ref": [], "text": "With our results shown in Figure 3, we found through a paired t-test (N = 150) that users significantly empathized more with stories retrieved by our model finetuned on EMPATHICSTORIES than off-the-shelf SBERT (t(149) = 2.43, p < 0.01, Cohen's d = 0.26), validating our hypothesis. In addition, this effect was present across all three dimensions of empathy: affective (t(149) = 1.87, p = 0.03, Cohen's d = 0.21), cognitive (t(149) = Interestingly, the difference in empathic response across conditions is strongest for associative empathy, which measures how much the user can identify with the narrator of the story.\nWe examine reasons why users empathized with retrieved stories across conditions (Figure 5). Across both conditions, empathy towards a story was often related to how well-read, genuine, and consistent the story was, and if the user could empathize with the narrator's emotional reactions. 
When participants did not empathize with a retrieved story, this was more often than not due to stark differences in the main events of their own story and the model's selected story. This effect was strongest for our finetuned model, as it was trained on data with a more open definition of empathy than just sharing the same situation. In certain cases, this could result in the events being too different for the user to empathize with.\nInterestingly, we see that our model chose stories that aligned better on events and emotions with respect to the story they wrote, and participants thought the stories were more original compared to SBERT-retrieved stories. In cases where the participant did not empathize with the retrieved story, SBERT-retrieved stories were considered less consistent, less genuine, less, original, did not read as well, and did not match on emotions as well compared to our model.\nFrom qualitative responses, we see that our model retrieved stories that user empathized with based on the situation described, the emotions the narrator felt, and the takeaway of the story. For example, one participant shared that \"I found no moment where I didn't fully understand the author, and I share a very similar story about my father...its absolutely amazing...I enjoyed this study very much.\" Other participants wrote, \"I empathize heavily with this story because it has many similarities to my own. Kind of a 'started from the bottom, now we're here' vibe, which I love to see\" and \"I can relate to the feelings of abandonment and regret expressed.\"" }, { "figure_ref": [], "heading": "Future Directions for Empathic Similarity", "publication_ref": [ "b10" ], "table_ref": [], "text": "In summary, few prior works on text-based empathy have looked at modeling empathy in two-way interpersonal settings for human-to-human connection, as most focus on detecting empathy or generating empathetic utterances, and even fewer of these works have shown tangible outcomes in human studies. With increasing polarization, loneliness, and apathy (Buecker et al., 2021), personal experiences are a fundamental way people connect, yet existing social recommendation is not targeted for human-human connectivity and empathy. Empathically encoded story embeddings could be useful for a variety of NLP tasks, including retrieval, text generation, dialogue, and translation, for example in the following settings:\n• Using empathic reasoning to incorporate story retrieval in dialogue generation.\n• Generating stories that users resonate with more in conversational AI\n• Extending this work to multilingual settings and better understand translating experiences in ways that preserve empathic meaning\n• Better understand cognitive insights, such as linguistic patterns of emotion-driven communication\n• Applications and building interactions that foster story sharing across geographic, ethnic, and cultural bridges, such as developing better social media recommendation or personalization.\nWe encourage future works to explore these directions in developing more human-centered approaches for interactions with NLP systems." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This work explores how we can model empathic resonance between people's personal experiences. We focused specifically on unpacking empathy in text-based narratives through our framework of the events, emotions, and moral takeaways from personal narratives. 
We collected EMPATHICSTORIES, a diverse dataset of high-quality personal narratives with rich annotations on individual story features and empathic resonance between pairs of stories. We presented a novel task for retrieval of empathically similar stories and showed that large language models finetuned on our dataset can achieve considerable performance gains on our task. Finally, we validated the real-world efficacy of our BART-finetuned retrieval model in a user study, demonstrating significant improvements in feelings of empathy towards stories retrieved by our model compared to off-the-shelf semantic similarity retrieval.
Empathy is a complex and multi-dimensional phenomenon, intertwined with affective and cognitive states, and it is foundational in our ability to form social relationships and develop meaningful connections. In a world where loneliness and apathy are increasingly present despite the numerous ways we are now able to interact with technology-based media, understanding empathy, developing empathic reasoning in AI agents, and building new interactions to foster empathy are imperative challenges. Our work lays the groundwork towards this broader vision and demonstrates that AI systems that can reason about complex interpersonal dynamics have the potential to improve empathy and connection between people in the real world." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "With regards to our data collection and annotation framework, our annotations for empathic similarity are not first-person, which is sub-optimal given that it may be difficult for annotators to project the emotional states of two narrators. In addition, because of the complexity of our annotation task, we opted to use ChatGPT summaries of the stories during our paired story annotation, which could introduce biases depending on the quality of the generated summaries. However, given the inherent difficulty of the task, we found this reduction necessary to achieve agreement and reduce noise in our dataset, and we found that the important features were still present in the summaries. Future work could use our human experimental setup to collect first-person labels over the entire stories, rather than the automatic summaries.
Another limitation of our modeling approach is that our finetuned model takes in data that captures empathic relations across our framework of events, emotions, and morals. However, the learned story representations are general purpose and are not personalized to a user's empathic preferences. Personalization could improve model performance across automatic and human evaluation metrics, as there may exist finer-grained user preferences in how users empathize with certain stories, and what aspects users focus on. Furthermore, future work could explore training using a contrastive setup to learn more contextualized story embeddings.
Lastly, future work should explore longitudinal effects of receiving stories retrieved by our system. Our survey measures (State Empathy Scale) are used for short, quick assessments of immediate empathy rather than "fixed" or "trait" empathy. While our model might perform well in this one-shot interaction setting, it is also important to study the lasting empathic effects of reading stories retrieved by the model and measure changes in a user's longer-term empathy, mood, and feelings of connection."
}, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [ "b29", "b21", "b39" ], "table_ref": [], "text": "While such a system might foster empathy and connectedness, it is important to consider the potential harms brought about by this work. As with many recommenders, our model is susceptible to algorithmic biases in the types of stories it retrieves, as well as creating an echo chamber for homogeneous perspectives (Kirk et al., 2023). Embedding diversity in the recommended stories is important in both broadening the perspective of users and preventing biases.\nMany social platforms struggle with the issue of content moderation and content safety. In its proposed state, our model does not do anything to guarantee the safety of content that is shared with users. Hateful speech and triggering experiences should not be propagated by our model regardless of the extent to which users relate to these stories (Goel et al., 2023;Lima et al., 2018).\nFinally, the goal of our work is to connect people to other human experiences. Story generation and NLG that aims to mimic or appropriate human experiences is not something we endorse, and we encourage the use of machine-text detectors in systems that retrieve empathic stories. In line with Oren Etzioni (2018)'s three rules of AI, we also discourage presenting humans with machinegenerated stories without disclosing that the story is written by an AI author. Before training any models to learn empathic similarity ratings, it is important to understand the mechanisms behind empathic similarity in textbased personal narratives. In particular, we are interested in how structural elements of stories (events, emotional trajectories, and morals) relate to empathy. The question we aim to answer through our analysis of the text is what qualities of personal experiences people resonate with most and how does this relate to the personal experience they self disclose." }, { "figure_ref": [ "fig_5" ], "heading": "A Understanding Aspects of Empathic Similarity", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "First, we look at the correlation between humanrated similarity in event, emotion, and moral of the stories to the empathic similarity rating. We show in Table 5 that the correlation of the similarity between events, emotions, and morals to the empathic similarity rating is high for all three features. This indicates that similarity in these components is related to similarity in empathic resonance between stories. Using a paired t-test between high and low empathically similar story pairs, we find that empathically similar story pairs have statistically significantly higher similarities in events, emotions, and morals, with the largest increase in moral similarity and roughly equivalent increases in event and emotion similarities.\nNext, we look at the differences between semantic similarity and human-rated empathic similarity. As shown in Figure 6, we can see that the distributions of similarity scores are different for human-rated empathic similarity scores as compared to semantic similarity scores obtained from SBERT. Semantic similarity of stories is weakly positively correlated with empathic similarity (ρ = 0.17), with event-based features correlating the most (ρ = 0.067), followed by emotionbased features (ρ = 0.0069) and lastly moral features (ρ = -0.048). 
These results indicate that semantic similarity is naturally related to empathic similarity, but might not capture relationships between emotions and takeaways in pairs of stories." }, { "figure_ref": [], "heading": "B Empathy Reasoning Task", "publication_ref": [ "b28", "b48", "b40", "b3", "b73" ], "table_ref": [], "text": "Empathy Reasoning Task Definition. Given a story context c, we finetune a sequence-to-sequence (seq2seq) model to generate an event (v), emotion (e), and moral (m), concatenating annotated summaries to construct the gold label and modeling p(v, e, m|c) (Kim et al., 2022). The model is trained to minimize negative log likelihood of predicting each word in the constructed gold label.\nEmpathy Reasoning Results. We evaluate empathy reasoning performance using BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), ME-TEOR (Banerjee and Lavie, 2005), and BertScore (Zhang et al., 2020), taking the human-written free-text annotations as gold references. From Ta-ble 6, we see that finetuning BART with humanwritten story summaries improves performance across all metrics. The BART model finetuned on EMPATHICSTORIES demonstrates improved performance across 3/4 metrics in event and moral reasons. For emotion reasons, ChatGPT demonstrates better performance in 2/4 metrics, with the finetuned BART model close behind. We note that the BART-base model has 140M parameters, whereas ChatGPT has upwards of 175B parameters." }, { "figure_ref": [], "heading": "C Finetuned Model Training Details", "publication_ref": [], "table_ref": [], "text": "We use a 75:5:20 train:dev.:test split on both individual stories and pairs of stories. For the empathic similarity prediction task, we use learning rates of 1e-6 and 5e-6 for SBERT and BART respectively, and a linear scheduler with warmup. For the empathic reasoning task, we use a learning rate of 1e-5. For both tasks, we use a batch size of 8 and finetune for 30-50 epochs, monitoring correlation and validation loss to select the best-performing models. We trained all models on 4x Nvidia A40s with 256GB of RAM and 64 cores, and all model training times were under 12 hours." }, { "figure_ref": [], "heading": "D Data Pre-Processing", "publication_ref": [ "b24" ], "table_ref": [], "text": "For all of the data sources, we remove stories that are shorter than 5 sentences long, longer than 500 words, and which have a severe toxicity score of less than 0.005 using Detoxify (Hanu and Unitary team, 2020). While the latter step may filter out meaningful stories and introduce bias in the story selections (Sap et al., 2019a), we err on the side of removing any stories that could be potentially harmful, even if not severely so. Our research team then selected stories that were appropriate to share (did not contain excessive profanity or explicit sexual content), and which had a first-person narrator and concrete resolution to the story. We chose stories with a concrete resolution in order to avoid rant posts, which were common on social media pages. In addition, we manually corrected overt grammatical errors as well as references to the platform the story was shared on (e.g. addressing Redditors). Our final set of stories contains 1,568 curated, high-quality personal narratives." 
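A minimal sketch of the length and toxicity filters described above is shown below, using the Detoxify library. The sentence-splitting heuristic, the helper name keep_story, and the variable raw_stories are illustrative assumptions rather than the exact pipeline; following the stated intent of screening out potentially harmful stories, the sketch keeps only stories whose Detoxify severe-toxicity score falls below the threshold.

```python
# Sketch of the preprocessing filters (length bounds + Detoxify severe-toxicity screen).
import re
from detoxify import Detoxify

detox = Detoxify("original")

def keep_story(story: str, min_sentences: int = 5, max_words: int = 500,
               severe_tox_threshold: float = 0.005) -> bool:
    words = story.split()
    # crude sentence count based on terminal punctuation (illustrative heuristic)
    sentences = [s for s in re.split(r"[.!?]+", story) if s.strip()]
    if len(sentences) < min_sentences or len(words) > max_words:
        return False
    # keep only stories that score below the severe-toxicity threshold
    return detox.predict(story)["severe_toxicity"] < severe_tox_threshold

filtered = [s for s in raw_stories if keep_story(s)]  # raw_stories: assumed list of scraped texts
```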
}, { "figure_ref": [], "heading": "E Story and Annotation Themes", "publication_ref": [ "b23" ], "table_ref": [ "tab_8", "tab_9" ], "text": "Below we show the top themes across each story's emotion (Table 7) and moral (Table 8) annotations.\nNote that we did not include topics for the events since these were similar to Table 2. To identify these topics, we use Latent Dirichlet Allocation (LDA) and KeyBERT on the clusters (Grootendorst, 2020). " }, { "figure_ref": [], "heading": "F Collected Stories Breakdown", "publication_ref": [], "table_ref": [ "tab_10" ], "text": "A breakdown of the amount of stories per source can be found in Table 9. " }, { "figure_ref": [], "heading": "G GPT-3 and ChatGPT Prompts", "publication_ref": [], "table_ref": [], "text": "Below are prompts we fed to GPT-3 and ChatGPT for our few-shot baselines. Note that in addition to the prompts, we provided sampled examples from our training corpus.\n• Event summary: What is the main event being described in the story? Response must be at least 1 sentence and 50-1000 characters including spaces.\n• Emotion summary: Describe the emotions the narrator feels before and after the main event and why they feel this way. Answer as though you were explaining how the narrator felt to someone who knew nothing about the situation. Response must be at least 2 sentences and 150-1000 characters including spaces.\n• Moral summary: What is the high-level lesson or takeaway (ie. moral) of the story?\nResponse must be at least 1 sentence and 100-1000 characters including spaces.\n• Empathic similarity: Rate the extent to which you agree with the statement \"the narrators of the two stories would empathize with each other.\" We define empathy as feeling, understanding, and relating to what another person is experiencing. Note that it is possible to have empathy even without sharing the exact same experience or circumstance. Importantly, for two stories to be empathetically similar, both narrators should be able to empathize with each other (if narrator A's story was shared in response to narrator B's story, narrator B would empathize with narrator A and vice versa). Give your answer on a scale from 1-4 (1 -not at all, 2 -not so much, 3very much, 4 -extremely)" }, { "figure_ref": [], "heading": "H Using LLMs as a Proxy for Human", "publication_ref": [ "b20" ], "table_ref": [], "text": "Recent works raise the question of whether LLMs can be used to proxy human annotations (Gilardi et al., 2023). The motivation behind this method is that obtaining human labels across many pairs of stories is costly, and this cost only compounds as the number of stories in the corpus increases. As such, we provide additional analyses as to whether or not these models can truly perform at the same level as human annotators for our task, which involves heavy empathy and emotion reasoning." }, { "figure_ref": [], "heading": "H.1 Individual Story Annotation", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "We prompt ChatGPT (gpt-3.5-turbo) to generate summaries of each story's main event, emotion, and moral, in addition to a list of reasons why a narrator might empathize with the story. We compare these summaries against human-written summaries using BLEU, ROUGE, METEOR, and BertScore (Table 10), showing that ChatGPT has relatively low performance across all four metrics." 
}, { "figure_ref": [ "fig_7" ], "heading": "H.2 Paired Story Annotation", "publication_ref": [], "table_ref": [], "text": "We feed the same prompt given to human annotators into ChatGPT, asking for a Likert score from Finally, we bin the ChatGPT annotations into agree/disagree categories, and compute the classification precision (0.59), recall (0.40), F1 score (0.48), and accuracy (0.59) as compared to human gold labels. These scores offer insight as to how well ChatGPT predicts the direction of the empathic similarity annotation, but we see that accuracy is low when comparing to human labels. In Figure 7, we see that ChatGPT similarity scores are skewed to the left, indicating that humans are more likely to find empathic similarities between experiences. These results are also supported by the higher number of false negatives when comparing ChatGPT classification to human gold labels." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank all of our participants, annotators, and teammates for their invaluable contributions to this project. Special thanks to Sharifa Algohwinem and Wonjune Kang for their technical feedback throughout the project and thanks to Ji Min Mun, Akhila Yerukola, and Ishaan Grover for paper feedback. This work was supported by an NSF GRFP under Grant No. 2141064 and the IITP grant funded by the Korean Ministry of Science and ICT No.2020-0-00842." } ]
The most meaningful connections between people are often fostered through expression of shared vulnerability and emotional experiences in personal narratives. We introduce a new task of identifying similarity in personal stories based on empathic resonance, i.e., the extent to which two people empathize with each others' experiences, as opposed to raw semantic or lexical similarity, as has predominantly been studied in NLP. Using insights from social psychology, we craft a framework that operationalizes empathic similarity in terms of three key features of stories: main events, emotional trajectories, and overall morals or takeaways. We create EMPATHICSTORIES, a dataset of 1,500 personal stories annotated with our empathic similarity features, and 2,000 pairs of stories annotated with empathic similarity scores. Using our dataset, we finetune a model to compute empathic similarity of story pairs, and show that this outperforms semantic similarity models on automated correlation and retrieval metrics. Through a user study with 150 participants, we also assess the effect our model has on retrieving stories that users empathize with, compared to naive semantic similarity-based retrieval, and find that participants empathized significantly more with stories retrieved by our model. Our work has strong implications for the use of empathy-aware models to foster human connection and empathy between people.
Modeling Empathic Similarity in Personal Narratives
[ { "figure_caption": "Figure 2 :2Figure 2: Overview of annotation pipeline starting with (a) individual story event, emotion, and moral to (b) using these annotations to sample balanced story pairs and (c) rating empathic similarity scores", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Total empathy for the story retrieved by our model vs. SBERT. Error bars show standard error.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Breakdown of empathy dimensions for the story retrieved by our model vs. SBERT", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Reasons why participants did or did not empathize with the retrieved story.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "2.05, p = 0.02, Cohen's d = 0.21), and associative empathy (t(149) = 2.61, p = 0.005, Cohen's d = 0.27), as shown in Figure 4 (empathy values are the summed scores from the empathy survey).", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Comparing the empathic similarity and semantic similarity core distributions", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Comparing the empathic similarity score distributions between ChatGPT and human labels", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Story and annotation statistics", "figure_data": "# sents # wordsStory13.17235.14Main Event1.4832.51Emotional Reaction 2.3946.08Moral1.3831.35", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Themes across main events of the stories.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Similarity agreement scores (PPA = pairwise percent agreement, KA = Krippendorff's Alpha)", "figure_data": "AnnotationPPA KAOverall.80.14Empathic similarityTrain Dev.79 .81.14 .11Test.83.17Overall.86.27Event similarityTrain Dev.86 .84.26 .25Test.87.30Overall.83.23Emotion similarityTrain Dev.83 .79.23 .15Test.84.25Overall.80.19Moral similarityTrain Dev.80 .80.18 .14Test.82.20", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Correlation between similarity scores for individual features compared to overall empathic similarity score. r = Pearson's correlation coefficient. ρ = Spearman's correlation coefficient.", "figure_data": "FeatureModelBLEU ROUGE METEOR BertScoreBART1.1616.8721.2613.30+ finetuning9.5632.7229.1439.79EventGPT-3 + 10 examples1.40 7.7224.77 32.2226.31 23.6033.39 36.84ChatGPT1.8525.3525.3634.93+ 10 examples7.2330.0232.8137.59BART0.4015.7316.956.53+ finetuning2.0826.6123.5426.24EmotionGPT-3 + 10 examples1.56 0.0822.37 21.0927.90 12.0821.28 19.97ChatGPT1.6623.2129.6222.19+ 10 examples1.0925.4327.6726.46BART0.0211.7815.520.40+ finetuning13.7733.5229.6632.26MoralGPT-3 + 10 examples5.86 4.3828.10 28.6327.87 18.9731.64 28.15ChatGPT4.4525.0326.1630.99+ 10 examples6.6327.9127.5133.97", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Quality of event, emotion, and moral summaries across models. 
Scores are multiplied by 100 for readability, and the max. for each metric is 100.", "figure_data": "", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Themes across emotion descriptions of the stories.", "figure_data": "TopicKeywords% Storiesmotivation and encouragement motivation, success, achieving40.31%overcoming and resilienceovercome, resilient, rehab25.57%happiness and fulfilmentopportunities, happiness, meaningful 17.60%social support and gratitudecompanionship, gratitude, stress16.52%", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Themes across morals of the stories.", "figure_data": "", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Breakdown of retrieved stories per data source.", "figure_data": "Data SourceNumber of StoriesHippocorpus483Road Trip Narratives476Reddit -Today I Am Happy198Reddit-Casual Conversations195Reddit-Off My Chest162Facebook -[Redacted] Confessions54", "figure_id": "tab_10", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Quality of ChatGPT story empathy reasoning annotations (scores are multiplied by 100 for readability, and the maximum for each metric is 100)", "figure_data": "SummaryBLEU ROUGE METEOR BertScoreMain Event2.8626.3728.2036.53Emotion Description1.4323.0128.8723.36Moral7.6727.6427.3333.24", "figure_id": "tab_11", "figure_label": "10", "figure_type": "table" } ]
Jocelyn Shen; Maarten Sap; Pedro Colon-Hernandez; Hae Won Park; Cynthia Breazeal
[ { "authors": "Hervé Abdi", "journal": "Sage", "ref_id": "b0", "title": "The kendall rank correlation coefficient", "year": "2007" }, { "authors": "Mary E Andrews; Bradley D Mattan; Keana Richards; Samantha L Moore-Berg; Emily B Falk", "journal": "Social Science & Medicine", "ref_id": "b1", "title": "Using first-person narratives about healthcare workers and people who are incarcerated to motivate helping behaviors during the COVID-19 pandemic", "year": "2022" }, { "authors": "P ; Matthijs Bal; Martijn Veltkamp", "journal": "PLOS ONE", "ref_id": "b2", "title": "How Does Fiction Reading Influence Empathy? An Experimental Investigation on the Role of Emotional Transportation", "year": "2013" }, { "authors": "Satanjeev Banerjee; Alon Lavie", "journal": "", "ref_id": "b3", "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments", "year": "2005" }, { "authors": "Jiajun Bao; Junjie Wu; Yiming Zhang; Eshwar Chandrasekharan; David Jurgens", "journal": "ACM", "ref_id": "b4", "title": "Conversations Gone Alright: Quantifying and Predicting Prosocial Outcomes in Online Conversations", "year": "2021" }, { "authors": "Clara Berridge; Yuanjin Zhou; Julie M Robillard; Jeffrey Kaye", "journal": "Frontiers in Psychology", "ref_id": "b5", "title": "Companion robots to mitigate loneliness among older adults: Perceptions of benefit and possible deception", "year": "2023" }, { "authors": "Antoine Bosselut; Hannah Rashkin; Maarten Sap; Chaitanya Malaviya; Asli Celikyilmaz; Yejin Choi", "journal": "", "ref_id": "b6", "title": "COMET: Commonsense Transformers for Automatic Knowledge Graph Construction", "year": "2019" }, { "authors": "Hana Boukricha; Ipke Wachsmuth; Maria Nella Carminati; Pia Knoeferle", "journal": "IEEE", "ref_id": "b7", "title": "A Computational Model of Empathy: Empirical Evaluation", "year": "2013" }, { "authors": "Guilherme Brockington; Ana ; Paula Gomes Moreira; Maria Stephani Buso; Sérgio Gomes Da Silva; Edgar Altszyler; Ronald Fischer; Jorge Moll", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b8", "title": "Storytelling increases oxytocin and positive emotions and decreases cortisol and pain in hospitalized children", "year": "2021" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b9", "title": "Language Models are Few-Shot Learners", "year": "2020" }, { "authors": "Susanne Buecker; Marcus Mund; Sandy Chwastek; Melina Sostmann; Maike Luhmann", "journal": "Psychological Bulletin", "ref_id": "b10", "title": "Is loneliness in emerging adults increasing over time? 
A preregistered cross-temporal meta-analysis and systematic review", "year": "2021" }, { "authors": "Rijul Chaturvedi; Sanjeev Verma; Ronnie Das; Yogesh K Dwivedi", "journal": "Technological Forecasting and Social Change", "ref_id": "b11", "title": "Social companionship with artificial intelligence: Recent trends and future avenues", "year": "2023" }, { "authors": "Snigdha Chaturvedi; Shashank Srivastava; Dan Roth", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Where Have I Heard This Story Before? Identifying Narrative Similarity in Movie Remakes", "year": "2018" }, { "authors": "Eun Cho; Soohyun Jeon", "journal": "BMC Medical Education", "ref_id": "b13", "title": "The role of empathy and psychological need satisfaction in pharmacy students' burnout and well-being", "year": "2019" }, { "authors": "Jay S Coke; C Daniel Batson; Katherine Mcdavis", "journal": "Journal of Personality and Social Psychology", "ref_id": "b14", "title": "Empathic mediation of helping: A two-stage model", "year": "1978" }, { "authors": "Marc Damashek", "journal": "Science", "ref_id": "b15", "title": "Gauging Similarity with n-Grams: Language-Independent Categorization of Text", "year": "1995" }, { "authors": "Scott Deerwester; Susan T Dumais; George W Furnas; Thomas K Landauer; Richard Harshman", "journal": "Journal of the American Society for Information Science", "ref_id": "b16", "title": "Indexing by latent semantic analysis", "year": "1990" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b17", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "Karthik Dinakar; Birago Jones; Henry Lieberman; Rosalind Picard; Carolyn Rose; Matthew Thoman; Roi Reichart", "journal": "", "ref_id": "b18", "title": "You Too?! Mixed-Initiative LDA Story Matching to Help Teens in Distress", "year": "2012" }, { "authors": "Sarah Fabi; Lydia ; Anna Weber; Hartmut Leuthold", "journal": "PLoS ONE", "ref_id": "b19", "title": "Empathic concern and personal distress depend on situational but not dispositional factors", "year": "2019" }, { "authors": "Fabrizio Gilardi; Meysam Alizadeh; Maël Kubli", "journal": "", "ref_id": "b20", "title": "ChatGPT Outperforms Crowd-Workers for Text-Annotation Tasks", "year": "2023" }, { "authors": "Vasu Goel; Dhruv Sahnan; Subhabrata Dutta; Anil Bandhakavi; Tanmoy Chakraborty", "journal": "PNAS Nexus", "ref_id": "b21", "title": "Hatemongers ride on echo chambers to escalate hate speech diffusion", "year": "2023" }, { "authors": "Melanie C Green; Timothy C Brock", "journal": "Journal of Personality and Social Psychology", "ref_id": "b22", "title": "The role of transportation in the persuasiveness of public narratives", "year": "2000" }, { "authors": "Maarten Grootendorst", "journal": "", "ref_id": "b23", "title": "Keybert: Minimal keyword extraction with bert", "year": "2020" }, { "authors": "Laura Hanu; Unitary Team", "journal": "", "ref_id": "b24", "title": "Detoxify. 
Github", "year": "2020" }, { "authors": "Sara D Hodges; Kristi J Kiel; D I Adam; Darya Kramer; B Renee Veach; Villanueva", "journal": "Personality and Social Psychology Bulletin", "ref_id": "b25", "title": "Giving Birth to Empathy: The Effects of Similar Experience on Empathic Accuracy, Empathic Concern, and Perceived Empathy", "year": "2010" }, { "authors": "Christopher J Honey; Christopher R Thompson; Yulia Lerner; Uri Hasson", "journal": "The Journal of Neuroscience", "ref_id": "b26", "title": "Not Lost in Translation: Neural Responses Shared Across Languages", "year": "2012" }, { "authors": "Jena D Hwang; Chandra Bhagavatula; Le Ronan; Jeff Bras; Keisuke Da; Antoine Sakaguchi; Yejin Bosselut; Choi", "journal": "", "ref_id": "b27", "title": "COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs", "year": "2021" }, { "authors": "Hyunwoo Kim; Youngjae Yu; Liwei Jiang; Ximing Lu; Daniel Khashabi; Gunhee Kim; Yejin Choi; Maarten Sap", "journal": "", "ref_id": "b28", "title": "ProsocialDialog: A Prosocial Backbone for Conversational Agents", "year": "2022" }, { "authors": "Rose Hannah; Bertie Kirk; Paul Vidgen; Scott A Röttger; Hale", "journal": "", "ref_id": "b29", "title": "Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalised feedback", "year": "2023" }, { "authors": "Sara Konrath", "journal": "", "ref_id": "b30", "title": "The Empathy Paradox: Increasing Disconnection in the Age of Increasing Connection", "year": "2013" }, { "authors": "Sara H Konrath; H Edward; Courtney O'brien; Hsing", "journal": "Personality and Social Psychology Review", "ref_id": "b31", "title": "Changes in Dispositional Empathy in American College Students Over Time: A Meta-Analysis", "year": "2011" }, { "authors": "Dennis Krebs", "journal": "Journal of Personality and Social Psychology", "ref_id": "b32", "title": "Empathy and altruism", "year": "1976" }, { "authors": "Shiro Kumano; Ryo Ishii; Kazuhiro Otsuka", "journal": "", "ref_id": "b33", "title": "Comparing Empathy Perceived by Interlocutors in Multiparty Conversation and External Observers", "year": "2017" }, { "authors": "William Labov; Joshua Waletzky", "journal": "Journal of Narrative & Life History", "ref_id": "b34", "title": "Narrative analysis: Oral versions of personal experience", "year": "1997" }, { "authors": "Allison Lahnala; Charles Welch; Lucie Flek", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "CAISA at WASSA 2022: Adapter-Tuning for Empathy Prediction", "year": "2022" }, { "authors": "Allison Lahnala; Charles Welch; David Jurgens; Lucie Flek", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "A Critical Reflection and Forward Perspective on Empathy and Natural Language Processing", "year": "2022" }, { "authors": "Thomas K Landauer; Susan T Dumais", "journal": "Psychological Review", "ref_id": "b37", "title": "A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge", "year": "1997" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Ves Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b38", "title": "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension", "year": "2019" }, { "authors": "Lucas Lima; Julio C S Reis; Philipe Melo; Fabricio Murai; Leandro Araujo; Pantelis Vikatos; Fabricio 
Benevenuto", "journal": "", "ref_id": "b39", "title": "Inside the Right-Leaning Echo Chambers: Characterizing Gab, an Unmoderated Social System", "year": "2018" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "ROUGE: A Package for Automatic Evaluation of Summaries", "year": "2004" }, { "authors": "Yung-Shen Lin; Jung-Yi Jiang; Shie-Jue Lee", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b41", "title": "A Similarity Measure for Text Classification and Clustering", "year": "2014" }, { "authors": "Dan P Mcadams", "journal": "", "ref_id": "b42", "title": "The Life Story Interview -II", "year": "2007" }, { "authors": "Sylvia A Morelli; Matthew D Lieberman; Jamil Zaki", "journal": "Social and Personality Psychology Compass", "ref_id": "b43", "title": "The Emerging Study of Positive Empathy", "year": "2015" }, { "authors": "Sylvia A Morelli; Desmond C Ong; Rucha Makati; Matthew O Jackson; Jamil Zaki", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b44", "title": "Empathy and well-being correlate with centrality in different social networks", "year": "2017" }, { "authors": "Nasrin Mostafazadeh; Aditya Kalyanpur; Lori Moon; David Buchanan; Lauren Berkowitz; Or Biran; Jennifer Chu-Carroll", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "GLUCOSE: GeneraLized and COntextualized Story Explanations", "year": "2020" }, { "authors": "Dong Nguyen; Dolf Trieschnigg; Mariët Theune", "journal": "Association for Computing Machinery", "ref_id": "b46", "title": "Using Crowdsourcing to Investigate Perception of Narrative Similarity", "year": "2014" }, { "authors": "Oren Etzioni", "journal": "", "ref_id": "b47", "title": "Three rules of Artificial Intelligence", "year": "2018" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "Bleu: a Method for Automatic Evaluation of Machine Translation", "year": "2002" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b49", "title": "Language Models are Unsupervised Multitask Learners", "year": "" }, { "authors": "Antoine Hannah Rashkin; Maarten Bosselut; Kevin Sap; Yejin Knight; Choi", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "Modeling Naive Psychology of Characters in Simple Commonsense Stories", "year": "2018" }, { "authors": "Maarten Hannah Rashkin; Emily Sap; Noah A Allaway; Yejin Smith; Choi", "journal": "", "ref_id": "b51", "title": "Event2Mind: Commonsense Inference on Events, Intents, and Reactions", "year": "2019" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b52", "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", "year": "2019" }, { "authors": "Mahnaz Roshanaei; Christopher Tran; Sylvia Morelli; Cornelia Caragea; Elena Zheleva", "journal": "IEEE", "ref_id": "b53", "title": "Paths to Empathy: Heterogeneous Effects of Reading Personal Stories Online", "year": "2019" }, { "authors": "Paul Rottger; Bertie Vidgen; Dirk Hovy; Janet Pierrehumbert", "journal": "Association for Computational Linguistics", "ref_id": "b54", "title": "Two Contrasting Data Annotation Paradigms for Subjective NLP Tasks", "year": "2022" }, { "authors": "Fabiola Mary; Ignisha Sagayaraj; R Rajathi George; L R Vedhapriyavadhana; Priya", "journal": "SN Computer Science", 
"ref_id": "b55", "title": "Artificial Intelligence to Combat the Sting of the Pandemic on the Psychological Realms of Human Brain", "year": "2022" }, { "authors": "Belen Saldias; Deb Roy", "journal": "", "ref_id": "b56", "title": "Exploring aspects of similarity between spoken personal narratives by disentangling them into narrative clause types", "year": "2020" }, { "authors": "Gerard Salton; Amit Singhal; Mandar Mitra; Chris Buckley", "journal": "Information Processing & Management", "ref_id": "b57", "title": "Automatic text structuring and summarization", "year": "1997" }, { "authors": "Maarten Sap; Dallas Card; Saadia Gabriel; Yejin Choi; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b58", "title": "a. The Risk of Racial Bias in Hate Speech Detection", "year": "2019" }, { "authors": "Maarten Sap; Eric Horvitz; Yejin Choi; Noah A Smith; James Pennebaker", "journal": "Association for Computational Linguistics", "ref_id": "b59", "title": "Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models", "year": "2020" }, { "authors": "Maarten Sap; Le Ronan; Emily Bras; Chandra Allaway; Nicholas Bhagavatula; Hannah Lourie; Brendan Rashkin; Noah A Roof; Yejin Smith; Choi", "journal": "", "ref_id": "b60", "title": "ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning", "year": "2019" }, { "authors": "Maarten Sap; Ronan Lebras; Daniel Fried; Yejin Choi", "journal": "", "ref_id": "b61", "title": "Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs", "year": "2022" }, { "authors": "Maarten Sap; Marcella ; Cindy Prasettio; Ari Holtzman; Hannah Rashkin; Yejin Choi", "journal": "", "ref_id": "b62", "title": "Connotation Frames of Power and Agency Modern Films", "year": "2017" }, { "authors": "Patrick Schober; Christa Boer; Lothar A Schwarte", "journal": "Anesthesia & analgesia", "ref_id": "b63", "title": "Correlation coefficients: appropriate use and interpretation", "year": "2018" }, { "authors": "Ashish Sharma; Adam S Miner; David C Atkins; Tim Althoff", "journal": "", "ref_id": "b64", "title": "A Computational Approach to Understanding Empathy Expressed in Text-Based Mental Health Support", "year": "2020" }, { "authors": "Lijiang Shen", "journal": "Western Journal of Communication", "ref_id": "b65", "title": "On a Scale of State Empathy During Message Processing", "year": "2010" }, { "authors": "Seema Vinayak; Jotika Judge", "journal": "International Journal of Health Sciences", "ref_id": "b66", "title": "Resilience and Empathy as Predictors of Psychological Wellbeing among Adolescents", "year": "2018" }, { "authors": "Kiran Vodrahalli; Po-Hsuan Chen; Yingyu Liang; Christopher Baldassano; Janice Chen; Esther Yong; Christopher Honey; Uri Hasson; Peter Ramadge; Kenneth A Norman; Sanjeev Arora", "journal": "NeuroImage", "ref_id": "b67", "title": "Mapping between fMRI responses to movies and their natural language annotations", "year": "2018" }, { "authors": "Caren M Walker; Tania Lombrozo", "journal": "Cognition", "ref_id": "b68", "title": "Explaining the moral of the story", "year": "2017" }, { "authors": "Zhilin Wang; Anna Jafarpour; Maarten Sap", "journal": "", "ref_id": "b69", "title": "Uncovering Surprising Event Boundaries in Narratives", "year": "2022" }, { "authors": "Peter West; Chandra Bhagavatula; Jack Hessel; Jena D Hwang; Liwei Jiang; Ronan Le Bras; Ximing Lu; Sean Welleck; Yejin Choi", "journal": "", "ref_id": "b70", "title": "Symbolic Knowledge Distillation: from General Language 
Models to Commonsense Models", "year": "2022" }, { "authors": "Joshua D Wondra; Phoebe C Ellsworth", "journal": "Psychological Review", "ref_id": "b71", "title": "An appraisal theory of empathy and other vicarious emotional experiences", "year": "2015" }, { "authors": "Kevin Wright", "journal": "Communication Research Reports", "ref_id": "b72", "title": "Motives for communication within on-line support groups and antecedents for interpersonal use", "year": "2002" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b73", "title": "BERTScore: Evaluating Text Generation with BERT", "year": "2020" }, { "authors": "Naitian Zhou; David Jurgens", "journal": "Association for Computational Linguistics", "ref_id": "b74", "title": "Condolence and Empathy in Online Communities", "year": "2020" } ]
[]
10.48550/ARXIV.2002.03518
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b4", "b11", "b0", "b5", "b7", "b10", "b9" ], "table_ref": [], "text": "The approximately linear mapping between crosslingual word embeddings in different languages is based on assumption that the word semantic meaning is conserved in a translation (Mikolov et al., 2013). The linearity is only approximate because the corresponding words in different languages have different cultural background and different dependencies on context (Patra et al., 2019;Zhao and Gilman, 2020;Cao et al., 2020;Peng et al., 2020). We expect that a sentence has a less ambiguous meaning than a word, simply because the sentence context reduces ambiguity of each of its words. Therefore, a sentence semantics should be better conserved in a translation -the idea used in (Reimers and Gurevych, 2020). In order to verify this expectation, we consider here a linear mapping between multilingual embeddings in two languages. Unlike the removal of a language-specific bias in each language separately (Yang et al., 2021;Xie et al., 2022), this mapping depends on both languages of interest and, while computationally cheap, may provide a better correspondence between the embeddings. Our contribution:\n1. We suggest simple and computationally light improvement of the correspondence of sentence embeddings between two languages.\nThe 'sentence' can be one or several contiguous sentences. 2. For our evaluation we introduce a dataset based on wikipedia news. 3. We demonstrate a non-orthogonality of the linear mapping between multilingual embeddings as an example and a measure of deficiency of a multilingual embedding model." }, { "figure_ref": [], "heading": "Cross-Lingual linear mapping", "publication_ref": [], "table_ref": [], "text": "Translation of a word can lose or add some of its meanings. But meaning of a sentence or of several contiguous sentences is better defined, and the translation in most cases (except special idiomatic cases) should preserve the semantics. Embeddings of the translated sentences should be well related to embeddings of the original sentences: the semantic similarities should be preserved. In this section we assume that the 'sentence' is either a (not too short) sentence, or a larger segment of a text. Suppose we have n sentences, translated from language L to language L ′ , and then embedded into a space of the same dimension D in each of these languages: the embeddings e 1 , ...e n in L and the embeddings e ′ 1 , ...e ′ n in L ′ . If the measure of semantic similarity in both spaces is cosine, then we should expect that the normalized embeddings e i and e ′ i are related by rotation (orthonormal transform T):\ne ′ = T e (1)\nwith the orthogonality condition i\nT ij T ik = δ jk (2)\nIf semantic similarity is measured by euclidean distance, and the embeddings are not normalized, then we should allow the orthogonal transform to be accompanied by dilation and shift:\ne ′ = αT e + b(3)\nIn the following section we will allow any linear transformation (A, b) between the embeddings in L and L ′ : ẽ = Ae + b (4)\nFor our illustration here we created embeddings by one of SOTA aligned multilingual sentenceembedding model, on a set of translated sentences (Section 3.2). We optimize the linear transformation on a set of embeddings, so that the mean squared distance between ẽ and e ′ is minimal. In the next section we consider the obtained linear transformation (A, b) from two points of view: 1. 
Replacement of the original embeddings e by the transformed embeddings ẽ can serve as a fast and computationally cheap way to improve cross-lingual matching or clustering of a mix of texts of both languages. 2. We can observe how close is the optimized transformation (A, b) to the 'ideal' relation eq.3, and thus judge how good the embeddings are.\n3 Observations" }, { "figure_ref": [], "heading": "Data", "publication_ref": [ "b2", "b1", "b8" ], "table_ref": [], "text": "For obtaining the linear transformation eq.4 between embeddings, in Section 3.2 we use dataset Tatoeba 1 . Tatoeba has 13 languages with at least 100K sentences translated from English to the language. We consider performance of the obtained transformations on sentences and text segments of different style from multilingual WikiNews dataset 2 which we created from real news (Appendix A). The samples have WikiNews articles in English as well as at least one other language, among 34 languages. We will limit ourselves to five languages L ′ that have at least 100K samples (of translations from L English) in Tatoeba, and at least 400 samples in Wikinews (Appendix A): German (de), Spanish (es), French (f r), Italian (it), Portuguese (pt) and Russian (ru). Wikinews is used here for evaluation, in Section 3.2, in two variations:\n1. WN: Title of news article in English is paired with the same title in language L ′ . 2. WN-text: Title of news article in English is paired with the lower half of the text of the article in language L ′ . We selected the lower part in order to avoid easy lexical intersections 1 https://huggingface.co./datasets/tatoeba 2 https://github.com/PrimerAI/primer-research of first phrases of the text with the title. (The article is split by whichever end of sentence os closer to the middle.) We also evaluate on Flores dataset (Guzmán et al., 2019;Goyal et al., 2022;Team et al., 2022) 3 , and on a Tatoeba subset left aside from training." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b6" ], "table_ref": [], "text": "We obtained the transformation (A, b) (eq.4) for each language L ′ = de, es, f r, it, pt, ru by (1) obtaining embeddings e for English sentences and embeddings e ′ for the sentence translations to language L ′ , and ( 2) training a simple linear layer with bias, using embeddings e i as the inputs, and embeddings e ′ i as the labels, with the distance |ẽ i -e ′ i | serving as loss function. For each language, 10K embedding pairs were set aside for the testing, and 10K embedding pairs were set aside and used for validation during the training. We used state of the art embeddings paraphrase-multilingual-mpnetbase-v2 (Reimers and Gurevych, 2019) 4 for obtaining the embeddings e and e ′ .\nWe can evaluate the benefit of replacing the original embeddings e by the transformed embeddings ẽ in different ways. In Table 1 we consider several examples: dD, dC, f D, f C -defined below.\nThe measure\ndD = d - d min (d, d)(5)\ncompares the achieved average distance\nd = 1 N N i | ẽi -e ′ i |(6)\nand the original distance\nd = 1 N N i |e i -e ′ i |(7)\nwhere the embeddings e are taken for a test dataset of size N . The measure\ndC = 1 N N i cos( ẽi , e ′ i ) -cos(e i , e ′ i )(8)\ncompares the cosines. The measure\nf D = 1 N N i H(|e i -e ′ i | -| ẽi -e ′ i |)(9)\nTable 1: Performance of the linear transform e → ẽ (eq.4), trained on Tatoeba dataset, and evaluated on (set aside) Tatoeba, WN (Wiki-news title-to-title), WN-text (Wiki-news title-to-halftext), and Flores. 
Performance is estimated as improvement in average distance dD (eq.5) and in average cosine dC (eq.8), fraction of samples with improved distance f D (eq.9), and fraction of samples with improved cosine f C (eq.10). \nf C = 1 N N i H(cos( ẽi , e ′ i ) -cos(e i , e ′ i )) (10)\nrepresents the fraction of the samples for which the cosine increased.\nThe transformation e → ẽ helps if dD and dC are positive (the higher the better), and if the fractions f D and f C are higher than 0.5 (the higher the better). Table 1 shows that these conditions are satisfied for almost all cases. The only exception is the measure f C for Italian in Flores dataset (less that half samples are improved). Noticeably, German gets the highest improvement." }, { "figure_ref": [], "heading": "Orthogonality", "publication_ref": [ "b12" ], "table_ref": [ "tab_1", "tab_1", "tab_1", "tab_2", "tab_1", "tab_2", "tab_2", "tab_3", "tab_4", "tab_1", "tab_2" ], "text": "If the embedding model would produce ideal alignment, then the sentence embeddings in different languages would be close to identical e ′ = e: unlike a single word, a sentence normally has unambiguous semantics. The transform T (eq.1) would then become an identity. If the embedding model does not perfectly align the embeddings e and e ′ (or does not align them at all), but still correctly embed their semantics in each of the languages L and L ′ , then the optimized linear transformation (A, b) (eq.4) must be orthogonal as in eq.3.\nIn order to evaluate how close our linear transformation A (trained on Tatoeba) to being orthogonal (Eq.2), we consider the values\np jk = i A ij A ik |A j | • |A k | , j ̸ = k (11)\nwhere\n|A j | = i A 2 ij .\nThe closer these values p jk to zero, the closer A to being orthogonal. In Table 2 we show simple aggregates of p ij over all i ̸ = j: ⟨|p|⟩ -averaged absolute value, σ(p) -standard deviation, minimum min(p) and maximum max(p). Table 2 lists more languages than Table 1 because there is no need here to apply A to other datasets. The highest by far deviation from orthogonality in Table 2 is for Berber language, followed by Esperanto. The minimal and maximal values are colored yellow when they exceed 0.383, meaning that for at least one pair i, j the angle is less than 75% of orthogonal (cos(π/2 * 0.75) ≈ 0.383. For comparison, in Table 3 we show similar data for A trained on United Nations Parallel Corpus UNPC (Ziemski et al., 2016) 5 (with 500K samples used for training and 10K for validation). The UN texts have a specific style and deal with loaded topics; they may be more difficult for embedding semantics and for keeping semantics of the translated sentences the same as it is in English. Indeed, for each of the three languages common for Tatoeba Table 2 andUNPC Table 3 (Spanish, French and Russian) all the aggregate indicators ⟨|p|⟩, σ(p), min(p) and max(p) are several times larger for UNPC-trained matrix A (Table 3). The orthogonal transformation can be accompanied by dilation (coefficient α in Eq.3), which means that the values α i = |A i | should not depend on i. 
In Tables 4 and5 we show normalized standard deviation\nσ(α) ᾱ = 1 ᾱ 1 D i (α i -ᾱ) 2(12)\nand normalized range\nr(α) = max i α i -min i α i ᾱ(13)\nThe tables contain also ᾱ -the averaged α i , and the minimal and maximal values.\nSimilarly to the orthogonality conditions, the dilation measures σ(α) ᾱ and r(α) are better (lower) for the transformation trained on Tatoeba (Table 2) than on UNPC (Table 3), for all three languages Spanish (es), French (f r) and Russian (ru). In Tatoeba table Berber and Esperanto languages have the worst σ(α) ᾱ and r(α). The normalized range r(α) is high for most langauges in both tables 2 and 3. Altogether, we have to conclude that orthogonality is only crudely satisfied by the linear transform (A, b). " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We considered a simple method improving the alignment between sentence embeddings in two languages: a linear transformation, tuned on embeddings of the paired sentences. In the example we analysed, a training on sentences also improves alignment between titles and texts (lowerhalf texts) of the articles -the articles from our WikiNews dataset. If embeddings were capable of perfectly encoding semantics even when not perfectly aligned, then the linear transformation would be an orthogonal transformation, accompanied by dilation and shift. Measuring deviation from this condition allows us to judge the quality of the embeddings. For example, we observed lower quality for embeddings of Berber and Esperanto languages compared to others considered here, and also a lower quality of UNPC-trained transformations compared to Tatoeba-trained transformations." }, { "figure_ref": [], "heading": "A WikiNews", "publication_ref": [], "table_ref": [], "text": "The WikiNews dataset6 comprises 15,200 news articles from the multilingual WikiNews website7 , including 9,960 non-English articles written in 33 different languages. These articles are linked to one of 5,240 sets of English news articles as WikiNews pages in other languages. Therefore, these WikiPages in different languages can be assumed to be describing the same news event, thus we can assume that the news titles and contents are of the linked NewsPages are semantically alligned. Here the non-English articles are written in a variety of languages including Spanish, French, German, Portuguese, Polish, Italian, Chinese, Russian, Japanese, Dutch, Swedish, Tamil, Serbian, Czech, Catalan, Hebrew, Turkish, Finnish, Esperanto, Greek, Hungarian, Ukrainian, Norwegian, Arabic, Persian, Korean, Romanian, Bulgarian, Bosnian, Limburgish, Albanian, and Thai. Each sample in the multilingual WikiNews dataset includes several variables, such as pageid, title, categories, language, URL, article content, and the publish date. In some cases, foreign " }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "We thank Randy Sawaya for many discussions and review of the paper." } ]
Semantics of a sentence is defined with much less ambiguity than semantics of a single word, and it should be better preserved by translation to another language. If multilingual sentence embeddings intend to represent sentence semantics, then the similarity between embeddings of any two sentences must be invariant with respect to translation. Based on this suggestion, we consider a simple linear crosslingual mapping as a possible improvement of the multilingual embeddings. We also consider deviation from orthogonality conditions as a measure of deficiency of the embeddings.
Linear Cross-Lingual Mapping of Sentence Embeddings
[ { "figure_caption": "Aggregates over orthogonality conditions Eq.11 for A trained on Tatoeba dataset, for languages containing at least 100K samples. Min and max beyond 25%", "figure_data": "deviation from orthogonality (cos(0.75π/2) ≈ 0.383)are colored yellow.lang ⟨|p|⟩ σ(p) min(p) max(p)ber0.204 0.254 -0.8610.845de0.019 0.025 -0.1540.337eo0.059 0.074 -0.3620.345es0.004 0.005 -0.0350.038fr0.019 0.024 -0.1940.397he0.027 0.034 -0.3530.516it0.011 0.014 -0.0710.071ja0.032 0.042 -0.3600.623pt0.013 0.017 -0.1000.135ru0.018 0.023 -0.1500.219tr0.027 0.035 -0.3220.498uk0.020 0.026 -0.1910.281", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Aggregates over orthogonality conditions Eq.11 for A trained on UNPC.", "figure_data": "lang ⟨|p|⟩ σ(p) min(p) max(p)ar0.026 0.033 -0.1470.157es0.014 0.018 -0.1300.107fr0.144 0.195 -0.7690.795ru0.404 0.476 -0.9580.950zh0.039 0.050 -0.2540.495", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Nonuniformity of dilation of embeddings transformation (Eqs.12, 13). For the transformation trained on Tatoeba dataset.", "figure_data": "langᾱσ(α) ᾱr(α) min(α) max(α)ber0.637 0.336 1.8560.2581.440de0.814 0.039 0.2750.7530.977eo0.640 0.192 1.0560.3771.053es0.964 0.005 0.0500.9511.000fr0.845 0.034 0.2300.7910.986he0.814 0.060 0.3330.7270.998it0.889 0.021 0.1700.8410.992ja0.809 0.073 0.4190.7051.044pt0.877 0.022 0.1740.8380.990ru0.836 0.031 0.2240.7890.976tr0.835 0.054 0.3070.7511.007uk0.860 0.037 0.2390.7971.002", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Nonuniformity of dilation of embeddings transformation (Eqs.12, 13). For the transformation trained on UNPC.", "figure_data": "langᾱσ(α) ᾱr(α) min(α) max(α)ar0.761 0.088 0.4700.6300.988es0.840 0.043 0.2530.7670.980fr0.938 0.190 1.1260.7001.756ru1.338 0.444 2.9080.6614.551zh0.865 0.114 0.5590.6961.180", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Example of samples from the multilingual WikiNews datasetWikiNews sites may have news titles but no content, in which case the text variable is left empty. Samples with the same pageid in the dataset correspond to the same news event, which are linked together as the same WikiNews pages with other languages. The published date of an English sample is scraped and converted to DateTime format, but dates in foreign samples are left as is. Table6shows the example samples of the dataset.The number of samples for the languages used in Table 1: de: 1053; es: 1439; f r: 1311; it: 618; pt: 1023; ru: 436.", "figure_data": "index pageid lang titlecontent0232226 en\"Very serious\": Chinese govern-A report by the Chinese governmentment releases corruption reportstates corruption is \"very serious\". ...1232226 csČína připustila, že tamníZpráva čínské vlády připouští, že korupce vkorupce je vážný problémzemi je stále \"velmi vážná\", jelikož úřady ...2232226 esChina admite que la corrupción en el país es \"muy seria\"s29 de diciembre de 2010Beijing, China -Un reporte del gobierno de la República Popular China ...", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
Oleg Vasilyev; Fumika Isono; John Bohannon
[ { "authors": "Steven Cao; Nikita Kitaev; Dan Klein", "journal": "", "ref_id": "b0", "title": "Multilingual alignment of contextual word representations", "year": "2020" }, { "authors": "Naman Goyal; Cynthia Gao; Vishrav Chaudhary; Peng-Jen Chen; Guillaume Wenzek; Da Ju; Sanjana Krishnan; Marc'aurelio Ranzato; Francisco Guzmán; Angela Fan", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b1", "title": "The Flores-101 evaluation benchmark for low-resource and multilingual machine translation", "year": "2022" }, { "authors": "Francisco Guzmán; Peng-Jen Chen; Myle Ott; Juan Pino; Guillaume Lample; Philipp Koehn; Vishrav Chaudhary; Marc'aurelio Ranzato", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English", "year": "2019" }, { "authors": "Tomas Mikolov; Quoc V Le; Ilya Sutskever", "journal": "", "ref_id": "b3", "title": "Exploiting similarities among languages for machine translation", "year": "2013" }, { "authors": "Barun Patra; Joel Ruben; Antony Moniz; Sarthak Garg; Matthew R Gormley; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Bilingual lexicon induction with semi-supervision in non-isometric embedding spaces", "year": "2019" }, { "authors": "Xutan Peng; Mark Stevenson; Chenghua Lin; Chen Li", "journal": "", "ref_id": "b5", "title": "Understanding linearity of cross-lingual word embedding mappings", "year": "2020" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b6", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Making monolingual sentence embeddings multilingual using knowledge distillation", "year": "2020" }, { "authors": "Marta R Nllb Team; James Costa-Jussà; Onur Cross; Maha Çelebi; Kenneth Elbayad; Kevin Heafield; Elahe Heffernan; Janice Kalbassi; Daniel Lam; Jean Licht; Anna Maillard; Skyler Sun; Guillaume Wang; Al Wenzek; Bapi Youngblood; Loic Akula; Gabriel Mejia Barrault; Prangthip Gonzalez; John Hansanti; Semarley Hoffman; Jarrett; Ram Kaushik; Dirk Sadagopan; Shannon Rowe; Chau Spruit; Pierre Tran; Necip Andrews; Shruti Fazil Ayan; Sergey Bhosale; Angela Edunov; Cynthia Fan; Vedanuj Gao; Francisco Goswami; Philipp Guzmán; Alexandre Koehn; Christophe Mourachko; Safiyyah Ropers; Holger Saleem; Jeff Schwenk; Wang", "journal": "", "ref_id": "b8", "title": "No language left behind: Scaling humancentered machine translation", "year": "2022" }, { "authors": "Zhihui Xie; Handong Zhao; Tong Yu; Shuai Li", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Discovering low-rank subspaces for languageagnostic multilingual representations", "year": "2022" }, { "authors": "Ziyi Yang; Yinfei Yang; Daniel Cer; Eric Darve", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "A simple and effective method to eliminate the self language bias in multilingual representations", "year": "2021" }, { "authors": "Jiawei Zhao; Andrew Gilman", "journal": "European Language Resources Association", "ref_id": "b11", "title": "Non-linearity in mapping based cross-lingual word embeddings", "year": "2020" }, { "authors": "Michał Ziemski; Marcin Junczys-Dowmunt; Bruno Pouliquen", "journal": "European Language Resources Association 
(ELRA", "ref_id": "b12", "title": "The United Nations parallel corpus v1.0", "year": "2016" } ]
[ { "formula_coordinates": [ 1, 397.58, 613.12, 127.56, 12.3 ], "formula_id": "formula_0", "formula_text": "e ′ = T e (1)" }, { "formula_coordinates": [ 1, 395.86, 663.09, 129.29, 10.77 ], "formula_id": "formula_1", "formula_text": "T ij T ik = δ jk (2)" }, { "formula_coordinates": [ 1, 385.06, 761.08, 140.08, 12.3 ], "formula_id": "formula_2", "formula_text": "e ′ = αT e + b(3)" }, { "formula_coordinates": [ 2, 376.96, 442.2, 148.18, 28.79 ], "formula_id": "formula_3", "formula_text": "dD = d - d min (d, d)(5)" }, { "formula_coordinates": [ 2, 373.07, 501.18, 152.07, 33.71 ], "formula_id": "formula_4", "formula_text": "d = 1 N N i | ẽi -e ′ i |(6)" }, { "formula_coordinates": [ 2, 371.14, 564.02, 154, 33.71 ], "formula_id": "formula_5", "formula_text": "d = 1 N N i |e i -e ′ i |(7)" }, { "formula_coordinates": [ 2, 322.66, 638.15, 202.48, 33.71 ], "formula_id": "formula_6", "formula_text": "dC = 1 N N i cos( ẽi , e ′ i ) -cos(e i , e ′ i )(8)" }, { "formula_coordinates": [ 2, 321.19, 700.99, 203.95, 33.71 ], "formula_id": "formula_7", "formula_text": "f D = 1 N N i H(|e i -e ′ i | -| ẽi -e ′ i |)(9)" }, { "formula_coordinates": [ 3, 73.53, 593.03, 216.34, 33.71 ], "formula_id": "formula_8", "formula_text": "f C = 1 N N i H(cos( ẽi , e ′ i ) -cos(e i , e ′ i )) (10)" }, { "formula_coordinates": [ 3, 350.16, 292.48, 174.98, 25.74 ], "formula_id": "formula_9", "formula_text": "p jk = i A ij A ik |A j | • |A k | , j ̸ = k (11)" }, { "formula_coordinates": [ 3, 335.97, 334.63, 80.15, 14 ], "formula_id": "formula_10", "formula_text": "|A j | = i A 2 ij ." }, { "formula_coordinates": [ 4, 114.89, 471.98, 174.98, 29.46 ], "formula_id": "formula_11", "formula_text": "σ(α) ᾱ = 1 ᾱ 1 D i (α i -ᾱ) 2(12)" }, { "formula_coordinates": [ 4, 119.7, 539.28, 170.17, 24.43 ], "formula_id": "formula_12", "formula_text": "r(α) = max i α i -min i α i ᾱ(13)" } ]
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Assisting humans in real-world scenarios is a fundamental capability that AI agents should possess. An AI helper, communicating with humans in natural language based on visual observation in the environment and oracle information could significantly enhance work productivity and serve as an accessibility tool for individuals with disabilities. Figure 1 illustrates an example of a delivery person seeking assistance in navigating through an unfamiliar building. With the help of a navigationhelper agent, the delivery person can ask questions" }, { "figure_ref": [], "heading": "Helper", "publication_ref": [], "table_ref": [], "text": "Task performer History path\nTask:\nDeliver the package to the mail room." }, { "figure_ref": [], "heading": "Inquiry (help request):", "publication_ref": [ "b26", "b25", "b29" ], "table_ref": [], "text": "Which direction should I go?\nResponse:\nFollow the hallway on your right and through the opened door with a blue sign above.\nDestination A navigation helper provides responses to help a task performer who is delivering a package. The helper has access to oracle information that is not available to the task performer, such as the location of the destination and map of the environment.\nabout directions and receive responses that are tailored to visual information about the current surroundings.\nBuilding such a helper agent poses significant challenges. It requires understanding the visual environment and the task performer's inquiries and leveraging oracle information to provide effective responses. Evaluating these agents also requires a task performer to show how well the helper agent performs in the real collaborative real scenario, where the task performer follows instructions and further sends inquiries when needed to the helper with the goal of completing tasks. Using a human as an oracle task performer would be the most intuitive setting, but it is impractical due to the high cost and low efficiency.\nIn this work, we introduce the Respond to Help Requests (R2H) benchmark, designed to automatically evaluate conversational multi-modal navigation helpers in a cooperative dynamic with another agent as the task performer. The R2H benchmark arXiv:2305.14260v2 [cs.CL] 17 Oct 2023 incorporates pre-trained performer agents to follow the responses from the helper agent, and the helper agent's performance is then reflected in the performance of the fixed task performer. Leveraging three existing vision-and-dialog navigation datasets, CVDN (Thomason et al., 2020), AlFRED (Shridhar et al., 2020) and AVDN (Fan et al., 2022), our R2H benchmark introduces two novel tasks: the Respond to Dialog History task (RDH) and the Respond during Interaction task (RdI). In the RDH task, the helper agent generates a response to the inquiry in the dialog history from humans, aiming at facilitating the task completion of the performer agent. In the RdI task, the helper agent needs to generate multiple responses from the start of the navigation process with no prior dialog history till the task's success. 
To this end, R2H benchmark offers a pragmatic evaluation of the response from helper agents in both single-and multi-turn helperperformer cooperation.\nWe also present a multi-modal helper agent SeeRee for R2H benchmark, which leverages the oracle knowledge about the task and environment such as the destination location in a navigation task, to generate responses to inquiries from the task performer. SeeRee employs pre-trained vision and language models to handle multi-modal inputs. To manage long input sequences, SeeRee leverages a novel Conditional Optimized Sparse (COS) attention mask. Moreover, we introduce a Parse by Step, which leverages Large Language Model to transform ground-truth human responses into structured step-by-step navigation instructions. Those parsed instructions (instead of human responses) serve as a better training source with improved performance in helping the task performer. In experiments, SeeRee surpasses the baseline in generating effective responses and validates the COS attention mask and Parse by Step method. SeeRee's responses have been evaluated through human assessments, demonstrating high accuracy and a significant improvement in task success rate compared to the baseline.\nAdditionally, we ask human testers to rate the faithfulness and naturalness of the responses evaluate the response from helper agents with automatic scores. As a result, our experiments indicate that a higher language similarity to human helpers does not necessarily lead to a more successful conversational helper agent.\nThe main contributions are concluded as follows:\n• We present the Respond to Help Requests (R2H) benchmark as a test-bed for automatically evaluating the capabilities of multimodal conversational navigation-helper agent, that helps task performers complete tasks by providing natural language responses to inquiries based on environment information.\n• We build two task helper agents, a novel taskoriented multi-modal helper agent, SeeRee, utilizing the Conditional Optimized Sparse (COS) attention mask and noise-free step-bystep instructions (Parse by Step) and a multimodal LLM helper with mPLUG-Owl (Ye et al., 2023).\n• Our experiments on the R2H benchmark and human evaluations of two helper agents over baseline also indicate that a closer linguistic resemblance to human helpers does not automatically translate into a more effective conversational helper agent." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "The R2H Benchmark", "publication_ref": [ "b26", "b8", "b20", "b19", "b23" ], "table_ref": [], "text": "Significant differences exist between building a helper agent and developing task performer agents for dialog-based multi-modal navigation tasks.\nTask performers, such as ones in CVDN (Thomason et al., 2020), DialFRED (Gao et al., 2022), andAVDN (Fan et al., 2022), are challenged as command followers, evaluated based on their performance of following dialog histories from human annotations, as shown in Figure 2i. However, the helper agent works as a supportive role, responds to questions from the task performer to facilitate task success. Therefore, the evaluation of the helper agent's performance could not be solely dependent on the agent itself, but on how the task performer benefited from the response. Therefore building the helper agent requires evaluations in a collaborative setting with task performers.\nInvolving humans as task performers to evaluate the helper agent is ideal but expensive. Alternatively, inspired by Padmakumar et al. 
(2022), Nguyen and Daumé III (2019) and Roman et al. (2020) that build helper and performer agents to collaborate as shown in Figure 2ii, we introduce the Respond to Help Requests (R2H) benchmark, involving a task performer in the evaluation process as shown in Figure 2iii. In this way, the helper agent can be assessed comprehensively and realistically. The R2H benchmark tests the agent's ability to respond effectively in a wide range of scenarios through two novel tasks, the Respond to Dialog History (RDH) task and the Respond during Interaction (RdI) task, built upon three existing vision-and-dialog navigation datasets. The RDH task evaluates helper agents in a situation where partial human dialog history is provided, and the RdI task challenges the helper agents with real collaborative scenarios.
2.1 Respond to Dialog History Task
The Respond to Dialog History (RDH) task focuses on evaluating the accuracy and completeness of the responses from helper agents. The helper agent is challenged to understand the dialog history and respond so as to help the task performer, based on information about the task and environment in the form of image sequences. We developed environment-specific scripts to generate these image sequences, as introduced in Section 2.4. After the responses r̂_i are generated, they are concatenated with all available human dialog history h_{i-1} = {q_0, r_0, . . . , q_{i-1}, r_{i-1}} in the corresponding trajectory before the inquiries q_i from human task performers. As a result, the generated response from the helper agent forms a new dialog history ĥ = {h_{i-1}, q_i, r̂_i}, which becomes the input to the task performer." }, { "figure_ref": [], "heading": "Respond during Interaction Task", "publication_ref": [], "table_ref": [], "text": "The Respond during Interaction (RdI) task challenges the ability of helper agents to cooperate consistently with the task performer. Similar to the RDH task, the RdI task involves a pre-trained task performer agent predicting navigation actions based on dialog histories. However, unlike the RDH task, no dialog history is initially provided in the RdI task, and the helper agent needs to respond to navigation inquiries from the task performer continually during the navigation. The task performer agent initiates inquiries q̂_i for help when needed and navigates based on the ongoing dialog between itself and the helper agent, ĥ_i = {q̂_0, r̂_0, . . . , q̂_i, r̂_i}, where r̂_i is the real-time response from the helper agent to q̂_i. The dialog ĥ_i that results from this interaction serves as the primary source of guidance for the task performer agent, making the consistent quality of the helper agent's responses crucial for successful navigation. Additionally, since multi-turn helper-performer cooperation is involved in the RdI task, the helper's communication efficiency can be evaluated by the number of conversation turns required for a fixed performer's outcome."
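To make the RdI protocol concrete, the sketch below shows the evaluation loop implied by the description above: the performer asks, the helper answers using oracle information, the dialog grows turn by turn, and navigation continues until the performer stops or the 20-turn cap used later in the RdI experiments is reached. The `env`, `performer`, and `helper` objects and their methods are hypothetical placeholders for illustration, not the actual R2H implementation.

```python
# Schematic Respond-during-Interaction (RdI) evaluation loop.
# `env`, `performer`, and `helper` are hypothetical interfaces used only for illustration.

MAX_TURNS = 20  # maximum helper-performer conversation turns used in the RdI experiments


def run_rdi_episode(env, performer, helper, max_turns=MAX_TURNS):
    dialog = []  # ongoing dialog h_i = {q_0, r_0, ..., q_i, r_i}
    env.reset()
    for _ in range(max_turns):
        # Task performer asks for help based on its current observation and the dialog so far.
        inquiry = performer.ask(env.observation(), dialog)
        # Helper answers using oracle information (e.g., the shortest-path image sequence).
        response = helper.respond(inquiry, env.oracle_image_sequence(), dialog)
        dialog += [inquiry, response]
        # Performer navigates with the updated dialog until it asks again or stops.
        stopped = performer.navigate(env, dialog)
        if stopped:
            break
    # Goal Progress: trajectory length minus remaining distance to the goal viewpoint.
    return env.goal_progress(), len(dialog) // 2  # metric value and turns used
```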
}, { "figure_ref": [], "heading": "Task Performer Agent", "publication_ref": [], "table_ref": [], "text": "Our R2H benchmark requires task performer agents to form helper-performer cooperation. In the RDH task, the task performer agent predicts navigation actions based on dialog history in specific environments. As for the RdI task, the task performer also needs to generate navigation inquiries to accomplish the navigation task better. R2H benchmark adopts state-of-the-art opensourced task performer agents for vision-andlanguage navigation datasets. The task performer agent is pre-trained on the original training set with human dialogs h i including the response from the human helper r i and predicts actions based on ĥ for completing the task. Therefore, the progress and success made by the task performer can be seen as a test of the accuracy and completeness of the responses generated by the helper agent." }, { "figure_ref": [], "heading": "Adapting Existing Datasets", "publication_ref": [ "b26", "b2", "b8", "b13" ], "table_ref": [], "text": "R2H benchmark establishes the same tasks across different datasets.\nDatasets R2H benchmark is built upon existing vision-and-dialog navigation datasets with dialogs between task performers and helpers.\n• CVDN (Thomason et al., 2020) is situated in the Matterport3D simulator (Chang et al., 2017) with photo-realistic scenes. The dataset records the collaboration between the human task performer and the human helper to complete navigation tasks of finding target objects in different indoor environments.\n• DialFRED (Gao et al., 2022) is built on Ai2thor (Kolve et al., 2017) simulator with synthetic views. Similar to the CVDN, the helper and task performer collaborate to navigate to targets. However, each trajectory only corresponds to one pair of inquiry and response, making it unsuitable for RdI tasks.\n• AVDN (Fan et al., 2022) is an aerial visionand-language dataset that includes dialogs, navigation trajectories, and visual observation between the helper and task performer. The dataset is annotated upon a continuous state photo-realistic drone simulator where the goal of the task performer is to control the drone in the simulator with a top-down view to navigate to a certain destination.\nEnvironment-specific Adaption Given the variability of available information across different datasets and environments, the R2H benchmark is designed with a harmonizing approach, converting the environment-specific information into image sequences. Script-based samplers are designed for each environment to generate the image sequences by leveraging oracle information. The sampler outputs image sequences showing the task performer's views on the shortest path to the destination. Especially for the CVDN dataset, following the data collection process, a connectivity graph for viewpoints is used to generate the shortest path and therefore the image sequence length is variable but limited to views within 5 viewpoints of the current position. Examples are shown in the appendix. For the AVDN dataset, since the allocentric direction description, such as \"turn south\" could be involved, we keep the image sequence all oriented with the north at the top and indicate the drone direction with a red arrow. As a result, helper agents can be evaluated in different datasets with the same input format, which enhanced the benchmark's versatility to adapt further to new datasets." 
}, { "figure_ref": [], "heading": "Metrics", "publication_ref": [ "b0" ], "table_ref": [], "text": "Since we aim to evaluate how capable the response generated by helper agents is in helping the task performer, we adopt the primary metrics for task completion from each dataset: Goal Progress (GP) in CVDN evaluates the distance of the progress made towards the destination, where it is computed as the trajectory length, deducted by the remaining trajectory from the current location to the destination viewpoint; Success Rate (SR) in DialFRED, shows the ratio of tasks being completed successfully; Success weighted by inverse Path Length (SPL) (Anderson et al., 2018) as the dominant metric in AVDN measures the Success Rate weighted by the total length of the navigation trajectory.\n3 Models" }, { "figure_ref": [ "fig_2" ], "heading": "SeeRee", "publication_ref": [ "b16", "b17", "b5", "b5" ], "table_ref": [], "text": "In this section, we introduce the helper agent that can see and respond, named SeeRee. SeeRee generates responses to task-oriented navigation inquiries from the task performer. As is illustrated in Figure 3, our helper agent SeeRee generates natural language responses to the task performer's inquiries based on the task and environment information that is not aware by the task performer. The image sequences are padded to a fixed length (Lin et al., 2022), encoded by Video Swin Transformer (Liu et al., 2022) and then concatenated with BERT text embeddings (Devlin et al., 2019). Finally, the embeddings are fed into a multi-modal transformer which generates the natural language response in an auto-regressive way. SeeRee is trained end-toend with Mask Language Modeling (Devlin et al., 2019) and please refer to the appendix for the training detail.\n1 1 1 1 0 1 1 1 … … … 1 1 1 1 … … … 0 1 1 1 1 … … … 1 1 1 1 … … … 0 1 1 1 1 … … … 1 1 1 1 … … … M M Conditional Optimized" }, { "figure_ref": [], "heading": "Multi-modal Transformer", "publication_ref": [ "b11", "b16" ], "table_ref": [], "text": "Following prior multi-modal language generation studies (Hu et al., 2020;Lin et al., 2022), our multimodal transformer takes as input the embedding containing both text and image information and generates natural language responses in a unidirectional sequence-to-sequence generation process.\nWe treat the input inquiries as prompts for generating the response, and a special token [CLS] is added to the end of the inquiry. At inference time, the text is generated in an auto-regressive manner, where we insert multiple [MSK] tokens after the [CLS] token and predict tokens to replace [MSK]tokens one by one unidirectionally until the prediction is [EOS] or all the [MSK] tokens are predicted." }, { "figure_ref": [ "fig_2" ], "heading": "Conditional Optimized Sparse (COS) Attention Mask", "publication_ref": [ "b5" ], "table_ref": [], "text": "One challenge in generating responses for dialogbased embodied tasks is effectively modeling the long input image sequence, reducing the redundancy in the repetitive images but keeping the critical details. To this end, we introduce a Conditional Optimized Sparse (COS) attention mask for the multi-modal transformer, as shown in Figure 3. The mask can be divided into three row sections, corresponding to in what range the embeddings for input inquiry, the response generated, input image sequence can attend to, respectively. The first row section shows that the language embedding (LE) for input inquiry can attend to itself and the input visual embedding (VE). 
The second row section means the LE for the generated response can attend to the VE and all the previous LE, allowing unidirectional text generation (Devlin et al., 2019). The third row section indicates that the VE can attend to partial itself and the LE of input inquiry. Especially, instead of being fulling binary and pre-defined, a learnable conditional mask C that is non-binary and sparse is adopted in the third row section of the COS attention mask, controlling the self-attention of the VE. C is conditioned on VE, and the mapping is modeled by:\nC = σ(f (VE)),(1)\nwhere σ(x) = 1 1+e -x and f is a multi-layer perceptron. In this way, our COS attention mask uses a conditional optimizing strategy that optimizes the attention mask based on the image sequence. As a result, COS attention mask enables better encoding and understanding of long visual input and improves the response generation result." }, { "figure_ref": [], "heading": "Response Preprocessing with Parse by", "publication_ref": [ "b1", "b12", "b1" ], "table_ref": [], "text": "Step\nTraining a helper agent to imitate the responses in the human dialog directly may not be optimal as they may be unorganized and include utterances irrelevant to the task. Structured step-by-step instructions are easier for the helper agent to learn. Therefore, inspired by the idea of in-context learning (Brown et al., 2020;Kojima et al.), we propose a Parse by Step method that prompts GPT-3 (Brown et al., 2020) ter Prase by Step, the preprocessed training data is parsed in a step-by-step manner with a streamlined language pattern. As a result, the learning objectives of SeeRee is the preprocessed human response Y = P (R), where P is the Parse by Step method, and R is the original human response in the dialog of the training set." }, { "figure_ref": [], "heading": "Multi-modal Large Language Model", "publication_ref": [ "b29" ], "table_ref": [], "text": "Besides our SeeRee model, we introduce another navigation-helper agent constructed from a multimodal large language model (LLM). Specifically, we employ mPLUG-Owl (Ye et al., 2023), a State-Of-The-Art multi-modal LLM that is able to take as input an image sequence and text for language generation. mPLUG-Owl is originally trained with a large amount of uni-modal and multi-modal data in two stage paradigm including instruction tuning.\nTo leverage the extensive knowledge that mPLUG-Owl accumulated through training and avoid the potential issue associated with limited task-specific training data, we adopt mPLUG-Owl in a zeroshot manner. The build of a helper agent based on mPLUG-Owl serves as a valuable comparison for SeeRee and sheds light on the potential of leveraging LLMs in building navigation-helper agents." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Setup", "publication_ref": [ "b16", "b23" ], "table_ref": [], "text": "We initialize the encoders and multi-modal transformers in SeeRee with weights from SwinBert (Lin et al., 2022) to benefit from its image sequence understanding ability and then finetune SeeRee on the training set of each dataset separately. We adopt the RMM Guide model (G) (Roman et al., 2020) as the baseline model in our experiments. Please refer to the appendix for more implementation details." 
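Returning to the COS attention mask described in the model section above, the sketch below builds the block-structured mask it specifies: inquiry embeddings attend to themselves and the visual embeddings, response embeddings attend causally to themselves and fully to the inquiry and visual embeddings (enabling unidirectional generation), and visual embeddings attend to the inquiry plus a learnable, non-binary conditional mask C = σ(f(VE)) over themselves. The text does not fully specify how f consumes VE, so the pairwise MLP here is only one plausible reading; the dimensions are likewise illustrative rather than the exact SeeRee configuration.

```python
import torch
import torch.nn as nn


class COSAttentionMask(nn.Module):
    """Sketch of the Conditional Optimized Sparse (COS) attention mask.
    Entries are attention weights in [0, 1]; 1 = full attention, 0 = blocked."""

    def __init__(self, visual_dim, hidden_dim=256):
        super().__init__()
        # f(VE): maps each pair of visual tokens to a soft mask entry (an assumed reading).
        self.mlp = nn.Sequential(
            nn.Linear(2 * visual_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1)
        )

    def forward(self, n_inq, n_res, visual_emb):
        """n_inq / n_res: number of inquiry / response tokens; visual_emb: (M, D)."""
        M = visual_emb.size(0)
        N = n_inq + n_res + M
        mask = torch.zeros(N, N)

        inq = slice(0, n_inq)
        res = slice(n_inq, n_inq + n_res)
        vis = slice(n_inq + n_res, N)

        mask[inq, inq] = 1                                       # inquiry attends to itself
        mask[inq, vis] = 1                                       # ... and to the visual embeddings
        mask[res, inq] = 1                                       # response attends to the inquiry,
        mask[res, vis] = 1                                       # to the visual embeddings,
        mask[res, res] = torch.tril(torch.ones(n_res, n_res))    # and causally to itself
        mask[vis, inq] = 1                                       # visual tokens attend to the inquiry

        # Learnable conditional mask C = sigma(f(VE)) over visual self-attention (Eq. 1).
        pairs = torch.cat(
            [visual_emb.unsqueeze(1).expand(M, M, -1),
             visual_emb.unsqueeze(0).expand(M, M, -1)], dim=-1)
        C = torch.sigmoid(self.mlp(pairs)).squeeze(-1)           # (M, M), values in (0, 1)
        mask[vis, vis] = C
        return mask, C  # C is also what the sparsity regularizer in the appendix penalizes
```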
}, { "figure_ref": [], "heading": "RDH Task", "publication_ref": [ "b26", "b3", "b6", "b8" ], "table_ref": [ "tab_2" ], "text": "Task Performer Agents For RDH task on CVDN dataset (Thomason et al., 2020), we use HAMT1 (Chen et al., 2021), pre-trained on the original Navigation from Dialog History (NDH) task in CVDN as the task performer agent. For AVDN (Fan et al., 2022) and DialFRED (Gao et al., 2022) datasets, we leverage the task performer in the original work, i.e., HAA-transformer and DialFRED model2 . We further detail the task performers in the appendix.\nResult As indicated in Table 1, the task performer agent attains the best overall performance across all three datasets when guided by responses generated by the SeeRee helper agent. On both CVDN and DiaFRED datasets, SeeRee demonstrates performance levels that are strikingly similar to those of human helpers. This is likely attributable to the fact that the environments used in the CVDN are visually akin to the data employed during the pre-training phase of SeeRee's encoders. Such similarity facilitates efficient fine-tuning and leads to heightened performance. In addition, for the DialFRED dataset, due to the predominance of template-based human utterances, the challenge in response generation is simplified. Meanwhile, the AVDN dataset is more challenging due to the drone simulator's adoption of a continuous control space, and complex visual information leads to difficulty in reaching the precise destination. Still, SeeRee outperforms the baseline by a large margin. We present results from more LLM baselines and case studies of the RDH task in Appendix E and C. navigation inquiries. Future works for building such language-and navigation-capable task performers in AVDN and DialFRED environments are needed. Furthermore, given the flexibility of our system, should a superior model be developed, the task performer could be effortlessly swapped." }, { "figure_ref": [ "fig_3" ], "heading": "RdI", "publication_ref": [ "b26", "b26" ], "table_ref": [], "text": "Result The RdI task is conducted with a maximum 20 turns of conversation between the task performer and helper on the unseen validation set of the CVDN dataset. As depicted in Figure 4, we plot the mean GP of the task performer agent concerning the dialog turns happened during the navigation. As the dialog turn between helper and performer increases chronologically, more information is provided to the task performer. The navigation stops once the task performer reaches the maximum steps, 20, or a stop action is generated where the goal progress remains constant thereafter.\nBased on the result, the multi-modal LLM as the helper agent facilitates the most effective performance from the task performer at the end of the maximum conversation turns. However, it's worth noting that the response from SeeRee is shown to be less noisy on its effect and therefore is potentially more reliable over extended dialogue turns whereas multi-modal LLM tends to generate responses with a more varied effectiveness. Moreover, with less than half the maximum dialogue turns, SeeRee's responses yield a superior result that is 90% close to the navigation result at the end of the maximum turn. This underscores SeeRee's greater communication efficiency. (Thomason et al., 2020). The mean Goal Progress (GP) of the same task performer collaborating with different helper agents is plotted with respect to the number of conversation turns happened. Error bars show the relative range of the GP. 
Multimodal LLM enables a better but noisier performance of the task performer than SeeRee. (Thomason et al., 2020). The human tester plays the task performer role via an interface interacting with the helper agents. Task completion and subjective response evaluation are collected. The response from SeeRee is most effective despite being less natural." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b26", "b21", "b15" ], "table_ref": [], "text": "Based on CVDN dataset (Thomason et al., 2020) Language Generation Similarity Result As shown in Table 2, we also evaluate the ablated models with language generation metrics, BLUE2 (Papineni et al., 2002) and ROUGE-L (Lin and Och, 2004). BLUE2 score and ROUGH-L score drops when Parse by Stop method is applied, but GP in the RDH task receives a major increase. This indicates that a high language similarity to human responses does not necessarily equate to a more effective conversational helper agent." }, { "figure_ref": [], "heading": "Human Evaluation", "publication_ref": [ "b26" ], "table_ref": [ "tab_5" ], "text": "To further evaluate the performance of helper agents, we conduct human evaluations based on the RdI task. Human participants act as task performers navigating 60 randomly selected trajectories in validation sets of CVDN (Thomason et al., 2020). During the simulation, participants can control their movement in the environment using their keyboards and ask questions to the helper agent whenever needed. We evaluate both settings where the helper agent exists or not. Human participants are also asked to rate the naturalness and faithfulness of the response from the helper. The average GP and subjective rating of the responses are shown in Table 3. The result shows that SeeRee provides the best results in terms of task completion.\nThrough subjective evaluation, we find that SeeRee achieves significantly higher scores in terms of response faithfulness. Despite being trained with data preprocessed by Parse by Step, where the training supervision is no longer the original human utterances, it still achieves a close score of naturalness compared to the baseline model trained on original human responses. Through these evaluations, we show the ability of SeeRee to help human users complete embodied tasks." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b26", "b8", "b24", "b9", "b23", "b10", "b20", "b26", "b8", "b25", "b23", "b10", "b20" ], "table_ref": [], "text": "Dialog-based Multi-modal Embodied Benchmarks Previous dialog-based multi-modal embodied benchmarks usually focus on evaluating either task performers (Thomason et al., 2020;Gao et al., 2022;Shi et al., 2022;Gu et al., 2022) or the corresponding pairs of task performer and helper (Roman et al., 2020;Hahn et al., 2020;Padmakumar et al., 2022). For instance, the CVDN (Thomason et al., 2020) evaluates a task performer to navigate to a desired room by dialog histories. Gao et al. (2022) developed a dialogue-enabled embodied instruction following benchmark, Dial-FRED, based on the ALFRED benchmark (Shridhar et al., 2020) and presented a task performer framework. Further, there is a wide range of activities studied in these tasks, such as navigating to a specific location (Roman et al., 2020), locating positions (Hahn et al., 2020), and interacting objects (Padmakumar et al., 2022). Compared to these benchmarks, our R2H benchmark aims for better helper agents and is the only benchmark for sole helper evaluation." 
}, { "figure_ref": [], "heading": "Multimodal-based Language Generation", "publication_ref": [ "b4", "b28", "b7", "b16" ], "table_ref": [], "text": "Building helper agents is in line with a growing collection of methods applied to the visual question-answering task. A unified vision-andlanguage framework can be trained to handle a variety of tasks, including the question-answering problem, using a single objective (Cho et al., 2021;Wang et al., 2022). Fu et al. (2021) tackled the video question-answering problem by adopting a video transformer to model the temporal dynamics of video inputs explicitly. One problem shared by these works is the input frames to the model are limited in quantity, whereas the helper agent has to take a long image sequence as input to include adequate information. Lin et al. (2022) developed a learnable sparse attention mask for video caption that enables long frame sequence as input. However, the learned sparse attention mask is fixed after training, lacking generalization ability compared to the COS attention mask of SeeRee." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce the Respond to Help Requests (R2H) benchmark with two tasks, Respond to Dialog History (RDH) and Respond during Interaction (RdI), assessing a helper agent's guidance capabilities. With the R2H benchmark, we build and evaluate two navigation-helper agents, SeeRee and Multi-modal LLM model. The results show that they both outperformed the baseline model.\nThrough further ablation study and human evaluation, we prove the effectiveness of SeeRee in assisting humans with tasks and argue that the effectiveness of responses cannot be determined by their linguistic resemblance to human dialogue alone." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "R2H benchmark presents a platform for helper agents that create natural language to address queries from task performers. Yet, the assessment of such an helper agent mandates a capable task performer agent that can not only navigate but also communicate, given that the efficacy of the helper can be gauged through final task fulfillment, and the task performer shouldn't act as a constraint. Also, the complexity of the real world surpasses that of a simulated environment, thereby imposing additional prerequisites for the task performer. Furthermore, the lack of abundant dialog-based embodied datasets also restricts the progression of both performer and helper agents." }, { "figure_ref": [], "heading": "A Implementation Details", "publication_ref": [ "b5", "b16", "b26", "b8", "b18" ], "table_ref": [], "text": "A.1 Helper Model SeeRee For the training of SeeRee, we apply Mask Language Modeling (MLM) (Devlin et al., 2019) to the response where 80% tokens are masked with [MSK] tokens, and 10% tokens are changed randomly. Cross entropy loss is applied to predictions for the masked tokens:\nL M LM = Σ i L CrossEntropy (y i , ŷi ),(2)\nwhere y i is the masked token at position i and ŷi is the prediction. 
Additionally, in order to let the COS attention c mask attend to the specific details of the Visual Embedding (VE) that are most relevant to the task, we enforce C to be sparse, letting the VE to sparsely attend to itself using a sparsity loss (Lin et al., 2022):\nL SP ARSE = λ × M i=1 M j=1 |C i,j | ,(3)\nwhere λ is a regularization hyperparameter and C i,j is the value of the learnable conditional mask C.\nSeeRee is trained on CVDN (Thomason et al., 2020), DialFRED (Gao et al., 2022) and AVDN (Fan et al., 2022) datasets individually using AdamW optimizer (Loshchilov and Hutter, 2018) for 20k iterations with a batch size of 6 and learning rate of 1e -4 . The training data used are converted from the original training set from each dataset, with the environment-specific scripted sampler that generates image sequences with oracle environment information. We select the trained weights based on the RDH task evaluation result. Training takes about 12 hours on one NVIDIA A6000 GPU." }, { "figure_ref": [], "heading": "Multi-modal Large Language Model", "publication_ref": [ "b29" ], "table_ref": [], "text": "We utilized mPLUG-Owl (Ye et al., 2023) as a representative method for Large Language Model in this paper. mPLUG-Owl takes images and text as input and output natural language. It achieved state-ofthe-art performance in various multi-modality tasks. Providing the further path in the form of images and a proper prompt, mPLUG-Owl directly outputs the guidance for the performer agent. Table 4 shows prompt templates. The QUESTION is relevant to the current task, and it is not strictly formed. For example, it could be \"Should I continue forward?\" in CVDN, \"What does the object look like?\" for DialFRED, \"I am on top of a building block, Can I see the destination?\" for AVDN.\nRMM G RMM G is an LSTM-based model that generates natural language response based on the input image sequence and dialog history. We train RMM G from scratch with the same data used for fine-tuning SeeRee, using a batch size of 8 and a learning rate of 1e -4 ." }, { "figure_ref": [], "heading": "A.2 Task Performer Agents", "publication_ref": [ "b3", "b26", "b23", "b8", "b22", "b26" ], "table_ref": [], "text": "We aim to leverage the best available task performers in R2H benchmark.\nFor CVDN dataset, we select History Aware Multimodal Transformer (HAMT) (Chen et al., 2021) as the task performer agent in RDH task. HAMT model is designed for Vision-and-Language Navigation (VLN) tasks. The HAMT model incorporates a long-horizon history into multi-modal decision making. It efficiently encodes all past panoramic observations via a hierarchical vision transformer and then combines text, history, and current observation to predict the next action. The model is first trained end-to-end using several proxy tasks, including single-step action prediction and spatial relation prediction, and then reinforcement learning is used to improve the navigation policy further. As the time we conduct the experiment, HAMT has achieved state-of-the-art results on a broad range of VLN tasks, including CVDN (Thomason et al., 2020). However, HAMT model is only capable for navigation action prediction and cannot generate questions for asking help. Therefore, we adopt RMM Navigator model (N ) with RMM Questioner model (Q) (Roman et al., 2020) in RdI task, which is the only task performer model to the best of our knowledge that is designed to navigate while communicating in natural language. 
Both RMM N and Q are lstm based models and the RMM Questioner model is designed to generate questions at a fixed interval. It takes into account the Navigator's perspective in the environment and the dialog history.\nFor AVDN and DialFRED datasets (Fan et al., 2022;Gao et al., 2022), we use HAA-Transformer and DialFRED model proposed along with the dataset as the task performer agent because they are still leading the leaderboard3 4 . DialDRED model and HAA-Transformer model are both based on Episodic Transformer model (Pashevich et al., 2021). The DialFRED model trained on DialFRED (Thomason et al., 2020) andAVDN dataset (Fan et al., 2022). We fill the original response to the blank of the prompt and input to GPT-3 for sentence completion. The output from the GPT-3 becomes the processed response with mainly task-related instructions kept and organized in steps. Through Parse by Step, we preprocess the response in the training set.\ndataset focuses on three types of questions: location clarification, appearance clarification, and direction clarification. The HAA-Transformer model trained on the AVDN dataset can predict both navigation waypoints and human attention." }, { "figure_ref": [], "heading": "A.3 Prompts and Examples for Parse by Step", "publication_ref": [ "b26" ], "table_ref": [ "tab_7" ], "text": "We design different prompts for applying Parse by\nStep on CVDN dataset (Thomason et al., 2020) and AVDN dataset (Fan et al., 2022). Table 5 shows the designed prompt and some example preprocessed results. To process the training set responses, we first insert the original response into the blank of the prompt, which is then inputted into GPT-3 for sentence completion. The output from GPT-3, primarily retaining task-related instructions arranged in steps, is subsequently treated as the processed response. Furthermore, we also eliminate the item numbers to streamline the instructions further. This approach aids in preserving the essential task-related information while enhancing the response's readability and conciseness." }, { "figure_ref": [], "heading": "B Additional Benchmark Analysis", "publication_ref": [], "table_ref": [], "text": "We conduct additional analyses on our R2H benchmark across the three datasets as shown in Ta-ble 6. Especially, the datasets we have adopted cover a wide range of environment types, including both indoor (CVDN and DialFRED) and outdoor (AVDN) settings, as well as synthetic (Dial-FRED) and photo-realistic environments (CVDN and AVDN). This diversity adds depth and robustness to our benchmark." }, { "figure_ref": [ "fig_9" ], "heading": "C RDH Task Examples", "publication_ref": [ "b26", "b8" ], "table_ref": [], "text": "In this section, we show examples of our RDH task on all three datasets, CVDN dataset (Thomason et al., 2020), DialFRED dataset (Gao et al., 2022) and AVDN dataset (Fan et al., 2022). As shown in Figure 8, 7 and 6 each example includes the input inquiry from human, the sampled input image sequence from the environment oracle information, the response generated by SeeRee and the human ground truth response. Even though all methods have a negative impact on the goal progress, SeeRee minimized the influence while achieving the highest Success Rate." 
}, { "figure_ref": [], "heading": "D Results of RDH Task on More Metrics", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "E Additional LLM Baselines", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "Comprehensive baselines are essential for validating our approach, particularly in light of the growing focus on Large Language Models (LLMs). Given the complexity of our tasks, the multimodal LLM (mPLUG-Owl) serves as an adequate LLM baseline as it inherently supports our task requirements by natively accommodating both image sequences and text for language generation. Additionally, we have also explored two additional baselines that leverage the power of LLMs as detailed below and the result is shown in Table 7." }, { "figure_ref": [], "heading": "Multimodal-LLM + ChatGPT", "publication_ref": [], "table_ref": [], "text": "We experiment with prompting the Multimodal-LLM (mPLUG-Owl) to generate only captions for the image sequences. Subsequently, we use ChatGPT to serve as the helper agent, generating responses based on the captions and queries from the task performer." }, { "figure_ref": [], "heading": "BLIP2 Model with stacked images input", "publication_ref": [ "b14" ], "table_ref": [], "text": "We employ the BLIP2 model (Li et al., 2023) in a zeroshot manner. BLIP2 model benefits from a generic and efficient pre-training strategy that combines pretrained vision models with LLMs for vision-language pretraining. Due to the model's limitation of accepting only a single image input, we stack the input image sequences into a single image arranged in four columns. The text prompt used is the same as the prompt for Multimodal-LLM (mPLUG-Owl) as shown in Table 4." }, { "figure_ref": [ "fig_4" ], "heading": "F Human Evaluation Details", "publication_ref": [ "b26" ], "table_ref": [], "text": "In our human evaluation study, we recruited five students from the university and paid with at least $17/h as the human task performer. They consented to participate and contribute their data used for this work. These participants were given instructions on how to use the simulator before the evaluation began and had the same prior knowledge about the project. The participants are randomly assigned a navigation-helper agent and CVDN data including the initial instruction, starting, and target location. An interface, as shown in Figure 5, is built where an image box will show the image returned from the simulator and a text box allows both navigation control commands and sending natural language inquires for helper agents. It enables an easy interaction among the human task performer, helper agent, and simulator so that the participants can experience a real-time collaboration between helper and performer with only a screen and keyboard. Upon completion of the task, we assess task success and goal progress in the same way as in Thomason et al. (2020) and we ask the participant to rate the naturalness and faithfulness of the response from the helper in the last task session. !\"#$%&'\"($')*+&&,-.//&' 01\"%'\"$2&.33$4'\"5&%-'3&'3&\"1%&%-2&6.%-)1147 " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "https://sites.google.com/view/ response2helprequests/home." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [ "b26", "b8", "b6" ], "table_ref": [], "text": "The human evaluation part of this project is classified as exempt by Human Subject Committee vis IRB protocols. 
Additionally, we recognize the potential ethical issues related to training language generation models using human dialog data, such as the possibility of the model learning harmful language without understanding its context. To mitigate this concern, our proposed data preprocessing method, Parse by Step, converts the responses in the training data into more structured and taskspecific instructions, effectively reducing the presence of noise and profanity in the training data. As a result, the likelihood of our model generating inappropriate responses is greatly minimized. (Thomason et al., 2020;Gao et al., 2022;Fan et al., 2022). We replace the response in validation sets with responses from different helper agents. Since the task performer agent is trained with human dialog and is fixed, it relies on the information contained in the response to complete the task. Thus, the better the task performer agent performs in the task, the more effective the response is." } ]
Intelligent navigation-helper agents are critical as they can navigate users in unknown areas through environmental awareness and conversational ability, serving as potential accessibility tools for individuals with disabilities. In this work, we first introduce a novel benchmark, Respond to Help Requests (R2H), to promote the development of multi-modal navigation helpers capable of responding to requests for help, utilizing existing dialog-based embodied datasets. R2H mainly includes two tasks: (1) Respond to Dialog History (RDH), which assesses the helper agent's ability to generate informative responses based on a given dialog history, and (2) Respond during Interaction (RdI), which evaluates the effectiveness and efficiency of the response during consistent cooperation with a task performer. Furthermore, we explore two approaches to construct the navigation-helper agent, including fine-tuning a novel task-oriented multimodal response generation model that can see and respond, named SeeRee, and employing a multi-modal large language model in a zero-shot manner. Analysis of the task and method was conducted based on both automatic benchmarking and human evaluations. Project website:
R2H: Building Multimodal Navigation Helpers that Respond to Help Requests
[ { "figure_caption": "Figure 1 :1Figure 1: Example of a helper agent.A navigation helper provides responses to help a task performer who is delivering a package. The helper has access to oracle information that is not available to the task performer, such as the location of the destination and map of the environment.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: Comparison between different dialog-based embodied benchmark types. The benchmark either (i) evaluates task performers, where performers follow instructions in human annotations, (ii) evaluates helperperformer pairs, where the helper and performer agents need to learn and be evaluated together jointly, or (iii) evaluates helper agents only (our R2H benchmark), where helpers need to provide appropriate instructions according to performer and environment information.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Overview of SeeRee. The visual and text inputs are encoded and fed into the multi-modal transformer, where Conditional Optimized Sparse (COS) attention mask is applied. The COS attention mask has fixed binary values except for the learnable conditional mask C for visual embedding (VE) that is conditioned on VE itself. Yellow, blue, and green colored rows correspond to the attention masks for LE of input inquiry and generated response and VE, respectively.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Results of Respond during Interaction (RdI) task on CVDN dataset(Thomason et al., 2020). The mean Goal Progress (GP) of the same task performer collaborating with different helper agents is plotted with respect to the number of conversation turns happened. Error bars show the relative range of the GP. Multimodal LLM enables a better but noisier performance of the task performer than SeeRee.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Examples of the interface used during human evaluation based on RdI task with CVDN (Thomason et al., 2020) dataset.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 1: Caption", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: An example of data in RDH task on the AVDN dataset.Human inquiry: I see some warehouses in my view. Am I near the destination? How to go to destinaton? Ground truth human response: You are not close to your destination yet, head north and you will reach the complex gray warehouse office.", "figure_data": "", "figure_id": "fig_7", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: An example of data in RDH task on the AVDN dataset.Human inquiry: I see some warehouses in my view. Am I near the destination? How to go to destination? 
Ground truth human response: You are not close to your destination yet, head north and you will reach the complex gray warehouse office.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 1: Caption", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Results of Respond to Dialog History (RDH) task onCVDN, DialFRED, and AVDN dataset (Thomason et al., 2020;Gao et al., 2022;Fan et al., 2022). We replace the original response in validation sets with responses from different helper agents. With a fixed task performer agent, a better performance of the performer agent represents a more effective response from the helper agent.", "figure_data": "CVDNDialFREDAVDNHelperSeen GP ↑Unseen GP ↑Seen SR ↑Unseen SR ↑Seen SPL ↑Unseen SPL ↑Human Annotator 6.95.149.133.414.716.5RMM G4.72.846.532.12.43.6Multimodal LLM5.33.647.033.80.71.5SeeRee6.54.949.133.14.64.4", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results of human evaluation on RdI task on the CVDN", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": ", we conduct two ablation studies to explore the Parse by Step method and COS attention mask for SeeRee. We first let the task performer infers on the human dialog from the original dataset, but the response within is processed by our Parse by Step method. Then we create ablated SeeRee model by repeating the same training on SeeRee model twice, but incrementally replacing a naive fixed attention mask with the COS attention mask and applying our Parse by Step method.Response Effectiveness ResultThe first ablation study that evaluate of Parse by Step method on the original validation set serves as a sanity check. As the result in Table2, it shows that human responses processed with Pase by Step keep the information contained in the original response and they are overall equally capable as the original response. Addi-tionally in the second ablation study, the GP of the same task performer cooperated with different ablated SeeRee shows that both COS attention mask and Parse by Step leads to major improvements to the effectiveness of the response in facilitating task success.", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Examples of our Parse byStep method on the CVDN dataset", "figure_data": "", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "", "figure_data": "shows the performance of RDH task on themain metric. Here Table 8 exhibits the results onmore metrics for a thorough performance under-standing.Goal Progress (GP) evaluates the distance of theprogress made towards the destination. Success", "figure_id": "tab_8", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Statistic analysis on R2H benchmark across the three datasets. The columns from left to right are the number of navigation trajectories; the number of queries per trajectory, which is the same as the number of responses per trajectory; the average length of responses and queries provided by human annotators and the average number of images in the image sequences sampled from the environment by script-based samplers, which serve as input to the helper agent.", "figure_data": "Rate (SR) shows the ratio of tasks being completedsuccessfully. 
Success weighted by inverse PathLength (SPL) and Path Weighted Success Rate(PWSR) measure the Success Rate weighted bythe total length of the navigation trajectory. Herewe could draw similar conclusions with", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Note that AVDN dataset(Fan et al., 2022) is more challenging due to the drone simulator's adoption of a continuous control space and complex visual information. The mistakes made by the helper are likely to be amplified if the task performer misses the target location and overshoots, or if the initial direction provided by helper agent is inaccurate.", "figure_data": "", "figure_id": "tab_10", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Evaluation of multimodal LLM baselines on the RdH task. Additional LLM baselines (bottom two rows) are compared with regard to the multimodal LLM baseline using mPLUG-Owl.", "figure_data": "CVDNCVDNDialFREDDialFREDAVDNAVDNvalidation seenvalidation unseenvalidation seenvalidation unseenvalidation seenvalidation unseenGPGPSRSRSPLSPLMultimodal LLM baseline (mPLUG-Owl)5.33.647.033.80.71.5mPLUG-Owl + ChatGPT5.33.543.531.81.21.6BLIP25.63.438.624.73.85.0", "figure_id": "tab_11", "figure_label": "7", "figure_type": "table" } ]
Yue Fan; Jing Gu; Kaizhi Zheng; Xin Eric Wang
[ { "authors": "Peter Anderson; Qi Wu; Damien Teney; Jake Bruce; Mark Johnson; Niko Sünderhauf; Ian Reid; Stephen Gould; Anton Van Den; Hengel", "journal": "", "ref_id": "b0", "title": "Visionand-language navigation: Interpreting visuallygrounded navigation instructions in real environments", "year": "2018" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Angel Chang; Angela Dai; Thomas Funkhouser; Maciej Halber; Matthias Niessner; Manolis Savva; Shuran Song; Andy Zeng; Yinda Zhang", "journal": "", "ref_id": "b2", "title": "Matterport3d: Learning from RGB-D data in indoor environments", "year": "2017" }, { "authors": "Shizhe Chen; Pierre-Louis Guhur; Cordelia Schmid; Ivan Laptev", "journal": "", "ref_id": "b3", "title": "History aware multimodal transformer for vision-and-language navigation", "year": "2021" }, { "authors": "Jaemin Cho; Jie Lei; Hao Tan; Mohit Bansal", "journal": "PMLR", "ref_id": "b4", "title": "Unifying vision-and-language tasks via text generation", "year": "2021" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b5", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Winson Yue Fan; Tongzhou Chen; Chun Jiang; Yi Zhou; Xin Zhang; Wang Eric", "journal": "", "ref_id": "b6", "title": "Aerial vision-and-dialog navigation", "year": "2022" }, { "authors": "Tsu-Jui Fu; Linjie Li; Zhe Gan; Kevin Lin; William Yang Wang; Lijuan Wang; Zicheng Liu", "journal": "", "ref_id": "b7", "title": "Violet: End-to-end video-language transformers with masked visual-token modeling", "year": "2021" }, { "authors": "Xiaofeng Gao; Qiaozi Gao; Ran Gong; Kaixiang Lin; Govind Thattai; Gaurav; Sukhatme", "journal": "", "ref_id": "b8", "title": "Dialfred: Dialogue-enabled agents for embodied instruction following", "year": "2022" }, { "authors": "Jing Gu; Eliana Stefani; Qi Wu; Jesse Thomason; Xin Wang", "journal": "", "ref_id": "b9", "title": "Vision-and-language navigation: A survey of tasks, methods, and future directions", "year": "2022" }, { "authors": "Meera Hahn; Jacob Krantz; Dhruv Batra; Devi Parikh; James Rehg; Stefan Lee; Peter Anderson", "journal": "", "ref_id": "b10", "title": "Where are you? 
localization from embodied dialog", "year": "2020" }, { "authors": "Xiaowei Hu; Xi Yin; Kevin Lin; Lijuan Wang; Lei Zhang; Jianfeng Gao; Zicheng Liu", "journal": "", "ref_id": "b11", "title": "Vivo: Surpassing human performance in novel object captioning with visual vocabulary pre-training", "year": "2020" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "", "ref_id": "b12", "title": "Large language models are zero-shot reasoners", "year": "" }, { "authors": "Eric Kolve; Roozbeh Mottaghi; Winson Han; Eli Van-Derbilt; Luca Weihs; Alvaro Herrasti; Daniel Gordon; Yuke Zhu; Abhinav Gupta; Ali Farhadi", "journal": "", "ref_id": "b13", "title": "Ai2-thor: An interactive 3d environment for visual ai", "year": "2017" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b14", "title": "Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models", "year": "2023" }, { "authors": "Chin-Yew Lin; Franz Josef; Och ", "journal": "", "ref_id": "b15", "title": "Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics", "year": "2004" }, { "authors": "Kevin Lin; Linjie Li; Chung-Ching Lin; Faisal Ahmed; Zhe Gan; Zicheng Liu; Yumao Lu; Lijuan Wang", "journal": "", "ref_id": "b16", "title": "Swinbert: End-to-end transformers with sparse attention for video captioning", "year": "2022" }, { "authors": "Ze Liu; Jia Ning; Yue Cao; Yixuan Wei; Zheng Zhang; Stephen Lin; Han Hu", "journal": "", "ref_id": "b17", "title": "Video swin transformer", "year": "2022" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b18", "title": "Decoupled weight decay regularization", "year": "2018" }, { "authors": "Khanh Nguyen; Hal Daumé; Iii ", "journal": "", "ref_id": "b19", "title": "Help, anna! 
visual navigation with natural multimodal assistance via retrospective curiosity-encouraging imitation learning", "year": "2019" }, { "authors": "Aishwarya Padmakumar; Jesse Thomason; Ayush Shrivastava; Patrick Lange; Anjali Narayan-Chen; Spandana Gella; Robinson Piramuthu; Gokhan Tur; Dilek Hakkani-Tur", "journal": "", "ref_id": "b20", "title": "Teach: Task-driven embodied agents that chat", "year": "2022" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b21", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Alexander Pashevich; Cordelia Schmid; Chen Sun", "journal": "", "ref_id": "b22", "title": "Episodic transformer for vision-and-language navigation", "year": "2021" }, { "authors": "Yonatan Homero Roman Roman; Jesse Bisk; Asli Thomason; Jianfeng Celikyilmaz; Gao", "journal": "", "ref_id": "b23", "title": "Rmm: A recursive mental model for dialogue navigation", "year": "2020" }, { "authors": "Zhengxiang Shi; Yue Feng; Aldo Lipani", "journal": "", "ref_id": "b24", "title": "Learning to execute actions or ask clarification questions", "year": "2022" }, { "authors": "Mohit Shridhar; Jesse Thomason; Daniel Gordon; Yonatan Bisk; Winson Han; Roozbeh Mottaghi; Luke Zettlemoyer; Dieter Fox", "journal": "", "ref_id": "b25", "title": "Alfred: A benchmark for interpreting grounded instructions for everyday tasks", "year": "2020" }, { "authors": "Jesse Thomason; Michael Murray; Maya Cakmak; Luke Zettlemoyer", "journal": "", "ref_id": "b26", "title": "Vision-and-dialog navigation", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b27", "title": "", "year": "" }, { "authors": "Jianfeng Wang; Zhengyuan Yang; Xiaowei Hu; Linjie Li; Kevin Lin; Zhe Gan; Zicheng Liu; Ce Liu; Lijuan Wang", "journal": "", "ref_id": "b28", "title": "Git: A generative image-to-text transformer for vision and language", "year": "2022" }, { "authors": "Qinghao Ye; Haiyang Xu; Guohai Xu; Jiabo Ye; Ming Yan; Yiyang Zhou; Junyang Wang; Anwen Hu; Pengcheng Shi; Yaya Shi", "journal": "", "ref_id": "b29", "title": "mplug-owl: Modularization empowers large language models with multimodality", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b30", "title": "Prompt for AVDN: Commander says", "year": "" }, { "authors": " Yeah", "journal": "", "ref_id": "b31", "title": "Keep going around the outside. 3", "year": "" }, { "authors": " Avdn Hey Drone", "journal": "", "ref_id": "b32", "title": "head southwest", "year": "" } ]
[ { "formula_coordinates": [ 5, 107.91, 24.77, 318.25, 295.91 ], "formula_id": "formula_0", "formula_text": "1 1 1 1 0 1 1 1 … … … 1 1 1 1 … … … 0 1 1 1 1 … … … 1 1 1 1 … … … 0 1 1 1 1 … … … 1 1 1 1 … … … M M Conditional Optimized" }, { "formula_coordinates": [ 5, 379.87, 472.44, 145.27, 10.91 ], "formula_id": "formula_1", "formula_text": "C = σ(f (VE)),(1)" }, { "formula_coordinates": [ 11, 103.25, 203.39, 186.62, 11.42 ], "formula_id": "formula_2", "formula_text": "L M LM = Σ i L CrossEntropy (y i , ŷi ),(2)" }, { "formula_coordinates": [ 11, 108.6, 330.84, 181.27, 33.71 ], "formula_id": "formula_3", "formula_text": "L SP ARSE = λ × M i=1 M j=1 |C i,j | ,(3)" } ]
10.1162/tacl_a_00410
2024-02-02
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b13", "b8", "b4", "b4" ], "table_ref": [], "text": "An important property of logically valid machine learning systems is self-consistencyi.e., the requirement that no two statements given by the system are contradictory. Pre-trained large language models (LLMs), despite demonstrating impressive few-shot accuracy on a variety of multi-step reasoning tasks, often give inconsistent responses to questions (Mitchell et al., 2022;Kassner et al., 2021) and factual knowledge-seeking prompts (Elazar et al., 2021). Without self-consistency, it is difficult to consider LLMs reliable or trustworthy systems. Elazar et al. (2021) defines self-consistency as the invariance of an LLM's responses across different types of semantics-preserving prompt transformations. In this work, we seek to introduce and explore LLM self-consistency over two new types of transformations (shown in Figure 1) that we argue are important for valid multi-step reasoning. Hypothetical Transformations A hypothetical transformation is an indirect phrasing of a prompt that queries the model for what its response would hypothetically be in some other context, such as \"what would your response to <prompt> be?\" or \"what would the next 5 words in your completion of <prompt> be?\" Consistency over hypothetical transformations implies that an LLM has some stored knowledge or computational path for determining what its response would be to some prompt p without explicitly being prompted with exactly p itself. This can be useful for prompts involving multi-step reasoning, where the LLM must have knowledge of its responses to the earlier steps in order to compute its responses to downstream steps. Like in Figure 1, given the prompt \"What is the runtime of the best movie of 2022\" the LLM must either have stored or computed its response to \"what is the best movie of 2022?\" in order to answer the full prompt.\nCompositional Transformations For a prompt that involves multiple interdependent steps of reasoning, a compositional transformation consists of replacing some intermediate step with the model's output to the previous step. In the previous example, if the LLM outputs the response \"Everything Everywhere All At Once\" to the prompt \"What is the best movie of 2022?,\" then the prompt \"What is the runtime of Everything Everywhere All At Once?\" is a compositional transformation of \"What is the runtime of the best movie of 2022?\" (See Figure 1.) Consistency over compositional transformations is also important for logically valid multi-step reasoning when the LLM must give a direct response -without it, the LLM may give contradictory direct responses to different multi-step prompts that are in fact querying for the same thing.\nIn this work, we investigate the degree to which LLMs are self-consistent on these prompt transformations across a variety of tasks. We formalize our definitions of hypothetical and compositional consistency (Section 2) and show empirically that a wide range of pre-trained language models demonstrate low consistency rates on both hypothetical transformations (Section 3) and compositional transformations (Section 4)." 
}, { "figure_ref": [], "heading": "Formalizing Consistency", "publication_ref": [], "table_ref": [], "text": "To make more precise our definitions of consistency and the semantics-preserving transformations that they entail, we first formalize our definitions prior to conducting our empirical evaluations." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "Let vocabulary V be a finite set of tokens, V * be the set of all possible finite-length sequences formed by concatenating zero or more tokens from V, and p θ : V → {0, 1} be an auto-regressive language model that defines a probability distribution over tokens v ∈ V. p θ can be used to generate a sequence via greedy decoding as follows:\nỹt = arg max v∈V log p θ (y t = v | c; ỹ<t ) (1)\ngiven some context sequence c ∈ V * , until some time step T for which ỹT = [EOS], the end-of-sequence token.\nFor ease of notation, we denote the greedy decoding output of a model p θ as\ng p θ (c) = ( arg max v∈V p θ (y = v|c), • • • , (2) arg max v∈V p θ (y = v|c; ỹ<T )).\nWe also define an operator ∼ that indicates when two strings are semantically equivalent. Although the precise definition of semantic equivalence will vary across different tasks, we use it to loosely refer to pairs of strings that can be used interchangeably (give or take syntactic adjustments) without changing the meaning of the overall utterance. Lastly, the ∼ operator is also reflexive, symmetric, and transitive." }, { "figure_ref": [], "heading": "Composing prompts", "publication_ref": [], "table_ref": [], "text": "Reasoning with language often also involves composing prompts -for instance, we might ask \"what is the answer to 2 × 3 + 4?\", which can be seen as the composition of a prompt template \"what is the answer to _ + 4?\" with the prompt \"2 × 3\", where the \"_\" symbol in the former string is substituted with the latter string. This corresponds to a multi-step task where the model might first answer the prompt \"2 × 3\" (yielding g p θ (\"2 × 3\")), substitute g p θ (\"2 × 3\") into the template (yielding the composed prompt \"what is the answer to g p θ (\"2 × 3\") + 4?,\" where the g p θ (\"2 × 3\") is replaced with the actual output string), and then answer the filled-in template.\nTo denote such prompt templates, we define P ′ , the set of prompts p ∈ V * that contain exactly one \"_\" symbol. Additionally, the function f (p ′ , p) : P ′ × V * → V * denotes substitution of p for the \"_\" symbol in p ′ .1 f also has some useful properties that we will use in our later definitions:\n• We can trivially represent any prompt p ∈ V * as the substitution of itself into the identity prompt template \"_\" by writing p = f (\"_\", p).\n• p ∼ q if and only if f (p ′ , p) ∼ f (p ′ , q) for all p ′ ∈ P ′ ." }, { "figure_ref": [], "heading": "Definitions", "publication_ref": [ "b4", "b7" ], "table_ref": [], "text": "We start out by restating the general definition of self-consistency, as it has been commonly defined in past literature (Elazar et al., 2021;Jang et al., 2022).\nDefinition 2.1 (Self-consistency). p θ is self-consistent if p ∼ q → g p θ (p) ∼ g p θ (q) for all p, q ∈ V * .\nIn other words, a self-consistent model gives semantically-equivalent responses to semantically equivalent prompts. These semantically equivalent pairs of prompts (p, q in Definition 2.1) can take many forms, including hypothetical and compositional transformations.\nDefinition 2.2 (Hypothetical Transformation). 
Let P ′ I denote the set of hypothetical transformation prompt templates, which are prompt templates p ′ ∈ P ′ such that f (p ′ , p) ∼ f (_, p) ∀p ∈ V * . Then the set of hypothetical transformations of prompt p can be denoted as\nP I (p) := {f (p ′ , p) | p ′ ∈ P ′ I }. Since f (_, p) ∼ p, a model that is self-consistent must yield g p θ (f (p ′ , p)) ∼ g p θ (p) for all p ′ ∈ P ′ I .\nAlthough we defined hypothetical transformations with respect to all prompts p ∈ V * , our definition of compositional transformations must be more restricted, since we care only to apply compositional transformations to prompts that implicitly encode a compositional task. That is, we are concerned only with prompts that are already compositionsi.e., prompts of the form f (p ′ , p). Furthermore, given some target model p * that represents the ground truth or gold distribution, the response to a compositional prompt (as generated by p * ) is semantically equivalent to the prompt itself. That is, f (p ′ , p) ∼ g p * (f (p ′ , p)). For example, if p = \"2\" and p ′ = \"4 + _\", then f (p ′ , p) = \"4 + 2\" and g p * (f (p ′ , p)) = g p * (\"4 + 2\") = 6 ∼ \"4 + 2\" = f (p ′ , p), so f (p ′ , p) is compositional.\nDefinition 2.3 (Compositional prompt). We define the set of compositional prompts as \nP Comp := {f (p ′ , p) | f (p ′ , p) ∼ g p * (f (p ′ , p)), p ∈ V * , p ′ ∈ P ′ }\n(f (p ′ , p)) ∼ g p θ (f (p ′ , g p θ (p)))\nand is compositionally inconsistent when:\n1. p ≁ g p θ (p) and g p θ (f (p ′ , p)) ∼ g p θ (f (p ′ , g p θ (p))) 2. p ∼ g p θ (p) and g p θ (f (p ′ , p)) ≁ g p θ (f (p ′ , g p θ (p)))\nOne more case exists that we do not count into our measures of either compositional consistency or inconsistency. If p ≁ g p θ (p) and g p θ (f (p ′ , p)) ≁ g p θ (f (p ′ , g p θ (p))), then we cannot say this is compositionally consistent because the model's output for p does not necessarily relate to or imply its output to f (p ′ , p). However, we also cannot necessarily say that it is compositionally inconsistent to give nonequivalent responses to f (p ′ , p) and f (p ′ , g p θ (p)) if the model's responses to p and g p θ (p) are also nonequivalent." }, { "figure_ref": [], "heading": "Evaluating Consistency on Hypothetical Transformations", "publication_ref": [ "b4" ], "table_ref": [], "text": "In our above definitions, hypothetical consistency is characterized as a binary -either a model is hypothetically consistent across all pairs (p, p ′ ) ∈ V * × P ′ I or it is hypothetically inconsistent. But in practice, models are only hypothetically consistent in some cases, and it is likely impossible to achieve hypothetical consistency across all (p, p ′ ) ∈ V * × P ′ I . Instead, we explore the degree to which LLM outputs are invariant to hypothetical transformations of the prompt. This is measured as the hypothetical consistency rate, which is the proportion of pairs (p, p ′ ) ∈ V * × P ′ I for which a model p θ exhibits the property g p θ (p) ∼ g p θ (f (p ′ , p)). To measure hypothetical consistency rate, we devise a set of four hypothetical transformation prompt templates that we use to transform randomly sampled prompts (sourced from Wikipedia and DailyDialog) into hypothetical prompts, as shown in Table 1. We average the hypothetical consistency rate over these prompts to mitigate the model's sensitivity to prompt wording (Elazar et al., 2021).\nEvaluating hypothetical consistency requires checking whether g p θ (p) ∼ g p θ (f (p ′ , p)). 
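As a deliberately naive sketch of that check, the snippet below implements the substitution operator f and compares the two greedy outputs by exact string match; here `generate` is an assumed stand-in for greedy decoding with a particular model, and exact match is a crude stand-in for the relation ∼.

```python
# Naive sketch of checking g(p) ~ g(f(p', p)). `generate` is an assumed
# stand-in for greedy decoding with one model; exact string equality is a
# crude stand-in for semantic equivalence.
from typing import Callable

def f(template: str, p: str) -> str:
    """Substitute prompt p for the single '_' slot in prompt template p'."""
    assert template.count("_") == 1
    return template.replace("_", p)

def naive_hypothetical_check(generate: Callable[[str], str], p: str,
                             hypothetical_template: str) -> bool:
    direct = generate(p)                               # g(p)
    indirect = generate(f(hypothetical_template, p))   # g(f(p', p))
    return direct.strip() == indirect.strip()          # exact match only
```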
However, checking this semantic equivalence is non-trivial -the same idea can be expressed in a number of different but synonymous ways. Rather than attempt to devise an automatic method for evaluating semantic equivalence, we instead use a multiple-choice set-up. One answer choice is the continuation of the initial prompt (denoted by \"<prompt>\") sourced from a text dataset, one choice is the model's own greedily decoded completion for <prompt>, and the three remaining choices are the other models' completions for <prompt>. As discussed before, a model that is hypothetically consistent can, in a sense, predict its own completion. Thus, the model should be more likely to generate the answer choice that corresponds to its own completion than to the other answer choices. These templates are designed both to query the model on what its completion would hypothetically be for a given prompt and to evaluate whether the model can distinguish its own completions from those of other models.\nAs an example, suppose the original prompt sourced from Wikipedia is \"This quilt begun in 1856 when she was seventeen includes the autographs on top of the blocks of many known celebrities and politicians of the day. Other\". Suppose that the first three words of the completions generated by OpenAI GPT-3 models ada-001, babbage-001, curie-001, and davinci-003 are \"notable quilt authors,\" \"famous quilts include,\" \"signatures include abolitionists,\" and \"notable figures whose,\" respectively. Additionally, the next three words of the Wikipedia article are \"figures represented on.\" Then a hypothetical transformation prompt that uses the first template in Table 1 might look like: I predict that the next 3 words after \"This quilt begun in 1856 when she was seventeen includes the autographs on top of the blocks of many known celebrities and politicians of the day. Other\" would be A) famous quilts include B) figures represented on C) notable quilt authors D) signatures include abolitionists E) notable figures whose Answer:\nIf the model being evaluated is davinci-003, then the correct answer would be E. In this context, g davinci-003 (\"I predict that... Other\" would be\") = \"E\" ∼ \"notable figures whose\" = g davinci-003 (\"This quilt begun... Other\"), which satisfies Definition 2.5. As mentioned in Section 3.1, all hypothetical prompts are few-shot (where the provided labels are the answer choices that correspond to the evaluated model's completion), and only case-insensitive exact-match answers (i.e. \"a/A/b/B/c/C/d/D\") are accepted as correct.\nAlthough this experimental set-up eases the burden of checking the semantic equivalence of g p θ (p) ∼ g p θ (f (p ′ , p)), it is likely still a lower bound on the true hypothetical consistency, since it does not account for other model outputs that may be semantically equivalent to g p θ (p). In the example above, had davinci-003 generated \"notable people whose\" instead of the letter \"E,\" this response would have still been marked incorrect despite the semantic equivalence of \"notable figures whose\" and \"notable people whose.\" Furthermore, the definition of semantic equivalence here depends implicitly on the prompt p. Nevertheless, we still find it useful to analyze these lower bounds -examinations of the failure modes of LLMs can aid in the future improvements of these models. 
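A compact sketch of this construction is given below; the function and variable names are ours for illustration, the answer choices are assumed to be pre-generated and distinct, and only a case-insensitive exact letter match is accepted, which is one reason the measured rate is a lower bound.

```python
import random
import string

# One of the templates from Table 1, with placeholders made explicit.
TEMPLATE = ('I predict that the next {m} words after "{prompt}" would be '
            '{choices}. Answer:')

def build_mc_prompt(prompt: str, m: int, own_completion: str,
                    other_completions: list, rng: random.Random):
    """Assemble one lettered multiple-choice hypothetical prompt and return
    it together with the letter of the evaluated model's own completion."""
    choices = [own_completion] + list(other_completions)   # assumed distinct
    rng.shuffle(choices)
    letters = string.ascii_uppercase[:len(choices)]
    listed = " ".join(f"{l}) {c}" for l, c in zip(letters, choices))
    return (TEMPLATE.format(m=m, prompt=prompt, choices=listed),
            letters[choices.index(own_completion)])

def scored_consistent(model_output: str, own_letter: str) -> bool:
    # Only a case-insensitive exact letter match counts as consistent.
    return model_output.strip().upper() == own_letter
```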
We leave to future work the question of how far this lower bound is from the true hypothetical consistency rate.\nTable 1: Prompt templates used to evaluate whether LLMs are consistent across hypothetical prompt transformations. <prompt> is a prompt sourced from a dataset (e.g. Wikipedia, DailyDialog), <m> is the number of words of its own completion that the model is asked to predict, and <answer_choices> are the multiple-choice answer choices that the model is given." }, { "figure_ref": [], "heading": "Hypothetical Transformation Prompt Templates", "publication_ref": [ "b11" ], "table_ref": [], "text": "\"I predict that the next <m> words after \"<prompt>\" would be <answer_choices>. Answer:\" \"Given the prompt \"<prompt>\", my next <m> words would be <answer_choices>. Answer:\" \"Given the context \"<prompt>\", my next <m> words would be <answer_choices>. Answer:\" \"I predict that after the context \"<prompt>\" the next <m> words I would say are <answer_choices>. Answer:\"\nWe conduct our hypothetical consistency experiments with original prompts sourced from two language modeling tasks:\n• Wikipedia Since language models are frequently pre-trained on Wikipedia archives, evaluating on a Wikipedia dataset can confound information memorized during pre-training with the skill being evaluated. To address this issue, we collect a sample of 400 English Wikipedia (Wikimedia Foundation) articles that were created on or after June 30, 2021, since the OpenAI documentation (Center) indicates that the latest pre-training data for the ada-/babbage-/curie-/davinci-001 and davinci-003 models contains documents dating up to June 2021. Each initial prompt is a randomly selected segment of a Wikipedia article consisting of two full sentences followed by a random-length prefix of the next sentence.\n• DailyDialog: DailyDialog (Li et al., 2017) is a manually labeled dataset of multi-turn conversations about daily life. We choose this dataset because it contains language that is more colloquial in style and less factual in content than the Wikipedia dataset. We randomly sample 400 examples from the training split and use the first conversational turn as the initial prompt.\nWe use only the prompts for which all five answer choices are distinct. We also vary the number of words m in the original completion that the model is asked to distinguish from 1 to 6, since the difficulty may vary depending on the length of the original completion. We then compute the hypothetical consistency rate by calculating the proportion of the time that the model generated the letter of the answer choice corresponding to its own completion." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b12", "b18", "b16" ], "table_ref": [], "text": "In all experiments, we evaluate four model sizes of the OpenAI GPT-3 model (Brown et al., 2020)ada-001, babbage-001, curie-001, and davinci-003 (in order of increasing capacity). 2 All experiments are run using greedily decoded completions obtained from the OpenAI API from Aug. 2022 to Jun. 2023. We use 0-shot initial prompts but evaluate hypothetical consistency prompts using k-shot prompts, where k ranges from 1 to 10. Since the in-context performance of LLMs is known to vary depending on the selection of in-context examples (Liu et al., 2021;Rubin et al., 2022), we randomly select a different set of in-context examples for each prompt. 
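A sketch of that per-prompt sampling step is shown below; the pool structure and helper names are assumptions for illustration, with each in-context example being a fully built multiple-choice prompt paired with the letter of the evaluated model's own completion.

```python
import random

def assemble_few_shot(eval_prompt: str, pool: list, k: int,
                      rng: random.Random) -> str:
    """Prefix an evaluation prompt with k freshly sampled in-context
    examples. Each pool entry is a (multiple-choice prompt, own-letter)
    pair, so every demonstrated label is the letter of the evaluated
    model's own completion."""
    shots = rng.sample(pool, k)                 # resampled for every prompt
    demos = "\n\n".join(f"{p} {letter}" for p, letter in shots)
    return demos + "\n\n" + eval_prompt
```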
We also randomize the order of answer choices for each multiple-choice question to mitigate sensitivity to answer choice order (Pezeshkpour & Hruschka, 2023). Further evaluations for other combinations of models can be found in Appendix A. " }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "All Model Sizes Perform Poorly At Distinguishing Their Own Completions", "publication_ref": [], "table_ref": [ "tab_2", "tab_2" ], "text": "Figure 2 shows GPT-3's hypothetical consistency rates averaged over all few-shot prompts. Notably, all model sizes smaller than davinci-003 perform at about random chance on this task, regardless of how many words of the original completion the model is tasked with predicting. davinci-003 is the only model size that consistently performs above random chance, but even then its accuracy ranges only from 26% to 31% for Wikipedia and 30% to 37% for DailyDialog. We might also expect hypothetical consistency rate to increase as a function of m (because longer sequences provide more information for the model to use to distinguish its own completion from other sequences), but we do not observe this trend for these models. In Appendix A, however, we do observe this trend for gpt4. It appears that higher-capacity and generally more powerful models are overall more hypothetically consistent.\nWe also inspect the frequency with which each model selects each possible answer choice, as shown in Figure 3. For a highly hypothetically consistent model that has not memorized the dataset, we would expect it to 2 We select this particular set of models since it is the most recent set of text completion (rather than chat) models available for each size of GPT-3. However, we also compare against text-davinci-001 and gpt4 in Appendix A. We separate davinci-001 and gpt4 into separate analyses because davinci-001, davinci-003, and gpt4 often output the same original completions. In our multiple-choice set-up, this results in duplicate answer choices. As such, we only ever include completions from one of davinci-001, davinci-003, and gpt4 in any hypothetical consistency prompt. Since gpt4 is a chat model whereas the rest of the models are completion models, we choose to analyze the completion models in the main text and leave comparison against gpt4 for the Appendix. selects each possible answer choice when prompted with a hypothetical consistency prompt, averaged across all prompts (i.e. across all m, the number of words that the model is asked to predict; and k, the number of few-shot examples). The columns labeled \"Wikipedia\" and \"DailyDialog\" correspond to the answer choice containing the completion from the original dataset. Model outputs that could not be parsed into an answer choice are not included.\nselect the answer choice corresponding to its own completion the vast majority of the time, and to only select the other answer choices a negligible proportion of the time. However, for both tasks, only davinci-003 demonstrates a noticeable preference for its own completion over others. Furthermore, none of the other models display a preference for any other answer choice, including the completions of the other models and the continuation from the original dataset. Despite our use of multiple prompt formats, ada-001, babbage-001, and curie-001 all make random choices and cannot predict their own completions.\ndavinci-003's moderate hypothetical consistency also cannot easily be attributed to dataset memorization, for a couple of reasons. 
Firstly, we selected only hypothetical consistency prompts for which all five answer choices were distinct -ergo, davinci-003's completion could not have been identical to that of either Wikipedia or DailyDialog. Secondly, we also computed the average percent edit distance (the edit distance divided by the length of the longer string) between the completions of davinci-003 and the completions of the smaller models and datasets, as shown in Table 2. Across both datasets, the davinci-003 completions are on average more than 70% different from the dataset completions. Furthermore, the edit distances in Table 2 do not have high variance in each row, indicating that davinci-003's completions are not significantly more different from or similar to one particular source than the others." }, { "figure_ref": [], "heading": "Evaluating Compositional Self-Consistency", "publication_ref": [], "table_ref": [], "text": "Next, we evaluate compositional self-consistency on the tasks of arithmetic and semantic parsing, since these are tasks for which valid compositional reasoning are important. Both tasks can be framed as computational graphs that are directed and acyclic, with each node having at most one parenti.e. trees. To evaluate compositional consistency, we store the model's answer to the expression represented by each individual subtree and generate compositional consistency prompts by creating copies of the original prompts where a single sub-tree expression has been replaced by the model's output for that sub-tree. If a model is compositionally consistent, then it should give the same answer to the original expression as to the copy with the replaced sub-tree. Below, we further describe the experimental setup and give examples for each of the tasks." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "We evaluate compositional self-consistency across six models (ada-001, babbage-001, curie-001, davinci-001, davinci-003, and gpt4) and two tasks. Experiments for the first five models were run between Aug. 2022 and Jun. 2023, and experiments for gpt4 were run between May and Jun. 2023." }, { "figure_ref": [], "heading": "Synthetic Arithmetic", "publication_ref": [ "b19", "b9" ], "table_ref": [], "text": "We generate a set of 400 randomly-nested arithmetic expressions as the initial prompts. We then collect model completions for all possible sub-expressions of each expression using k-shot prompts, with k ranging from 3 to 10. We randomly select the in-context examples for each prompt. The arithmetic expressions have a maximum nesting depth of 5 with a nesting probability of 0.5, and each nested expression is enclosed in parentheses to avoid ambiguity. Operators and operands are randomly selected with uniform probability from the sets {+, -, /, ×} and [1, 2, • • • , 999], respectively.\nFor example, for the original arithmetic expression \"(2 × 3) + (6/2),\" we prompt each model with the following three sub-expression prompts, using parentheses to force the correct order of operations:\np 1 = \"Q: 2 × 3 \\n A:\" p 2 = \"Q: 6/2 \\n A:\" p 3 = \"Q: (2 × 3)+(6/2) \\n A:\"\nFor each non-root sub-expression (i.e. p 1 and p 2 ), we then create a new compositional consistency prompt by replacing that sub-expression in the original expression (i.e. p 3 ) with the model's completion. 
For the previous example, if the model answered p 1 and p 2 correctly, this would result in the following two compositional consistency prompts:\np (1) CC = \"Q: 6 + (6/2) \\n A:\" p (2) CC = \"Q: (2 × 3) + 3 \\n A:\"\nFor this example, we then compute a model's compositional consistency rate as the proportion of the time that the model's output for p i is correct, and its outputs for p (i) CC and p 3 are the same.\nGeoQuery GeoQuery (Zelle & Mooney, 1996) is a semantic parsing dataset consisting of 880 natural language questions about US geography, paired with FunQL (Kate et al., 2005) parses of those questions. Similar to the synthetic arithmetic task, we first collect model parses for the spans corresponding to each sub-parse of a sample of 400 GeoQuery training examples via k-shot prompts, for k ranging from 3 to 10. We randomly select the in-context examples for each prompt. For example, consider the GeoQuery example \"Which state has the city with the most population?\" with corresponding FunQL state(loc_1(largest_one(population_1(city(all))))). Then two of the initial prompts we create include the following: p 1 = \"Create a FunQL query for the following question: 'Which state has the city with the most population?' A: \" p 2 = \"Create a FunQL query for the following question: 'city with the most population' A: \"\nwhere each prompt is sourced from a non-leaf sub-parse of the original gold FunQL expression (and the other sub-parses are omitted here for brevity).\nSince evaluating whether g p θ (f (p ′ , p)) ∼ g p θ (f (p ′ , g p θ (p)) would involve interleaving natural language with FunQL in this case, we instead measure compositional consistency as the instances where the model's parse for p 2 is correct (i.e. p 2 ∼ g p θ (p 2 )) and is a sub-parse of its parse for p 1 (regardless of whether the parse for p 1 is correct). This second condition is a slightly more relaxed version of the condition that\ng p θ (f (p ′ , p)) ∼ g p θ (f (p ′ , g p θ (p))\n, where we instead assess whether g p θ (p) is being used in g p θ (f (p ′ , p)). Due to this relaxation, the rate we measure here is an upper bound on the true compositional consistency rate. " }, { "figure_ref": [ "fig_4", "fig_6" ], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "The compositional consistency rates for all six models are shown in Figure 4. While davinci-003 and gpt4 exhibit the highest compositional consistency rates, both are compositionally consistent less than 50% of the time on the arithmetic task and less than 65% of the time on the semantic parsing task. However, all models appear to improve in compositional consistency on the GeoQuery task as the number of in-context examples increases. Furthermore, gpt4 exhibits significantly more compositional consistency on the arithmetic task than even davinci-003. Taken together, these results suggest that both increasing the model capacity and the number of in-context examples can offer some improvements in compositional consistency. For instances where the models are compositionally inconsistent, we can analyze the sources of the inconsistency. For arithmetic, 90% of compositional inconsistencies are caused by the final answer not matching the answer of the compositional transformation, despite the answer to the sub-expression being correct (i.e. p ∼ g p θ (p) but g p θ (f (p ′ , p)) ̸ ∼ g p θ (f (p ′ , g p θ (p)), or case (2) in the definition of compositional inconsistency in Definition 2.7). 
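For the arithmetic task this bookkeeping reduces to a handful of string comparisons, sketched below; `ask` and `gold` are assumed helpers (a k-shot prompt wrapper returning the model's answer string, and an exact evaluator for the gold value of a sub-expression), and the case labels follow Definition 2.7.

```python
# Tallying the arithmetic cases of Definition 2.7. `ask` and `gold` are
# assumed helpers: `ask` returns the model's greedy answer string for one
# k-shot arithmetic prompt, `gold` evaluates a sub-expression exactly.

def classify(ask, gold, full_expr: str, sub_expr: str) -> str:
    sub_answer = ask(sub_expr)                                   # g(p)
    original = ask(full_expr)                                    # g(f(p', p))
    composed = ask(full_expr.replace(f"({sub_expr})",            # g(f(p', g(p)))
                                     sub_answer, 1))

    sub_correct = sub_answer.strip() == str(gold(sub_expr))      # p ~ g(p)?
    same_final = original.strip() == composed.strip()

    if sub_correct and same_final:
        return "consistent"
    if sub_correct and not same_final:
        return "inconsistent, case (2)"
    if not sub_correct and same_final:
        return "inconsistent, case (1)"
    return "not counted"
```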
For GeoQuery, approximately 59% of compositional inconsistencies result from the parse of the sub-tree not being included in the parse of the parent tree. For example, suppose that the parent tree is represented by the query \"how many people live in Texas?\" and the replaced subtree corresponds to the query \"Texas.\" A model might correctly output the parse stateid('texas') for the latter but then output the parse 'population(state(name(\"texas\")))' for the parent tree query, which is inconsistent with the parse of the subtree. In the other approximately 41% of cases, the compositional inconsistencies on GeoQuery are caused by an incorrect parse of the child node (i.e. p ̸ ∼ g p θ (p), or case (1) of compositional inconsistency in Definition 2.7).\nFor tasks where a precise definition of correctness does exist, such as arithmetic, it can be useful to understand the relationship between correctness and compositional consistency. Figure 5 shows this relationship for all four model sizes. There exists a notable linear relationship between correctness and compositional consistency, but all models except for davinci-001 are slightly more correct than consistent. This indicates that models may output correct answers for the final expression but not for intermediate steps, or vice versa. Nonetheless, training LLMs to optimize solely for correctness appears to be helpful for improving compositional consistency in such tasks. For other tasks where a precise definition of correctness does not exist, this may not be as feasible a solution." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b4", "b7", "b4", "b5", "b14", "b17", "b10", "b6", "b20", "b4", "b6", "b14", "b3" ], "table_ref": [], "text": "Our work is inspired by an extensive body of literature that has defined and evaluated model consistency in a variety of ways. Elazar et al. (2021) defines consistency as the ability for the LLM to give consistent responses to semantically equivalent contexts, such as paraphrased contexts. Jang et al. (2022) supplements this definition with multiple other categories of logical consistency, such as negational, symmetric, transitive, and additive consistency. Similar to our results, they show that many modern LLMs do not exhibit strong consistency according to these definitions.\nYet other work has highlighted the inconsistency of LLM predictions across paraphrases of the same input for a variety of downstream tasks, including knowledge extraction (Elazar et al., 2021;Fierro & Søgaard, 2022;Newman et al., 2022), truthfulness (Raj et al., 2022), summarization (Kryscinski et al., 2020), and natural language understanding (Jang et al., 2021;Zhou et al., 2022). Various remedies have been proposed for this issue - Elazar et al. (2021) proposes a novel consistency loss that minimizes the 2-sided KL divergence between paraphrases, Jang et al. (2021) proposes using multi-task training with paraphrase identification, and Newman et al. (2022) proposes training additional adapter layers to map paraphrased prompts to the same continuous representation. On the other hand, Dziri et al. 
(2023) suggest that the failure of LLMs to consistently reason correctly on compositional tasks is an intrinsic characteristic of the Transformer architecture -as the average parallelism of a compositional task increases, the expected error of the Transformer increases exponentially.\nOur work augments the past literature by formally defining two new types of logical consistency that are crucial for valid multi-step reasoning and that have not been studied before. We additionally validate the poor performance of modern LLMs on these new types of consistency and further the understanding of why LLMs fail to generalize well on compositional tasks." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We have proposed two types of language model self-consistency that are important for the reliability and logically valid reasoning of LLMs on multi-step tasks. Despite the GPT-3 and GPT-4 models' generally impressive performance on a wide variety of tasks, these models still perform inconsistently on both hypothetical and compositional consistency prompts, although larger models appear to perform better. This furthers our understanding of how these otherwise impressive LLMs fail to generalize well on compositional tasks and suggests an additional reason not to trust the outputs of LLMs on complex compositional tasks, especially without extensive empirical validation. Further work is required in order to improve the logical consistency of LLM reasoning, and to investigate whether novel training techniques or further scaling improve hypothetical or compositional consistency. The shaded region represents the 95% confidence interval computed with nonparametric bootstrapping. The label \"number of words from original completion to distinguish\" corresponds to the quantity m in Table 1." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_7", "fig_9", "fig_7", "fig_1", "fig_9" ], "heading": "Comparison of Hypothetical Consistency Against davinci-001 and gpt4", "publication_ref": [], "table_ref": [ "tab_7", "tab_6", "tab_2" ], "text": "We also run the same hypothetical consistency experiments on davinci-001 and gpt4. Hypothetical consistency rates for davinci-001 versus the smaller models are shown in Figure 6a, where trends are similar, but davinci-001 performs at random chance on Wikipedia, like all the other -001 series models. On DailyDialog, however, davinci-001 selects each possible answer choice when prompted with a hypothetical consistency prompt. Model outputs that could not be parsed into an answer choice are not included. The columns labeled \"Wikipedia\" and \"DailyDialog\" correspond to the answer choice containing the completion from the original dataset. Model outputs that could not be parsed into an answer choice are not included. performs noticeably better than all the other model sizes. Similar trends occur in Figure 7a, where most models are equally likely to select each answer choice on the Wikipedia dataset, and davinci-001 is more likely to select either its own or curie-001's completion on the DailyDialog dataset.\nIn contrast, when gpt4 is tasked with distinguishing its own completions from those of ada-001, babbage-001, curie-001, and the dataset, gpt4 performs notably better than both davinci-001 and davinci-003 on DailyDialog, reaching 59.9% hypothetical consistency when the number of words to distinguish is 6 (Figure 6b). 
However, its hypothetical consistency rate on Wikipedia is comparable to that of davinci-003 (Figure 2), ranging from 17.9% to 27.4%. Figure 7b also demonstrates that gpt4 is significantly more likely to select its own completion than the other models are.\nIt is unclear why gpt4 is more consistent on DailyDialog than previous models of similar capacity (i.e. davinci-001 and davinci-003). Little is known about gpt4's architecture or training, aside from its multimodal abilities and training via reinforcement learning from human feedback (RLHF, OpenAI, 2023).\nSince davinci-003 was also trained with RLHF (OpenAI), it is possible that other changes in architecture or training may have also contributed to the significant improvement in hypothetical consistency. It is also unlikely that gpt4's improvements in hypothetical consistency on DailyDialog can be attributed to dataset memorization. Firstly, we selected only prompts from both Wikipedia and DailyDialog for which all five answer choices were distinct, so gpt4's completion could not have been identical to that of the original DailyDialog dataset. Secondly, we computed the average percent edit distance (the edit distance divided by the length of the longer string) between the completions of gpt4 versus the completions of the smaller models and the datasets, which are shown in Table 4. The average percent edit distance between gpt4 completions and DailyDialog continuations was 81.3%, indicating that gpt4 was generating strings that were substantially different from the dataset. Similar trends were found when comparing davinci-001 and davinci-003 against the completions of the other models and the datasets (Tables 3 and2)." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We are grateful to Eugene Choi, Richard Pang, and Nikita Nangia for helpful discussions and feedback about the design and implementation of this work. This work was supported by National Science Foundation Awards 1922658 and 2046556. CZ is supported by the DARPA PTG program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation and DARPA. KC is additionally supported by 42dot, Hyundai Motor Company (under the project Uncertainty in Neural Sequence Modeling) and the Samsung Advanced Institute of Technology (under the project Next Generation Deep Learning: From Pattern Recognition to AI). This project has also benefited from financial support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), Open Philanthropy, and Apple. We also thank the NYU High-Performance Computing Center for in-kind support and OpenAI for providing access to and credits for their models via the API Academic Access Program." } ]
Large language models (LLMs) have achieved widespread success on a variety of in-context fewshot tasks, but this success is typically evaluated via correctness rather than consistency. We argue that self-consistency is an important criteria for valid multi-step reasoning in tasks where the solution is composed of the answers to multiple sub-steps. We propose two types of selfconsistency that are particularly important for multi-step reasoning -hypothetical consistency (a model's ability to predict what its output would be in a hypothetical other context) and compositional consistency (consistency of a model's final outputs when intermediate sub-steps are replaced with the model's outputs for those steps). We demonstrate that multiple variants of the GPT-3/-4 models exhibit poor consistency rates across both types of consistency on a variety of tasks.
Two Failures of Self-Consistency in the Multi-Step Reasoning of LLMs
[ { "figure_caption": "Figure 1 :1Figure 1: An overview of the two types of self-consistency failures we identify in LLMs.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Hypothetical consistency rates on multiple-choice self-knowledge prompts for the Wikipedia and DailyDialog datasets, across the four GPT-3 model sizes. Each line is the average taken across all k-shot prompts, for k ∈ [1, • • • , 10]. The shaded region represents the 95% confidence interval computed with nonparametric bootstrapping. The label \"number of words from original completion to distinguish\" corresponds to the quantity m in Table1.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: A more detailed breakdown of the numbers in Figure2: the percentage of the time that each model selects each possible answer choice when prompted with a hypothetical consistency prompt, averaged across all prompts (i.e. across all m, the number of words that the model is asked to predict; and k, the number of few-shot examples). The columns labeled \"Wikipedia\" and \"DailyDialog\" correspond to the answer choice containing the completion from the original dataset. Model outputs that could not be parsed into an answer choice are not included.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Compositional consistency rates versus the number of in-context examples on the arithmetic and GeoQuery tasks. The shaded region represents the 95% confidence interval computed with nonparametric bootstrapping.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The correctness versus compositional consistency rate of each type of GPT-3 or GPT-4 model on the arithmetic task.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure6: Hypothetical consistency rates on multiple-choice hypothetical consistency prompts for the Wikipedia and DailyDialog datasets. Each multiple-choice prompt contains answer choices generated by the designated four models and an additional answer choice containing the actual continuation of the prompt in the dataset. Each line is the average taken across all k-shot prompts, for k ∈ [1, • • • , 10]. The shaded region represents the 95% confidence interval computed with nonparametric bootstrapping. The label \"number of words from original completion to distinguish\" corresponds to the quantity m in Table1.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Comparing selected answers on multiple-choice prompts with answer choices generated by ada-001, babbage-001, curie-001, and gpt4.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The proportion of the time that each model (ada-001, babbage-001, curie-001, and davinci-001)selects each possible answer choice when prompted with a hypothetical consistency prompt. Model outputs that could not be parsed into an answer choice are not included. The columns labeled \"Wikipedia\" and \"DailyDialog\" correspond to the answer choice containing the completion from the original dataset. 
Model outputs that could not be parsed into an answer choice are not included.", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "given gold distribution p * . Definition 2.4 (Compositional transformation). For prompt compositions p ∈ P Comp and f (p ′ , p) ∈ P Comp both representing compositional tasks, the compositional transformation with respect to model p θ is f (p ′ , g p θ (p)).Given the above two types of prompt transformations, we can define narrower types of LLM self-consistency. (Hypothetical consistency). A model p θ is hypothetically consistent if g p θ (p) ∼ g p θ (f (p ′ , p)) for any prompt p ∈ V * and hypothetical transformation prompt template p ′ ∈ P ′ I . p θ is self-consistent, then p θ is also hypothetically consistent. (Consistency over compositional transformations). A model p θ is compositionally consistent when, for all pairs of compositional prompts p ∈ P Comp and f (p ′ , p) ∈ P Comp , 1. p ∼ g p θ (p) and g p θ", "figure_data": "Definition 2.5 Proof. Consider prompt p ∈ V * and hypothetical transformation prompt template p ∈ P ′ I . Since f (p ′ , p) ∼ f (_, p) (by Definition 2.2) and f (_, p) ∼ p, then f (p ′ , p) ∼ p by the transitive property of ∼. Then byDefinition 2.1, it follows that g p θ (f (p ′ , p)) ∼ g p θ (p).Definition 2.7", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Average percent edit distances between completions from davinci-003 versus completions from the three other models and two datasets.", "figure_data": "Datasetdavinci-003 / ada-001davinci-003 / babbage-001davinci-003 / curie-001davinci-003 / DatasetWikipedia72.8%69.7%65.0%70.3%DailyDialog75.8%75.1%74.2%79.0%", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparing selected answers on multiple-choice prompts with answer choices generated by ada-001, babbage-001, curie-001, and davinci-001.", "figure_data": "davinci -00118% 18% 19% 19% 19% Wikipedia30% 35%gpt414% 3%7% 10% 23% Wikipediacurie -001 babbage -001 Evaluated model19% 19% 18% 18% 19% 19% 19% 18% 18% 18%20% 25%curie -001 babbage -001 Evaluated model18% 19% 18% 19% 19% 18% 18% 19% 18% 18%ada -00117% 17% 16% 17% 17%15%ada -00116% 17% 17% 16% 16%Wikipedia ada -001 Source of answer choice text babbage -001 curie -001 davinci -001 DailyDialog ada -001 babbage -001 curie -001 davinci -001 Source of answer choice text -001 davinci curie -001 babbage -001 ada -001 Evaluated model 16% 17% 19% 23% 24% 20% 20% 20% 20% 20% 20% 21% 20% 19% 19% 18% 19% 18% 18% 18% DailyDialog (a) Wikipedia ada 10% 10% 15% 20% 25% 30% 35% -001 Source of answer choice text babbage -001 curie -001 DailyDialog ada -001 babbage -001 curie -001 Source of answer choice text gpt4 curie -001 babbage -001 Evaluated model 17% 9% 11% 13% 40% gpt4 gpt4 21% 21% 20% 20% 20% 20% 20% 20% 20% 20% ada -001 19% 18% 18% 18% 18%", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Average percent edit distances between completions from davinci-001 versus completions from the three other models and two datasets.", "figure_data": "Datasetdavinci-001 / ada-001davinci-001 / babbage-001davinci-001 / curie-001davinci-001 / DatasetWikipedia73.6%71.1%64.4%71.4%DailyDialog71.4%70.1%69.3%79.4%", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Average 
percent edit distances between completions from gpt4 versus completions from the three other models and two datasets.", "figure_data": "Datasetgpt4 / ada-001gpt4 / babbage-001gpt4 / curie-001gpt4 / DatasetWikipedia74.1%71.7%68.9%68.9%DailyDialog81.3%80.1%77.8%81.3%", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" } ]
Angelica Chen; Jason Phang; Alicia Parrish; Vishakh Padmakumar; Chen Zhao; Samuel R Bowman; Kyunghyun Cho
[ { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b1", "title": "", "year": "2020" }, { "authors": "Help Openai; Center", "journal": "", "ref_id": "b2", "title": "Do the openai api models have knowledge of current events?", "year": "" }, { "authors": "Nouha Dziri; Ximing Lu; Melanie Sclar; Lorraine Xiang; Liwei Li; Bill Jiang; Peter Yuchen Lin; Chandra West; Bhagavatula; Le Ronan; Jena D Bras; Soumya Hwang; Sean Sanyal; Xiang Welleck; Allyson Ren; Zaid Ettinger; Yejin Harchaoui; Choi", "journal": "", "ref_id": "b3", "title": "Faith and fate: Limits of transformers on compositionality", "year": "2023" }, { "authors": "Yanai Elazar; Nora Kassner; Shauli Ravfogel; Abhilasha Ravichander; Eduard Hovy; Hinrich Schütze; Yoav Goldberg", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b4", "title": "Measuring and Improving Consistency in Pretrained Language Models", "year": "2021-12" }, { "authors": "Constanza Fierro; Anders Søgaard", "journal": "", "ref_id": "b5", "title": "Factual consistency of multilingual pretrained language models", "year": "2022-05" }, { "authors": "Myeongjun Jang; Deuk Sin Kwon; Thomas Lukasiewicz", "journal": "", "ref_id": "b6", "title": "Accurate, yet inconsistent? 
consistency analysis on language understanding models", "year": "2021" }, { "authors": "Myeongjun Jang; Deuk Sin Kwon; Thomas Lukasiewicz", "journal": "", "ref_id": "b7", "title": "BECEL: Benchmark for consistency evaluation of language models", "year": "2022-10" }, { "authors": "Nora Kassner; Oyvind Tafjord; Hinrich Schütze; Peter Clark", "journal": "", "ref_id": "b8", "title": "BeliefBank: Adding memory to a pre-trained language model for a systematic notion of belief", "year": "2021-11" }, { "authors": "J Rohit; Yuk Wah Kate; Raymond J Wong; Mooney", "journal": "AAAI Press", "ref_id": "b9", "title": "Learning to transform natural to formal languages", "year": "2005" }, { "authors": "Wojciech Kryscinski; Bryan Mccann; Caiming Xiong; Richard Socher", "journal": "", "ref_id": "b10", "title": "Evaluating the factual consistency of abstractive text summarization", "year": "2020-11" }, { "authors": "Yanran Li; Hui Su; Xiaoyu Shen; Wenjie Li; Ziqiang Cao; Shuzi Niu", "journal": "", "ref_id": "b11", "title": "DailyDialog: A manually labelled multi-turn dialogue dataset", "year": "2017-11" }, { "authors": "Jiachang Liu; Dinghan Shen; Yizhe Zhang; Bill Dolan; Lawrence Carin; Weizhu Chen", "journal": "", "ref_id": "b12", "title": "What makes good in-context examples for gpt-3?", "year": "2021" }, { "authors": "Eric Mitchell; Joseph J Noh; Siyan Li; William S Armstrong; Ananth Agarwal; Patrick Liu; Chelsea Finn; Christopher D Manning", "journal": "", "ref_id": "b13", "title": "Enhancing self-consistency and performance of pretrained language models with nli", "year": "2022" }, { "authors": "Benjamin Newman; Prafulla Kumar Choubey; Nazneen Rajani", "journal": "", "ref_id": "b14", "title": "P-adapters: Robustly extracting factual information from language models with diverse prompts", "year": "2022" }, { "authors": " Openai", "journal": "", "ref_id": "b15", "title": "", "year": "2023" }, { "authors": "Pouya Pezeshkpour; Estevam Hruschka", "journal": "", "ref_id": "b16", "title": "Large language models sensitivity to the order of options in multiple-choice questions", "year": "2023" }, { "authors": "Harsh Raj; Domenic Rosati; Subhabrata Majumdar", "journal": "", "ref_id": "b17", "title": "Measuring reliability of large language models through semantic consistency", "year": "2022" }, { "authors": "Ohad Rubin; Jonathan Herzig; Jonathan Berant", "journal": "", "ref_id": "b18", "title": "Learning to retrieve prompts for in-context learning", "year": "2022-07" }, { "authors": "John M Zelle; Raymond J Mooney", "journal": "AAAI Press", "ref_id": "b19", "title": "Learning to parse database queries using inductive logic programming", "year": "1996" }, { "authors": "Chunting Zhou; Junxian He; Xuezhe Ma; Taylor Berg-Kirkpatrick; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Prompt consistency for zero-shot task generalization", "year": "2022-12" } ]
[ { "formula_coordinates": [ 2, 232.96, 633.2, 308.2, 15.25 ], "formula_id": "formula_0", "formula_text": "ỹt = arg max v∈V log p θ (y t = v | c; ỹ<t ) (1)" }, { "formula_coordinates": [ 2, 227.6, 694.14, 313.57, 35.55 ], "formula_id": "formula_1", "formula_text": "g p θ (c) = ( arg max v∈V p θ (y = v|c), • • • , (2) arg max v∈V p θ (y = v|c; ỹ<T ))." }, { "formula_coordinates": [ 3, 72, 506.85, 421.3, 33.25 ], "formula_id": "formula_2", "formula_text": "P I (p) := {f (p ′ , p) | p ′ ∈ P ′ I }. Since f (_, p) ∼ p, a model that is self-consistent must yield g p θ (f (p ′ , p)) ∼ g p θ (p) for all p ′ ∈ P ′ I ." }, { "formula_coordinates": [ 3, 72, 644.96, 468, 22.27 ], "formula_id": "formula_3", "formula_text": "P Comp := {f (p ′ , p) | f (p ′ , p) ∼ g p * (f (p ′ , p)), p ∈ V * , p ′ ∈ P ′ }" }, { "formula_coordinates": [ 4, 188.42, 234.75, 125.23, 11.9 ], "formula_id": "formula_4", "formula_text": "(f (p ′ , p)) ∼ g p θ (f (p ′ , g p θ (p)))" }, { "formula_coordinates": [ 4, 95.14, 287, 218.51, 32.26 ], "formula_id": "formula_5", "formula_text": "1. p ≁ g p θ (p) and g p θ (f (p ′ , p)) ∼ g p θ (f (p ′ , g p θ (p))) 2. p ∼ g p θ (p) and g p θ (f (p ′ , p)) ≁ g p θ (f (p ′ , g p θ (p)))" }, { "formula_coordinates": [ 8, 231, 518.73, 150.01, 40.21 ], "formula_id": "formula_6", "formula_text": "p 1 = \"Q: 2 × 3 \\n A:\" p 2 = \"Q: 6/2 \\n A:\" p 3 = \"Q: (2 × 3)+(6/2) \\n A:\"" }, { "formula_coordinates": [ 8, 234.62, 621.63, 142.75, 32.24 ], "formula_id": "formula_7", "formula_text": "p (1) CC = \"Q: 6 + (6/2) \\n A:\" p (2) CC = \"Q: (2 × 3) + 3 \\n A:\"" }, { "formula_coordinates": [ 9, 72, 299.54, 132.94, 11.9 ], "formula_id": "formula_8", "formula_text": "g p θ (f (p ′ , p)) ∼ g p θ (f (p ′ , g p θ (p))" } ]
10.18653/v1/N19-1388
2023-10-24
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b34", "b33", "b33" ], "table_ref": [], "text": "Multilingual model vocabularies are finite and typically smaller than the possible set of Unicode characters, inherently leaving some languages and scripts under-represented. As coverage increases, parameter allocation to each language decreases, resulting in a trade-off between capability, capacity, and coverage. Recent work on pixel representations (Salesky et al., 2021;Rust et al., 2023) provides an appealing alternative to past approaches, because they do not have a discrete model vocabulary or finite embedding matrix, and can represent all scripts with complete parameter sharing.\nRecent work (Rust et al., 2023) has also shown that pixel-based models can be directly finetuned across scripts without vocabulary extensions, adapters, or transliteration. However, pixel representations have previously only been trained or finetuned on individual languages at a time, rather than multilingually. This leaves unanswered questions about the effects of multilingual co-training, such as whether similar scripts will interfere with or boost performance, or if architectural changes will be needed given the larger input space. In this work we demonstrate how to effectively parameterize and train multilingual translation models with pixel representations, leading to improvements of up to 9 BLEU on two multilingual datasets with diverse language and script coverage. We explore various properties of pixel representations in order to understand their potential benefits and limitations, including positive transfer and representational similarity between languages, parameter sharing, and frequency-based relationships. Finally, we show that not only can pixel representations be finetuned cross-lingually or to unseen scripts, but can do so more data-efficiently than alternatives such as vocabulary expansion, with significant improvements for unseen scripts." }, { "figure_ref": [], "heading": "Our approach", "publication_ref": [ "b10", "b28" ], "table_ref": [], "text": "Covering the larger character sets 1 in multilingual models commonly results in significant parameter increases in the embedding matrix and softmax, creating a vocabulary bottleneck. While sampling data by language to balance vocabularies is common for large-scale multilingual systems (Fan et al., 2021), sampling may cause common vocabulary to be outof-vocabulary (OOV) for languages with longer-tail character distributions like Chinese (NLLB Team et al., 2022). 2 One alternative is to move to bytebased representations, which combats exploding model parameters by reducing the set of embeddings to 256. However, this approach increases sequence lengths up to 12× compared to characters, determined by the script's Unicode encoding, making optimal batch sizes prohibitively large and slow for our computational resources.\nRendering text to images bypasses many of the vocabulary challenges posed by multilingual modeling. Pixel-based representations have the advantage of no predetermined static vocabularies, no exploding embedding matrix parameters or sequence lengths, and complete parameter sharing across similar word forms at a sub-character level regardless of the underlying Unicode or byte structure.\nBelow we present the technical details of our approach and comparisons before proceeding to experimental settings and results." 
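To make the sequence-length side of that trade-off concrete, the short snippet below compares character and UTF-8 byte counts for the word "translation" in a few scripts; the word choices are ours, and the byte counts follow directly from UTF-8.

```python
# Character vs. UTF-8 byte counts for the same word in several scripts.
# Word choices are illustrative; byte counts follow directly from UTF-8.
words = {
    "English": "translation",
    "Russian": "перевод",
    "Hindi":   "अनुवाद",
    "Korean":  "번역",
    "Chinese": "翻译",
}
for lang, w in words.items():
    chars, nbytes = len(w), len(w.encode("utf-8"))
    print(f"{lang:8s} {chars:3d} chars {nbytes:3d} bytes ({nbytes / chars:.1f}x)")
```

A byte-level model pays this script-dependent length penalty on every example, whereas the rendered inputs described next have lengths determined by rendered width rather than by the underlying encoding.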
}, { "figure_ref": [ "fig_1" ], "heading": "Encoding text with pixels", "publication_ref": [ "b33" ], "table_ref": [], "text": "Figure 2 demonstrates the rendering process and resulting Transformer inputs. We render text using the PangoCairo library 3,4 following Rust et al. (2023) with a font size of 10pt at 120 DPI. We tokenize sentence-level images into fixed-size image tokens with h=24, w=24, and stride s=12, which results in ∼3 Latin characters per token. The height was chosen to fit the wide variety of scripts and diacritics in our experimental data with a fixed font size. We use the Google Noto Sans fonts collection which covers the majority of Unicode codepoints. 5 Further discussion on rendering parameter choices is found in App. C. No preprocessing is applied before rendering. We train many-to-one multilingual models with pixel representations on the source side, and generate discrete subword tokens as the target as below." }, { "figure_ref": [], "heading": "Traditional subword tokenization", "publication_ref": [ "b22", "b23" ], "table_ref": [ "tab_5" ], "text": "We generated all subword vocabularies using Sen-tencePiece unigramLM (Kudo, 2018;Kudo and Richardson, 2018). In exploratory experiments, we 2 For example, the NLLB model vocabulary does not include the common characters in 'mother' in Chinese, 妈妈. 3 https://docs.gtk.org/PangoCairo 4 PangoCairo provides greater flexibility than alternatives such as PyGame, used in previous work, by supporting fallback fonts at the character level. This is necessary not only for code-mixing but to support common occurrences such as non-transliterated entities within non-Latin scripts. 5 See https://notofonts.github.io/overview for the Noto fonts and their Unicode coverage. compared the union of subword vocabularies constructed per-language to a jointly-trained subword vocabulary of the same total size. Individual vocabularies were of size 5k,6 and scaled equivalently for joint vocabularies, e.g. 35k for 7 source languages. The two constructions did not result in significant differences in downstream performances in our balanced or imbalanced data settings so we present only joint vocabulary results in the main text, as this approach scales more easily to 59 languages.\nResults for both constructions are shown in App. G. We use separate source and target vocabularies and share target vocabularies between subword and pixel models in order to isolate the source representation change. Vocabulary sizes for all models and datasets are shown in Table 4 in App. B." }, { "figure_ref": [ "fig_1" ], "heading": "Model architecture", "publication_ref": [ "b34", "b3", "b35", "b5", "b18", "b21", "b41", "b4" ], "table_ref": [], "text": "Our core architecture follows Salesky et al. (2021) and combines a convolutional block7 which processes image tokens and produces flattened vectors (Figure 2) with a Transformer encoder-decoder model. Convolutional layers use one color channel and a 3 × 3 kernel with a stride of 1. Our conventional text models share the same Transformer architecture and replace the convolutional block with a traditional embedding matrix of size V × 512.\nOur base models are Transformers with 6 encoder and 6 decoder layers each, with hidden units of dim 512, feed-forward layers of dim 1024, and 4 heads. We train our models with the Adam optimizer (Kingma and Ba, 2015) with linear warmup, learning rate 5e-4, dropout of 0.1, and label smoothing 0.2. 
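The image tokens consumed by this block are produced by a simple windowing step over the rendered sentence image. The sketch below shows only that windowing step, assuming the sentence has already been rendered to a grayscale NumPy array; the rendering itself uses PangoCairo as described above, and the white padding value here is an assumption rather than a detail of our released code.

```python
import math
import numpy as np

def image_tokens(rendered: np.ndarray, height: int = 24,
                 width: int = 24, stride: int = 12) -> np.ndarray:
    """Slice a rendered sentence image (grayscale, shape height x total_width)
    into overlapping fixed-size image tokens for the convolutional block.
    With these defaults one token spans roughly 3 Latin characters."""
    assert rendered.shape[0] == height
    total = rendered.shape[1]
    n_tokens = 1 if total <= width else 1 + math.ceil((total - width) / stride)
    padded_width = width + (n_tokens - 1) * stride
    padded = np.pad(rendered, ((0, 0), (0, padded_width - total)),
                    constant_values=255)          # assumed white background
    return np.stack([padded[:, i * stride: i * stride + width]
                     for i in range(n_tokens)])   # (n_tokens, height, width)
```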
We train with temperature sampling T = 1.5 in language-imbalanced settings (Arivazhagan et al., 2019;Shaham et al., 2023). We use batches of 160k tokens, and train until performance on a held-out validation set fails to improve for ten validations. Trained models and scripts to replicate them will be released upon publication. 8Reparameterizing model capacity with deeper encoders and shallower decoders has been shown to be beneficial particularly for large multilingual vocabularies and/or smaller granularity inputs such as characters (Cherry et al., 2018;Kasai et al., 2021;Kong et al., 2021;Xu et al., 2021;Berard et al., 2021). Replacing the source embedding matrix with visual representations frees parameters which may be re-allocated elsewhere within the model. As we expand language coverage with pixel-based representations, it is not clear a priori whether and where additional capacity may be needed to scale performance compared to models for individual languages or traditional text models. We experiment with different ways to add and allocate model capacity with both pixel and text inputs, with results presented in § 3.1." }, { "figure_ref": [], "heading": "Multilingual translation with pixels", "publication_ref": [ "b9", "b34", "b31" ], "table_ref": [ "tab_0" ], "text": "We experiment with two datasets to investigate the performance and properties of multilingual pixel representations for machine translation. We perform initial experiments with the balanced 7 language pair multi-target TED data (Duh, 2018) used by Salesky et al. (2021), which we will refer to as TED-7, to compare performance to prior work with pixel representations and explore any necessary architectural changes in the multilingual setting. We then scale up using the larger 59 language pair TED talk corpus from Qi et al. (2018), or TED-59. In all cases, our models are many-to-one multilingual translation models with English as the target. We list the languages in each corpus with the number of training examples in App. A. Results for all datasets are shown in Table 1." }, { "figure_ref": [ "fig_2", "fig_2", "fig_2" ], "heading": "Model capacity: wider or deeper?", "publication_ref": [ "b34" ], "table_ref": [], "text": "Increasing language coverage often requires increased model capacity. We find that the small base architecture from Salesky et al. (2021) is unstable and may not converge when trained multilingually without additional capacity or batch size. For TED-7, multilingual source embeddings account for 33% of the total parameters of the best subword model.9 Without a source embedding matrix, despite the additional convolutional block, a pixel model with the same Transformer architecture as a subword model would be ∼17M parameters (or 38%) smaller, as seen in Figure 3, and may thus require different parameterization and result in different scaling behavior.\nWe investigate both the impact of reparameterizing the baseline model, as well as increasing capacity through greater encoder depth and/or width. We first find that shifting from an equal depth encoderdecoder model to a deep encoder and shallow decoder with the same number of parameters provides consistent improvements; for example, moving from 6-6 to 9-3 improves performance on the TED-7 dataset from 18.5 to 21.3 BLEU (+2.8). We maintain a shallow 3 layer decoder while varying the encoder through the remainder of this section. With an equal number of model parameters, increasing depth is more impactful than width, as seen in Figure 3a. 
Increasing width provides consistent improvements at all model sizes, while more significantly increasing overall parameters. With 12-3 layers and 2048 FF width, a pixel-based model has an equivalent number of parameters to the best subword model (∼55M) while able to continue improving with scale. The best pixel models for this dataset use 12-3 layers and FF width 4096. Continuing to increase depth and overall size has diminishing returns. Pixel models also appear more robust to overparameterization where text models degrade more quickly, as seen in Figure 3b.\nIs the optimal parameterization determined by the granularity of pixel inputs, the amount of training data, or the multilinguality of the task? To see, we reparameterize the models for individual language pairs from Salesky et al. ( 2021) at both the small and large data sizes (shown in App. F). We find that performance would have decreased in both cases, suggesting this is more likely due to the multilingual task, not the amount of data or pixel representations inherently.\nFor the larger TED-59 dataset (1.2M→5.1M), we use the same architecture as for TED-7. Exact model configurations for each dataset and representation scheme are listed together in App. B." }, { "figure_ref": [], "heading": "Language coverage and imbalanced data", "publication_ref": [ "b8", "b34" ], "table_ref": [], "text": "Including additional languages can sometimes interfere with rather than improve performance ('the curse of multilinguality' (Conneau et al., 2020)). When we compare our multilingual models to individual models for the same language pairs with TED-7, we see that all languages improve through multilingual training with pixel representations, while this is not the case for subword-based models, where two language pairs degrade (Figure 4). Improvements are greatest for those language pairs (ja, ko, zh) where individual models performed worse than BPE in Salesky et al. (2021). Improvements could be due to boosts from languages with similar scripts (zh and ja, or fr and de) or simply an increase in total training data: we investigate this in § 4.1 for TED-59 where we have more languages to study. Notably, improvements come without interference for pixel models here. Comparing multilingual pixel and BPE models, we see small but consistent improvements on TED-7 (Figure 5).\nThe TED-7 setting has relatively balanced data across all languages and scripts and at least 150k examples per pair, which is a reasonable baseline but unrealistic in the context of typical multilingual translation settings. We turn to the TED-59 dataset for increased language coverage with imbalanced training data and script representation for a more realistic setting to see if our improvements hold or interference emerges. Here we see larger improvements of up to 9 BLEU compared to BPE for most language pairs, and some degradation for 2 pairs whose scripts have only ∼5k training examples across all languages, highlighted in Figure 5.\nGiven the large and imbalanced nature of this " }, { "figure_ref": [], "heading": "=0.70", "publication_ref": [], "table_ref": [], "text": "Figure 6: Performance improvements with pixel representations are most strongly correlated with the total amount of data for a language's script compared to language or language family. Data size per language is listed in App. A. 
dataset with a many-to-many multilingual model with language-aware multi-head attention, with 25.3 average BLEU: our many-to-one pixel model improves on this by +3.1 BLEU. They do not report results per language for further comparison. (Their many-to-many model is trained on 2× as many sentences as the models presented here by reversing the dataset.)
4 Properties of multilingual pixel models" }, { "figure_ref": [], "heading": "Positive transfer across languages", "publication_ref": [], "table_ref": [], "text": "We look at the relationship between the data representation for each source language, family, and script and the resulting performance, to find the greatest contributors to improvements with pixel representations on TED-59. The amount of data for a given pair is only weakly related to performance for both pixel and subword representations (ρ≤0.3, p < 0.05), while language family and script representation are moderately correlated (ρ=0.5-0.6, p ≪ 0.001), suggesting some positive transfer across languages and scripts for both approaches. However, looking at each factor's relationship to performance improvement rather than raw scores better reflects those responsible for the difference. As shown in Figure 6, the amount of data for a given script is strongly correlated with ∆BLEU (ρ=0.70, p ≪ 0.001), while family is moderately correlated (0.35) and data for individual language pairs has no clear relationship. We conclude that pixels enable more effective cross-lingual transfer between languages with the same script, and to a lesser degree the same family, than joint subword vocabularies. We hypothesize that we would see similar improvements for Bengali and Tamil with at least 10k examples for their scripts." }, { "figure_ref": [], "heading": "Clustering by language and script", "publication_ref": [], "table_ref": [], "text": "To better understand how pixel representations pattern by language and script, we compare our model's subword embeddings and our pixel representations. Using the validation set as input, we compute sentence-level vectors by mean-pooling over token embeddings for each sentence for the subword model, or over the linearly projected vectors of the same dimension from the convolutional block for the pixel model. We visualize these representations using t-SNE clustering (van der Maaten and Hinton, 2008), in Figure 7 for TED-59 and in App. H for the smaller TED-7.
Pixel representations cluster neatly by script (7a), reflecting the strong ability to share information between languages of the same script discussed in § 4.1. Subword embeddings do not cluster as strongly by script despite shared subwords, with many separate clusters for e.g. Latin-script languages (7b). We observe that subword embeddings cluster more tightly by language and family (7d), with less representational overlap between languages than we see with pixels (7c). However, the visual model still reflects some similarities within families, both within and across scripts. For example, in the large Latin-script cluster in 7c, all Uralic languages appear within close proximity of each other, as do the Austronesian languages, and some overlap exists between Cyrillic and Latin representations in 7a, which likely reflects Slavic family similarities rather than visually similar characters, given sentence-level vectors."
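A minimal sketch of this pooling-and-projection analysis follows; the per-token vectors are random placeholders standing in for the subword embeddings or the pixel model's projected convolutional features, and the script labels are illustrative, so only the mean-pooling and t-SNE steps mirror the procedure described above.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Placeholder inputs: in the real analysis each validation sentence yields a
# (num_tokens, 512) matrix from the subword embedding lookup or from the
# pixel model's linearly projected convolutional block.
sentences = [rng.normal(size=(rng.integers(5, 40), 512)) for _ in range(200)]
scripts = rng.choice(["Latin", "Cyrillic", "Arabic", "CJK"], size=200)

# Sentence-level vectors by mean-pooling over token representations.
pooled = np.stack([s.mean(axis=0) for s in sentences])

# 2-D t-SNE projection of the pooled vectors.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(pooled)

for script in np.unique(scripts):
    mask = scripts == script
    plt.scatter(coords[mask, 0], coords[mask, 1], s=8, label=script)
plt.legend()
plt.savefig("tsne_by_script.png", dpi=150)
```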
}, { "figure_ref": [ "fig_6" ], "heading": "Complete parameter sharing", "publication_ref": [], "table_ref": [], "text": "With traditional model vocabularies, parameters are not shared between embeddings; only 3% of embeddings are updated per batch on average for TED-59 without redistribution techniques such as label smoothing (this number is heavily dependent on language coverage, sampling, vocabulary, and batch size, and reflects a 64k source vocabulary and a large batch size of 160k tokens). On the other hand, 100% of the pixel model representation block parameters are updated every batch due to parameter sharing at the pixel level. Pixel representations have direct access to token sub-components, whereas subwords do not, leading to more similar representations for words e.g. with and without diacritics: with the TED-59 subword vocabulary, the two Arabic forms for \"book\", with and without diacritics, have disjoint subword decompositions and so do not share embeddings, whereas the pixel representations are highly similar; as visualized in Figure 8, the convolutional layer feature activations remain highly similar despite the inserted diacritics. If a pixel-based model observes partial lexical matches such as \"ktb\" and \"kitab\" in training, parameters for both will be updated by backpropagation to the shared pixel values; we hypothesize that this contributes to the increased transfer across languages with the same script and to the performance improvements. Future work may investigate whether this property leads to more compositional representations." }, { "figure_ref": [ "fig_7" ], "heading": "Reduced frequency-based representation degeneration", "publication_ref": [ "b11" ], "table_ref": [], "text": "Previous work has shown that embeddings can suffer from a frequency-based representation degeneration problem, where infrequent and unseen words cluster together in embedding space due to limited parameter updates during training (Gao et al., 2019). However, as pixel models share parameters at the pixel level, all representations are updated to some degree each batch regardless of subword-level text frequency. Therefore, the low-frequency degradation effect should be reduced in pixel models, and rare words may not cluster as strongly.
We examine this phenomenon by comparing the source embeddings from the subword model against representations from the pixel model on TED-7. We obtain a comparable set of representations from the pixel model by rendering each subword in the TED-7 source vocabulary and mean-pooling the output of the convolutional block for all resulting visual token(s).
We plot these embeddings using 2-D singular value decomposition, and color each point according to the log-frequency of its corresponding subword in Figure 9. We plot visual embeddings excluding 1% of outliers for improved readability (and include the full plot in App. I). We see that in the text model, there is both a clear frequency bias and a cluster of low-frequency embeddings. In the pixel model, though we see some frequency bias among embeddings, the distribution of low-frequency embeddings is improved." }, { "figure_ref": [ "fig_8" ], "heading": "Data-efficient cross-lingual transfer", "publication_ref": [ "b15", "b29", "b33" ], "table_ref": [ "tab_3" ], "text": "It has been shown that using pretrained multilingual models for cross-lingual transfer can provide significant performance improvements, particularly when the target language is under-resourced.
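(As a short aside on the preceding analysis: the SVD-based frequency visualization can be sketched as below, with a random embedding matrix and Zipf-distributed counts standing in for the trained TED-7 models and their subword statistics.)

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Placeholders for a (vocab_size, 512) embedding matrix and per-subword
# corpus counts; the actual analysis uses the trained TED-7 models.
embeddings = rng.normal(size=(10_000, 512))
counts = rng.zipf(1.3, size=10_000).astype(float)

# Project to 2-D along the top two right-singular directions.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T

plt.scatter(coords[:, 0], coords[:, 1], c=np.log(counts), s=4, cmap="viridis")
plt.colorbar(label="log frequency")
plt.savefig("svd_by_frequency.png", dpi=150)
```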
However, adapting models to unseen scripts with no lexical coverage in the original model typically requires techniques such as expanding the embedding matrix to include new vocabulary (Wang et al., 2019b) or language-specific adapters (Houlsby et al., 2019;Pfeiffer et al., 2020). In contrast, models with pixel representations can be finetuned directly on new languages and scripts without requiring any architectural changes (Rust et al., 2023). We hypothesize that the model properties discussed in § 4 will not only allow transfer without model ex- tensions, but enable transfer more data-efficiently, requiring fewer examples to achieve good performance.\nTo evaluate the data-efficiency of cross-lingual transfer, we adapt our multilingual models to language pairs with five new source languages, each with different degrees of script coverage to those observed in pretraining as quantified in Table 3: Romanian, Polish, Farsi, Vietnamese, and Hebrew. We randomly sample 10k, 50k, and 150k (∼all) sentences from the multi-target TED dataset used for TED-7 for each new language pair and finetune our TED-7 models on the training data for each pair individually for up to 30 epochs, with early stopping if there are no improvements on the held-out validation sets for 5 epochs. We use the TED-7 models because they do not cover these languages in pretraining; we note that the overall performance on the original task is similar for pixel and subword models. In addition to the pixel and subword models, we also compare subword models with vocabulary expansion, where the source embedding matrix is extended to include BPE inventories of size 5k trained for each new language, for which embeddings are randomly initialized.\nWhether model vocabularies cover a particular script is typically described as binary, but even with observed scripts new languages introduce unseen character sequences and diacritics which will not be appropriately represented. We observe that for Unicode-based models, transfer capability is strongly reflected in lexical coverage; vocabulary expansion improves performance slightly for languages with higher n-gram coverage, and significantly for Hebrew with minimal coverage, particularly with more data to train new language-specific embeddings, as seen in Figure 10. However, pixel representations enable models to perform better still than vocabulary expansion, particularly with less data. We believe this is because with complete parameter sharing across all scripts, all parameters for new languages are more strongly initialized. This direction may lead to more data-efficient crosslingual transfer, particularly for under-resourced languages and tasks." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b5", "b14", "b44", "b19", "b18", "b21", "b37", "b7", "b6", "b24", "b13", "b32", "b2", "b30", "b36", "b12", "b27", "b15", "b29", "b34", "b33", "b26" ], "table_ref": [], "text": "Previous work has shown allocating additional encoder capacity to be beneficial for smaller granularity inputs, both for characters and bytes (Cherry et al., 2018;Xue et al., 2022b) and other modalities (He et al., 2021;Zhang et al., 2017). 
Deep encoders and shallow decoders have been used to improve model efficiency and latency with subword inputs (Kim et al., 2019;Kasai et al., 2021;Kong et al., 2021), and deeper and narrower encoders have been shown to scale more effectively (Tay et al., 2022;Xue et al., 2022a).\nSignificant prior work has been devoted to broader and more effective language coverage, through full Unicode character coverage and downsampling (Clark et al., 2022), clustered vocabularies for efficient modeling of large vocabularies (Chung et al., 2020;Liang et al., 2023), bytelevel modeling (Gillick et al., 2016;Xue et al., 2022b), bytes in conjunction with BPE to combat data sparsity and memory issues (BBPE: Radford et al., 2019;Wang et al., 2019a) or bytefallback (Xue et al., 2022b). Mapping characters to a smaller set of common representations across scripts through transliteration (Amrhein and Sennrich, 2020;Purkayastha et al., 2023) or graphemeto-phoneme systems (Sun et al., 2022;Gheini and May, 2019) have also been shown beneficial for multilingual and cross-lingual transfer for re-lated languages across scripts, though they may also introduce collisions which can negatively affect performance. Post-hoc vocabulary expansion (Wang et al., 2019b;Moon and Okazaki, 2020) or language adapters (Houlsby et al., 2019;Pfeiffer et al., 2020) to increase vocabulary coverage have also been shown to be very effective. Recently, pixel representations have been proposed as a vocabulary-free alternative (Salesky et al., 2021;Rust et al., 2023), though not trained yet multilingually. We refer readers to the BigScience survey for greater discussion (Mielke et al., 2021)." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "We introduce and demonstrate how to effectively train multilingual pixel representations for machine translation. We experiment with two different data scales with a variety of language and script coverage, demonstrating improved performance compared to the traditional subword approach. We analyze various properties of pixel representations to better understand where they may provide potential benefits and the impact of different scripts and data representation. We observe that these properties not only enable cross-lingual transfer to unseen scripts, but make pixel representations more data-efficient than alternatives such as vocabulary expansion. We hope this work contributes to more extensible multilingual models for all languages and scripts." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Our multilingual experiments are only many-toone thus far, and apply visual representations to the source languages only. Whether the dynamics would change with multiple target languages is not yet known. Though we do experiment with multiple resource scales up to ∼5M sentences our settings remain limited in scale and domain compared to large-scale industry models and it remains to be seen how this approach would fare in other settings. At very low-resource settings with fewer than 10k examples for a given script, our approach may perform worse than traditional subword embeddings. We observe that pixel models are in some settings slower to converge than subword equivalents, which we cautiously attribute to sub-optimal hyperparameters. 
Though the compute resources required for training models are similar to traditional text representations, significantly more disk space is required to save rendered text compared to raw text, which may be necessary if pre-computing batches without rendering on-the-fly and may limit efficiency in larger-scale settings. Scalability to longer text has not yet been investigated." }, { "figure_ref": [ "fig_9" ], "heading": "Ethics Statement", "publication_ref": [ "b1" ], "table_ref": [], "text": "The aim of this work is to reduce the vocabulary bottleneck which disproportionately affects lowresource languages as they are less likely to be appropriately represented in traditional discrete multilingual model vocabularies. Alternatives such as byte-level tokenization potentially increase rather than decrease the disparity between scripts, as a single character may be represented as up to 12 bytes in e.g. Telugu, whereas Latin scripts are typically 1:1 characters:bytes (Ahia et al., 2023). We show the sequence lengths resulting from byte, character, BPE, and pixel 'tokenization' on TED-59 in Figure 11, App. D; of the alternatives to BPE tokenization, pixel representations result in the most similar sequence lengths and lowest variance across languages and scripts.\nIn application settings, substituting visually similar characters such as '0' for 'O' can be used to circumvent lexical filtering as used for e.g. spam filtering, hate speech detection, or censorship. Pixel representations may make these substitutions less effective which may be beneficial or harmful depending on the setting." }, { "figure_ref": [], "heading": "A List of Languages by Dataset", "publication_ref": [], "table_ref": [], "text": "We list the source languages in each dataset with the number of training examples and language code. All datasets are many-to-one parallel with English as the target language.\nFor TED-7 and TED-59, we use the provided train/dev/test splits, and report results on test using model checkpoints chosen based on dev perplexities. " }, { "figure_ref": [], "heading": "TED", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C Detailed discussion of rendering parameter choices", "publication_ref": [ "b34", "b33" ], "table_ref": [], "text": "Below we discuss our rendering choices in further detail and provide pointers to the experimentation in past work we build from.\nFont: Following past work (Salesky et al., 2021;Rust et al., 2023) we use the Noto font family, as it has the widest Unicode coverage within a single font or font family known to us. Previous work has used the non-serif font variant: we find a slight performance decrease of 5% with NotoSerif on TED-59, and accordingly stick to NotoSans.\nPatch size and stride: Salesky et al. ( 2021) extensively tune font size, window size, and stride for single language pair translation experiments, and find that performance may degrade for some language pairs with font size <10pt. For this reason, we use font size 10pt. While that work found slight differences in optimal window size (15-30) and stride (5-20), we found no degradation in multilingual performance with uniform window widths and so use uniform values for simplicity. 2023) compare additional rendering strategies to decrease the pixel input space through structured spacing (bigrams, words) or monospace fonts where available, and show improvements on both pretraining and cross-lingual transfer to downstream classification tasks, and multilingual QA. 
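For concreteness, a minimal sketch of the sentence-rendering and sliding-window step is shown below; it uses Pillow's default bitmap font as a stand-in for NotoSans rendered via PangoCairo, and the 24px window with an overlapping stride follows the values discussed here, so it illustrates the tokenization geometry rather than the actual rendering code.

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def render_windows(text, height=24, window=24, stride=12):
    """Render a sentence as a grayscale strip, then slice it into overlapping
    fixed-size windows ('visual tokens')."""
    font = ImageFont.load_default()  # stand-in for NotoSans 10pt via PangoCairo
    probe = ImageDraw.Draw(Image.new("L", (1, 1)))
    left, top, right, bottom = probe.textbbox((0, 0), text, font=font)
    strip = Image.new("L", (max(right + 2, window), height), color=255)
    ImageDraw.Draw(strip).text((1, (height - (bottom - top)) // 2), text, fill=0, font=font)
    pixels = np.asarray(strip, dtype=np.float32) / 255.0

    # Overlapping sliding windows of width `window`, advanced by `stride` pixels.
    tokens = [pixels[:, i:i + window]
              for i in range(0, pixels.shape[1] - window + 1, stride)]
    return np.stack(tokens)  # shape: (num_tokens, height, window)

print(render_windows("Translation with pixels").shape)
```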
It remains to be seen how these strategies would affect translation.\nRendering backend: In addition to the character-level fallback capabilities mentioned in the main text ( § 2.1), the PangoCairo renderer is also more efficient than PyGame, with throughput approaching the Rust-based BERT tokenizer without batch processing, as measured in Rust et al. (2023, App. D)." }, { "figure_ref": [], "heading": "D Variance in sequence lengths across tokenizations", "publication_ref": [], "table_ref": [], "text": "Below we show the sequence lengths resulting from byte, character, and BPE tokenization and pixel representations on TED-59. Of the alternatives to BPE, pixel representations result in the most similar sequence lengths and lowest variance across languages and scripts. " }, { "figure_ref": [], "heading": "E Full results reported by individual language pair", "publication_ref": [], "table_ref": [ "tab_7", "tab_8" ], "text": "In addition to the aggregated metric scores reported in the main text, below we report results for each individual language pair with three metrics: BLEU, chrF, and COMET.\nResults are organized by dataset. TED-7 results are reported in Table 5, and TED-59 in Table 6.\nE.1 Individual language pair results: TED-7 " }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "We are grateful to Team PIXEL (Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Miryam de Lhoneux, Desmond Elliott), Carlos Aguirre, Antonis Anastasopoulos, and Chenlei Sei for helpful discussions. Elizabeth Salesky is supported by the Apple Scholars in AI/ML fellowship." }, { "figure_ref": [], "heading": "F Control experiment: Reparameterized models for individual language pairs", "publication_ref": [ "b34" ], "table_ref": [], "text": "Here we reparameterize the TED models for individual language pairs from Salesky et al. (2021) according to our findings in § 3.1, shifting encoder-decoder layer depth from 6-6 to 12-3 and feed-forward width from 1024 to 2048, while maintaining approximately the same number of parameters (55.3M vs. 56.9M). We see that this reparameterization is not optimal for individual language pairs with less data, or the individual de-en language pair with a similar amounts of data to TED-7. " }, { "figure_ref": [], "heading": "TED WMT", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "H Clustering by language and script: TED-7", "publication_ref": [], "table_ref": [], "text": "Below we show the same t-SNE clustering from § 4.1.1 for the smaller multi-way parallel TED-7 validation set. Sentence-level vectors for clustering are creating by mean-pooling token embeddings for both the PIXEL and BPE models. We observe clear clustering by source language in the text model, despite parallel sentences and shared subwords. In the pixel model, we observe multiple clusters per language and script, with greater overlap between languages with shared scripts (French and German). I Full SVD plot of TED-7 pixel model embeddings " } ]
We introduce and demonstrate how to effectively train multilingual machine translation models with pixel representations. We experiment with two different data settings with a variety of language and script coverage, demonstrating improved performance compared to subword embeddings. We explore various properties of pixel representations such as parameter sharing within and across scripts to better understand where they lead to positive transfer. We observe that these properties not only enable seamless cross-lingual transfer to unseen scripts, but make pixel representations more data-efficient than alternatives such as vocabulary expansion. We hope this work contributes to more extensible multilingual models for all languages and scripts.
Multilingual Pixel Representations for Translation and Effective Cross-lingual Transfer
[ { "figure_caption": "Figure 1 :1Figure 1: Embedding matrices are disjoint parameter allocations by script, leading to a vocabulary bottleneck. Pixel representations however share parameters across scripts and are not limited to a discrete vocabulary.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Encoding text with pixels: text is rendered to images by sentence. Image tokens are created by overlapping sliding windows of fixed height (h), width (w), and stride (s). Convolutional layer output is projected to flat vectors for subsequent Transformer layers.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Performance across different model capacities, varying encoder depth and/or width (TED-7).", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :Figure 5 :45Figure 4: Improvement with multilingual models over models for each lang. pair.", "figure_data": "", "figure_id": "fig_3", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Figure 7: Clustering shows more representational similarity within scripts and across languages with pixel representations than with disjoint subword embeddings in the TED-59 dataset. Individual languages from the same family are shown with different shades of the same color in items (c) and (d).", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Pixel representations result in similar representations for partial lexical matches due to visual similarity and parameter sharing at the pixel level.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: SVD plots of source representations show traditional embeddings cluster infrequent subwords together more tightly than pixels.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Data-efficiency in cross-lingual transfer. Models with pixel-based representations adapt more efficiently and effectively to new scripts than with traditional text representations (shown here: Hebrew).", "figure_data": "", "figure_id": "fig_8", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Average sequence length with various tokenization schemes compared on TED-59.", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Model performance across two datasets on test. Models chosen by perplexity on held-out validation sets. Metric scores are averaged across all languages in the dataset; App. 
E shows results for individual language pairs.", "figure_data": "TED-7TED-59Source reps.BLEUchrFCOMETBLEUchrFCOMETBPE25.748.877.323.845.573.3PIXEL26.249.577.928.450.177.2", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "SrcScript # Sents PIXEL BPE Aharoni.∆azLatin5946 16.6 12.511.2+5.4beCyrillic4509 28.5 19.218.3+10.2glLatin10017 36.5 29.728.6+7.9skLatin61470 33.7 27.426.8+6.9[ LR ]avg: 28.8 22.221.2+7.6arArabic 214111 29.8 26.125.9+3.9deLatin 167888 36.1 30.028.9+7.2he Hebrew 211819 35.3 30.730.2+5.1itLatin 204503 38.5 32.332.4+6.1[ HR ]avg: 34.9 29.829.4+5.6dataset, previous work has commonly reported non-aggregated performance for a subset of languagepairs only (4 low-resource and 4 high-resource)with varied scripts and degrees of relatedness.Compared to the best previous results on thosepairs (Aharoni et al., 2019), our subword baselinesimprove slightly: +1 BLEU on the LR pairs and+0.4 on the HR. With pixel representations, ourmodels improve significantly, +7.6 on the LR pairsand +5.6 on the HR pairs, as shown in Table 2.Low-resource languages with well-representedscripts shared with other languages show largerimprovement than the overall mean of +4.6; mostdramatically, Belarusian (be) improves by >50%or 10.2 BLEU despite having only 4509 traininginstances through positive transfer from the >600ksentence pairs in Cyrillic in TED-59 (discussed fur-ther in § 4.1). Jin and Xiong (2022) presented thestrongest previous performance on the full TED-59", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Script coverage in pretraining measured at the level of character n-grams. Improvements with pixel representations are averaged across all resource settings.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Below we report the details of the best performing model for each dataset and source representation.Dataset #Sents Model V src V tgt Emb. dim. Enc. layers Dec. layers FF width Attn. heads #Params", "figure_data": "-7arArabic175kjaJapanese155kzhChinese170kdeGerman153kko Korean166kTotal:1.2MfrFrench158kruRussian181kTED-59arArabic214khe Hebrew212kplPolish176kazAzerbaijani6khiHindi19kptPortuguese52kbeBelarusian5khrCroatian122kpt-br Br. Portuguese185kbgBulgarian174khu Hungarian147kroRomanian180kbnBengali5khy Armenian21kruRussian208kbsBosnian6kidIndonesian87kskSlovak61kcalv -0kitItalian205kslSlovenian20kcsCzech103kjaJapanese204ksqAlbanian45kdaDanish45kka Georgian13ksrSerbian137kdeGerman168kkk Kazakh3ksvSwedish57kelGreek134kko Korean206ktaTamil6keoEsperanto7kku Kurdish10kthThai98kesSpanish196kltLithuanian42ktrTurkish182ketEstonian11kmk Macedonian25kukUkrainian108keuBasque5kmn Mongolian8kurUrdu6kfaFarsi151kmr Marathi10kviVietnamese172kfiFinnish24kms Malay5kzhChinese6kfrFrench192kmy Burmese21kzh-cn Chinese, Simplified200kfr-ca Ca. French20knb Norwegian Bokmål16kzh-tw Chinese, Traditional 203kglGalician10knlDutch184kTotal:5.1MB Model details by datasetTED-71.2M PIXEL ∅ 10k5121234096487MTED-71.2M BPE35k 10k512661024455MTED-59 5.1M PIXEL ∅ 10k5121234096487MTED-59 5.1M BPE64k 10k512662048882M", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Details of pixel and subword model scale variants.", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Rust et al. 
(2023) used smaller square windows of 16 × 16, without any patch overlap (continuous), for English pretraining and crosslingual finetuning for classification tasks. In our multilingual translation experiments TED-59, we find an average 10% performance decrease without any overlap (stride s = width w). The maximum height of the characters in TED-59 with font size 10pt is 22px, requiring reduced size or truncation to use window size 16pt. A larger window size of 32 fits all characters but increases the proportion of whitespace pixels, and decreases performance by 7%. With window size 24px we were able to fit all characters and diacritics in this dataset, with best overall performance.", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Results on TED-7 evaluation set reported by individual language pair.", "figure_data": "BLEUMeanardefrjakoruzhPIXEL26.232.135.437.116.018.126.118.9BPE25.731.434.336.315.717.725.718.8CHAR24.530.432.734.315.016.824.317.8chrFMeanardefrjakoruzhPIXEL49.554.557.658.440.442.749.643.4BPE48.853.256.557.640.542.548.842.7CHAR47.552.455.056.039.241.147.641.4COMETMeanardefrjakoruzhPIXEL77.979.679.181.675.276.676.576.6BPE77.378.877.980.975.376.475.775.8CHAR76.778.878.180.274.375.675.374.5", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results on TED-59 evaluation set reported by individual language pair.", "figure_data": "", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" } ]
Elizabeth Salesky; Neha Verma; Philipp Koehn; Matt Post
[ { "authors": "Roee Aharoni; Melvin Johnson; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Massively multilingual neural machine translation", "year": "2019" }, { "authors": "Orevaoghene Ahia; Sachin Kumar; Hila Gonen; Jungo Kasai; David R Mortensen; Noah A Smith; Yulia Tsvetkov", "journal": "", "ref_id": "b1", "title": "Do all languages cost the same? tokenization in the era of commercial language models", "year": "2023" }, { "authors": "Chantal Amrhein; Rico Sennrich", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "On Romanization for model transfer between scripts in neural machine translation", "year": "2020" }, { "authors": "Naveen Arivazhagan; Ankur Bapna; Orhan Firat; Dmitry Lepikhin; Melvin Johnson; Maxim Krikun; Mia Xu Chen; Yuan Cao; George Foster; Colin Cherry; Wolfgang Macherey; Zhifeng Chen; Yonghui Wu", "journal": "", "ref_id": "b3", "title": "Massively multilingual neural machine translation in the wild: Findings and challenges", "year": "2019" }, { "authors": "Alexandre Berard; Dain Lee; Stephane Clinchant; Kweonwoo Jung; Vassilina Nikoulina", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Efficient inference for multilingual neural machine translation", "year": "2021" }, { "authors": "Colin Cherry; George Foster; Ankur Bapna; Orhan Firat; Wolfgang Macherey", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Revisiting characterbased neural machine translation with capacity and compression", "year": "2018" }, { "authors": "Chung Hyung Won; Dan Garrette; Kiat Chuan Tan; Jason Riesa", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Improving multilingual models with language-clustered vocabularies", "year": "2020" }, { "authors": "Jonathan H Clark; Dan Garrette; Iulia Turc; John Wieting", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b7", "title": "Canine: Pre-training an efficient tokenization-free encoder for language representation", "year": "2022" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Kevin Duh", "journal": "", "ref_id": "b9", "title": "The multitarget TED talks task", "year": "2018" }, { "authors": "Angela Fan; Shruti Bhosale; Holger Schwenk; Zhiyi Ma; Ahmed El-Kishky; Siddharth Goyal; Mandeep Baines; Onur Celebi; Guillaume Wenzek; Vishrav Chaudhary; Naman Goyal; Tom Birch; Vitaliy Liptchinsky; Sergey Edunov; Edouard Grave; Michael Auli; Armand Joulin", "journal": "Journal of Machine Learning Research", "ref_id": "b10", "title": "Beyond english-centric multilingual machine translation", "year": "2021" }, { "authors": "Jun Gao; Di He; Xu Tan; Tao Qin; Liwei Wang; Tieyan Liu", "journal": "", "ref_id": "b11", "title": "Representation degeneration problem in training natural language generation models", "year": "2019" }, { "authors": "Mozhdeh Gheini; Jonathan May", "journal": "", "ref_id": "b12", "title": "A universal parent model for low-resource neural machine translation transfer", "year": "2019" }, { "authors": "Dan Gillick; Cliff Brunk; Oriol Vinyals; Amarnag Subramanya", "journal": "Association for Computational Linguistics", 
"ref_id": "b13", "title": "Multilingual language processing from bytes", "year": "2016" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b14", "title": "Masked autoencoders are scalable vision learners", "year": "2021" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "", "ref_id": "b15", "title": "Parameter-efficient transfer learning for NLP", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b16", "title": "", "year": "" }, { "authors": "Renren Jin; Deyi Xiong", "journal": "International Committee on Computational Linguistics", "ref_id": "b17", "title": "Informative language representation learning for massively multilingual neural machine translation", "year": "2022" }, { "authors": "Jungo Kasai; Nikolaos Pappas; Hao Peng; James Cross; Noah Smith", "journal": "", "ref_id": "b18", "title": "Deep encoder, shallow decoder: Reevaluating non-autoregressive machine translation", "year": "2021" }, { "authors": "Jin Young; Marcin Kim; Hany Junczys-Dowmunt; Alham Hassan; Kenneth Fikri Aji; Roman Heafield; Nikolay Grundkiewicz; Bogoychev", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "From research to production and back: Ludicrously fast neural machine translation", "year": "2019" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b20", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Xiang Kong; Adithya Renduchintala; James Cross; Yuqing Tang; Jiatao Gu; Xian Li", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Multilingual neural machine translation with deep encoder and multiple shallow decoders", "year": "2021" }, { "authors": "Taku Kudo", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "year": "2018" }, { "authors": "Taku Kudo; John Richardson", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018" }, { "authors": "Davis Liang; Hila Gonen; Yuning Mao; Rui Hou; Naman Goyal; Marjan Ghazvininejad; Luke Zettlemoyer; Madian Khabsa", "journal": "", "ref_id": "b24", "title": "Xlm-v: Overcoming the vocabulary bottleneck in multilingual masked language models", "year": "2023" }, { "authors": "Jonas F Lotz; Elizabeth Salesky; Phillip Rust; Desmond Elliott", "journal": "", "ref_id": "b25", "title": "Text rendering strategies for pixel language models", "year": "2023" }, { "authors": "Sabrina J Mielke; Zaid Alyafeai; Elizabeth Salesky; Colin Raffel; Manan Dey; Matthias Gallé; Arun Raja; Chenglei Si; Wilson Y Lee; Benoît Sagot; Samson Tan", "journal": "", "ref_id": "b26", "title": "Between words and characters: A brief history of open-vocabulary modeling and tokenization in nlp", "year": "2021" }, { "authors": "Sangwhan Moon; Naoaki Okazaki", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Patch-BERT: Just-in-time, out-of-vocabulary patching", "year": "2020" }, { "authors": "Marta R Nllb Team; James Costa-Jussà; Onur Cross; Maha Çelebi; Kenneth Elbayad; Kevin Heafield; Elahe Heffernan; Janice Kalbassi; Daniel Lam; Jean Licht; Anna Maillard; 
Skyler Sun; Guillaume Wang; Al Wenzek; Bapi Youngblood; Loic Akula; Gabriel Barrault; Prangthip Mejia-Gonzalez; John Hansanti; Semarley Hoffman; Jarrett; Ram Kaushik; Dirk Sadagopan; Shannon Rowe; Chau Spruit; Pierre Tran; Necip Andrews; Shruti Fazil Ayan; Sergey Bhosale; Angela Edunov; Cynthia Fan; Vedanuj Gao; Francisco Goswami; Philipp Guzmán; Alexandre Koehn; Christophe Mourachko; Safiyyah Ropers; Holger Saleem; Jeff Schwenk; Wang", "journal": "", "ref_id": "b28", "title": "No language left behind: Scaling humancentered machine translation", "year": "2022" }, { "authors": "Jonas Pfeiffer; Ivan Vulić; Iryna Gurevych; Sebastian Ruder", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer", "year": "2020" }, { "authors": "Sukannya Purkayastha; Sebastian Ruder; Jonas Pfeiffer; Iryna Gurevych; Ivan Vulić", "journal": "", "ref_id": "b30", "title": "Romanization-based large-scale adaptation of multilingual language models", "year": "2023" }, { "authors": "Ye Qi; Devendra Sachan; Matthieu Felix; Sarguna Padmanabhan; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "When and why are pre-trained word embeddings useful for neural machine translation", "year": "2018" }, { "authors": "Alec Radford; Jeff Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b32", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Phillip Rust; Jonas F Lotz; Emanuele Bugliarello; Elizabeth Salesky; Miryam De Lhoneux; Desmond Elliott", "journal": "ICLR", "ref_id": "b33", "title": "Language modelling with pixels", "year": "2023" }, { "authors": "Elizabeth Salesky; David Etter; Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Robust open-vocabulary translation from visual text representations", "year": "2021" }, { "authors": "Uri Shaham; Maha Elbayad; Vedanuj Goswami; Omer Levy; Shruti Bhosale", "journal": "", "ref_id": "b35", "title": "Causes and cures for interference in multilingual translation", "year": "2023" }, { "authors": "Simeng Sun; Angela Fan; James Cross; Vishrav Chaudhary; Chau Tran; Philipp Koehn; Francisco Guzmán", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "Alternative input signals ease transfer in multilingual machine translation", "year": "2022" }, { "authors": "Yi Tay; Mostafa Dehghani; Jinfeng Rao; William Fedus; Samira Abnar; Hyung Won Chung; Sharan Narang; Dani Yogatama; Ashish Vaswani; Donald Metzler", "journal": "", "ref_id": "b37", "title": "Scale efficiently: Insights from pretraining and finetuning transformers", "year": "2022" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of Machine Learning Research", "ref_id": "b38", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "Changhan Wang; Kyunghyun Cho; Jiatao Gu", "journal": "", "ref_id": "b39", "title": "Neural machine translation with byte-level subwords", "year": "2019" }, { "authors": "Hai Wang; Dian Yu; Kai Sun; Jianshu Chen; Dong Yu", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Improving pre-trained multilingual model with vocabulary expansion", "year": "2019" }, { "authors": "Hongfei Xu; Josef Van Genabith; Qiuhui Liu; Deyi Xiong", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Probing word translations 
in the transformer and trading decoder for encoder layers", "year": "2021" }, { "authors": "Fuzhao Xue; Jianghai Chen; Aixin Sun; Xiaozhe Ren; Zangwei Zheng; Xiaoxin He; Xin Jiang; Yang You", "journal": "", "ref_id": "b42", "title": "Deeper vs wider: A revisit of transformer configuration", "year": "2022" }, { "authors": "Linting Xue; Aditya Barua; Noah Constant; Rami Al-Rfou; Sharan Narang; Mihir Kale; Adam Roberts; Colin Raffel", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b43", "title": "ByT5: Towards a tokenfree future with pre-trained byte-to-byte models", "year": "2022" }, { "authors": "Yu Zhang; William Chan; Navdeep Jaitly", "journal": "IEEE", "ref_id": "b44", "title": "Very deep convolutional networks for end-to-end speech recognition", "year": "2017" } ]
[]
10.18653/v1/n19-1423
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b31", "b2", "b6", "b48", "b0", "b36", "b26", "b14", "b16", "b13", "b8", "b13", "b42", "b48", "b17", "b20", "b17", "b48", "b47", "b30", "b27", "b10", "b34", "b31", "b5" ], "table_ref": [], "text": "Large Language Models (LLMs) have shown remarkable abilities for human language processing and extraordinary scalability and adaptability in few-or zero-shot settings. (Ouyang et al., 2022;Brown et al., 2020;Chowdhery et al., 2022). However, the training process depends on large-scale high-quality corpora but without the perception of the real world. Thus, LLMs still have to face the issue of hallucination (Yao et al., 2023;Bang et al., 2023) and temporal misalignment (Röttger and Pierrehumbert, 2021;Luu et al., 2022;Jang et al., 2022). This affects the reliability of LLMs and hinders wider practical application, because the consistency between the LLM responses with the real world needs further validation. Existing work has proved that incorporating external knowledge (i.e., non-parametric knowledge) with internal knowledge (i.e., parametric knowledge) can effectively alleviate hallucination, especially for knowledge-intensive tasks. In fact, retrievalaugmented LLMs have been shown so effective that they have been regarded as a standard solution to alleviate the factuality drawbacks in naive LLM generations. Retrieval augmentation is applied to select relative passages as external contexts for the language model, which is retrieve-then-read framework (Lewis et al., 2020b;Karpukhin et al., 2020;Izacard et al., 2022). Take the open-domain Question-Answering task (open-domain QA) as an example, a retriever first searches for related documents for a question. Then the LLM receives the question and the documents, then predicts an answer.\nAs most LLMs are only accessible through inference APIs, they play the part of black-box frozen readers in the pipeline. This makes previous retrieval augmentation methods that require complete access (Lewis et al., 2020b;Guu et al., 2020;Izacard et al., 2022) no longer feasible. Recent studies on retrieval-augmented language models lean more on the LLM-oriented adaptation. An idea is to train a dense retrieval model to cater to the frozen language model (Shi et al., 2023). By using feedback from the LLM as a training objective, the retrieval model is tuned for better LLM input contexts. Another research line focuses on the design of interactions between the retriever and the reader (Yao et al., 2023;Khattab et al., 2022), where both the arXiv:2305.14283v3 [cs.CL] 23 Oct 2023 retriever and the reader are usually frozen. The idea is to trigger the emergent ability through carefully crafted prompts or a sophisticated prompt pipeline. Multiple interactions with external knowledge allow the LLM to approach the correct answer step by step.\nHowever, there are still problems remaining to be solved. Existing approaches overlook the adaptation of the query, i.e., the input of the retrievethen-read pipeline. The retrieval query is either original from datasets or directly determined by the black-box generation, thus is always fixed. However, there is inevitably a gap between the input text and the knowledge that is really needed to query. 
This limits performance and places a burden on retrieval capability enhancement and prompt engineering.\nIn consideration of this issue, this paper proposes Rewrite-Retrieve-Read, a new framework for retrieval augmentation, which can be further tuned for adapting to LLMs. In front of the retriever, a step of rewriting the input is added, filling the gap between the given input and retrieval need, as is shown in Figure 1. We adopt the off-the-shelf tool, an internet search engine, as the retriever, which avoids the maintenance of the search index and can access up-to-date knowledge (Lazaridou et al., 2022). Different from previous studies (Khattab et al., 2022;Yao et al., 2023) that require the memory of multiple interaction rounds between the retriever and the LLM for each sample, the motivation of our rewriting step is to clarify the retrieval need from the input text.\nWe also propose a trainable scheme for our rewrite-retrieve-read framework (Figure 1 (c)). The black-box retriever and the reader form a frozen system. To further smooth the steps of our pipeline, we apply a small, trainable language model to perform the rewriting step, denoted as the rewriter. The rewriter is trained by reinforcement learning using the LLM performance as a reward, learning to adapt the retrieval query to improve the reader on downstream tasks.\nOur proposed methods are evaluated on knowledge-intensive downstream tasks including open-domain QA (HotpoQA (Yang et al., 2018), AmbigNQ (Min et al., 2020), PopQA (Mallen et al., 2022)) and multiple choice QA (MMLU (Hendrycks et al., 2021)). The experiments are implemented on T5-large (Raffel et al., 2020) as the rewriter, ChatGPT (Ouyang et al., 2022) and Vicuna-13B (Chiang et al., 2023) as the LLM reader. The results show that query rewriting consistently improves the retrieve-augmented LLM performance. The results also indicate that the smaller language model can be competent for query rewriting.\nTo sum up, our proposed novel retrievalaugmentation method, rewrite-retrieve-read is the first framework where the input text is adapted for the frozen retriever and LLM reader. We introduce a tuneable scheme with a small, trainable model, achieving performance gains with less resource consumption.\n2 Related Work" }, { "figure_ref": [], "heading": "Retrieval Augmentation", "publication_ref": [ "b3", "b16", "b7", "b16", "b37", "b21", "b15", "b27", "b42", "b2", "b13", "b6", "b18", "b43", "b20", "b28", "b48" ], "table_ref": [], "text": "Language models require external knowledge to alleviate the factuality drawbacks. Retrieval augmentation has been regarded as the standard effective solution. With a retrieval module, related passages are provided to the language model as the context of the original input. Thus factual information like common sense or real-time news helps with output prediction through contextualized reading comprehension.\nEarlier studies use sparse retriever (Chen et al., 2017) or dense retriever (Karpukhin et al., 2020) in front of a pre-trained language model (PrLM). The neural retriever and reader are both PrLMs of trainable size like BERT (Devlin et al., 2019) or BART (Lewis et al., 2020a). Hence, the whole retrieve-then-reader framework is a tuneable endto-end system, where the retrieved contexts can be regarded as the intermediate results (Karpukhin et al., 2020;Lewis et al., 2020b). 
Approaches to smooth the two-step framework are proposed to optimize the retrieval and the reading comprehension (Sachan et al., 2021;Lee et al., 2022;Jiang et al., 2022). More recently, retrieval remains a powerful enhancement as the size of models and data scales rapidly (Mallen et al., 2022;Shi et al., 2023;Brown et al., 2020). On the other hand, retrieval enhancement can compensate for the shortfall in parameter size, compared to large-scale language models. For example, by jointly training the retriever and the reader, Atlas (Izacard et al., 2022) shows few-shot performance on par with 540B PalM (Chowdhery et al., 2022) ✅ ✅\nFigure 1: Overview of our proposed pipeline. From left to right, we show (a) standard retrieve-then-read method, (b) LLM as a query rewriter for our rewrite-retrieve-read pipeline, and (c) our pipeline with a trainable rewriter.\nexternal knowledge. Komeili et al. (2022) use an internet search for relevant information based on the dialogue history to perform dialogue response generation. SeeKeR (Shuster et al., 2022) use a single Transformer to iteratively perform search query generation, then knowledge extraction for dialogue generation and sentence completion. For large-scale models, web search still shows effective for knowledge augmentation (Lazaridou et al., 2022), fact-checking (Menick et al., 2022), and LLM agent enhancement (Yao et al., 2023)." }, { "figure_ref": [], "heading": "Cooperation with Black-box LLMs", "publication_ref": [ "b31", "b4", "b6", "b49", "b48", "b32", "b46", "b45", "b17", "b42", "b24" ], "table_ref": [], "text": "Large Language Models, such as ChatGPT (Ouyang et al., 2022), Codex (Chen et al., 2021), PaLM (Chowdhery et al., 2022), emerge impressive natural language processing ability as well as remarkable scalability. This leads to a tendency to embrace LLMs on a wide range of NLP tasks. However, LLMs are only accessible as a black box in most cases, which is because (i) Some like Chat-GPT are not open-source and kept private; (ii) The large parameter scale requires computational resources that are not always affordable to users. This constraint means nothing is available except input and output texts.\nExisting studies have proved that LLMs' abilities can be better leveraged by carefully designed interaction methods. GenRead (Yu et al., 2023) prompts an LLM to generate context instead of deploying a retriever, showing that LLMs can retrieve internal knowledge by prompting. ReAct (Yao et al., 2023) and Self-Ask (Press et al., 2022) combines the Chain-of-Thought (CoT) (Wei et al., 2022;Wang et al., 2022) and inter-actions with web APIs. Only relying on prompt construction, Re-Act provides novel baselines for interactive tasks. Demonstrate-Search-Predict (DSP) (Khattab et al., 2022) defines a sophisticated pipeline between an LLM and a retriever. Unlike ReAct, DSP integrates prompts for demonstration bootstrap besides multihop breakdown and retrieval.\nDespite the promising performance in the zero or few-shot setting, the behavior of LLMs sometimes needs adjustments. A feasible approach is to append trainable small models in front of or after the LLM. The small models, as a part of the parameters of the system, can be fine-tuned for optimization. RePlug (Shi et al., 2023) is proposed to fine-tune a dense retriever for the frozen LLM in the retrievethen-read pipeline. The retriever is trained under the LLM's supervision to retrieve documents that are suitable for the LLM. 
With the same purpose, Directional Stimulus Prompting (Li et al., 2023) deploys a small model to provide the LLM with stimulus (e.g., keywords for summarization, or dialogue actions for response generation), which is updated according to the LLM reward.\nDifferent from the inspiring work mentioned above, our proposed pipeline contains a query rewriting step in front of the retrieve-then-read module. We further propose a trainable scheme with a small rewriting model, which is a novel enhancement for retrieval-augmented LLM by re-constructing the search query." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "We present Rewrite-Retrieve-Read, a pipeline that improves the retrieval-augmented LLM from the perspective of query rewriting. Figure 1 shows an overview. This section first introduces the pipeline framework in section 3.1, then the trainable scheme in section 3.2." }, { "figure_ref": [], "heading": "Rewrite-Retrieve-Read", "publication_ref": [], "table_ref": [], "text": "A task with retrieval augmentation can be denoted as follows. Given a dataset of a knowledgeintensive task (e.g., open-domain QA), D = {(x, y) i }, i = 0, 1, 2, . . . , N , x (e.g., a question) is the input to the pipeline, y is the expected output (e.g., the correct answer). Our pipeline consists of three steps. (i) Query rewrite: generate a query x for required knowledge based on the original input x. (ii) Retrieve: search for related context, doc. (iii) Read: comprehend the input along with contexts [doc, x] and predict the output ŷ.\nA straightforward but effective method is to ask an LLM to rewrite queries to search for information that is potentially needed. We use a few-shot prompt to encourage the LLM to think, and the output can be none, one or more queries to search." }, { "figure_ref": [], "heading": "Trainable Scheme", "publication_ref": [ "b48", "b1", "b27", "b34" ], "table_ref": [], "text": "Besides, total reliance on a frozen LLM has shown some drawbacks. Reasoning errors or invalid search hinders the performance (Yao et al., 2023;BehnamGhader et al., 2022). On the other hand, retrieved knowledge may sometimes mislead and compromise the language model (Mallen et al., 2022). To better align to the frozen modules, it is feasible to add a trainable model and adapt it by taking the LLM reader feedback as a reward.\nBased on our framework, we further propose to utilize a trainable small language model to take over the rewriting step, as is shown in the right part of Figure 1. The trainable model is initialized with the pre-trained T5-large (770M) (Raffel et al., 2020), denoted as trainable rewriter, G θ . The rewriter is first trained on pseudo data to warm up ( §3.2.1), then continually trained by reinforcement learning ( §3.2.2)." }, { "figure_ref": [], "heading": "Rewriter Warm-up", "publication_ref": [ "b12", "b11" ], "table_ref": [], "text": "The task, query rewriting, is quite different from the pre-training objective of sequence-to-sequence generative models like T5. First, we construct a pseudo dataset for the query rewriting task. Inspired by recent distillation methods (Hsieh et al., 2023;Ho et al., 2022), we prompt the LLM to rewrite the original questions x in the training set and collect the generated queries x as pseudo labels. The collected samples are then filtered: Those that get correct predictions from the LLM reader are selected into the warm-up dataset, denoted as D T rain = {(x, x)|ŷ = y}. 
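To make the filtering concrete, the warm-up data construction can be sketched as below, with the LLM rewriter, web search, and reader abstracted as callables; the function and argument names are illustrative rather than taken from any released code, and the exact-match check is a simplification of the answer evaluation.

```python
from typing import Callable, List, Tuple

def build_warmup_data(
    train_set: List[Tuple[str, str]],      # (question, gold_answer) pairs
    rewrite: Callable[[str], str],         # few-shot LLM prompting -> search query
    retrieve: Callable[[str], str],        # search engine -> concatenated snippets
    read: Callable[[str, str], str],       # LLM reader(docs, question) -> answer
) -> List[Tuple[str, str]]:
    """Keep (question, pseudo-label query) pairs whose rewrite leads the
    frozen reader to the correct answer."""
    warmup = []
    for question, gold in train_set:
        query = rewrite(question)          # pseudo label from the frozen LLM
        prediction = read(retrieve(query), question)
        if prediction.strip().lower() == gold.strip().lower():
            warmup.append((question, query))
    return warmup

# Toy usage with stand-in components:
print(build_warmup_data(
    [("who wrote Hamlet", "William Shakespeare")],
    rewrite=lambda q: q,
    retrieve=lambda q: "Hamlet is a tragedy written by William Shakespeare.",
    read=lambda docs, q: "William Shakespeare",
))
```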
The rewriter G θ is fine-tuned on D Train with the standard log-likelihood as the training objective, denoted as
$$\mathcal{L}_{warm} = -\sum_{t} \log p_{\theta}\left(\tilde{x}_t \mid \tilde{x}_{<t}, x\right).$$
(1)
The rewriter model after warm-up shows modest performance, which depends on the pseudo data quality and the rewriter's capacity. Relying heavily on the human-written prompt line, the pseudo-label x̃ can be suboptimal. The relatively small scale of the rewriter is a further limitation on performance after warm-up. We therefore turn to reinforcement learning to align the rewriter with the downstream retriever and LLM reader." }, { "figure_ref": [], "heading": "Reinforcement Learning", "publication_ref": [ "b40", "b35", "b39", "b35", "b52" ], "table_ref": [], "text": "To further fine-tune the rewriter to cater to the LLM reader, we adopt a policy-gradient reinforcement learning framework.
Task Formulation In the context of reinforcement learning, the rewriter optimization is formulated as a Markov Decision Process 5-tuple ⟨S, A, P, R, γ⟩. (i) The state space S is a finite set limited by the vocabulary and the sequence length. (ii) The action space A equals the vocabulary. (iii) The transition probability P is determined by the policy network, which is the rewriter model G θ. (iv) The reward function R gives a reward value that depends on the current state. The policy gradient is derived from rewards and used as the training objective. (v) γ denotes the discount factor. More specifically, the rewriter G θ after warm-up is the initial policy model π 0. At each step t, the action a t is to generate the next token x̃ t based on the observation of the present state, s t = [x, x̃ <t]. When the generation is stopped by the End-Of-Sentence token, one episode is ended. After finishing the retrieval and reading, a reward is computed by evaluating the final output, i.e., a score for the LLM reader prediction.
Policy Optimization We adopt Proximal Policy Optimization (PPO) (Schulman et al., 2017), following (Ramamurthy et al., 2022). Maximization of the expectation of the reward R is formulated as
$$\max_{\theta} \, \mathbb{E}_{\tilde{x} \sim p_{\theta}(\cdot \mid x)}\left[R(x, \tilde{x})\right], \qquad \max_{\theta} \, \mathbb{E}_{(s_t, a_t) \sim \pi_{\theta'}}\left[\min\left\{k_{t,\theta} A^{\theta'}(s_t, a_t),\ \operatorname{clip}\left(k_{t,\theta},\, 1-\varepsilon,\, 1+\varepsilon\right) A^{\theta'}(s_t, a_t)\right\}\right], \qquad k_{t,\theta} = \frac{p_{\theta}(a_t \mid s_t)}{p_{\theta'}(a_t \mid s_t)},$$
(2)
where θ ′ is the temporarily fixed policy used for sampling and θ is the policy being updated. A denotes the advantage function, which is formulated based on the estimation of the value network V ϕ. The value network V ϕ is initialized from the policy network π 0. The formulation follows Generalized Advantage Estimation (GAE) (Schulman et al., 2015).
$$\delta_t = R(s_t, a_t) + V_{\phi}(s_{t+1}) - V_{\phi}(s_t), \qquad \hat{A}^{\theta}_{t}(s_t, a_t) = \sum_{t'=0}^{\infty} \lambda^{t'} \delta_{t+t'},$$
(3)
where λ is the bias-variance trade-off parameter.
The reward function R reflects the quality of the generated queries and needs to be consistent with the final evaluation of the task. The rewrite x̃ is fed to the retriever and the reader for a final prediction ŷ. One part of the reward function is a measure of ŷ against the golden label y (e.g., exact match and F 1 of the predicted answers), denoted as R lm. Besides, a KL-divergence regularization term is added to prevent the model from deviating too far from the initialization (Ramamurthy et al., 2022;Ziegler et al., 2019).
$$R(s_t, a_t) = R_{lm}(\tilde{x}, y) - \beta\, \mathrm{KL}\left(\pi_{\theta} \,\|\, \pi_{0}\right).$$
(4)\nThe final loss function is composed of policy loss and value loss.\nL θ = - 1 |S| T τ ∈S T t=0 min(k t,θ A θ ′ , clip A θ ′ ), L ϕ = 1 |S| T τ ∈S T t=0 (V ϕ (s t ) -R t ) 2 , L ppo = L θ + λ v L ϕ .\n(5)\nHere, S denotes the sampled set, and T is for step numbers." }, { "figure_ref": [], "heading": "Implementation", "publication_ref": [ "b2", "b29", "b44" ], "table_ref": [], "text": "Rewriter For the frozen pipeline in §3.1, we prompt an LLM to rewrite the query with few-shot in-context learning (Brown et al., 2020;Min et al., 2022). Our prompt follows the formulation of [instruction, demonstrations, input], where the input is x. The instruction is straightforward and demonstrations are 1-3 random examples from training sets and are kept constant across all runs, mainly for the task-specific output format illustration, i.e., a short phrase as an answer for HotpotQA, and an option as an answer for MMLU. For the training scheme in §3.2, we fine-tuning a T5 as the rewriter.\nRetriever We use the Bing search engine as the retriever. It requires no candidate index construction like a dense retriever, nor candidates like a textbook. But it allows for a wide knowledge scope and up-to-time factuality. With Bing API, the retrieval is performed in two approaches. (i) For all retrieved web pages, we concatenate the snippets that are related sentences selected by Bing. This method is similar to using a search engine in a browser, input a query and press Enter, then collect the texts shown on the search result page. (ii) For retrieved web pages, we request the URLs and parser to get all the texts. This is similar to clicking on items on the search result page. Then we use BM25 to keep those with higher relevance scores with the query, reducing the document length.\nReader The reader is a frozen LLM, where we adopt ChatGPT (gpt-3.5-turbo) and Vicuna-13B. It performs reading comprehension and prediction with few-shot in-context learning. In our prompt, following the brief instruction and the demonstrations, the input is x or [doc, x] with retrieval augmentation.\nIt has been proved that both the phrasing of prompt lines (Zhang et al., 2023a) and the selection of demonstrations show effects on the in-context learning performance (Su et al., 2022;Zhang et al., 2023b). As it is not the focus of this work, we pay no more attention to prompt editing." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Task Settings", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Open-domain QA", "publication_ref": [ "b47", "b30", "b19" ], "table_ref": [], "text": "Three open-domain QA datasets are used for evaluation. (i) HotPotQA (Yang et al., 2018) consists of complex questions that require multi-hop reasoning. We evaluate the full test set. (ii) AmbigNQ (Min et al., 2020) provides a disambiguated version of Natural Questions (NQ) (Kwiatkowski et al., 2019). For ambiguous questions in NQ, minimal constraints are added to break it into several similar" }, { "figure_ref": [], "heading": "Direct prompt", "publication_ref": [], "table_ref": [], "text": "Answer the question in the following format, end the answer with '**'. {demonstration} Question: {x} Answer:" }, { "figure_ref": [], "heading": "Reader prompt in retrieval-augment pipelines", "publication_ref": [ "b27" ], "table_ref": [], "text": "Answer the question in the following format, end the answer with '**'. 
{demonstration} Question: {doc} {x} Answer:\nPrompts for LLM as a frozen rewriter Open-domain QA: Think step by step to answer this question, and provide search engine queries for knowledge that you need. Split the queries with ';' and end the queries with '**'. {demonstration} Question: {x} Answer: Multiple choice QA: Provide a better search query for web search engine to answer the given question, end the queries with '**'. {demonstration} Question: {x} Answer: but specific questions. The first 1000 samples are evaluated in the test set. (iii) PopQA (Mallen et al., 2022) includes long-tail distributions as it contains more low-popularity knowledge than other popular QA tasks. We split the dataset into 13k for training and 714 for testing.\nOpen-domain QA benchmarks are sets of question-answer pairs denoted as {(q, a) i }. We use ChatGPT for both the reader and the frozen rewriter. The evaluation metrics are Exact Match (EM ) and F 1 scores. For the reward function in RL, we use an indicator to reward if the retrieved content hits the answer and penalize if misses the answer, denoted as Hit. The total reward is a weighted sum of EM, F 1 , and Hit.\nHit = 1 a in doc, -1 else R lm = EM + λ f F 1 + λ h Hit. (6)" }, { "figure_ref": [], "heading": "Multiple-choice QA", "publication_ref": [ "b10" ], "table_ref": [], "text": "For multiple-choice QA, our evaluation is conducted on Massive Multi-task Language Understanding (MMLU) (Hendrycks et al., 2021), an exam question dataset including 4 categories: Humanities, STEM, Social Sciences, and Other. Each category is split into 80% for the training set and 20% for the test set.\nMultiple-choice QA can be formulated as {(q ′ , a) i }, where q ′ = [q, c 0 , c 1 , c 2 , c 3 ]. c denotes the options, generally there are four for each question. The retrieved documents that are included in the officially provided contaminated lists are ignored. The questions with options are rewritten into search queries. The answer is one option. EM is reported as metrics and used for the reward.\nR lm = EM.(7)\nWe use ChatGPT as a frozen rewriter and the reader.\nWe also use Vicuna-13B as the reader for evaluation due to the rate limit issue of ChatGPT. More information on datasets and training setup are presented in the appendix." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "The following settings are implemented to evaluate and support our methods. (i) Direct: The standard in-context learning without any augmentations. (ii) Retrieve-then-read: The standard retrieval-augmented method. Retrieved documents are concatenated with the question. (iii) LLM as a frozen rewriter: As is introduced in §3.1, we prompt a frozen LLM to reason and generate queries by few-shot in-context learning. (iv) Trainable rewriter: Applying the fine-tuned rewriter, the output queries are used by the retriever and the reader. Table 1 presents prompt line forms. Please note that the prompts for prediction are kept the same for each task." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b27" ], "table_ref": [ "tab_2", "tab_3" ], "text": "Experimental results on open-domain QA are reported in Table 2. For the three datasets, query rewriting consistently brings performance gain with both a frozen rewriter and a trainable rewriter. On AmbigNQ and PopQA, the standard retrieval augments the reader, indicating useful external knowledge is retrieved. On HotpotQA, the standard retrieval hurts the reader. 
This shows that using complex questions as queries cannot compensate for the parametric knowledge, but bring noises instead (Mallen et al., 2022). This suggests that multi-hop questions are not suitable queries for the web search engine. The scores increase by adding the rewriting step. On PopQA, our trainable rewriter surpasses standard retrieval while being inferior to the LLM rewriter. This indicates that the distillation of query rewriting is sub-optimal. The scores on multiple-choice QA are presented in Table 3. With ChatGPT as a reader, it can be observed that query rewriting improves the scores in most of the settings, except for the social sciences category. With Vicuna as a reader, our method achieves more gains on the four categories compared to ChatGPT. This agrees with the intuition that a more powerful reader has more parametric memories, thus more difficult to compensate with external knowledge. 6 Analysis" }, { "figure_ref": [], "heading": "Model", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Training Process", "publication_ref": [ "b27" ], "table_ref": [], "text": "The training process includes two stages, warm-up and reinforcement learning. This section shows the validation scores of the three open-domain QA datasets for further analysis. Figure 2 presents the metric scores through training iterations in the process of reinforcement learning. As the rewriting models have been warmed up on the pseudo data before RL, scores at \"0 iteration\" denote the ability acquired from the warm-up training. It can be observed that the curves show upward trends with some fluctuations on all the datasets. (i) For multi-hop questions in HotpotQA, the standard retrieval is relatively weaker. Complex questions can be not specific search queries and show a larger gap from rewritten queries, i.e., the green and red lines. (ii) On AmbigNQ and PopQA, our method surpasses the baselines after several iterations (3 or 4). This indicates that the RL training stage can compensate for the insufficiency of the distillation on the pseudo data during warm-up training. (iii) In particular, on PopQA, the trainable rewriter remains inferior to the LLM rewriter. This can be explained as the dataset is constructed for adaptive retrieval (Mallen et al., 2022), which only uses retrieval where it helps to avoid harmful redundant retrieval. Thus, \"None\" is a possible query that means no retrieval. This causes more complexity and uncertainty. LLM rewriter knows better when the retrieval is needed for itself as a reader, although the rewriting step is not concatenated as the input context of the reader.\nWe calculate the performance of query \"None\". The questions that can be correctly answered without retrieval (i.e., the \"Direct\" method) are those samples that need no more context. Comparing this retrieval-free set with those that are rewritten to be\"None\" query, the F 1 score of the LLM rewriter is 71.9% and the T5 rewriter score is 67.1%. If we consider the questions that can be correctly answered without retrieval but go wrong with retrieval as the retrieval-free set, the F 1 scores are 78.7% for LLM rewriter and 77.4% for T5. " }, { "figure_ref": [], "heading": "Retrieval Result", "publication_ref": [ "b27", "b25" ], "table_ref": [ "tab_4" ], "text": "Our proposed method is a pipeline framework, instead of an end-to-end system. The query rewriting first affects the retrieved context, then the context makes a difference to the output of the reader. 
Hence, QA metrics are indirect measurements. We take a closer look at the retrieved context and the reader capability through the retrieval metric, hit ratio. After text normalization, the hit rate is computed to measure whether the retrieved context contains the correct answers.\nTable 4 shows the scores on AmbigNQ. The scores in the second line are computed on a selection of the samples whose retrieved contexts hit correct answers (under the standard retrieve-thenread setting). The scores show the approximate upper bound ability of the reader with retrieval augmentation, abbreviated as the \"upper bound\" score. The effectiveness of retrieval is proved compared to the no retrieval setting (the first line). For each retrieval method, two settings are presented: (i) collecting Bing snippets, (ii) selecting from URLs by BM25. The metrics show that content selection with BM25 recalls better documents than snippets, 2 Our trainable rewriter is adapted to the retriever using BM25 during RL training. Using the output queries of the test set after training, the snippet hit rate is 73.4%. while query rewriting makes progress on both settings. We also observed that the improvement in hit rate of the retriever is more significant than the improvement in the reader. This is consistent with the findings in related search (Mallen et al., 2022;Liu et al., 2023).\n✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ❌ ❌ ❌ ❌ ❌ ❌ ❌ ❌" }, { "figure_ref": [ "fig_1" ], "heading": "Case Study", "publication_ref": [], "table_ref": [], "text": "To intuitively show how the query rewriting makes a difference in the retrieved contexts and prediction performance, we present examples in Figure 3 to compare the original questions and the queries. In example 1, the original question asks for a film that the youngest daughter of Lady Mary-Gaye Curzon co-stars with two certain actors. Both query 1 and query 2 put the keyword film forward, closely following the youngest daughter of Lady Mary-Gaye Curzon. With both, the actress Charlotte Calthorpe and her movie information can be retrieved and the answer is included. The second is an example where the query from the LLM rewriter failed but the query from T5 gets the correct answer. The number 2000 is misunderstood in query 1, while query 2 keeps 200 movie together, avoiding meaningless retrieval. Example 3 is for multiple choice. The query simplifies the background and enhances the keyword community planner. The retrieve contexts are mainly about Introduction to Community Planning where the answer environment appears several times." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper introduces the Rewrite-Retrieve-Read pipeline, where a query rewriting step is added for the retrieval-augmented LLM. This approach is applicable for adopting a frozen large language model as the reader and a real-time web search engine as the retriever. Further, we propose to apply a tuneable small language model the rewriter, which can be trained to cater to the frozen retriever and reader. The training implementation consists of two stages, warm-up and reinforcement learning. Evaluation and analyses on open-domain QA and multiple-choice QA show the effectiveness of query rewriting. Our work proposes a novel retrieval-augmented black-box LLM framework, proves that the retrieval augmentation can be enhanced from the aspect of query rewriting, and provides a new method for integrating trainable modules into black-box LLMs." 
}, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b17", "b48" ], "table_ref": [], "text": "We acknowledge the limitations of this work. (i) There is still a trade-off between generalization and specialization among downstream tasks. Adding a training process, the scalability to direct transfer is compromised, compared to few-shot in-context learning. (ii) The research line of LLM agent has shown impressive performance but relies on multiple calls to the LLM for each sample (Khattab et al., 2022;Yao et al., 2023), where the LLM plays as an agent to flexibly call the retriever multiple times, reads the context in earlier hops, and generates follow-up questions. Different from these studies, our motivation is to enhance the oneturn retriever-then-read framework with a trainable query rewriter. (iii) Using a web search engine as the retriever also leads to some limitations. Neural dense retrievers that are based on professional, filtered knowledge bases may potentially achieve better and controllable retrieval. More discussion is included in the appendix." }, { "figure_ref": [], "heading": "A Warm-up Dataset", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "For the warm-up training of the tuneable rewriter, we construct a pseudo dataset for the query rewriting task. For benchmarks that provide official training and test splits (HotpotQA and AmbigNQ), we use the whole training set. For those that have no official splits (PopQA and MMLU), we randomly split the full dataset. In detail, PopQA contains 16 types of questions, thus split into 13k for training and 714 for testing following stratified sampling. For MMLU, each of the 4 categories is randomly split into 80% for the training set and 20% for the test set. Then the training sets of each benchmark are used to derive the pseudo dataset for the query rewriting, i.e., D T rain = {(x, x)|ŷ = y}. We present the statistics of the splits and warm-up dataset in Table 5." }, { "figure_ref": [], "heading": "B Setup Details", "publication_ref": [], "table_ref": [], "text": "For warm-up, we train the T5-large with 3e-5 learning rate, {16, 20} batch size, for {6,8,12} epochs. For reinforcement learning, we set the sampling \nβ t+1 = β t (1 + K β e t ) ,\nwhere KL target is set to 0.2, K β is set to 0.1. β 0 is initialized to be 0.001. The generation strategy follows the 4-beam search and returns the one sequence. In the implementation of the BM25based retriever, the textboxes from searched URLs are parsed from HTML code. We compute BM25 scores between the paragraph from each textbox and the query following the scikit-learn package, then keep those with higher scores until the reserved context reaches a max length. In reinforcement learning, the results of AmbigNQ are with the BM25 method, while others use snippets as context." }, { "figure_ref": [], "heading": "C Web Search: Tool Use", "publication_ref": [ "b33", "b38", "b20", "b28", "b43", "b41" ], "table_ref": [], "text": "Our proposed pipeline integrates an externally built web search engine as the retriever module. We present more discussion on the advantages and disadvantages here.\nThe usage of external tools expands the ability boundary of language models, compensating for the parametric knowledge, and grounding the capabilities of language models to interact with environments (Qin et al., 2023;Schick et al., 2023). 
Recent studies show a trend of leveraging plug-and-play tools such as search engines to enhance language agents (Lazaridou et al., 2022;Menick et al., 2022;Shuster et al., 2022;Shen et al., 2023). Search engine APIs are well-developed retrievers, saving the effort of building and maintaining a separate retriever such as a Contriever. With access to the whole Internet, web search retrieves from a wide-ranging, up-to-date knowledge base, which alleviates the temporal misalignment problem of a fixed candidate database.
On the other hand, web search APIs are commercial products that require subscriptions. Moreover, the vast amount of knowledge on the web can be difficult to control: the context retrieved from the Internet is occasionally inconsistent, redundant, or toxic, which hinders the LLM reader.
Beyond retrieval augmentation, and in a more general scope, other tools called by LLMs, such as code interpreters, online models, and expert applications, resemble search engines in having no trainable parameters to optimize. There can thus be a gap between the LM and these tools. This paper proposes an idea to align them through a trainable small model." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "* Work done during an internship at Microsoft Research Asia. # Equal contribution. † Corresponding author. This paper was partially supported by the Joint Research Project of the Yangtze River Delta Science and Technology Innovation Community (No. 2022CSJGG1400). 1 https://github.com/xbmxb/RAG-query-rewriting" } ]
Large Language Models (LLMs) act as powerful, black-box readers in the retrieve-then-read pipeline, making remarkable progress in knowledge-intensive tasks. This work introduces a new framework, Rewrite-Retrieve-Read, in place of the previous retrieve-then-read, improving the retrieval-augmented LLM from the perspective of query rewriting. Unlike prior studies that focus on adapting either the retriever or the reader, our approach pays attention to the adaptation of the search query itself, since there is inevitably a gap between the input text and the knowledge needed for retrieval. We first prompt an LLM to generate the query, then use a web search engine to retrieve contexts. Furthermore, to better align the query to the frozen modules, we propose a trainable scheme for our pipeline. A small language model is adopted as a trainable rewriter to cater to the black-box LLM reader. The rewriter is trained using the feedback of the LLM reader by reinforcement learning. Evaluation is conducted on downstream tasks, open-domain QA and multiple-choice QA. Experimental results show consistent performance improvements, indicating that our framework is effective and scalable and offers a new framework for retrieval-augmented LLMs 1 .
Query Rewriting for Retrieval-Augmented Large Language Models
[ { "figure_caption": "F1Figure 2 :2Figure 2: Reinforcement learning validation scores of (a)HotpotQA, (b)AmbigNQ, and (c)PopQA. The solid lines show EM (red) and F1 (blue) numbers through training iterations. The dashed lines are EM scores of the standard retrieve-then-read method (orange) and retrieval with an LLM as the rewriter (green).", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Examples for intuitive illustration. Q0 denotes original input, Q1 is from the LLM rewriter, and Q2 is from the trained T5 rewriter. Hit means retriever recall the answer, while Correct is for the reader output.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "but be of 50× smaller size. The Internet as a knowledge base More related to our work, the search engine can assume the role of the retriever and use the Internet as the source of", "figure_data": "InputInputInputExampleRetrieverRewriter Black-box LLMRewriter Small PrLMElia Kazan have in common? Input: What profession does Nicholas Ray andQueryQueryQuery: Nicholas Ray professionQuery: Elia Kazan professionDocumentsWeb SearchWeb SearchRetrieverRetrieverElia Kazan was an American film andtheatre director, producer,screenwriter and actor, described ......Black-box LLM ReaderDocumentsDocumentsNicholas Ray American author and director, original name RaymondNicholas Kienzle, born August 7,1911, Galesville, Wisconsin, U.S......OutputBlack-box LLM ReaderBlack-box LLM ReaderCorrect (reader)directorHit (retriever)OutputRewardOutput(a) Retrieve-then-read (b)Rewrite-retrieve-read(c) Trainable rewrite-retrieve-read", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Prompt lines used for the LLMs.", "figure_data": "", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Metrics of open-domain QA.", "figure_data": "EMF1HotpotQADirect32.3643.05Retrieve-then-read30.4741.34LLM rewriter32.8043.85Trainable rewriter34.3845.97AmbigNQDirect42.1053.05Retrieve-then-read45.8058.50LLM rewriter46.4058.74Trainable rewriter47.8060.71PopQADirect41.9444.61Retrieve-then-read43.2047.53LLM rewriter46.0049.74Trainable rewriter45.7249.51MMLUEMHuman. STEM Other SocialChatGPTDirect75.658.8 69.0 71.6Retrieve-then-read76.763.3 70.0 78.2LLM rewriter77.063.5 72.6 76.4Vicuna-13BDirect39.834.9 50.2 46.6Retrieve-then-read40.239.8 55.2 50.6LLM rewriter42.041.5 57.1 52.2Trainable rewriter43.240.9 59.3 51.2", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Metrics of multiple choice QA.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Retrieval analysis on AmbigNQ.", "figure_data": "ModelEMF1Hit ratioNo retrieval42.1053.05-Upper bound58.4069.45100Retrieve-then-readw/ snippet38.7050.5061.1w/ BM2545.8058.5076.4LLM rewriterw/ snippet39.8052.6463.5w/ BM2546.4058.7477.5Trainable rewriterw/ BM25 247.8060.7182.2", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Metrics of multiple choice QA. steps to 5120, 10 threads, 512 steps for each. After sampling, the policy network is trained for {2,3,4} epochs, with learning rate as 2e-6 and batch size as {8,16}. λ f and λ h are 1.0. β in Eq. 4 is dynamically adapted according toRamamurthy et al. (2022); Ziegler et al. 
(2019), e_t = clip( (KL(π ∥ π_0) - KL_target) / KL_target, -0.2, 0.2 ),", "figure_data": "", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" } ]
Xinbei Ma; Yeyun Gong; Pengcheng He; Hai Zhao; Nan Duan
[ { "authors": "Yejin Bang; Samuel Cahyawijaya; Nayeon Lee; Wenliang Dai; Dan Su; Bryan Wilie; Holy Lovenia; Ziwei Ji; Tiezheng Yu; Willy Chung; Quyet V Do; Yan Xu; Pascale Fung", "journal": "", "ref_id": "b0", "title": "A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity", "year": "2023" }, { "authors": "Parishad Behnamghader; Santiago Miret; Siva Reddy", "journal": "", "ref_id": "b1", "title": "Can retriever-augmented language models reason? the blame game between the retriever and the language model", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Danqi Chen; Adam Fisch; Jason Weston; Antoine Bordes", "journal": "", "ref_id": "b3", "title": "Reading Wikipedia to answer opendomain questions", "year": "2017" }, { "authors": "Mark Chen; Jerry Tworek; Heewoo Jun; Qiming Yuan; Henrique Pondé De Oliveira Pinto; Jared Kaplan; Harrison Edwards; Yuri Burda; Nicholas Joseph; Greg Brockman; Alex Ray; Raul Puri; Gretchen Krueger; Michael Petrov; Heidy Khlaaf; Girish Sastry; Pamela Mishkin; Brooke Chan; Scott Gray; Nick Ryder; Mikhail Pavlov; Alethea Power; Lukasz Kaiser; Mohammad Bavarian; Clemens Winter; Philippe Tillet; Felipe Petroski Such; Dave Cummings; Matthias Plappert; Fotios Chantzis; Elizabeth Barnes; Ariel Herbert-Voss; William Hebgen Guss; Alex Nichol; Alex Paino; Nikolas Tezak; Jie Tang; Igor Babuschkin; Suchir Balaji; Shantanu Jain; William Saunders; Christopher Hesse; Andrew N Carr; Jan Leike; Joshua Achiam; Vedant Misra; Evan Morikawa; Alec Radford; Matthew Knight; Miles Brundage; Mira Murati; Katie Mayer; Peter Welinder; Bob Mcgrew; Dario Amodei; Sam Mccandlish; Ilya Sutskever; Wojciech Zaremba", "journal": "", "ref_id": "b4", "title": "Evaluating large language models trained on code", "year": "2021" }, { "authors": "Wei-Lin Chiang; Zhuohan Li; Zi Lin; Ying Sheng; Zhanghao Wu; Hao Zhang; Lianmin Zheng; Siyuan Zhuang; Yonghao Zhuang; Joseph E Gonzalez; Ion Stoica; Eric P Xing", "journal": "", "ref_id": "b5", "title": "Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2023" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b6", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019-06-02" }, { "authors": "Kelvin Guu; Kenton Lee; Zora Tung; Panupong Pasupat; Mingwei Chang", "journal": "", "ref_id": "b8", "title": "Retrieval augmented language model pre-training", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b9", "title": "", "year": "" }, { "authors": "Dan Hendrycks; Collin Burns; Steven Basart; Andy Zou; Mantas Mazeika; Dawn Song; Jacob Steinhardt", "journal": "", "ref_id": "b10", "title": "Measuring massive multitask language understanding", "year": "2021" }, { "authors": "Namgyu Ho; Laura Schmid; Se-Young Yun", "journal": "", "ref_id": "b11", "title": 
"Large language models are reasoning teachers", "year": "2022" }, { "authors": "Cheng-Yu Hsieh; Chun-Liang Li; Chih-Kuan Yeh; Hootan Nakhost; Yasuhisa Fujii; Alexander J Ratner; Ranjay Krishna; Chen-Yu Lee; Tomas Pfister", "journal": "", "ref_id": "b12", "title": "Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes", "year": "2023" }, { "authors": "Gautier Izacard; Patrick Lewis; Maria Lomeli; Lucas Hosseini; Fabio Petroni; Timo Schick; Jane Dwivedi-Yu; Armand Joulin; Sebastian Riedel; Edouard Grave", "journal": "", "ref_id": "b13", "title": "Few-shot Learning with Retrieval Augmented Language Models", "year": "2022" }, { "authors": "Joel Jang; Seonghyeon Ye; Changho Lee; Sohee Yang; Joongbo Shin; Janghoon Han; Gyeonghun Kim; Minjoon Seo", "journal": "", "ref_id": "b14", "title": "Temporalwiki: A lifelong benchmark for training and evaluating ever-evolving language models", "year": "2022" }, { "authors": "Zhengbao Jiang; Luyu Gao; Jun Araki; Haibo Ding; Zhiruo Wang; Jamie Callan; Graham Neubig", "journal": "", "ref_id": "b15", "title": "Retrieval as attention: End-to-end learning of retrieval and reading within a single transformer", "year": "2022" }, { "authors": "Vladimir Karpukhin; Barlas Oguz; Sewon Min; Patrick Lewis; Ledell Wu; Sergey Edunov; Danqi Chen; Wen-Tau Yih", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Dense passage retrieval for opendomain question answering", "year": "2020" }, { "authors": "Omar Khattab; Keshav Santhanam; Lisa Xiang; David Li; Percy Hall; Christopher Liang; Matei Potts; Zaharia", "journal": "", "ref_id": "b17", "title": "Demonstrate-searchpredict: Composing retrieval and language models for knowledge-intensive NLP", "year": "2022" }, { "authors": "Mojtaba Komeili; Kurt Shuster; Jason Weston", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Internet-augmented dialogue generation", "year": "2022" }, { "authors": "Tom Kwiatkowski; Jennimaria Palomaki; Olivia Redfield; Michael Collins; Ankur Parikh; Chris Alberti; Danielle Epstein; Illia Polosukhin; Jacob Devlin; Kenton Lee", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b19", "title": "Natural questions: a benchmark for question answering research", "year": "2019" }, { "authors": "Angeliki Lazaridou; Elena Gribovskaya; Wojciech Stokowiec; Nikolai Grigorev", "journal": "", "ref_id": "b20", "title": "Internetaugmented language models through few-shot prompting for open-domain question answering", "year": "2022" }, { "authors": "Haejun Lee; Akhil Kedia; Jongwon Lee; Ashwin Paranjape; Christopher Manning; Kyoung-Gu Woo", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "You only need one model for open-domain question answering", "year": "2022" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020-07-05" }, { "authors": "Patrick Lewis; Ethan Perez; Aleksandra Piktus; Fabio Petroni; Vladimir Karpukhin; Naman Goyal; Heinrich Küttler; Mike Lewis; Wen-Tau Yih; Tim Rocktäschel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b23", "title": "Retrieval-augmented generation for 
knowledge-intensive nlp tasks", "year": "2020" }, { "authors": "Zekun Li; Baolin Peng; Pengcheng He; Michel Galley; Jianfeng Gao; Xifeng Yan", "journal": "", "ref_id": "b24", "title": "Guiding large language models via directional stimulus prompting", "year": "2023" }, { "authors": "Kevin Nelson F Liu; John Lin; Ashwin Hewitt; Michele Paranjape; Fabio Bevilacqua; Percy Petroni; Liang", "journal": "", "ref_id": "b25", "title": "Lost in the middle: How language models use long contexts", "year": "2023" }, { "authors": "Kelvin Luu; Daniel Khashabi; Suchin Gururangan; Karishma Mandyam; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Time waits for no one! analysis and challenges of temporal misalignment", "year": "2022" }, { "authors": "Alex Mallen; Akari Asai; Victor Zhong; Rajarshi Das; Hannaneh Hajishirzi; Daniel Khashabi", "journal": "", "ref_id": "b27", "title": "When not to trust language models: Investigating effectiveness and limitations of parametric and nonparametric memories", "year": "2022" }, { "authors": "Jacob Menick; Maja Trebacz; Vladimir Mikulik; John Aslanides; Francis Song; Martin Chadwick; Mia Glaese; Susannah Young; Lucy Campbell-Gillingham; Geoffrey Irving", "journal": "", "ref_id": "b28", "title": "Teaching language models to support answers with verified quotes", "year": "2022" }, { "authors": "Sewon Min; Xinxi Lyu; Ari Holtzman; Mikel Artetxe; Mike Lewis; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Rethinking the role of demonstrations: What makes in-context learning work?", "year": "2022" }, { "authors": "Sewon Min; Julian Michael; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "", "ref_id": "b30", "title": "AmbigQA: Answering ambiguous open-domain questions", "year": "2020" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Ofir Press; Muru Zhang; Sewon Min; Ludwig Schmidt; Noah A Smith; Mike Lewis", "journal": "", "ref_id": "b32", "title": "Measuring and narrowing the compositionality gap in language models", "year": "2022" }, { "authors": "Yujia Qin; Shihao Liang; Yining Ye; Kunlun Zhu; Lan Yan; Yaxi Lu; Yankai Lin; Xin Cong; Xiangru Tang; Bill Qian", "journal": "", "ref_id": "b33", "title": "Toolllm: Facilitating large language models to master 16000+ real-world apis", "year": "2023" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b34", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Rajkumar Ramamurthy; Prithviraj Ammanabrolu; Kianté Brantley; Jack Hessel; Rafet Sifa; Christian Bauckhage; Hannaneh Hajishirzi; Yejin Choi", "journal": "", "ref_id": "b35", "title": "Is reinforcement learning (not) for natural language processing?: Benchmarks, baselines, and building blocks for natural language policy optimization", "year": "2022" }, { "authors": "Paul Röttger; Janet Pierrehumbert", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "Temporal adaptation of BERT and performance on 
downstream document classification: Insights from social media", "year": "2021" }, { "authors": "Devendra Singh Sachan; Siva Reddy; William L Hamilton; Chris Dyer; Dani Yogatama", "journal": "", "ref_id": "b37", "title": "End-toend training of multi-document reader and retriever for open-domain question answering", "year": "2021-12-06" }, { "authors": "Timo Schick; Jane Dwivedi-Yu; Roberto Dessì; Roberta Raileanu; Maria Lomeli; Luke Zettlemoyer; Nicola Cancedda; Thomas Scialom", "journal": "", "ref_id": "b38", "title": "Toolformer: Language models can teach themselves to use tools", "year": "2023" }, { "authors": "John Schulman; Philipp Moritz; Sergey Levine; Michael Jordan; Pieter Abbeel", "journal": "", "ref_id": "b39", "title": "High-dimensional continuous control using generalized advantage estimation", "year": "2015" }, { "authors": "John Schulman; Filip Wolski; Prafulla Dhariwal; Alec Radford; Oleg Klimov", "journal": "", "ref_id": "b40", "title": "Proximal policy optimization algorithms", "year": "2017" }, { "authors": "Yongliang Shen; Kaitao Song; Xu Tan; Dongsheng Li; Weiming Lu; Yueting Zhuang", "journal": "", "ref_id": "b41", "title": "Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface", "year": "2023" }, { "authors": "Weijia Shi; Sewon Min; Michihiro Yasunaga; Minjoon Seo; Rich James; Mike Lewis; Luke Zettlemoyer; Wen-Tau Yih", "journal": "", "ref_id": "b42", "title": "Replug: Retrievalaugmented black-box language models", "year": "2023" }, { "authors": "Kurt Shuster; Mojtaba Komeili; Leonard Adolphs; Stephen Roller; Arthur Szlam; Jason Weston", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Language models that seek for knowledge: Modular search & generation for dialogue and prompt completion", "year": "2022-12-07" }, { "authors": "Hongjin Su; Jungo Kasai; Chen Henry Wu; Weijia Shi; Tianlu Wang; Jiayi Xin; Rui Zhang; Mari Ostendorf; Luke Zettlemoyer; Noah A Smith", "journal": "", "ref_id": "b44", "title": "Selective annotation makes language models better fewshot learners", "year": "2022" }, { "authors": "Xuezhi Wang; Jason Wei; Dale Schuurmans; Quoc V Le; Ed H Chi; Denny Zhou", "journal": "", "ref_id": "b45", "title": "Selfconsistency improves chain of thought reasoning in language models", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Brian Ichter; Fei Xia; Ed H Chi; V Quoc; Denny Le; Zhou", "journal": "", "ref_id": "b46", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Zhilin Yang; Peng Qi; Saizheng Zhang; Yoshua Bengio; William Cohen; Ruslan Salakhutdinov; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering", "year": "2018" }, { "authors": "Shunyu Yao; Jeffrey Zhao; Dian Yu; Nan Du; Izhak Shafran; Karthik Narasimhan; Yuan Cao", "journal": "", "ref_id": "b48", "title": "ReAct: Synergizing reasoning and acting in language models", "year": "2023" }, { "authors": "Wenhao Yu; Dan Iter; Shuohang Wang; Yichong Xu; Mingxuan Ju; Soumya Sanyal; Chenguang Zhu; Michael Zeng; Meng Jiang", "journal": "", "ref_id": "b49", "title": "Generate rather than retrieve: Large language models are strong context generators", "year": "2023" }, { "authors": "Tianjun Zhang; Xuezhi Wang; Denny Zhou; Dale Schuurmans; Joseph E Gonzalez", "journal": "", "ref_id": "b50", "title": "Tempera: Test-time prompt editing 
via reinforcement learning", "year": "2023" }, { "authors": "Zhuosheng Zhang; Aston Zhang; Mu Li; Alex Smola", "journal": "", "ref_id": "b51", "title": "Automatic chain of thought prompting in large language models", "year": "2023" }, { "authors": "Nisan Daniel M Ziegler; Jeffrey Stiennon; Tom B Wu; Alec Brown; Dario Radford; Paul Amodei; Geoffrey Christiano; Irving", "journal": "", "ref_id": "b52", "title": "Fine-tuning language models from human preferences", "year": "2019" } ]
[ { "formula_coordinates": [ 4, 334.6, 227.07, 161.34, 24.31 ], "formula_id": "formula_0", "formula_text": "L warm = - t logp θ ( xt | x<t , x )." }, { "formula_coordinates": [ 5, 81.11, 90.29, 208.76, 90.65 ], "formula_id": "formula_1", "formula_text": "max θ E x∼p θ (•|x) [R(x, x)], max θ E (st,at)∼π θ ′ [min{k t,θ A θ ′ (s t , a t ) ; clip (k t,θ , 1 -ε, 1 + ε) A θ ′ (s t , a t )}], k t,θ = p θ (a t | s t ) p θ ′ (a t | s t ) ,(2)" }, { "formula_coordinates": [ 5, 95.73, 289.91, 189.89, 48.61 ], "formula_id": "formula_2", "formula_text": "δ t = R (s t , a t ) + V ϕ (s t+1 ) -V ϕ (s t ) , Âθ t (s t , a t ) = ∞ t ′ =0 λ t ′ δ t+t ′ , (3" }, { "formula_coordinates": [ 5, 285.63, 308.59, 4.24, 9.46 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 5, 85.14, 518.96, 204.73, 13.36 ], "formula_id": "formula_4", "formula_text": "R (s t , a t ) = R lm ( x, y) -βKL (π θ ∥π 0 ) . (4)" }, { "formula_coordinates": [ 5, 80.86, 577.43, 198.28, 89.75 ], "formula_id": "formula_5", "formula_text": "L θ = - 1 |S| T τ ∈S T t=0 min(k t,θ A θ ′ , clip A θ ′ ), L ϕ = 1 |S| T τ ∈S T t=0 (V ϕ (s t ) -R t ) 2 , L ppo = L θ + λ v L ϕ ." }, { "formula_coordinates": [ 6, 111.73, 463.01, 178.14, 45.49 ], "formula_id": "formula_6", "formula_text": "Hit = 1 a in doc, -1 else R lm = EM + λ f F 1 + λ h Hit. (6)" }, { "formula_coordinates": [ 6, 151.84, 743.52, 138.02, 10.77 ], "formula_id": "formula_7", "formula_text": "R lm = EM.(7)" }, { "formula_coordinates": [ 8, 485.65, 92.19, 37.46, 309.1 ], "formula_id": "formula_8", "formula_text": "✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ❌ ❌ ❌ ❌ ❌ ❌ ❌ ❌" }, { "formula_coordinates": [ 12, 311.73, 317.51, 101.54, 10.77 ], "formula_id": "formula_9", "formula_text": "β t+1 = β t (1 + K β e t ) ," } ]
10.1162/tacl_a_00416
2023-10-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b23", "b11", "b33", "b3", "b21", "b10", "b13", "b32", "b22", "b29", "b16", "b9", "b6" ], "table_ref": [ "tab_0" ], "text": "The success of NLP models greatly depends on the availability and quality of training data. This poses a significant challenge for multilingual NLP, as data for languages other than English is typically limited (Ponti et al., 2019;Joshi et al., 2020;Whitehouse et al., 2022). An approach to address the data scarcity challenge is through zero-shot crosslingual transfer or multitask training, in which a model is trained across data of diverse tasks and languages, exhibiting the capability to handle unseen tasks, particularly in larger models (Artetxe and Schwenk, 2019;Nooralahzadeh et al., 2020;Huang et al., 2021). However, when aiming for task-specific objectives, a smaller, fine-tuned model dedicated to that particular task often outperforms larger general-purpose, zero-shot models. In addition, a smaller task-specific model is more practical and cost-effective for training and deployment. Nevertheless, developing a powerful task-specific model becomes challenging in the absence of training data (Lauscher et al., 2020).\nConversely, recent powerful Large Language Models (LLMs) excel at handling general instructions and have shown promise in data generation tasks (Wang et al., 2023). In this work, we leverage LLMs to generate synthetic data for various multilingual commonsense reasoning tasks, XCOPA (Ponti et al., 2020), XWinograd (Tikhonov and Ryabinin, 2021), and XStoryCloze (Lin et al., 2022), where the training data is limited even for English (see Table 1). To augment the training data, we provide LLMs with instructions and examples from the original training data, prompting them to generate new and diverse examples. We explore the generation of synthetic data in English using different LLMs, including open-source models like Dolly-v21 and StableVicuna2 , as well as ChatGPT and GPT-4. Although the weights and capabilities of the latter two models remain undisclosed, we explore them as they extend the capability of generating texts in languages beyond English.\nWe develop task-specific models by fine-tuning multilingual pre-trained language models, namely mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020), using the generated data. We then compare their performance against models trained on a limited set of human-created data in the target language whenever available, and otherwise through zero-shot transfer learning from manually created English training data. Our experiments demonstrate that training the models with relatively large synthetically generated datasets yields better performance than training with limited manuallycreated datasets. This finding empirically confirms the utility of synthetic data generated by LLMs for improving downstream task-specific models.\nWe expand the multilingual data synthesis using ChatGPT and GPT-4 on XCOPA and find that generating multilingual datasets generally surpasses the effectiveness of the zero-shot cross-lingual transfer. We further assess the quality of the generated dataset in different languages by asking native speakers to evaluate the naturalness and logical soundness of the generated dataset compared to the human-written examples. The annotation results reveal that while ChatGPT and GPT-4 successfully generate natural text in most languages, they struggle with generating understandable text in certain languages such as Tamil. 
Moreover, a noticeable gap is observed in terms of commonsense coherence when comparing ChatGPT-generated data to human-constructed data. On the other hand, GPT-4 significantly narrows this difference.\nTo summarise, our work has the following key contributions:\n• Augmenting three low-resource, multilingual commonsense reasoning datasets by leveraging and prompting four LLMs; • Fine-tuning smaller models, mBERT and XLMR, using the synthesised data and showcasing the practical value of the LLMgenerated data; • Performing an extensive analysis of the effects of various target languages in data generation and scaling, as well as a human evaluation of the naturalness and logical coherence of the data generated in various languages; • Releasing the synthesised datasets for public use and reproducibility." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b9", "b6", "b38", "b26", "b7", "b2", "b0", "b34", "b34", "b20", "b37", "b17", "b27" ], "table_ref": [], "text": "Multilingual and Low-Resource NLP\nRecently, there has been increased attention on expanding NLP beyond English, including the development of multilingual models (Devlin et al., 2019;Conneau et al., 2020;Xue et al., 2021;Scao et al., 2022) as well as the creation of benchmarks to address multilingual challenges (Conneau et al., 2018;Artetxe et al., 2020;Adelani et al., 2021;Winata et al., 2023). Among the prevailing challenges faced across various languages, a common theme is the scarcity of available data. Consequently, when data is lacking, one approach is to employ zero-shot cross-lingual transfer. Studies conducted by Winata et al. (2023) have demonstrated the effectiveness of zero-shot crosslingual transfer for related languages. Additionally, Muennighoff et al. (2023) show that models finetuned only with English instruction data are capable of understanding multilingual instructions. In this work, we are tackling a similar scenario where the availability of data is limited. et al. (2020) show that few-shot can drastically increase the cross-lingual performance of small models, proving that multilingual data augmentation is an effective strategy. A series of works try to predict the cross-lingual accuracy of models through measurements and modelling (Xia et al., 2020), and study strategies for multilingual data augmentation, such as choosing the transfer languages (Lin et al., 2019), and predicting multilingual few-shot accuracy leading for optimal data augmentation approaches (Srinivasan et al., 2022)." }, { "figure_ref": [], "heading": "Multilingual Data Augmentation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Lauscher", "publication_ref": [ "b14", "b24", "b28", "b35", "b33", "b8" ], "table_ref": [], "text": "Many works focus on synthetic data augmentation for code-mixing, including utilising linguistic theories (Lee et al., 2019;Pratapa et al., 2018), machine translation models (Tarunesh et al., 2021), parallel corpus and Wikipedia (Winata et al., 2019;Whitehouse et al., 2022), and employing ChatGPT (Dai et al., 2023). Our work explores data augmentation on multilingual commonsense datasets with powerful instruction-tuned LLMs." }, { "figure_ref": [], "heading": "XCOPA XWINOGRAD XSTORYCLOZE", "publication_ref": [], "table_ref": [], "text": "We are collecting more examples for the COPA dataset which will be used to test a system's ability of Commonsense Causal Judgments. 
The format of the data: A premise: a statement of something that happened, and two choices that could plausibly {occur as the result / be the cause} of the premise. The correct choice is the alternative that is more plausible than the wrong choice. " }, { "figure_ref": [], "heading": "Dataset Augmentation", "publication_ref": [ "b25", "b15", "b16", "b19" ], "table_ref": [ "tab_0" ], "text": "Our experiments use XCOPA, XWinograd, and XS-toryCloze, which are selected due to (1) the limited availability of training data and (2) commonsense reasoning datasets present greater challenges for data synthesis. Table 1 summarises the statistics of the three datasets.\nXCOPA is a cross-lingual Choice of Plausible Alternatives dataset that translates and re-annotates the validation and test sets of English (EN) COPA (Roemmele et al., 2011) into 11 target languages (ET: Estonian, HT: Haitian Creole, ID: Indonesian, IT: Italian, QU: Quechua, SW: Swahili, TA: Tamil, TH: Thai, TR: Turkish, VI: Vietnamese, and ZH: Chinese).3 Each instance consists of a premise, a question (cuase/result), and two alternatives. The task is to predict the more plausible alternative.\nXWinograd expands the original English Winograd Schema Challenge (WSC) (Levesque et al., 2012) to five other languages (FR: French, JA: Japanese, PT: Portuguese, RU: Russian, and ZH),4 which consists of pronoun resolution problems aim-ing to evaluate the commonsense reasoning ability of a machine. Given a statement with two noun phrases and a pronoun, the challenge of WSC is to determine the referent of the pronoun, which can only be inferred from the context.\nXStoryCloze is collected by Lin et al. (2022), where the validation split of the original English StoryCloze dataset (Mostafazadeh et al., 2016) is translated into 10 other typologically diverse languages (RU, ZH, ES: Spanish, AR: Arabic, HI: Hindi, ID, TE: Telugu, SW, EU: Basque, and MY: Burmese). Each example consists of a foursentence commonsense story, a correct ending, as well as a wrong ending." }, { "figure_ref": [], "heading": "LLMs for Data Generation", "publication_ref": [ "b26", "b4" ], "table_ref": [], "text": "Our preliminary experiments reveal that language models that are specifically fine-tuned on downstream NLP tasks, such as BLOOMZ (Scao et al., 2022) and Flan-T5 (Chung et al., 2022), struggle to follow the complex instructions. Conversely, more recent LLMs such as Dolly-v2, StableVicuna, ChatGPT, and GPT-4, which are designed to handle more intricate and general-purpose instructions, have demonstrated success in following our instructions for data generation. ChatGPT and GPT-4 also stand out with the capability of generating examples in non-English languages.\nWe explore synthetic data generation with the four aforementioned LLMs, balancing between open-access models and closed models (see §5.1). Specifically, we use dolly-v2-12b,5 which is derived from EleutherAI's Pythia-12b (Biderman et al., 2023) and fine-tuned on a ∼15K instructions generated by Databricks employees; and StableVicuna-13B, an RLHF (reinforcement learning from human feedback) fine-tuned Vicuna model on various conversational and instructional datasets -Vicuna is an open-source LLaMA model (Touvron et al., 2023a) fine-tuned on user-shared conversations collected from ShareGPT.6 " }, { "figure_ref": [], "heading": "Instructions and Responses", "publication_ref": [], "table_ref": [], "text": "We utilise LLMs to generate synthetic examples for all datasets by prompting them. 
We construct instructions using the descriptions from the dataset papers as a reference and provide LLMs with some examples, randomly sampled from the train (+validation) split of the original dataset, then ask LLMs to generate similar data points. We experiment with various instructions and evaluate the synthesised data on a smaller scale, update the instructions based on the errors, and then choose the best instruction to generate the final datasets.\nThe final instructions and responses are in Table 2. Our data generation process comprises the following key steps: (1) We establish the desired total number of examples to generate. This quantity can be determined by various factors such as budget constraints, a fixed ratio concerning the original dataset, etc. ( 2 We focus on a fixed-budget scenario and first generate a total of 3-4K data points for each dataset with LLMs. LLMs tend to generate fewer samples than requested or inconsistent output in invalid for- mats. We report the success rate for different LLMs on the three datasets in Table 3, which indicates that GPT-4 has the most robustness. Among the datasets, LLMs have the lowest generation success rate for XWinograd, which is more challenging. XWinograd requires both answers to be from the generated sentence, with only one pronoun being replaced. In addition, we observed pronoun inconsistency in the generated XWinograd data. Despite the requirement for interchangeable pronouns in the options, models frequently fail to comply. For example, \"The dog bit the mailman because _ entered the yard.\" is generated by Chat-GPT with the options 'The dog'\" or \"the mailman\", however, \"_\" in the sentence cannot be replaced by the same pronoun for the given two options, hence it may make the task easier and the example is considered suboptimal. We keep those instances in the dataset and discuss further in §6.1." }, { "figure_ref": [], "heading": "Experimental Setups", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We first generate synthetic English examples for XCOPA, XWinograd, and XStoryCloze, with Dolly-v2, StableVicuna, ChatGPT, and GPT-4. The size of the final filtered synthesised data for the three datasets is 3.7k, 2K, and 1.7K, respectively. We then fine-tune mBERT, XLMR-base, and XLMR-large with the synthesised data and compare the zero-shot cross-lingual transfer performance across different languages, where we use the original validation set in target languages.\nFor XCOPA, we additionally experiment with generating data points directly in non-English languages, by providing examples in the target language and specifying the language desired for the generated data (see Table 2). However, since no examples for cause are included in TH and TR train/validation data (they do appear in the test split), we do not generate XCOPA for the two languages. We use ChatGPT and GPT-4 for multilingual synthetic data generation, as both Dolly-v2 and StableVicuna exhibit limitations in effectively generating multilingual text. The size of the multilingual synthesised data is ∼3.6K in each language.\nWe fine-tune models on all datasets as multiplechoice tasks8 by searching best learning rate from {5e -6 , 10e -6 }, and batch size from {8, 16, 32}. All the fine-tuning experiments are conducted on a single 40G A100. For generating data with Dolly-v2 and StableVicuna, we use 2×40G A100." 
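As an illustration of this setup, the sketch below shows how such a multiple-choice fine-tuning run with a small hyperparameter search could look using the Hugging Face Transformers Trainer. The model name, the way the XCOPA question is verbalised, the number of epochs, and the accuracy-based selection are assumptions for illustration, while the learning-rate and batch-size grids follow the values above.

import numpy as np
from transformers import (AutoTokenizer, AutoModelForMultipleChoice,
                          TrainingArguments, Trainer)

MODEL = "xlm-roberta-base"                      # similarly mBERT or XLM-R large
tokenizer = AutoTokenizer.from_pretrained(MODEL)

def encode(example):
    # One XCOPA-style instance: pair the same premise/question context with
    # each alternative, yielding inputs of shape [num_choices, seq_len].
    context = f"{example['premise']} What was the {example['question']}?"   # assumed verbalisation
    enc = tokenizer([context, context],
                    [example["choice1"], example["choice2"]],
                    truncation=True, padding="max_length", max_length=128)
    enc["label"] = example["label"]
    return enc

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

# train_data / valid_data are assumed to be datasets.Dataset objects holding
# the synthesised (and/or original) examples and the validation split.
best = {"accuracy": 0.0}
for lr in (5e-6, 1e-5):                          # learning-rate grid
    for bs in (8, 16, 32):                       # batch-size grid
        model = AutoModelForMultipleChoice.from_pretrained(MODEL)
        args = TrainingArguments(output_dir=f"ckpt_lr{lr}_bs{bs}",
                                 learning_rate=lr,
                                 per_device_train_batch_size=bs,
                                 num_train_epochs=10,             # assumed
                                 evaluation_strategy="epoch")
        trainer = Trainer(model=model, args=args,
                          train_dataset=train_data.map(encode),
                          eval_dataset=valid_data.map(encode),
                          compute_metrics=compute_metrics)
        trainer.train()
        acc = trainer.evaluate()["eval_accuracy"]
        if acc > best["accuracy"]:
            best = {"accuracy": acc, "lr": lr, "bs": bs}

Because padding is fixed to max_length, the default data collator can stack each example's two encoded choices into the (batch, num_choices, seq_len) tensors expected by the multiple-choice head.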
}, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [], "table_ref": [], "text": "This section presents the main results of fine-tuned models on the three datasets and compares performance with generated data in different LLMs, languages, and scales." }, { "figure_ref": [], "heading": "General Result", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Table 4 presents the average accuracy of fine-tuned mBERT, XLMR-Base, and XLMR-Large models across all languages on the three datasets. The models are trained using original data (ORI), different LLM-generated data (GEN), as well as a combination of both sources (O+G) in English.\nAcross different datasets, LLMs, and fine-tuned models, consistent improvements are observed when using both original and LLM-generated data. Among the models, Dolly-v2 performs the best on Xingorad when fine-tuned on mBERT, while GPT-4 achieves the highest accuracy in other settings. The most significant improvement is shown in XWinograd with XLMR-Base, where the addition of an extra 2k datapoints leads to an average accuracy enhancement of 12.8 compared to the baseline, across all four LLMs.\nWhen using only LLM-generated data, smaller models like mBERT and XLMR-Base generally outperform the baseline. However, with XLMR-Large, which achieves stronger baselines. e.g. >80 in XWinograd and XStoryCloze, the accuracy remains similar or even worse compared to using the original data. GPT-4-generated data demonstrates the best robustness but still experiences a decline in performance in XWinograd when the generated data size is similar to the original data. This highlights the challenges of generating data at a human-level quality." }, { "figure_ref": [], "heading": "Multilingual Data Generation", "publication_ref": [ "b12", "b1", "b22" ], "table_ref": [ "tab_0" ], "text": "We investigate whether the synthetically generated multilingual dataset outperforms training solely in English. We choose the XCOPA dataset and explore two settings: synthetic multilingual data by asking LLMs to generate responses in the target languages directly and translating the Englishgenerated data to target languages with Google Translate API. We exclude Dolly-v2 and Stable-Vicuna due to their limited effectiveness in generating non-English text. Although GPT-4 exhibits the most promising performance, it is significantly costlier compared to ChatGPT. Therefore, we also Table 5: Accuracy on XCOPA. ORI corresponds to the original data, GEN EN and GEN XX represents data generated in English and target languages. T rans denotes translations of the English-generated data. We show languages that are available in all settings. Improvement and decline in performance are represented with green and red shadows. consider using ChatGPT as a contrasting experiment under resource-constrained conditions.\nTable 5 shows the results for the languages that are available for all settings, excluding TR and TH (unavailable for LLM-generation, refer to §4), and QU (not supported by the Google Translate API). We can see the impact of the generated data varies across different fine-tuned models and languages, aligning with the findings of Kumar et al. (2022). Training on GPT-4 synthesised data displays consistent improvement across all scenarios and languages, except the zero-shot cross-lingual result on HT with XLMR-Large.\nMore fluctuating results can be observed with ChatGPT-generated data. 
A comparison between GENEN + ORI and GENXX + ORI indicates that utilising data generated in target languages generally leads to improved performance with GPT-4 generated data, as well as in base models with ChatGPT-generated data. However, for XLMR-Large, employing ChatGPT-generated data in target languages mostly yields negative outcomes. In languages such as TA and VI, training on generated data in the target languages results in more performance degradation compared to zero-shot cross-lingual transfer. This suggests that ChatGPT performs worse in those languages than XLMR-Large (Ahuja et al., 2023).\nTranslating the English dataset generally shows overall better results than training on the data generated directly in the target languages, with the exception of XLMR-Large with GPT-4. For SW, XLMR models fined-tuned with ChatGPT-generated data exhibit performance decline in most cases, even when the English-generated data benefits all other languages. This observation suggests that XLMR struggles with SW. In §6.1 we select TA, SW, and the two best languages, ID and ZH, along with EN, for human evaluation.\nAdditionally, we conduct experiments adding Target Languages in Validation (TLV). This only results in minor variations in the performance, consistent with the findings of Ponti et al. (2020). We include the full results in Table 11 in Appendix D." }, { "figure_ref": [], "heading": "Dataset Scaling Up", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "We now investigate the impact of training on a larger scale of generated data on model performance. We focus on the XCOPA dataset and expand the generated data with ChatGPT (more budget-efficient) to 28.6k examples in English. We also compare the results of zero-shot cross-lingual transfer with translating the English-generated data to target languages.\nThe results in Table 6 demonstrate the positive impact of scaling up the generated data on model performance. Particularly, XLMR-Large exhibits the most significant improvement.\nFurthermore, we conduct experiments on generating data with a fixed ratio of the original datasets and the results are included in Appendix C." }, { "figure_ref": [], "heading": "Human Evaluation", "publication_ref": [], "table_ref": [], "text": "To better evaluate the quality of the generated datasets and compare them with the human-created data, we ask native speakers to annotate the multilingual data generated by ChatGPT and GPT-4.\nFor each dataset, we first select 50 generated examples in English, and then request two annotators to evaluate the examples in two categories:\n(1) Text Naturalness. The annotators are asked to choose one of the following options for each example: \"the text sounds natural\", \"the text sounds awkward but understandable\", or \"the text is not understandable\", and (2) Logic Soundness. This category focuses on the commonsense aspect of the examples. The annotators are required to select the most appropriate description from: \"the correct option is (clearly) more plausible\", \"both options are equally plausible\", \"both options are implausible\", or \"the wrong option is actually more plausible\". We only ask the annotators to evaluate the logic if the text is at least understandable.\nFor XWinograd, we introduce an additional evaluation criterion. Annotators are asked to determine whether the two noun phrases in the examples can be replaced by the same pronoun (refer to §3.2). 
For XCOPA, we extend the annotations to non-English languages, where we choose the two languages that demonstrate the most notable improvement, namely ZH and ID, as well as the two languages that exhibit the least improvement or a regression in performance with ChatGPT-generated data, namely TA and SW (see Table 5). In addition to the original examples and the generated examples in the target languages, we include 50 examples that are translated from the same English-generated examples (that were selected for annotation).\nTo ensure impartiality, all the examples are shuffled, and the annotators are not provided with information regarding the source of the examples (human-created, LLM-generated, or translated)." }, { "figure_ref": [], "heading": "Text Naturalness", "publication_ref": [], "table_ref": [], "text": "Figure 1 presents the annotation results for XCOPA, averaged over the two annotators for each language. For Text Naturalness, we can see that in EN, ID, ZH, and SW, both ChatGPT and GPT-4 achieve higher naturalness than the original dataset. This is particularly prominent in ID, revealing a fluency issue in the original ID data in XCOPA, which is also confirmed by a native speaker." }, { "figure_ref": [], "heading": "Issue with Tamil", "publication_ref": [], "table_ref": [], "text": "In contrast, the performance on the TA dataset is surprisingly low, with a majority of examples classified as "not understandable." Upon consulting language experts, we have identified several main issues in Tamil, including (1) the insertion of redundant words with the same meaning, such as "I will retry to try it again", (2) verb agreement errors, and (3) the presence of uncommon and out-of-context words.\nIt is worth noting that generating Tamil using GPT-4 is both slow and costly. We suspect that the tokenizers for Tamil, as well as for similar languages like Telugu and Kannada, are poorly trained, resulting in unusable generation in those languages. While the low quality of the generated data could explain the significant decline in the performance of the XLMR-Large model when trained on ChatGPT-generated data in Tamil, intriguingly, models trained on Tamil data generated by GPT-4 show improvement over the baselines.\nTo further investigate this issue, we conduct an experiment where we fine-tune the models using only five of the TA examples generated by GPT-4 that are identified as natural and sound by the annotators. The improvement on mBERT under this setting is 50% of the total improvement seen with the entire 3.6K TA examples. For XLMR-base and XLMR-large, 15% and 3% of the total improvement can be observed, respectively. Considering that the estimated number of correct samples in the 3.6K dataset is around 360, it is plausible that training solely on those examples could raise the accuracy level to, or even surpass, what we observe for the entire dataset (we could not conduct this experiment, as the entire dataset was not manually labelled). An intriguing question that remains for future research is why the remaining roughly 3.2K incorrect or unnatural examples do not negatively impact the model's performance.\nThe translated text is typically less natural than the original and the generated data (apart from ID, due to issues in the original data). This result affirms that LLMs generally excel in generating fluent text for the languages they support." }, { "figure_ref": [], "heading": "Logic Soundness", "publication_ref": [], "table_ref": [], "text": "In terms of logic soundness, ChatGPT falls short compared to the original dataset. We further illustrate the categorised issues in the last column of the plots in Figure 1. We can see that for ChatGPT, the majority of the problematic examples are labelled as "both options are equally plausible"; only SW has more examples labelled "the wrong option is actually more plausible".
We suspect that this issue arises from the instruction provided (taken from the description of the original COPA dataset), which states that "both options could be plausible, but one is more plausible." In some cases, ChatGPT generates two choices that are excessively similar in terms of plausibility. On the other hand, GPT-4 tends to generate options with more clear-cut differences in plausibility, mirroring the original data. We note that despite the description/instruction stating that both alternatives could happen, both the original dataset and the data synthesised by GPT-4 tend to present one plausible and one implausible option.\nFor English XWinograd and XStoryCloze, the majority of both the original and the generated examples are evaluated as natural and logically sound. For XWinograd, although more than 47 examples are evaluated as exhibiting high text quality and following commonsense logic, only 23 ChatGPT-generated examples fulfil the requirement that both noun phrases should be interchangeable with the same pronoun. GPT-4 examples demonstrate better consistency, with 36 following this rule, whereas all original examples are found satisfactory." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [ "b36" ], "table_ref": [], "text": "This paper explores the effectiveness of utilising LLMs for data augmentation in cross-lingual datasets with limited training data. We specifically focus on commonsense reasoning tasks, which are challenging for data synthesis. Our experiments, which cover four LLMs for data generation on three datasets, showcase enhanced cross-lingual zero-shot transfer for smaller fine-tuned task-specific language models. However, the impact varies across datasets and languages. Notably, larger models such as XLMR-Large, which have higher baselines, have more difficulty achieving performance improvements with LLM-generated data. Among the four LLMs, GPT-4-generated data exhibits mostly consistent superior performance.\nExpanding data generation directly to target languages also shows general improvements compared to cross-lingual zero-shot transfer with the English-generated data. Human evaluation of the synthesised multilingual dataset shows that the ChatGPT- and GPT-4-generated data demonstrate high naturalness in most languages, even surpassing the original data. However, in certain languages like TA, both models fail to generate natural text. Additionally, when assessing the logical soundness of the dataset, examples synthesised by ChatGPT reveal notable inconsistencies regarding which option is more plausible compared to the original human-created data. In contrast, GPT-4 exhibits performance on par with human-written data.\nIn conclusion, leveraging LLMs for data augmentation shows promise. However, the choice of LLM used for data generation significantly influences the quality of the resulting data, as well as its applicability to the language under consideration. In circumstances where a more advanced model such as GPT-4 cannot be accessed, other models can be utilised, though this might result in performance difficulties in certain non-English languages (a challenge that also exists for GPT-4) and concerns regarding logical coherence.
A compelling direction for future research could involve exploring the efficacy of more recent instruction-tuned or aligned open-source LLMs, such as LLaMA 2 (Touvron et al., 2023b) or TÜLU (Wu et al., 2023), in enhancing data augmentation." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "We have identified the following limitations in this work: (1) While LLMs, especially GPT-4, exhibit promising results in the context of multilingual commonsense data augmentation, they may encounter challenges when applied to extremely low-resource languages. (2) In order to achieve optimal performance, few-shot examples in the target language are still necessary for generating new examples. However, acquiring such examples may not always be feasible for all languages of interest. (3) The usage of closed models like GPT-4 is limited by licensing restrictions, and the results obtained from these models may not be reproducible. Nonetheless, the experiments conducted in this study demonstrate the potential benefits of leveraging LLMs for multilingual dataset augmentation." }, { "figure_ref": [], "heading": "Ethical Consideration", "publication_ref": [], "table_ref": [], "text": "Synthetic data generation with LLMs, especially multilingual data, should be approached with sensitivity and respect, as it reflects the linguistic, social, and cultural identity of a multilingual community. Since LLMs are trained on web data, they may encode biases perpetuating stereotypes, discrimination, or marginalisation of specific languages or communities. Therefore, collaboration with linguists, language experts, and community representatives is necessary to avoid the unintentional perpetuation of stereotypes and cultural insensitivity." }, { "figure_ref": [], "heading": "C Fixed Ratio Data Augmentation", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "We experiment with generating data at a fixed ratio of the original datasets. Specifically, we compare training on the original English data (200 randomly selected examples) with augmenting it with different quantities of English examples generated by GPT-4, where we include the original training instances in all cases.\nThe results in Table 7 show the performance on English test examples when fine-tuning mBERT and XLMR models with training data sizes that are 1×, 2×, 5×, and 10× the size of the original dataset. We can see that performance consistently improves as we increase the amount of generated data, except for XStoryCloze, which has the highest baselines, echoing the previous findings. The relative performance gain is generally more pronounced when increasing the data from 2× to 5× for the other two datasets."
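For illustration, the 1x/2x/5x/10x training sets described in this appendix can be assembled as in the following sketch; original_pool and generated_pool are assumed to be lists of already-formatted examples, and the sampling details are our own illustration rather than the authors' exact script.

import random

def build_fixed_ratio_splits(original_pool, generated_pool,
                             base_size=200, ratios=(1, 2, 5, 10), seed=0):
    # Every split shares the same base_size original examples; the k-times split
    # adds (k - 1) * base_size GPT-4-generated examples on top of them.
    rng = random.Random(seed)
    originals = rng.sample(original_pool, base_size)
    splits = {}
    for ratio in ratios:
        extra = rng.sample(generated_pool, (ratio - 1) * base_size)
        splits[f"{ratio}x"] = originals + extra
    return splits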
}, { "figure_ref": [], "heading": "D Additional Results", "publication_ref": [], "table_ref": [], "text": "This section includes the following additional results: Table 8 " }, { "figure_ref": [], "heading": "A Model Details", "publication_ref": [], "table_ref": [], "text": "The open-source models used in the experiments are as follows:\n• mBERT: https://huggingface.co./bert-base-multilingual-uncased\n\n• XLMR-base: https://huggingface.co./xlm-roberta-base\n\n• XLMR-large: https://huggingface.co./xlm-roberta-large\n\n• Dolly-v2: https://huggingface.co./databricks/dolly-v2-12b\n\n• StableVicuna: https://huggingface.co./CarperAI/stable-vicuna-13b-delta" }, { "figure_ref": [], "heading": "B Sentences and Event Diversity of ChatGPT-generated StoryCloze Data", "publication_ref": [], "table_ref": [], "text": "As the StoryCloze dataset contains more sentences and has richer content, we follow the analysis of the ROCStories corpus and further compare the stylistic features, in terms of sentence length and the most frequent events, of the ChatGPT-generated data with the original data. This helps us determine whether ChatGPT-generated data can capture the corpus distribution when only n randomly sampled examples from the dataset are provided in the instructions.\nIn Figure 2, we present the results of comparing the generated data points with the original 300-example train set used as few-shot examples in the generation instructions. We can see that 23 of the 30 most frequent events in the original dataset can also be found among the 30 most frequent events of the ChatGPT-generated data. Regarding sentence length, we observe that ChatGPT tends to generate longer sentences, especially for the ending sentences, whereas in the original dataset, the ending sentences tend to be the shortest among all sentences." } ]
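As a rough illustration of the comparison in this appendix, the sketch below computes sentence lengths and approximates "events" as the lemmas of each sentence's main verb using spaCy; treating root verbs as events, and representing each story as a list of sentence strings, are our own simplifying assumptions.

from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def sentence_lengths(stories):
    # `stories` is assumed to be a list of stories, each a list of sentence strings.
    return [len(sentence.split()) for story in stories for sentence in story]

def most_frequent_events(stories, top_k=30):
    # Approximate an "event" by the lemma of the main (ROOT) verb of each sentence.
    counts = Counter()
    for story in stories:
        for sentence in story:
            doc = nlp(sentence)
            counts.update(tok.lemma_ for tok in doc
                          if tok.dep_ == "ROOT" and tok.pos_ == "VERB")
    return counts.most_common(top_k)

# Overlap of the 30 most frequent events between the two corpora:
# original_top = {e for e, _ in most_frequent_events(original_stories)}
# generated_top = {e for e, _ in most_frequent_events(generated_stories)}
# shared = len(original_top & generated_top)   # 23 in the comparison reported above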
This paper explores the potential of leveraging Large Language Models (LLMs) for data augmentation in multilingual commonsense reasoning datasets where the available training data is extremely limited. To achieve this, we utilise several LLMs, namely Dolly-v2, StableVicuna, ChatGPT, and GPT-4, to augment three datasets: XCOPA, XWinograd, and XStoryCloze. Subsequently, we evaluate the effectiveness of fine-tuning smaller multilingual models, mBERT and XLMR, using the synthesised data. We compare the performance of training with data generated in English and in target languages, as well as with translated English-generated data, revealing the overall advantages of incorporating data generated by LLMs, e.g. a notable 13.4-point accuracy improvement in the best case. Furthermore, we conduct a human evaluation by asking native speakers to assess the naturalness and logical coherence of the generated examples across different languages. The results of the evaluation indicate that LLMs such as ChatGPT and GPT-4 excel at producing natural and coherent text in most languages; however, they struggle to generate meaningful text in certain languages like Tamil. We also observe that ChatGPT falls short in generating plausible alternatives compared to the original dataset, whereas examples from GPT-4 exhibit competitive logical consistency. We release the generated data at https://github.com/mbzuai-nlp/Gen-X.
LLM-powered Data Augmentation for Enhanced Crosslingual Performance
[ { "figure_caption": "Here are n examples in {language}: Example 1: Premise: The man wanted to save money. What happened as a result? Correct choice: He cut back on making frivolous purchases. Wrong choice: He withdrew money from his savings account. . . . Example n: . . . Based on the examples above, generate m new examples in {language}. We are collecting more examples for the Winograd Schema Challenge. Each example has a short sentence that contains two noun phrases and one pronoun replaced by \"_\", and the challenge is to determine the referent of the pronoun, which can only be inferred from the context. Here are n examples of the data: Example 1: Sentence: Harley hides from Dyna because _ is scary. Who/What is scary? Correct answer: Dyna. Wrong answer: Harley. . . . Example n: . . . Based on the examples above, generate m new examples. Both noun phrases in each example can be males, females, inanimate objects, or groups of people or objects. There should only be one \"_\" in the sentence. The correct and wrong answer should be one of the noun phrases mentioned in the sentence. We are collecting more examples for a story cloze dataset. Each example consists of a 4-sentence story, one correct ending sentence which is a plausible continuation of the story, and one wrong ending sentence which is logically inconsistent with the context. Here are n examples of the data: Example 1: Sent-1: Tina is very tired every single morning. Sent-2: She does not get enough sleep because of her two jobs. Sent-3: Tina decides to quit one of the jobs. Sent-4: She now gets enough sleep to function everyday. Correct ending: Tina is well rested. Wrong ending: Tina is more tired than ever before. . . . Example n: . . . Based on the examples above, provide m new similar examples. Requirements: 1) the story should read like a coherent story, with a specific beginning and ending, where something happens in between 2) both ending sentences should be entirely reasonable, realistic and sensible when read in isolation, and 3) both ending sentences should follow up the story by sharing at least one of the characters of the story. Premise: The politician made a controversial statement. What happened as a result? Correct choice: The politician faced criticism from the media. Wrong choice: The politician's approval ratings increased. Premise: 我裤子口袋里的钥匙不见 了。 What was the cause? Correct choice: 这个口袋上有一个洞。 Wrong choice: 裤子是新的。 Sentence: Sam gave Andrew the book because _ had already read it. Who/What had already read the book? Correct answer: Sam. Wrong answer: Andrew. Sentence: The dog chased the cat , but _ was too fast. Who/What was too fast? Correct answer: the cat. Wrong answer: The dog.Sent-1: Jordan was a high school student who wanted to become a doctor. Sent-2: He spent all his free time studying biology and chemistry. Sent-3: One day, his school hosted a science fair competition. Sent-4: Jordan's project won first place. Correct ending: Jordan went on to study medicine in college. Wrong ending: Jordan gave up his dream of becoming a doctor.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ") We proceed to generate examples through the following iterative process: (a) To ensure diversity, 7 we randomly sample a set of n examples from the training datasets. (b) We append these sampled examples to the instructions and prompt the model to generate an additional set of m new examples. 
(c) Afterwards, we perform post-processing and only add valid and unique examples to the generated set. Typically, the values of n and m are set to 5 to 10.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Comparison between the 30 most frequent events and the lengths of the sentences in the original and the ChatGPT-generated English StoryCloze dataset.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "HT ID IT QU SW TA TH TR VI ZH MBERT GEN", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Number of examples available in XCOPA, XWinograd, and XStoryCloze. XX denotes the average number of non-English examples per language. Since a validation split is not specified in XStoryCloze, we take 60 random examples from the train split for validation. XWinograd has no train/validation/test split, and we follow an 80/10/10 split for the experiments.", "figure_data": "DATASETTrainValidationTestENXXENXXENXXXCOPA4000 100 100 500 500XWinograd18580 2330 233 424XStoryCloze 300 300 6060 1511 1511", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "ORI 400 GEN 3.7k O+G 4.1k ORI 1.8k GEN 2k O+G 3.8k ORI 300 GEN 1.7k O+G 2k Comparison of Average Accuracy across all languages for mBERT, XLMR-Base, and XLMR-Large on XCOPA, XStoryCloze, and XWinograd. Training datasets include ORI (original EN data), GEN (LLM-generated EN data), and O+G (both), with the number of examples used for training indicated by the subscripts. The best results obtained with the same amount of training data are highlighted in bold. Green and red subscripts denote improvement and decline in performance compared to the baseline (ORI). See per language results in Appendix D.", "figure_data": "Fine-tunedLLM forXCOPAXWINOGRADXSTORYCLOZEModelGenerationDOLLY-V247.9 53.3 ↑5.4 54.0 ↑6.152.9 59.6 ↑6.7 59.3 ↑6.465.0 68.7 ↑3.7 68.1 ↑3.1mBERTSTABLEVICUNA 47.9 52.9 ↑5.0 54.7 ↑6.8 CHATGPT 47.9 55.0 ↑7.1 54.1 ↑6.252.9 53.7 ↑0.8 58.5 ↑5.6 52.9 56.0 ↑3.1 58.3 ↑5.465.0 64.6 ↓0.4 67.3 ↑2.3 65.0 64.3 ↓0.7 68.3 ↑3.3GPT-447.9 56.4 ↑8.5 57.2 ↑9.352.9 54.9 ↑2.0 57.5 ↑4.665.0 68.0 ↑3.0 69.8 ↑4.8DOLLY-V254.8 58.1 ↑3.3 58.1 ↑3.353.5 56.5 ↑3.0 66.3 ↑12.8 73.0 75.8 ↑2.8 76.5 ↑3.5XLMR-BaseSTABLEVICUNA 54.8 57.6 ↑2.8 59.3 ↑4.5 CHATGPT 54.8 58.2 ↑3.4 59.4 ↑4.653.5 59.0 ↑5.5 66.0 ↑12.5 73.0 69.6 ↓3.4 74.2 ↑1.2 53.5 62.7 ↑9.2 65.9 ↑12.4 73.0 67.4 ↓5.6 74.5 ↑1.5GPT-454.8 62.7 ↑7.9 63.0 ↑8.253.5 63.3 ↑9.8 66.9 ↑13.4 73.0 74.6 ↑1.6 79.3 ↑6.3DOLLY-V263.0 58.6 ↓4.4 65.0 ↑2.080.1 76.9 ↓3.2 83.1 ↑3.085.0 84.8 ↓0.2 86.4 ↑1.4XLMR-LargeSTABLEVICUNA 63.0 64.4 ↑1.4 68.7 ↑5.7 CHATGPT 63.0 64.6 ↑1.6 68.1 ↑5.180.1 68.2 ↓11.9 82.0 ↑1.9 80.1 73.2 ↓6.9 83.2 ↑3.185.0 74.6 ↓10.4 84.8 ↓0.2 85.0 77.3 ↓7.7 85.8 ↑0.8GPT-463.0 72.1 ↑9.1 72.2 ↑9.280.1 76.4 ↓3.7 83.5 ↑3.485.0 86.0 ↑1.0 88.4 ↑3.4", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Accuracy on XCOPA when scaling up the generated data to over 28K with ChatGPT. We report average results on all XCOPA languages excl. 
QU, since it is not available with the Google Translate API.", "figure_data": "ModelGENEN + ORIEN GEN T rans EN+ ORIEN3.7K28.6K3.7K28.6KmBERT54.356.058.060.1XLMR-Base 60.161.861.261.7XLMR-Large 69.772.467.271.4", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Figure 1: Human evaluation of 50 random examples from the original XCOPA, ChatGPT (top) and GPT-4 (bottom) generated data in target languages, and translation of English generated data. Examples are annotated by two native speakers in each language. The subplots in the last column show the logic issues of the XCOPA data, where the three bars for each language represent Oringal, Gen XX , and Gen T rans", "figure_data": "Number of Examples0 10 20 30 40 50EN ID ZH SW TA Text Naturalness Original ChatGPT-Gen XX EN ID ZH SW TA Logic Soundness ChatGPT-Gen Trans ENLogic Issues (ChatGPT) EN ID ZH SW TA Both options equally plausible Both options implausible Wrong option more plausibleNumber of Examples0 10 20 30 40 50EN ID ZH SW TA Text Naturalness OriginalEN ID ZH SW TA Logic Soundness GPT-4-Gen XX GPT-4-Gen Trans ENLogic Issues (GPT-4) EN ID ZH SW TA Both options equally plausible Both options implausible Wrong option more plausibleEN(from left to right).the 3.6k dataset is around 360, it is plausible thattraining solely on those examples could raise theaccuracy level, or even surpass, what we observefor the entire dataset. 9 An intriguing question thatremains to be investigated in future research is whythe remaining 3.2k incorrect or unnatural examplesdo not negatively impact the model's performance.", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Performance on English test examples training on GPT-4-generated English data and the original data. Original data points selected from the three datasets are set to 200. 1× corresponds to using only the original data, 2× means using 200 original data and 200 generated data.", "figure_data": "ModelRatio XCOPA XWingrad XStoryCloze1×64.050.274.6mBERT2× 5×64.8 68.051.9 57.176.8 80.610×69.865.780.31×58.045.970.7XLMR-Base2× 5×59.0 63.053.7 67.879.7 81.910×65.871.284.11×56.078.181.1XLMR-Large2× 5×61.2 81.479.8 82.090.9 89.910×85.282.891.9", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": ", Table9, and Table10show generated data in English with different LLMs on XCOPA, XWinograd, and XStoryCloze. Table11and Table12show the full result on XCOPA with ChatGPT and GPT-4.", "figure_data": "corrode come work ask think walk drive run0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 cost take fail beget decide do want contract contract feel find beget fail decide take 0.10 0.04 0.02 0.06 0.08 0.12 0.14 cost buybecome excite determine realize come design touchtryfindmakesuffermake interpret buy sleep play Original StoryCloze feel determine forget count give start distinguishstart want do forget sleep give play try interpret work run spend ChatGPT Generated StoryCloze", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" } ]
Chenxi Whitehouse; Monojit Choudhury; Alham Fikri Aji
[ { "authors": "David Ifeoluwa Adelani; Jade Abbott; Graham Neubig; D' Daniel; Julia Souza; Constantine Kreutzer; Chester Lignos; Happy Palen-Michel; Shruti Buzaaba; Sebastian Rijhwani; Ruder", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b0", "title": "Masakhaner: Named entity recognition for african languages", "year": "2021" }, { "authors": "Kabir Ahuja; Rishav Hada; Millicent Ochieng; Prachi Jain; Harshita Diddee; Samuel Maina; Tanuja Ganu; Sameer Segal; Maxamed Axmed; Kalika Bali", "journal": "", "ref_id": "b1", "title": "Mega: Multilingual evaluation of generative ai", "year": "2023" }, { "authors": "Mikel Artetxe; Sebastian Ruder; Dani Yogatama", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "On the cross-lingual transferability of monolingual representations", "year": "2020" }, { "authors": "Mikel Artetxe; Holger Schwenk", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b3", "title": "Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond", "year": "2019" }, { "authors": "Stella Biderman; Hailey Schoelkopf; Quentin Gregory Anthony; Herbie Bradley; O' Kyle; Eric Brien; Mohammad Hallahan; Shivanshu Aflah Khan; Usvsn Purohit; Edward Sai Prashanth; Aviya Raff; Lintang Skowron; Oskar Sutawika; Der Van; Wal", "journal": "PMLR", "ref_id": "b4", "title": "Pythia: A suite for analyzing large language models across training and scaling", "year": "2023" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma", "journal": "", "ref_id": "b5", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Alexis Conneau; Ruty Rinott; Guillaume Lample; Adina Williams; Samuel Bowman; Holger Schwenk; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "XNLI: Evaluating crosslingual sentence representations", "year": "2018" }, { "authors": "Haixing Dai; Zhengliang Liu; Wenxiong Liao; Xiaoke Huang; Zihao Wu; Lin Zhao; Wei Liu; Ninghao Liu; Sheng Li; Dajiang Zhu", "journal": "", "ref_id": "b8", "title": "Chataug: Leveraging chatgpt for text data augmentation", "year": "2023" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Kuan-Hao Huang; Wasi Ahmad; Nanyun Peng; Kai-Wei Chang", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Improving zero-shot cross-lingual transfer learning via robust training", "year": "2021" }, { "authors": "Pratik Joshi; Sebastin Santy; Amar Budhiraja; Kalika Bali; Monojit Choudhury", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "The state and fate of linguistic diversity and inclusion in the NLP world", "year": "2020" }, { "authors": "Shanu Kumar; Sandipan Dandapat; Monojit Choudhury", "journal": "Association for Computational 
Linguistics", "ref_id": "b12", "title": "diversity and uncertainty in moderation\" are the key to data selection for multilingual few-shot transfer", "year": "2022" }, { "authors": "Anne Lauscher; Vinit Ravishankar; Ivan Vulić; Goran Glavaš", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers", "year": "2020" }, { "authors": "Grandee Lee; Xianghu Yue; Haizhou Li", "journal": "", "ref_id": "b14", "title": "Linguistically Motivated Parallel Data Augmentation for Code-Switch Language Modeling", "year": "2019" }, { "authors": "Hector Levesque; Ernest Davis; Leora Morgenstern", "journal": "", "ref_id": "b15", "title": "The winograd schema challenge", "year": "2012" }, { "authors": "Victoria Xi; Todor Lin; Mikel Mihaylov; Tianlu Artetxe; Shuohui Wang; Daniel Chen; Myle Simig; Naman Ott; Shruti Goyal; Jingfei Bhosale; Ramakanth Du; Sam Pasunuru; Punit Shleifer; Vishrav Singh Koura; Brian O' Chaudhary; Jeff Horo; Luke Wang; Zornitsa Zettlemoyer; Mona Kozareva; Veselin Diab; Xian Stoyanov; Li", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Few-shot learning with multilingual generative language models", "year": "2022" }, { "authors": "Yu-Hsiang Lin; Chian-Yu Chen; Jean Lee; Zirui Li; Yuyan Zhang; Mengzhou Xia; Shruti Rijhwani; Junxian He; Zhisong Zhang; Xuezhe Ma; Antonios Anastasopoulos; Patrick Littell; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Choosing transfer languages for cross-lingual learning", "year": "2019" }, { "authors": "Haokun Liu; William Huang; Dhara Mungra; Samuel R Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Precise task formalization matters in Winograd schema evaluations", "year": "2020" }, { "authors": "Nasrin Mostafazadeh; Nathanael Chambers; Xiaodong He; Devi Parikh; Dhruv Batra; Lucy Vanderwende; Pushmeet Kohli; James Allen", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "A corpus and cloze evaluation for deeper understanding of commonsense stories", "year": "2016" }, { "authors": "Niklas Muennighoff; Thomas Wang; Lintang Sutawika; Adam Roberts; Stella Biderman; Teven Le Scao; M Saiful Bari; Sheng Shen; Zheng Xin Yong; Hailey Schoelkopf; Xiangru Tang; Dragomir Radev; Alham Fikri Aji; Khalid Almubarak; Samuel Albanie; Zaid Alyafeai; Albert Webson; Edward Raff; Colin Raffel", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Crosslingual generalization through multitask finetuning", "year": "2023" }, { "authors": "Farhad Nooralahzadeh; Giannis Bekoulis; Johannes Bjerva; Isabelle Augenstein", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Zero-shot cross-lingual transfer with meta learning", "year": "2020" }, { "authors": "Maria Edoardo; Goran Ponti; Olga Glavaš; Qianchu Majewska; Ivan Liu; Anna Vulić; Korhonen", "journal": "", "ref_id": "b22", "title": "XCOPA: A multilingual dataset for causal commonsense reasoning", "year": "2020" }, { "authors": "Maria Edoardo; Helen O' Ponti; Yevgeni Horan; Ivan Berzak; Roi Vulić; Thierry Reichart; Ekaterina Poibeau; Anna Shutova; Korhonen", "journal": "Computational Linguistics", "ref_id": "b23", "title": "Modeling language variation and universals: A survey on typological linguistics for natural language processing", "year": "2019" }, { "authors": "Adithya Pratapa; 
Gayatri Bhat; Monojit Choudhury; Sunayana Sitaram; Sandipan Dandapat; Kalika Bali", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Language modeling for code-mixing: The role of linguistic theory based synthetic data", "year": "2018" }, { "authors": "Melissa Roemmele; Cosmin ; Adrian Bejan; Andrew S Gordon", "journal": "", "ref_id": "b25", "title": "Choice of plausible alternatives: An evaluation of commonsense causal reasoning", "year": "2011" }, { "authors": "Le Teven; Angela Scao; Christopher Fan; Ellie Akiki; Suzana Pavlick; Daniel Ilić; Roman Hesslow; Alexandra Castagné; François Sasha Luccioni; Matthias Yvon; Gallé", "journal": "", "ref_id": "b26", "title": "Bloom: A 176bparameter open-access multilingual language model", "year": "2022" }, { "authors": "Anirudh Srinivasan; Gauri Kholkar; Rahul Kejriwal; Tanuja Ganu; Sandipan Dandapat; Sunayana Sitaram; Balakrishnan Santhanam; Somak Aditya; Kalika Bali; Monojit Choudhury", "journal": "", "ref_id": "b27", "title": "Litmus predictor: An ai assistant for building reliable, high-performing and fair multilingual nlp systems", "year": "2022" }, { "authors": "Ishan Tarunesh; Syamantak Kumar; Preethi Jyothi", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "From machine translation to code-switching: Generating high-quality code-switched text", "year": "2021" }, { "authors": "Alexey Tikhonov; Max Ryabinin", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "It's All in the Heads: Using Attention Heads as a Baseline for Cross-Lingual Transfer in Commonsense Reasoning", "year": "2021" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b30", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale", "journal": "", "ref_id": "b31", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "2023" }, { "authors": "Yizhong Wang; Yeganeh Kordi; Swaroop Mishra; Alisa Liu; Noah A Smith; Daniel Khashabi; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Self-instruct: Aligning language models with self-generated instructions", "year": "2023" }, { "authors": "Chenxi Whitehouse; Fenia Christopoulou; Ignacio Iacobacci", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "EntityCS: Improving zero-shot cross-lingual transfer with entity-centric code switching", "year": "2022" }, { "authors": "Genta Indra Winata; Alham Fikri Aji; Samuel Cahyawijaya; Rahmad Mahendra; Fajri Koto; Ade Romadhony; Kemal Kurniawan; David Moeljadi; Radityo Eko Prasojo; Pascale Fung; Timothy Baldwin; Jey ; Han Lau; Rico Sennrich; Sebastian Ruder", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "NusaX: Multilingual parallel sentiment dataset for 10 Indonesian local languages", "year": "2023" }, { "authors": "Genta Indra Winata; Andrea Madotto; Chien-Sheng Wu; Pascale Fung", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Code-switched language models using neural based synthetic data from parallel sentences", "year": "2019" }, { "authors": "Zeqiu Wu; Yushi Hu; Weijia Shi; 
Nouha Dziri; Alane Suhr; Prithviraj Ammanabrolu; Noah A Smith; Mari Ostendorf; Hannaneh Hajishirzi", "journal": "", "ref_id": "b36", "title": "Fine-grained human feedback gives better rewards for language model training", "year": "2023" }, { "authors": "Mengzhou Xia; Antonios Anastasopoulos; Ruochen Xu; Yiming Yang; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Predicting performance for natural language processing tasks", "year": "2020" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "mT5: A massively multilingual pre-trained text-to-text transformer", "year": "2021" }, { "authors": "", "journal": "Fine-tuned Training data LLM AVG EN RU ZH ES AR HI ID TE SW EU MY Dolly", "ref_id": "b39", "title": "Table 9: Accuracy on XWinograd with English generated data from different LLMs", "year": "" } ]
[]
10.18653/v1/2022.gebnlp-1.9
2023-11-13
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b55", "b36", "b44", "b8", "b21", "b12", "b50", "b31", "b37", "b14", "b5", "b47", "b63", "b48", "b27", "b23" ], "table_ref": [], "text": "Task-specific models proposed for speech recognition, toxicity detection, and language identification have previously been documented to present biases for certain language varieties, particularly for African American Language (AAL) (Sap et al., 2022;Koenecke et al., 2020;Meyer et al., 2020;Blodgett and O'Connor, 2017). There has been little investigation, however, of the possible language variety biases in Large Language Models (LLMs) (Dong et al., 2019;Brown et al., 2020;Raffel et al., 2020), which have unified multiple tasks through language generation.\nWhile there are largely beneficial and socially relevant applications of LLMs, such as in alleviating barriers to mental health counseling1 and medical healthcare (Hsu and Yu, 2022) access, there is also potential for biased models to exacerbate existing societal inequalities (Kordzadeh and Ghasemaghaei, 2022;Chang et al., 2019;Bender et al., 2021). Past algorithms used in psychiatry and medicine have been shown to be racially biased, in some cases leading to, for example, underestimating patient risk and denial of care (Obermeyer et al., 2019;Straw and Callison-Burch, 2020). Furthermore, LLMs capable of understanding AAL and other language varieties also raise important ethical implications, such as enabling increased police surveillance of minority groups (see Patton et al. 2020 and section 8 for further discussion). Therefore, it is necessary to investigate the potential language variety biases of language generation models to both increase accessibility of applications with high social impact and also anticipate possible harms when deployed.\nMoreover, prior work (Grieser, 2022) has shown that African American speakers talking about racerelated issues use language in ways which may draw on morphosyntactic features of AAL in order to subtly foreground the race aspect of the discussion topic without explicit mention. Most training corpora include little representation of AAL (see further discussion in section 3), and even those that do can still fail to capture its significant regional and contextual variation (see Farrington et al. 2021 for examples). Without the ability to interpret these subtler meanings of AAL, LLMs will undoubtedly exacerbate the misunderstandings which already take place between AAL speakers and other communities." }, { "figure_ref": [], "heading": "AAL WME Source Text", "publication_ref": [], "table_ref": [], "text": "Since RED gone, my HEAD gone & dats thee ONLY shit WRK.\nSince Red is gone, my head is gone, and that's the only thing working.\nModel-Generated AAL Model-Generated WME" }, { "figure_ref": [], "heading": "ChatGPT Counterpart", "publication_ref": [], "table_ref": [], "text": "Since Red ain't around, my head ain't right, and that's the only thing keepin' me going.\nSince Red left, my head is gone and that's the only thing that works." }, { "figure_ref": [], "heading": "GPT-4 Counterpart", "publication_ref": [ "b38", "b25", "b27", "b2", "b1", "b1", "b64", "b29", "b28" ], "table_ref": [ "tab_0", "tab_0" ], "text": "Since Red gone, my head gone, and that's the only thing workin'.\nSince Red left, my head hasn't been right and that's the only thing that works. 
Given the lack of African American representation in LLMs and the possible harms to the AALspeaking community, we focus on LLMs' understanding of AAL to investigate biases. AAL is a language variety which follows consistent morphological, syntactic, and lexical patterns distinct from WME, such as the dropped copula (e.g., \"she at work\") and aspect markers (e.g., the habitual be: \"he be running\") (Lanehart, 2001;Green, 2009). We use Grieser (2022)'s definition of AAL as the grammatically patterned variety of English used by many, but not all and not exclusively, African Americans in the United States. Following Baker-Bell (2020) and Alim and Smitherman (2012), we also use the definition of White Mainstream English (WME) as the dialect of English reflecting the linguistic norms of white Americans. While previous linguistic literature occasionally uses the terms \"Standard American English\" and \"African American Vernacular English,\" we employ AAL and WME instead to avoid the implication that AAL and other language varieties are \"non-standard\" and to more precisely identify the demographics of prototypical WME speakers, similarly to Baker-Bell (2020) and Alim and Smitherman (2012). Examples of AAL and WME are shown in Table 1.\nWe evaluate understanding of AAL by LLMs through production of language in each variety using automatic metrics and human judgments for two tasks: a counterpart generation task akin to dialect translation (Wan et al., 2020;Harrat et al., 2019) (see examples in Table 1) and a masked span prediction (MSP) task where models predict a phrase that was removed from their input, similar to Groenwold et al. (2020). We summarize our contributions as follows: (1) we evaluate six pre-trained, large language models on two language generation tasks: counterpart generation between language varieties and masked span prediction; (2) we use a novel dataset of AAL text from multiple contexts (social media, hip-hop lyrics, focus groups, and linguistic interviews) with human-annotated counterparts in WME; and (3) we document performance gaps showing that LLMs have more difficulty both interpreting and producing AAL compared to WME; our error analysis reveals patterns of AAL features that models have difficulty interpreting in addition to those that they can understand." }, { "figure_ref": [], "heading": "Background: Bias", "publication_ref": [ "b6", "b56", "b2", "b3", "b53", "b6" ], "table_ref": [], "text": "In measuring AAL understanding, we identify evidence of bias through performance gaps and analysis of model behavior with each language variety. Following Blodgett et al. (2020), findings of bias could result in both allocational harms and representational harms posed by the evaluated models 2 .\nWhile LLMs are becoming more available and valuable resources, the models' lack of understanding of AAL limits their use by AAL speakers, and this disparity will only grow as the use of these models increases across social spheres. Our evaluation attempts to quantify these error disparities (Shah et al., 2020) by measuring models' understanding of AAL and WME texts. When LLMs do not perform equally well on different language varieties, the LLM itself as a resource becomes unfairly allocated, and speakers of minoritized language varieties like AAL are less able to leverage the benefits of LLMs. 
AAL speakers would be particularly unfairly impacted with applications in areas of health, including mental health.\nAdditionally, our evaluation includes a qualitative analysis of how AAL is currently understood and produced by LLMs. Prior sociolinguistic works discuss and study how attitudes toward African American speakers have formed linguistic prejudices against AAL (Baker-Bell, 2020; Baugh, 2015), as well as how stereotyped uses of AAL by non-AAL speakers can perpetuate racial divides (Ronkin and Karn, 1999). Stereotypical or offensive uses of AAL by LLMs thus reflect a representational harm to AAL speakers that can further promote these views. (Allocational harms are reflected in the unfair distribution of resources and opportunities among social groups, while representational harms are reflected in disparate or harmful representations of a particular group; see Blodgett et al. (2020) for further discussion.) We advocate for approaches which carefully consider sociolinguistic variation in order to avoid generation of inappropriate speech across different settings." }, { "figure_ref": [], "heading": "Data", "publication_ref": [ "b20", "b19" ], "table_ref": [], "text": "Biases in pre-trained language models can often be attributed to common training datasets. Training corpora for LLMs are typically drawn from internet sources, such as large book corpora, Wikipedia, outbound links from Reddit, and, in the case of GPT-3, a filtered version of Common Crawl (Brown et al., 2020), which can severely under-represent the language of African Americans (Pew Research Center, 2018; Dolcini et al., 2021). Though few estimates of the presence of AAL in datasets exist, one study estimates that in the Colossal Clean Crawled Corpus (C4), only 0.07% of documents reflect AAL (Dodge et al., 2021). Beyond C4, African Americans are significantly underrepresented in data sources such as Wikipedia (0.05%) and news articles (6%; Pew Research Center 2023), falling well below the national average. Additionally, as models learn from gold-standard outputs provided by annotators, they learn to reflect the culture and values of the annotators as well." }, { "figure_ref": [], "heading": "Data Sources", "publication_ref": [ "b66", "b30", "b7", "b4" ], "table_ref": [], "text": "There is significant variation in the use of features of AAL depending on, for example, the region or context of the speech or text (Washington et al., 1998; Hinton and Pollock, 2000). Accordingly, we collect a novel dataset of AAL from six different contexts. We draw texts from two existing datasets, the TwitterAAE corpus (Blodgett et al., 2016) and transcripts from the Corpus of Regional African American Language (CORAAL; Kendall and Farrington 2021), as well as four datasets collected specifically for this work: we collect all available posts and comments from r/BlackPeopleTwitter belonging to "Country Club Threads", which designates threads where only Black Redditors and other users of color may contribute. Given the influence of AAL on hip-hop music, we collect hip-hop lyrics from 27 songs, 3 from each of 9 Black artists drawn from Morgan (2001) and Billboard's 2022 Top Hip-Hop Artists. Finally, we use the transcripts of 10 focus groups concerning grief and loss in the Harlem African American community, conducted as part of ongoing work by the authors to better understand the impacts of police brutality and other events on the grief experiences of African Americans.
Following Bender and Friedman (2018), a data statement with further details is included in Appendix A. 50 texts are sampled from each dataset, resulting in 300 candidate texts in total. We use a set of surface level and grammatical patterns to approximately weight each sample by the density of AAL-like language within the text (patterns are listed in Appendix B). 12 additional texts are also sampled from each dataset for fine-tuning." }, { "figure_ref": [], "heading": "Data Annotations", "publication_ref": [ "b11" ], "table_ref": [], "text": "Our interdisciplinary team includes computer scientists, linguists, and social work scientists and thus, we could recruit knowledgeable annotators to construct semantically-equivalent re-writings of AAL texts into WME, referred to as counterparts. The four human annotators included 2 linguistics students, 1 computer science student, and 1 social work scientist, all of whom self-identify as AAL speakers and thus have knowledge of the linguistic and societal context of AAL and racial biases. These annotators were familiar with both AAL and WME, allowing them to provide accurate annotations and judgements of model generations in both language varieties. Annotators were asked to rewrite the AAL text in WME, ensuring that the counterparts conserve the original meaning and tone as closely as possible (see Appendix C.1).\nTo compute inter-annotator agreement, we asked each annotator to label the 72 additional texts, and they also shared a distinct 10% of the remainder of the dataset with each other annotator. We compute agreement using Krippendorff's alpha with Levenshtein distance (Braylan et al. 2022 " }, { "figure_ref": [], "heading": "Methods", "publication_ref": [ "b28", "b39" ], "table_ref": [], "text": "We evaluate multiple language generation models using two tasks. In the counterpart generation task, we evaluate models on producing near semantically-equivalent WME text given AAL text and vice versa to target LLMs' ability to interpret and understand AAL. A second task, masked span prediction, requires models to predict tokens to replace words and phrases hidden or masked from the input. This task resembles that of Groenwold et al. (2020), but spans vary in length and position. Much like BART pre-training (Lewis et al., 2020), span lengths are drawn from a Poisson distribution (λ = 2) and span locations are sampled uniformly across words in the original text. We independently mask noun phrases, verb phrases, and random spans from the text for more fine-grained analysis.\nWhile our focus is on measuring model capabilities in interpreting AAL7 , these generation tasks allow us to test whether the model understands the language well enough to produce it. It is not our goal to produce a LLM that can generate AAL within the context of downstream tasks (see section 8 for further discussion)." }, { "figure_ref": [ "fig_0" ], "heading": "Models", "publication_ref": [ "b50", "b39", "b24" ], "table_ref": [], "text": "We consider six different models for the two tasks where applicable: GPT-3 (Brown et al., 2020); its chat-oriented successor, ChatGPT (GPT-3.5)8 ; GPT-4 (OpenAI, 2023), currently OpenAI's most advanced language model ; T5 (Raffel et al., 2020); its instruction-tuned variant, Flan-T5 (Chung et al., 2022); and BART (Lewis et al., 2020). Flan-T5, GPT-3, ChatGPT, and GPT-4 are evaluated on the counterpart generation task, while GPT-3, BART, and T5 are evaluated on the MSP task. 
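As an illustration of the random-span masking used for the MSP task (span lengths drawn from a Poisson distribution with λ = 2 and positions sampled uniformly over the words), a minimal sketch follows; the helper name and mask token are our own choices, not taken from the paper's code.

import numpy as np

def mask_random_span(text, mask_token="<mask>", lam=2.0, rng=None):
    # Replace a single random span of words with a mask token.
    # Returns the masked text and the reference span that was removed.
    rng = rng or np.random.default_rng()
    words = text.split()
    if not words:
        return text, ""
    # Span length ~ Poisson(lambda = 2), clipped to [1, len(words)].
    length = int(np.clip(rng.poisson(lam), 1, len(words)))
    # Span start sampled uniformly over the valid positions.
    start = int(rng.integers(0, len(words) - length + 1))
    span = " ".join(words[start:start + length])
    masked = " ".join(words[:start] + [mask_token] + words[start + length:])
    return masked, span

# Noun-phrase and verb-phrase masking work the same way, except that the span
# boundaries come from a syntactic parse instead of random sampling.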
We note that the GPT models besides GPT-3 were not included in the MSP task because token probabilities are not provided by the OpenAI API for chat-based models. An example of the instruction provided to GPT models is shown in Figure 1. Notably, the instruction text provided to GPT models uses "African American Vernacular English" and "Standard American English" because prompts with these terms were assigned lower perplexity than "African American Language" and "White Mainstream English" by all GPT models, and lower-perplexity prompts have been shown to improve task performance (Gonen et al., 2022). Additionally, GPT models are simply asked to translate with no additional instructions in order to examine their natural tendency on tasks involving AAL text. We evaluate both Flan-T5 fine-tuned on the 72 additional texts, referred to as Flan-T5 (FT) in the results, and Flan-T5 without fine-tuning (with automatic metrics only). Additional modeling details and generation hyperparameters are included in Appendices E.1 and E.2." }, { "figure_ref": [], "heading": "Metrics", "publication_ref": [ "b67", "b68" ], "table_ref": [], "text": "We use both automatic and human evaluation metrics for the counterpart generation task. As with most generation tasks, we first measure n-gram overlap between the model generations and gold-standard reference texts; in our experiments, we utilize the Rouge metric. In addition, to account for the weaknesses of word-overlap measures, we also measure coverage of gold-standard references with BERTScore (Zhang* et al., 2020) using the microsoft/deberta-large-mnli checkpoint, because it is better correlated with human scores than other models (see https://docs.google.com/spreadsheets/d/1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI/edit). Specifically, the original AAL is the gold standard for model-generated AAL, and the human-annotated WME counterparts are the gold standard for model-generated WME. In some experiments, Rouge-1, Rouge-L, and BERTScore are presented as gaps, where scores for generating WME are subtracted from those for generating AAL. Due to the tendency of models to avoid toxic outputs and neutralize text, we also consider the percentage of toxic terms removed when transitioning from model inputs in one language variety to outputs in the other. Toxicity scores are derived as the number of words categorized as offensive in the word list of Zhou et al. (2021), and the percent change between inputs and outputs is calculated as (Tox_in - Tox_out) / Tox_in. Human evaluation is also conducted on the generated counterparts. The same linguistics student, computer science student, and social work scientist involved in creating the dataset of aligned counterparts were also asked to judge model generations. As a baseline, human-generated counterparts are included in the human evaluation. 100 WME and AAL texts, along with their generated and annotated counterparts, are randomly sampled from the dataset for human evaluation. We ensure that annotators do not rate human- or model-generated counterparts for which they initially generated the WME counterpart. All annotators are asked to rate each assigned counterpart using 5-point Likert scales on the dimensions below.\nHuman-likeness (Human-like) measures whether the annotator believes that the text was generated by a human or a language model. Linguistic Match (Dialect) measures how well the language of the counterpart is consistent with the intended English variety (i.e., AAL or WME).
Meaning Preservation (Meaning) measures how accurately the counterpart conveys the meaning of the original text. And finally, Tone Preservation (Tone) measures how accurately the counterpart conveys the tone or other aspects beyond meaning of the original text. Additional details on the judgment instructions are included in Appendix C.2.\nIn the masked span prediction task, span predictions are evaluated using automated metrics: model perplexity of the reference span, and the entropy of the model's top 5 most probable spans. With the exception of GPT-3, experiments are repeated 5 times, randomly sampling spans to mask in each trial. Metrics are reported as the percent change in perplexity between WME and AAL." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Are AAL and WME Metrics", "publication_ref": [ "b13" ], "table_ref": [], "text": "Comparable?\nStudies such as Bugliarello et al. ( 2020) might suggest that in translation-like tasks, it is invalid to compare results from automatic metrics, such as BLEU, cross-lingually because: (1) different languages may use different numbers of words to convey the same meaning, and (2) models for different languages utilize different tokenization schemes.\nThough we emphasize that AAL and WME are language varieties of English rather than distinct languages, a similar argument may be made that their Rouge scores are not directly comparable. However, the counterpart generation task setting does not suffer from either of the aforementioned weaknesses. To show this, we calculate differences in the number of words and 1-gram Type-Token Ratio for AAL and WME text pairs in our dataset.\nAs shown in Table 3, the total number of words in the AAL and WME texts are similar, and we find that the lengths of each pair of texts differ by less than 1/10th of a word (0.095) on average. Bugliarello et al. (2020) also finds that among metrics studied, translation difficulty is most correlated with the Type-Token Ratio (TTR) of the target language. Table 3 shows that the difference in the 1-gram TTR between AAL and WME is not statistically significant. Finally, as the same models are applied to both AAL and WME texts, the tokenization schemes are also identical. Therefore, the identified weaknesses of cross-lingual comparison do not apply to our results. " }, { "figure_ref": [ "fig_1", "fig_1", "fig_2", "fig_2", "fig_4", "fig_5", "fig_6" ], "heading": "Counterpart Generation", "publication_ref": [], "table_ref": [], "text": "Figure 2 shows results using automatic coverage metrics on counterpart generations in AAL and WME. Rouge-1, Rouge-L and BERTScore (the coverage scores) for model output are computed over the generated AAL or WME in comparison to the corresponding gold standards. We note that the models consistently perform better when generating WME, indicating that it is harder for models to reproduce similar content and wording as the gold standard when generating AAL. ChatGPT is the worst model for producing WME from AAL, and ChatGPT and GPT-3 are nearly equally bad at producing AAL from WME. Flan-T5 (FT) does best for both language varieties, likely due to the fact that Flan-T5 (FT) was directly fine-tuned for the task. Flan-T5 without fine-tuning performs comparatively with slightly lower coverage scores in both directions. 
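One plausible way to compute the coverage scores used here and in Figure 2 is sketched below, using the rouge-score and bert-score packages with the checkpoint named in the Metrics section; this is an illustrative sketch under our own assumptions about data handling, not the authors' evaluation script.

from rouge_score import rouge_scorer
from bert_score import score as bertscore

def coverage_scores(generations, references):
    # Rouge-1 / Rouge-L F1 and BERTScore F1 of generations against gold references
    # (original AAL for model-generated AAL, annotated WME for model-generated WME).
    scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
    rouge1, rougeL = [], []
    for gen, ref in zip(generations, references):
        r = scorer.score(ref, gen)   # rouge-score signature: score(target, prediction)
        rouge1.append(r["rouge1"].fmeasure)
        rougeL.append(r["rougeL"].fmeasure)
    _, _, f1 = bertscore(generations, references,
                         model_type="microsoft/deberta-large-mnli")
    return {"rouge1": sum(rouge1) / len(rouge1),
            "rougeL": sum(rougeL) / len(rougeL),
            "bertscore_f1": f1.mean().item()}

# The gaps reported in some experiments are then
# coverage_scores(generated_aal, original_aal) minus
# coverage_scores(generated_wme, annotated_wme), metric by metric.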
We also compute the coverage scores between the original AAL text from the dataset and the human annotated counterparts in WME, labeled as \"Human\" in Figure 2. Models tend to generate counterparts with lower coverage scores than the text input to the model, which reflects the alternative language variety. This suggests that it is difficult for models to generate counterparts in either direction.
Figure 3 shows human judgments of model-generated WME and model-generated AAL. With the exception of Flan-T5 (FT), we see that model-generated WME is judged as more human-like and closer to the intended language variety than model-generated AAL. These results confirm findings from automatic metrics, showing that models more easily generate WME than AAL. In contrast, for meaning and tone, the reverse is true, indicating that models generate WME that does not match the meaning of the original AAL. The differences between scores on AAL and WME were significant on all metrics for at least two of the models as determined by a two-tailed t-test of the means (see * in Figure 3 for models with significant differences). We see also that the drop in meaning and tone scores from judgments on human WME is larger than the drop in human-like and dialect scores on WME. These observations suggest that models have a hard time interpreting AAL. Toxicity scores (Figure 4) show that models tend to remove toxic words when generating both AAL and WME. (Models are developed to avoid generating toxic or offensive language, so the trend of neutralizing input texts in any dialect is expected. There are notable differences, however, in the extent to which this neutralization occurs: a significantly higher proportion of toxic language is removed when generating WME from AAL than in the reverse direction.) To test whether removal of toxic words contributed to the inability to preserve meaning, we computed both coverage scores and human evaluation scores on two subsets of the data: \"toxic\" texts (at least one term categorized as offensive) and \"non-toxic\" texts (no terms categorized as offensive). We display these results as the difference in coverage scores and in human judgment scores between model-generated AAL and WME as shown in Figure 5 and Figure 6. Positive scores indicate that AAL performs better. Here we see that human judgments on meaning and tone show that generated WME is worse than generated AAL for both toxic and non-toxic subsets. Thus, differences in use of toxic words between input and output cannot be the sole cause for lower scores on meaning and tone. This confirms that models have difficulty interpreting features of AAL. We note furthermore that gaps in coverage are consistently larger for the non-toxic subsets of the data, demonstrating that the use of profanity and toxic language are also not the primary cause of gaps in coverage metrics." }, { "figure_ref": [ "fig_2", "fig_6", "fig_7", "fig_8" ], "heading": "Masked Span Prediction", "publication_ref": [ "b28", "b51", "b26", "b62", "b8" ], "table_ref": [ "tab_3" ], "text": "Negative percent changes in perplexity indicate that it is easier for models to predict spans in WME than in AAL, while negative changes in entropy indicate that models place higher probability on their top predictions for WME than for AAL sentences.
Discussion: How well do models interpret AAL?
We discussed earlier how Figure 3 and Figure 6 demonstrate that models have difficulty interpreting AAL when generating WME.
Figure 7 supports this finding as well, as models generally output higher perplexities for masked spans in AAL compared to aligned WME. The largest gaps in perplexity between the two language varieties are assigned to masked verb phrases. One set of distinct features characterizing AAL is verbal aspects which do not occur in WME, such as the future gone (e.g., I'm gone do it later), so this result may suggest that models struggle with the use of AAL-specific aspects in particular over other AAL features. A similar trend is found in the entropy metric, suggesting that AAL text also lowers models' confidence in their own predictions. These results for AAL support similar findings by Groenwold et al. (2020) for GPT-2 in an auto-completion setting.
Manual inspection revealed more fine-grained patterns of model behavior within aspectual verbs and other AAL features. Models seem to correctly interpret some specific features of AAL, namely: the use of ain't, double negation, and habitual be.
Examples of misinterpretation, however, are shown in Table 4, illustrating difficulty with several other aspects of AAL. Several mistakes involve lexical interpretation, such as in example 1, where the model is not able to interpret the meaning of \"he faded\" as \"he's high\", and example 2, where the model inserts shorty apparently intending the meaning \"youth\" instead of its more common meaning of \"girlfriend\". The models also struggle with features that appear the same as in WME, but have slightly different meanings in AAL. These include remote past been (example 2), which is incorrectly interpreted as past perfect (have been), and existential it (example 3), which in WME is closest in meaning to \"there\" as in \"there are ...\" and is not correctly interpreted by any model. We also include an example where GPT-4 misinterprets the phrase \"a nigga\" as referencing another person, when in the provided context, the use most closely resembles referencing oneself. The word nigga is one of the N-words, a set of lexical items well-documented by linguists as being misunderstood by non-native speakers in terms of their syntactic and semantic complexity (Rahman, 2012;Grieser, 2019;Smith, 2019). In particular, while the model removes the word in generating the WME counterpart, it does not correctly understand the use of the N-word as referencing the subject. Without this understanding, it is probable that models will both misinterpret the words as toxic and use them in ways that are considered offensive to the AAL-speaking community.
In additional analysis of counterpart generations, we examined model performance gaps in each subset of the AAL dataset. Among subsets of the data, gaps between Rouge-1 metrics for AAL and WME counterparts vary significantly. GPT-4, for example, presents the largest performance gap for the TwitterAAE corpus (Blodgett and O'Connor, 2017), and the smallest gaps for the hip-hop and focus group subsets as shown in Figure 8. Manual inspection reveals that this aligns with the trends in AAL use among the subsets as well: distinct features of AAL appear to be more frequent in the TwitterAAE dataset, while AAL features in the focus group transcripts appear to be more sparse. This pattern may be due to the makeup and context of the focus groups, as most participants were college-educated, working professionals and selected to specifically describe grief experiences, possibly affecting the use of AAL in the discussions.
These results may suggest, as would be expected, that higher density of AAL features leads to larger performance gaps." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b69", "b65", "b0", "b59", "b61", "b35", "b18", "b57", "b40", "b25", "b33", "b9", "b42", "b17", "b54", "b32", "b36", "b41", "b43", "b68", "b52", "b55", "b28" ], "table_ref": [], "text": "While few have specifically focused on bias against AAL in language generation, related work has extensively investigated societal biases in language tasks. One large-scale study (Ziems et al., 2022) investigates performance on the standard GLUE benchmark (Wang et al., 2018) using a synthetically constructed AAL version of GLUE for fine-tuning. They show performance drops on a small human-written AAL test set unless the RoBERTa model is fine-tuned.
Racial Bias in Generation. Mitigating and evaluating social biases in language generation models is a challenging problem due to the apparent tradeoffs between task performance and bias mitigation, the many possible sources of bias, and the variety of biases and perspectives to examine (Sheng et al., 2021b;Akyürek et al., 2022). A number of studies have proposed bias evaluation measures, often using prompts crafted to reveal biased associations of, for example, occupation and gender (i.e., \"The [man/woman] worked as a ...\") (Sheng et al., 2020, 2019;Kiritchenko and Mohammad, 2018;Dhamala et al., 2021;Shen et al., 2022) and in other cases, graph representations to detect subjective bias in summarization (Li et al., 2021) and personas for dialogue generation (Sheng et al., 2021a). However, the bias measurements in many of these approaches are not directly applicable to language in a natural setting, where the real-life harmful impacts of bias in language generation would be more prevalent.
AAL Feature Extraction. Past work makes progress in lowering performance gaps between AAL and WME by focusing on linguistic feature extraction tasks. Given that some features of AAL such as the aspectual verbs (i.e., habitual be, remote past been) do not have equivalent meanings and functions in WME (Green, 2009), standard part-of-speech (POS) taggers and dependency parsers cannot maintain performance for AAL text. Studies have attempted to lessen this gap by creating a POS tagger specifically for AAL through domain adaptation (Jørgensen et al., 2016) and a dependency parser for AAL in Tweets (Blodgett et al., 2018). Beyond these tasks, considerable attention has been given to developing tools for features specific to AAL and other language varieties, such as detecting dialect-specific constructions (Masis et al., 2022;Demszky et al., 2021;Santiago et al., 2022;Johnson et al., 2022) to aid in bias mitigation strategies.
AAL in Language Tasks. Bias has also been measured specifically with respect to AAL in downstream, user-facing tasks. With the phonological differences between AAL and WME, automatic speech recognition (ASR) systems have shown large performance drops when transcribing speech from African American speakers (Koenecke et al., 2020;Martin and Tang, 2020;Mengesha et al., 2021). Toxicity detection and offensive language classification models have also been evaluated and have shown a higher probability of incorrectly labeling AAL text as toxic or offensive when compared to WME text (Zhou et al., 2021;Rios, 2020;Sap et al., 2022).
Most closely related to this work, one study evaluated bias against AAL in transformer generation models, showing that in a sentence auto-completion setting, GPT-2 generates AAL text with more negative sentiment than in aligned WME texts (Groenwold et al., 2020). Further investigation of both a larger set of language generation models as well as a broader set of generation tasks would provide a clearer picture of model biases against AAL." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We demonstrate through investigation of two tasks, counterpart generation and masked span prediction, that current LLMs have difficulty both generating and interpreting AAL. Our results show that LLMs do better matching the wording of gold standard references when generating WME than when generating AAL, as measured by Rouge and BERTScore. Human evaluation shows that LLM output is more likely to be judged as human-like and to match the input dialect when generating WME than AAL. Notably, however, LLMs show difficulty in generating WME that matches the meaning and tone of the gold standard, indicating difficulty in interpreting AAL. Our results suggest that more work is needed in order to develop LLMs that can appropriately interact with and understand AAL speakers, a capability that is important as LLMs are increasingly deployed in socially impactful contexts (e.g., medical, crisis)." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b68" ], "table_ref": [], "text": "We acknowledge a few limitations accompanying the evaluation of biases in LLMs. While our analysis is primarily restricted to intrinsic evaluation of model biases, users primarily interact with LLMs in a chat-based interface such as with ChatGPT, or use the model for specific tasks such as question answering. This approach was chosen to analyze biases that would be present across all tasks involving AAL. Performance gaps and biases analyzed in a task-specific setting, however, may yield different trends than presented in this paper, and we leave this investigation to future work.\nAdditionally, AAL exhibits significant variation by region, context, speaker characteristics, and many other variables. We attempt to more comprehensively reflect real AAL use by drawing text from multiple sources and contexts, but are ultimately limited by the data available. For example, while CORAAL reflects natural AAL speech, it is limited to a select set of regions (e.g., New York, Georgia, North Carolina, Washington DC), and while the Twitter and Reddit AAL subsets may span many regions, they are also influenced by the linguistic features of social media. Similar biases may also exist in other underrepresented varieties of English such as Mexican American English, Indian English, or Appalachian English. Due to the availability of data, we focus on AAL, but given texts in other varieties, this work could be extended to examine biases regarding these and other language varieties.\nFinally, evaluation metrics relying on trained models or lexicons, such as BERTScore and toxicity measures, may also inherently encode biases concerning AAL text. Rather than using a model to measure toxicity, we instead use a lexicon of offensive terms provided in Zhou et al. (2021) and used to measure lexical biases in toxicity models. 
Given that analyzing performance gaps relies on accurate and unbiased measures of model performance, future work may give attention to developing unbiased language generation metrics." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [ "b48", "b16", "b10" ], "table_ref": [], "text": "We recognize that identifying potential bias against AAL in LLMs should also include a critically reflexive analysis of the consequences if language models are better at understanding language varieties specific to marginalized communities such as AAL, and the extent to which that impacts those speakers. In prior research, Patton et al. (2020) have noted that decisions made by researchers engaged in qualitative analysis of data through language processing should understand the context of the data and how algorithmic systems will transform behavior for individual, community, and system-level audiences. Critical Race Theory posits that racism exists across language practices and interactions (Delgado and Stefancic, 2023). Without these considerations, LLMs capable of understanding AAL could inadvertently be harmful in contexts where African Americans continue to be surveilled (e.g., social media analysis for policing).\nDespite this, including African American representation in language models could potentially benefit AAL speakers in socially impactful areas, such as mental health and healthcare (e.g., patient notes that fully present the pain African American patients are experiencing in the emergency room, Booker et al. 2015). Considering both the potential for misuse of the data as well as the potential for social good, we will make the Tweet IDs and other collected data available to those that have signed an MOU indicating that they will use the data for research purposes only, limiting research to improved interpretation of AAL in the context of an application for social good. In the MOU, applicants must include their intended use of the data and sign an ethics agreement." }, { "figure_ref": [], "heading": "A Data Statement", "publication_ref": [], "table_ref": [], "text": "We provide details about our dataset in the following data statement. Much of the dataset is drawn from existing datasets that lack data statements, and in those cases, we include what information we can." }, { "figure_ref": [], "heading": "A.1 Curation Rationale", "publication_ref": [ "b34" ], "table_ref": [], "text": "The dataset was collected in order to study the robustness of LLMs to features of AAL. The data is composed of AAL-usage in a variety of regions and contexts to capture the variation in the use of and density of features. In order to better ensure the included texts reflect AAL, we sample texts from social media, sociolinguistic interviews, focus groups, and hip-hop lyrics and weight the probability of sampling a text using a small set of known AAL morphosyntactic features. The datasets that were previously collected, CORAAL Kendall andFarrington 2021 andTwitterAAE (Blodgett et al., 2016), were originally created to study AAL and to study variation in AAL on social media respectively. For all texts in the dataset, we also collect human-annotated counterparts in WME to provide a baseline for model evaluations." }, { "figure_ref": [], "heading": "A.2 Language Variety", "publication_ref": [], "table_ref": [], "text": "All texts included in the dataset are in English (en-US) as spoken or written by African Americans in the United States with a majority of texts reflecting linguistic features of AAL. 
Some texts notably contain no features of AAL and reflect WME." }, { "figure_ref": [], "heading": "A.3 Speaker Demographics", "publication_ref": [], "table_ref": [], "text": "Most speakers included in the dataset are African American. The r/BPT texts were restricted to users who have been verified as African American, CORAAL and focus group transcripts were originally interviews with African Americans, and hip-hop lyrics were restricted to African American artists. The TwitterAAE dataset is not guaranteed to be entirely African American speakers, but the texts are primarily aligned with AAL and have a high probability of being produced by AAL speakers. Other demographics such as age and gender are unknown." }, { "figure_ref": [], "heading": "A.4 Annotator Demographics", "publication_ref": [], "table_ref": [], "text": "While all AAL texts in the dataset reflect natural usage of AAL, the WME counterparts in the dataset are annotated. We recruited 4 human annotators to generate WME counterparts for each text. All annotators self-identify as African American, self identify as AAL speakers, and are native English speakers. Additionally, the 4 annotators are undergraduate and graduate students aged 20-28, 2 of whom were graduate students in sociolinguistics. All annotators were compensated at a rate between $18 and $27 per hour depending the annotator's university and whether they were an undergraduate or graduate student." }, { "figure_ref": [], "heading": "A.5 Speech Situation", "publication_ref": [], "table_ref": [], "text": "Speech situations vary among the 6 datasets we compose. The r/BPT posts, r/BPT comments, and TwitterAAE subsets are all originally typewritten text, intended for a broad audience, and are drawn from asynchronous online interactions. The CORAAL and focus group transcript subsets are originally spoken and later transcribed, intended for others in their respective conversations, and are drawn from synchronous in-person interactions. Finally, the hip-hop lyrics subset are both spoken and written, intended for a broad audience of hiphop listeners, and are likely repeatedly changed and edited before released. r/BPT comments and posts are sampled from the origin of the subreddit in October 2015, CORAAL transcripts are sampled from interviews between 1888 and 2005, hip-hop lyrics are drawn from songs released in 2022, focus groups were conducted between February and November 2022, and the time range of the Twitter-AAE dataset is unknown to the authors." }, { "figure_ref": [], "heading": "A.6 Text Characteristics", "publication_ref": [], "table_ref": [], "text": "Among the data subsets, the focus group transcripts are the most topically focused. All focus groups primarily included discussion surrounding the experiences and responses to grief in the Harlem community, focusing on experiences due to daily stressors, the death of loved ones, police shootings, and the COVID-19 pandemic. In the r/BPT posts and r/BPT comments subsets, texts were typically written in response to a tweet by an African American Twitter user, ranging from political commentary to discussion of the experience of African Americans in the United States. The hip-hop lyrics subset is not topically focused, but includes texts that follow specific rhyming patterns and meters. The remaining subsets of the data (TwitterAAE, CORAAL) span a variety of topics and structures." 
}, { "figure_ref": [], "heading": "B AAL Search Patterns", "publication_ref": [], "table_ref": [], "text": "To better ensure our dataset includes use of AAL features, we use a set of regex and grammar-based search patterns as part of the sampling procedure. Regex patterns for AAL features are listed below. The set also includes grammar-based patterns using the spacy POS tagger to detect the use of habitual be, completive done (or \"dun\", \"dne\"), future gone (or \"gne\", \"gon\"), and remote past been (or \"bin\"). Each of these features is detected by the use of the corresponding term (or its variants) if it is not preceded by another auxiliary verb or preposition in the clause (i.e., \"He be eating\" contains a use of habitual be, but \"Should he be eating?\" does not because the auxiliary verb \"should\" precedes \"be\" in the clause). While standard POS taggers could potentially underperform on AAL, there were no AAL-specific POS taggers available at the time of dataset collection to our knowledge." }, { "figure_ref": [], "heading": "C Annotation Procedure C.1 Counterpart Annotations", "publication_ref": [], "table_ref": [], "text": "Annotators were asked to provide a semantically-equivalent rewriting (or counterpart) of a given text from the AAL dataset in WME. The specific set of guidelines provided to annotators was:
1. Change alternative spellings (i.e., \"shoulda\" for \"should've\") 2. Maintain usernames, hashtags, and URLs if present 3. Ignore emojis unless speakers of WME may use them differently 4. Conserve and un-censor profanity 5. Avoid unnecessary changes 6. Use your best judgement in special cases As noted, annotators had the option to label a text as \"Not Interpretable\" if it lacks a reasonable counterpart in WME." }, { "figure_ref": [], "heading": "C.2 Counterpart Judgment Instructions", "publication_ref": [], "table_ref": [], "text": "In judging counterparts, annotators were provided with an original text from the dataset in either WME or AAL and a model-generated (or human-annotated) counterpart. Notably, judgments were assigned ensuring that no annotator received a text they were involved in creating the counterpart for. Additionally, annotators were not given definitions or guidelines for terms such as \"meaning\" or \"tone\" to avoid biasing judgements and to encourage annotators to use their own interpretation of the terms. The questions asked of annotators are as follows:
1. Human-likeness: Is the interpretation more likely generated by a human or language model? For judgment samples that involved judging WME counterparts, questions and response options that refer to \"AAL\" were changed accordingly." }, { "figure_ref": [], "heading": "D Annotator Agreement Calculation", "publication_ref": [ "b11" ], "table_ref": [], "text": "Because the task provided to annotators required generating text, we use Levenshtein distance and Krippendorff's Alpha to calculate annotator agreement based on the general form described in Braylan et al. 2022. Annotator agreement is calculated with the following formula:
α = 1 - D_o / D_e (1)
where D_o represents the observed distance between annotations, D_e represents the expected distance, and Levenshtein distance is used as the distance function D(a, b). Expected distance between two annotators is calculated by randomly shuffling one set of annotations and calculating the average Levenshtein distance between the randomized pairs."
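A minimal sketch of this agreement computation follows, assuming the two annotators' rewritings are stored as parallel lists of strings and that the python-Levenshtein package is available; it illustrates Equation (1) and is not the authors' exact implementation.

```python
# Illustrative sketch of alpha = 1 - D_o / D_e with Levenshtein distance as D(a, b).
# The parallel-list layout of the two annotators' rewritings is an assumption.
import random
import Levenshtein  # pip install python-Levenshtein

def observed_distance(ann_a, ann_b):
    # Average Levenshtein distance between aligned annotation pairs.
    return sum(Levenshtein.distance(a, b) for a, b in zip(ann_a, ann_b)) / len(ann_a)

def expected_distance(ann_a, ann_b, trials=100, seed=0):
    # Shuffle one set of annotations and average distances over random pairings.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        shuffled = list(ann_b)
        rng.shuffle(shuffled)
        total += observed_distance(ann_a, shuffled)
    return total / trials

def agreement_alpha(ann_a, ann_b):
    return 1.0 - observed_distance(ann_a, ann_b) / expected_distance(ann_a, ann_b)
```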
}, { "figure_ref": [], "heading": "E Model and Experiment Details E.1 Checkpoints and Training", "publication_ref": [], "table_ref": [], "text": "For GPT-3, ChatGPT, and GPT-4, we use the text-davinci-003, gpt-3.5-turbo, and gpt-4 checkpoints respectively. For the T5 model variants, we use the t5-large and google/flan-t5-large checkpoints. Flan-T5 is fine-tuned using a learning rate of 3e-5 for 5 epochs across the full set of 72 additional texts. Finally, for BART we use the facebook/bart-large checkpoint." }, { "figure_ref": [], "heading": "E.2 Generation Hyperparameters", "publication_ref": [], "table_ref": [], "text": "For all GPT-family models, we use the default temperature of 0.7 in generations. For the BART and T5 model variants, we use a beam width of 3, temperature of 1, and a no_repeat_ngram_size of 3." }, { "figure_ref": [], "heading": "F Full Counterpart Results", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Table 6 and Table 7 show the raw automatic metric and human judgment scores for the counterpart generation task respectively. Table 8 shows the percentage of toxic terms removed by the model when generating counterparts in AAL or WME. Finally, Table 9 and Table 10 present the raw automatic metric and human judgment scores on the toxic (at least one term categorized as offensive) and non-toxic (no terms categorized as offensive) subsets of the corpus." }, { "figure_ref": [], "heading": "G Full MSP Results", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Table 11 presents the raw perplexity and entropy scores in the Masked Span Prediction task." }, { "figure_ref": [], "heading": "H Additional Counterpart Generation Examples", "publication_ref": [], "table_ref": [ "tab_1", "tab_0", "tab_3", "tab_5", "tab_0", "tab_0", "tab_0", "tab_0" ], "text": "The remaining appendices, Tables 12-19, provide further examples from the counterpart generation task. Examples are drawn randomly from subsets where the total score given to one of the models evaluated exceeds the ratings of the original annotated counterpart and where the total score of a model is lower." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by grant IIS-2106666 from the National Science Foundation, National Science Foundation Graduate Research Fellowship DGE-2036197, the Columbia University Provost Diversity Fellowship, and the Columbia School of Engineering and Applied Sciences Presidential Fellowship. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We thank the anonymous reviewers and the following people for providing feedback on an earlier draft: Tuhin Chakrabarty, Esin Durmus, Fei-Tzin Lee, Smaranda Muresan, and Melanie Subbiah. We also thank Mayowa Fageyinbo, Gideon Kortenhoven, Kendall Lowe, and Tajh Martin for providing annotations." } ]
Warning: This paper contains content and language that may be considered offensive to some readers. While biases disadvantaging African American Language (AAL) have been uncovered in models for tasks such as speech recognition and toxicity detection, there has been little investigation of these biases for language generation models like ChatGPT. We evaluate how well LLMs understand AAL in comparison to White Mainstream English (WME), the encouraged "standard" form of English taught in American classrooms. We measure large language model performance on two tasks: a counterpart generation task, where a model generates AAL given WME and vice versa, as well as a masked span prediction (MSP) task, where models predict a phrase hidden from their input. Using a novel dataset of AAL texts from a variety of regions and contexts, we present evidence of dialectal bias for six pre-trained LLMs through performance gaps on these tasks.
Evaluation of African American Language Bias in Natural Language Generation
[ { "figure_caption": "Figure 1 :1Figure 1: Example prompt provided to GPT models in the counterpart generation task including the instruction and AAL text for which the model generates a WME counterpart (blue).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Automatic coverage metrics of model-generated AAL and WME counterparts. \"Human\" (purple) scores represent coverage metrics between the original AAL text and human-annotated WME counterparts. Significant differences between scores in the WME → AAL direction and in the AAL → WME direction are denoted by * (p≤ .05).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Human judgments for model-generated AAL and WME counterparts. \"Human\" (purple) scores represent judgments of original AAL text and human-annotated WME counterparts. Significant differences between scores in the WME → AAL direction and in the AAL → WME direction are denoted by * (p≤ .05). Flan-T5 without fine-tuning was evaluated with automatic metrics after human judgements were collected.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Comparison of Type-Token Ratios between the AAL and WME texts in the dataset. 95% confidence intervals calculated using the Wilson Score Interval are shown in parenthesis.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Percentage of toxicity removed for AAL and the respective aligned WME counterparts.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Gaps in Rouge-1 metric in non-toxic and toxic subsets of the AAL dataset. Negative Rouge gaps indicated greater WME performance than AAL.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Gaps in human judgments of human and model-generated counterparts broken down into toxic and non-toxic subsets. Negative scores indicate better WME performance.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Percent difference in perplexity and top-5 entropy between AAL and aligned WME texts. Negative percentages indicate lower WME perplexity/entropy than AAL perplexity/entropy.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Breakdown of Rouge score gaps and percent changes in toxicity by data source in counterpart generation. Negative Rouge values indicate higher WME Rouge scores, and positive ∆T scores indicate the model-generated counterpart is more toxic than the gold standard.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Examples of ChatGPT and GPT-4 counterpart predictions. 
Given text in either WME or AAL, models attempt a semantically-equivalent rewriting in the other language variety.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Characterization of the novel AAL dataset by text source including the number of text samples, length (in words) of aligned AAL and WME texts, Rouge-1 between aligned texts, and the average toxicity scores among dialects.", "figure_data": "; see Ap-", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Flan-T5 (FT), ChatGPT and GPT4 examples ofmodel counterparts neutralizing the input text (red) andmisinterpreting features of AAL or terms in the inputtext (blue).", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "ALL feature search patterns.", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Linguistic Match: How well does the interpretation resemble AAL? (1) Completely Unlike AAL, (2) Unlike AAL, (3) Neutral, (4) Like AAL, (5) Completely Like AAL 3. Meaning Preservation: How well does the interpretation reflect the meaning of the original text? (1) Completely innacurate, (2) Inaccurate, (3) Neutral, (4) Accurate, (5) Completely Accurate 4. Tone Preservation: How well does the counterpart accurately reflect the tone of the original text? (1) Completely innacurate, (2) Inaccurate, (3) Neutral, (4) Accurate, (5) Completely Accurate", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" } ]
Nicholas Deas; Jessi Grieser; Shana Kleiner; Desmond Patton; Elsbeth Turcan; Kathleen Mckeown
[ { "authors": "Afra Feyza; Akyürek ; Muhammed Yusuf Kocyigit; Sejin Paik; Derry Tanti; Wijaya ", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Challenges in measuring bias via open-ended language generation", "year": "2022" }, { "authors": "H Samy; Alim ; Geneva Smitherman", "journal": "Oxford University Press", "ref_id": "b1", "title": "Articulate while Black: Barack Obama, language, and race in the US", "year": "2012" }, { "authors": " Baker-Bell", "journal": "Routledge", "ref_id": "b2", "title": "Linguistic justice: Black language, literacy, identity, and pedagogy", "year": "2020-04" }, { "authors": "John Baugh", "journal": "Oxford University Press", "ref_id": "b3", "title": "SWB (Speaking while Black): Linguistic Profiling and Discrimination Based on Speech as a Surrogate for Race against Speakers of African American Vernacular English", "year": "2015" }, { "authors": "Emily M Bender; Batya Friedman", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b4", "title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science", "year": "2018" }, { "authors": "Emily M Bender; Timnit Gebru; Angelina Mcmillan-Major; Shmargaret Shmitchell", "journal": "Association for Computing Machinery", "ref_id": "b5", "title": "On the dangers of stochastic parrots: Can language models be too big?", "year": "2021" }, { "authors": "Lin Su; Solon Blodgett; Hal Barocas; Iii Daumé; Hanna Wallach", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Language (technology) is power: A critical survey of \"bias\" in NLP", "year": "2020" }, { "authors": "Lin Su; Lisa Blodgett; Brendan O' Green; Connor", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Demographic dialectal variation in social media: A case study of African-American English", "year": "2016" }, { "authors": "Lin Su; Brendan O' Blodgett; Connor", "journal": "", "ref_id": "b8", "title": "Racial disparity in natural language processing: A case study of social media african-american english", "year": "2017" }, { "authors": "Lin Su; Johnny Blodgett; Brendan O' Wei; Connor", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Twitter Universal Dependency parsing for African-American and mainstream American English", "year": "2018" }, { "authors": "\" Staja; Star; Chris Booker; Keela A Pasero; Herr", "journal": "Geriatric Nursing", "ref_id": "b10", "title": "Practice recommendations for pain assessment by self-report with african american older adults", "year": "2015" }, { "authors": "Alexander Braylan; Omar Alonso; Matthew Lease", "journal": "ACM", "ref_id": "b11", "title": "Measuring annotator agreement generally across complex structured, multi-object, and freetext annotation tasks", "year": "2022" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b12", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Emanuele Bugliarello; Sabrina J Mielke; 
Antonios Anastasopoulos; Ryan Cotterell; Naoaki Okazaki", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "It's easier to translate out of English than into it: Measuring neural translation difficulty by crossmutual information", "year": "2020" }, { "authors": "Kai-Wei Chang; Vinodkumar Prabhakaran; Vicente Ordonez", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Bias and fairness in natural language processing", "year": "2019" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Yunxuan Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Alex Chowdhery; Marie Castro-Ros; Kevin Pellat; Dasha Robinson; Sharan Valter; Gaurav Narang; Adams Mishra; Vincent Yu; Yanping Zhao; Andrew Huang; Hongkun Dai; Slav Yu; Ed H Petrov; Jeff Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b15", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Richard Delgado; Jean Stefancic", "journal": "NyU press", "ref_id": "b16", "title": "Critical race theory: An introduction", "year": "2023" }, { "authors": "Dorottya Demszky; Devyani Sharma; Jonathan Clark; Vinodkumar Prabhakaran; Jacob Eisenstein", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Learning to recognize dialect features", "year": "2021" }, { "authors": "Jwala Dhamala; Tony Sun; Varun Kumar; Satyapriya Krishna; Yada Pruksachatkun; Kai-Wei Chang; Rahul Gupta", "journal": "ACM", "ref_id": "b18", "title": "BOLD", "year": "2021" }, { "authors": "Jesse Dodge; Maarten Sap; Ana Marasović; William Agnew; Gabriel Ilharco; Dirk Groeneveld; Margaret Mitchell; Matt Gardner", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Documenting large webtext corpora: A case study on the colossal clean crawled corpus", "year": "2021" }, { "authors": "Margaret Dolcini; Jesse A Canchola; Joseph A Catania; Marissa M Song Mayeda; Erin L Dietz; Coral Cotto-Negrón; Vasudha Narayanan", "journal": "J Med Internet Res", "ref_id": "b20", "title": "National-level disparities in internet access among low-income and black and hispanic youth: Current population survey", "year": "2021" }, { "authors": "Li Dong; Nan Yang; Wenhui Wang; Furu Wei; Xiaodong Liu; Yu Wang; Jianfeng Gao; Ming Zhou; Hsiao-Wuen Hon", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Unified language model pre-training for natural language understanding and generation", "year": "2019" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b22", "title": "", "year": "" }, { "authors": "Charlie Farrington; Sharese King; Mary Kohn", "journal": "WIREs Cognitive Science", "ref_id": "b23", "title": "Sources of variation in the speech of african americans: Perspectives from sociophonetics", "year": "2021" }, { "authors": "Srini Hila Gonen; Terra Iyer; Noah A Blevins; Luke Smith; Zettlemoyer", "journal": "", "ref_id": "b24", "title": "Demystifying prompts in language models via perplexity estimation", "year": "2022" }, { "authors": "Lisa J Green", "journal": "Cambridge Univ. 
Press", "ref_id": "b25", "title": "African American English: A Linguistic Introduction", "year": "2009" }, { "authors": "Jessica A Grieser", "journal": "American Speech: A Quarterly of Linguistic Usage", "ref_id": "b26", "title": "Toward understanding the nwords", "year": "2019" }, { "authors": "Jessica A Grieser", "journal": "Georgetown University Press", "ref_id": "b27", "title": "The Black side of the river: Race, language, and belonging in", "year": "2022" }, { "authors": "Sophie Groenwold; Lily Ou; Aesha Parekh; Samhita Honnavalli; Sharon Levy; Diba Mirza; William Yang; Wang ", "journal": "", "ref_id": "b28", "title": "Investigating African-American Vernacular English in transformer-based text generation", "year": "2020" }, { "authors": "Salima Harrat; Karima Meftouh; Kamel Smaili", "journal": "Language Processing (ANLP) and its Applications", "ref_id": "b29", "title": "Machine translation for arabic dialects (survey)", "year": "2019" }, { "authors": "Linette N Hinton; Karen E Pollock", "journal": "World Englishes", "ref_id": "b30", "title": "Regional variations in the phonological characteristics of african american vernacular english", "year": "2000" }, { "authors": "I-Ching Hsu; Jiun-De Yu", "journal": "Multimedia Tools and Applications", "ref_id": "b31", "title": "A medical chatbot using machine learning and natural language understanding", "year": "2022" }, { "authors": "Alexander Johnson; Kevin Everson; Vijay Ravi; Anissa Gladney; Mari Ostendorf; Abeer Alwan", "journal": "", "ref_id": "b32", "title": "Automatic Dialect Density Estimation for African American English", "year": "2022" }, { "authors": "Anna Jørgensen; Dirk Hovy; Anders Søgaard", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Learning a POS tagger for AAVE-like language", "year": "2016" }, { "authors": "Tyler Kendall; Charlie Farrington", "journal": "", "ref_id": "b34", "title": "The corpus of regional african american language", "year": "2021" }, { "authors": "Svetlana Kiritchenko; Saif Mohammad", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Examining gender and race bias in two hundred sentiment analysis systems", "year": "2018" }, { "authors": "Allison Koenecke; Andrew Nam; Emily Lake; Joe Nudell; Minnie Quartey; Zion Mengesha; Connor Toups; John R Rickford; Dan Jurafsky; Sharad Goel", "journal": "Proceedings of the National Academy of Sciences of the United States of America", "ref_id": "b36", "title": "Racial disparities in automated speech recognition", "year": "2020" }, { "authors": "Nima Kordzadeh; Maryam Ghasemaghaei", "journal": "European Journal of Information Systems", "ref_id": "b37", "title": "Algorithmic bias: review, synthesis, and future research directions", "year": "2022" }, { "authors": "L Sonja; Lanehart", "journal": "John Benjamins Publishing", "ref_id": "b38", "title": "Sociocultural and historical contexts of African American English", "year": "2001" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Lei Li; Wei Liu; Marina Litvak; Natalia Vanetik; Jiacheng Pei; Yinan Liu; Siya Qi", "journal": "", "ref_id": "b40", "title": "Subjective bias in abstractive summarization", "year": "2021" }, { "authors": 
"Joshua L Martin; Kevin Tang", "journal": "", "ref_id": "b41", "title": "Understanding Racial Disparities in Automatic Speech Recognition: The Case of Habitual \"be", "year": "2020" }, { "authors": "Tessa Masis; Anissa Neal; Lisa Green; Brendan O' Connor", "journal": "", "ref_id": "b42", "title": "Corpus-guided contrast sets for morphosyntactic feature detection in low-resource English varieties", "year": "2022" }, { "authors": "Zion Mengesha; Courtney Heldreth; Michal Lahav; Juliana Sublewski; Elyse Tuennerman", "journal": "Frontiers in Artificial Intelligence", "ref_id": "b43", "title": "i don't think these devices are very culturally sensitive.\"-impact of automated speech recognition errors on african americans", "year": "2021" }, { "authors": "Josh Meyer; Lindy Rauchenstein; Joshua D Eisenberg; Nicholas Howell", "journal": "European Language Resources Association", "ref_id": "b44", "title": "Artie bias corpus: An open dataset for detecting demographic bias in speech applications", "year": "2020" }, { "authors": "H Marcyliena; Morgan", "journal": "", "ref_id": "b45", "title": "", "year": "2001" }, { "authors": "", "journal": "", "ref_id": "b46", "title": "nuthin' but a g thang\": Grammar and language ideology in hip hop identity", "year": "" }, { "authors": "Ziad Obermeyer; Brian Powers; Christine Vogeli; Sendhil Mullainathan", "journal": "Science", "ref_id": "b47", "title": "Dissecting racial bias in an algorithm used to manage the health of populations", "year": "2019" }, { "authors": "Desmond U Patton; William R Frey; Kyle A Mcgregor; Fei-Tzin Lee; Kathleen Mckeown; Emanuel Moss", "journal": "Association for Computing Machinery", "ref_id": "b48", "title": "Contextual analysis of social media: The promise and challenge of eliciting context in social media posts with natural language processing", "year": "2020" }, { "authors": "", "journal": "", "ref_id": "b49", "title": "Newsroom employees are less diverse than u.s. 
workers overall", "year": "2018" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b50", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Jacquelyn Rahman", "journal": "Journal of English Linguistics -J ENGL LINGUIST", "ref_id": "b51", "title": "The n word: Its history and use in the african american community", "year": "2012" }, { "authors": "Anthony Rios", "journal": "", "ref_id": "b52", "title": "Fuzze: Fuzzy fairness evaluation of offensive language classifiers on african-american english", "year": "2020" }, { "authors": "Maggie Ronkin; Helen E Karn", "journal": "Journal of Sociolinguistics", "ref_id": "b53", "title": "Mock ebonics: Linguistic racism in parodies of ebonics on the internet", "year": "1999" }, { "authors": "Harrison Santiago; Joshua Martin; Sarah Moeller; Kevin Tang", "journal": "Association for Computational Linguistics", "ref_id": "b54", "title": "Disambiguation of morphosyntactic features of African American English -the case of habitual be", "year": "2022" }, { "authors": "Maarten Sap; Swabha Swayamdipta; Laura Vianna; Xuhui Zhou; Yejin Choi; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b55", "title": "Annotators with attitudes: How annotator beliefs and identities bias toxic language detection", "year": "2022" }, { "authors": "Deven Santosh Shah; H Andrew Schwartz; Dirk Hovy", "journal": "Association for Computational Linguistics", "ref_id": "b56", "title": "Predictive biases in natural language processing models: A conceptual framework and overview", "year": "2020" }, { "authors": "Tianshu Shen; Jiaru Li; Mohamed Reda Bouadjenek; Zheda Mai; Scott Sanner", "journal": "", "ref_id": "b57", "title": "Unintended bias in language model-driven conversational recommendation", "year": "2022" }, { "authors": "Emily Sheng; Josh Arnold; Zhou Yu; Kai-Wei Chang; Nanyun Peng", "journal": "", "ref_id": "b58", "title": "Revealing persona biases in dialogue systems", "year": "2021" }, { "authors": "Emily Sheng; Kai-Wei Chang; Prem Natarajan; Nanyun Peng", "journal": "Association for Computational Linguistics", "ref_id": "b59", "title": "Towards Controllable Biases in Language Generation", "year": "2020" }, { "authors": "Emily Sheng; Kai-Wei Chang; Prem Natarajan; Nanyun Peng", "journal": "Association for Computational Linguistics", "ref_id": "b60", "title": "Societal biases in language generation: Progress and challenges", "year": "2021" }, { "authors": "Emily Sheng; Kai-Wei Chang; Premkumar Natarajan; Nanyun Peng", "journal": "Association for Computational Linguistics", "ref_id": "b61", "title": "The woman worked as a babysitter: On biases in language generation", "year": "2019" }, { "authors": "Hiram Smith", "journal": "American Speech", "ref_id": "b62", "title": "Has nigga been reappropriated as a term of endearment? 
(a qualitative and quantitative analysis)", "year": "2019" }, { "authors": "Isabel Straw; Chris Callison-Burch", "journal": "PLOS ONE", "ref_id": "b63", "title": "Artificial intelligence in mental health and the biases of language based models", "year": "2020" }, { "authors": "Yu Wan; Baosong Yang; Derek F Wong; Lidia S Chao; Haihua Du; Ben C H Ao", "journal": "", "ref_id": "b64", "title": "Unsupervised neural dialect translation with commonality and diversity modeling", "year": "2020" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b65", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "year": "2018" }, { "authors": "Julie A Washington; Holly K Craig; Amy J Kushmaul", "journal": "Journal of Speech, Language, and Hearing Research", "ref_id": "b66", "title": "Variable use of african american english across two language sampling contexts", "year": "1998" }, { "authors": "Tianyi Zhang; * ; Varsha Kishore; * ; Felix Wu; * ; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b67", "title": "Bertscore: Evaluating text generation with bert", "year": "2020" }, { "authors": "Xuhui Zhou; Maarten Sap; Swabha Swayamdipta; Yejin Choi; Noah Smith", "journal": "Association for Computational Linguistics", "ref_id": "b68", "title": "Challenges in automated debiasing for toxic language detection", "year": "2021" }, { "authors": "Caleb Ziems; Jiaao Chen; Camille Harris; Jessica Anderson; Diyi Yang", "journal": "Association for Computational Linguistics", "ref_id": "b69", "title": "VALUE: Understanding dialect disparity in NLU", "year": "2022" } ]
[ { "formula_coordinates": [ 16, 151.82, 478.95, 138.05, 25.95 ], "formula_id": "formula_1", "formula_text": "α = 1 - Do De (1)" } ]
10.18653/v1/2023.acl-long.608
2021-09
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b18", "b0", "b33", "b26", "b11", "b22", "b42", "b47" ], "table_ref": [], "text": "Recent dramatic advances in LLM chatbots have made them indispensable tools for millions of people (Hu, 2023) who have come to rely on their broad skill set. Yet, LLM chatbots are prone to providing misleading information, or hallucination (Bang et al., 2023), often using a convincing and confident language. Notably, LLMs do not speak accurately about recent events that occurred * Equal contribution 1 Code and model available at https://github.com/ stanford-oval/WikiChat.\nafter their pre-training, and are far less knowledgeable about less popular, or tail, topics (Mallen et al., 2022;Sun et al., 2023). Therefore, for knowledgeintensive tasks (Lewis et al., 2020), users need to painstakingly verify any information they receive with external sources lest they be misled.\nThis paper focuses on three metrics for knowledge-intensive dialogues: factuality, conversationality, and latency. A knowledge-based chatbot needs to be first and foremost factual. We assume access to a source of trusted text corpus; here the English Wikipedia is assumed to be factual. While LLMs tend to hallucinate, they can carry out natural and engaging conversations rather than giving dry answers to users' questions. We refer to the ability to give relevant, informational, natural, non-repetitive and temporally accurate answers collectively as conversationality. We single out latency as the third metric of focus because solutions addressing factuality like Gao et al. (2023); Jiang et al. (2023); Trivedi et al. (2023); Zhao et al. (2023) tend to incur a high latency that degrades user experience and hinders adoption." }, { "figure_ref": [], "heading": "Current Approaches", "publication_ref": [ "b46", "b27", "b4", "b26", "b42", "b20", "b4", "b26", "b42", "b20", "b8", "b24", "b11", "b22", "b26", "b14", "b7", "b34", "b36", "b21" ], "table_ref": [], "text": "The basis of factuality in this work is information retrieval (IR), which bases the chatbot's responses on retrieved information from a trusted corpus (Zhang et al., 2023;Li et al., 2022;Shuster et al., 2021). The retrieve-then-generate approach generates a response from data retrieved with the user query (Lewis et al., 2020;Trivedi et al., 2023;Izacard et al., 2022;Shuster et al., 2022b;Chen et al., 2021). Previous work is either not evaluated on conversational tasks (Lewis et al., 2020;Trivedi et al., 2023;Shi et al., 2023), or as we show in this paper, is likely to generate irrelevant and unnatural outputs when properly evaluated (Izacard et al., 2022). More importantly, chatbots based on retrieve-then-generate pipelines may still hallucinate. In popular academic datasets like Wiz-Figure 1: All WikiChat components, and a sample conversation about an upcoming movie, edited for brevity. The steps taken to generate a response include (1) generating a query to retrieve from Wikipedia, (2) summarizing and filtering the retrieved passages, (3) generating a response from an LLM, (4) extracting claims from the LLM response (5) fact-checking the claims in the LLM response using retrieved evidence, (6) drafting a response, and (7) refining the response. 
In popular academic datasets like Wizard of Wikipedia (Dinan et al., 2019) and Wizard of the Internet (Komeili et al., 2022), which are widely used to train retrieve-then-generate chatbots, crowdworkers are free to add ungrounded information to their responses; in a GPT-4-based commercial system like Bing Chat, only 58.7% of the facts generated are grounded in what it retrieves (Liu et al., 2023a).
Another IR approach is to fact-check a system's outputs and remove errors (Gao et al., 2023;Jiang et al., 2023). When applied to LLM chatbots, the responses are conversational, but as shown in this paper, are lacking in content for recent or tail topics. Other IR approaches require expensive changes to the pre-training process of language models (Lewis et al., 2020;Guu et al., 2020).
Complementary to IR, Knowledge Editing updates model weights to include recent knowledge as it becomes available (De Cao et al., 2021;Meng et al., 2022;Mitchell et al., 2022). Similarly, Continual Learning can be used to add new knowledge to LLMs (Jang et al., 2022). While these approaches improve factuality on recent knowledge, they cannot address the tail knowledge problem." }, { "figure_ref": [], "heading": "WikiChat Overview", "publication_ref": [ "b8", "b24" ], "table_ref": [], "text": "This paper presents WikiChat, the first few-shot chatbot that provides up-to-date and fact-checked information with high conversationality and low latency.
Few-Shot Knowledge-Grounded Chatbots. Our 7-stage pipeline (Figure 1) combines the best of IR approaches: we (1) use the user utterance to retrieve information that LLMs may not be aware of, and (2) leverage the generative power of LLMs by asking them for responses and fact-checking them. All this curated information is used to draft and refine the final response.
It is not easy to stop LLMs from hallucinating. In retrieve-then-generate pipelines, when IR does not retrieve any relevant information or when no relevant information is available in the knowledge corpus, LLMs hallucinate to pick up the slack. Thus, WikiChat summarizes and filters the retrieved information instead of generating a response directly. We fact-check every claim generated by LLMs separately and teach the system to say \"I don't know\" when necessary. We teach it to understand the time context; e.g. a future tense in an article may refer to a past event at the time of the conversation. Most importantly, we do not prematurely optimize for speed by forgoing these needed steps, but rely on model distillation to reduce the latency only once high quality is achieved.
The resulting pipeline is not specific to any corpus. While this paper applies this pipeline to Wikipedia, the largest corpus of curated knowledge, to create WikiChat, it is applicable to any free-text corpus, including personal and corporate confidential information. The pipeline is not specific to any LLM either, and we apply it to three different LLMs in this paper.
Distillation for improved latency, affordability, and privacy. Not only is our 7-stage LLM-based pipeline slow and expensive, but sending user data to LLM APIs also does not provide the privacy and confidentiality needed by many applications. The simplicity in each of our stages makes it possible to effectively distill our best system into a smaller multi-tasking model, which is responsive and affordable, and can be deployed locally for privacy. We release this model to aid further research and reproducibility of our results.
Evaluation of LLM-based agents.
We find that LLM-based chatbots have surpassed the quality of systems that conventional static crowdsourced benchmarks (Dinan et al., 2019;Komeili et al., 2022) were meant to evaluate. For example, these benchmarks mainly evaluate the ability to chat about the head knowledge, which LLM chatbots are already very good at. We devise a human-and-LLM hybrid evaluation methodology that can adequately analyze all chatbots, regardless of whether they are knowledge-grounded or LLM-based." }, { "figure_ref": [], "heading": "Contributions", "publication_ref": [ "b20", "b2" ], "table_ref": [], "text": "We create a factual and engaging open-domain chatbot with a 7-stage pipeline using a few-shot prompted LLM, as shown in Figure 1. We validate the concept with three GPT-4, GPT-3.5, and LLaMA (Touvron et al., 2023) based chatbots grounded in Wikipedia.\nOur experiments with simulated users show that the GPT-4-based WikiChat (WikiChat G4 ) achieves a 97.3% factual accuracy of its claims in simulated conversations. Each version of WikiChat is more accurate than the LLM it is based on by an average of 31.2%, 27.8% and 50.2% for GPT-4, GPT-3.5 and LLaMA respectively. It also outperforms the fine-tuned SOTA retrieval-based Atlas (Izacard et al., 2022) in factuality and, unlike Atlas, matches the conversationality of LLMs.\nOur real user study shows that WikiChat achieves 97.9% in factuality on conversations of recent topics, 2.3 times more factual than GPT-4, while receiving higher user ratings.\nWe are the first to demonstrate the feasibility of distilling a multi-part system built with incontext learning (Brown et al., 2020) into a smaller yet effective model. Our distillation of WikiChat G4 into a 7B-parameter LLaMA achieves a factual accuracy of 91.1%, outperforming much larger baselines, while having 3.2 times lower endto-end latency than its teacher model.\nWe introduce an efficient and effective human-and-LLM methodology for evaluating knowledge-grounded chatbots in settings beyond what is possible with just crowdsourcing." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b4", "b4", "b4", "b45", "b20", "b8", "b40", "b13", "b25", "b17", "b9", "b35", "b22", "b12" ], "table_ref": [], "text": "Knowledge-Grounded Chatbots. Information retrieval is commonly used to develop knowledgegrounded chatbots (Shuster et al., 2021). Blender-Bot 2 (Chen et al., 2021) incorporates Internet search. SeeKeR (Shuster et al., 2022a) outperforms BlenderBot 2 (Chen et al., 2021) by utilizing a single language model for three modular tasks: generating search queries, producing relevant knowledge from retrieved documents, and generating the final response. BlenderBot 3 (Shuster et al., 2022b) fine-tunes a 175B-parameter OPT (Zhang et al., 2022) on the combination of 20 question answering and dialogue datasets. Atlas (Izacard et al., 2022) is a state-of-the-art model on the KILT benchmark (Petroni et al., 2021), which consists of 11 knowledge-oriented tasks including Wizard of Wikipedia (Dinan et al., 2019).\nEvaluating Factuality. FEVER (Thorne et al., 2018) is a popular crowdsourced dataset that compares claims against evidence retrieved from Wikipedia, and was extended to dialogues by Gupta et al. (2022). The state-of-the-art system on this dataset (Krishna et al., 2022) has an accuracy of 81% when compared against human labels. Q 2 (Honovich et al., 2021) uses question answering and natural language inference models to evaluate the factuality of dialogue agents. Dziri et al. 
(2022) compare this and several other automatic metrics to human evaluation and find that automatic metrics fall significantly behind human performance, and rely on spurious correlations. Concurrently and similarly to our work, Min et al. (2023) break down long system outputs to claims and evaluate their factuality using retrieval. Their best model has a significant 13% error rate on individual claims. Given the high error rates for these automatic metrics, our evaluation methodology (Section 5) incorpo-rates human judgement where needed. Approaches like Qian et al. (2023) and TripleScore (Goodrich et al., 2019) only consider simple entity-relations between retrieved documents and system outputs, while we generalize that notion to all forms of facts." }, { "figure_ref": [], "heading": "Design of WikiChat", "publication_ref": [], "table_ref": [], "text": "Given the history of a conversation with a user, WikiChat generates its next utterance by (1) curating information that is correct and relevant to the conversation context, and (2) using the gathered information to form the response. Each stage of the pipeline in Figure 1 is implemented using in-context learning. All prompts are included in Appendix F." }, { "figure_ref": [], "heading": "Curating Information", "publication_ref": [], "table_ref": [], "text": "LLMs have the ability to interpret the user utterance, formulate responses that are out of the reach of retrieval or require aggregating information from many sources or drawing conclusions, as shown by the following example:\nUser: Do you think Apple will continue to be a big player in the technology industry? GPT-4: Yes, . . . Apple has a strong track record of innovation, a loyal customer base, and a robust financial position. . . . the most valuable company globally, with a market capitalization of over $2 trillion. . . . continues to explore new technologies such as augmented reality, artificial intelligence, and autonomous vehicles. This will help them to stay ahead of the competition.\nUnfortunately, LLMs cannot be trusted to be factual, thus we need to fact-check their outputs. Furthermore, LLMs are unaware of recent events. Thus, we use both LLM generation and IR." }, { "figure_ref": [], "heading": "Retrieval from Corpus", "publication_ref": [ "b3", "b44" ], "table_ref": [ "tab_6", "tab_7", "tab_8", "tab_0" ], "text": "During a conversation with a user, WikiChat identifies when accessing external information is needed. This could be because the last user utterance contains a direct question (e.g. \"Who is Stephen Curry?\") or otherwise requires additional information for a comprehensive response (e.g. \"I really like Stephen Curry.\").\nStage 1. WikiChat generates a search query that captures the user's interest with a prompt (Table 17). We discovered that existing systems especially struggle with the temporal context. WikiChat generates the inferred time of the user's need alongside the query. The query time can be one of recent, year=yyyy, or none for when the retrieved information should be as recent as possible, for a specific year, or the time is not important, respectively.\nThe query is sent to an information retrieval system to obtain relevant passages from the corpus, and the top results are re-ranked based on the temporal information to get N IR passages.\nStage 2. 
Since these passages may contain a mixture of relevant and irrelevant sections, WikiChat extracts relevant sections of the retrieved passages and summarizes them into bullet points while filtering out the irrelevant parts (Table 18).\n3.1.2 LLM Generation and Fact-Checking Stage 3. We prompt the LLM to generate a response to the history of the conversation (Table 19). This response often contains interesting and relevant knowledge, but is inherently unreliable.\nStage 4. The LLM response is broken down to multiple claims (Chen et al., 2022) (Table 20). This stage resolves co-references to reduce ambiguity, and resolves relative time information like \"current\" and \"last year\", to make all claims self-contained.\nWe use IR to retrieve N evidence passages from the knowledge corpus for each claim to serve as evidence. We use the same time-based re-ranking as in Section 3.1.1 to better handle time-sensitive topics.\nStage 5. The verification prompt (Table 21) uses chain-of-thought prompting (Wei et al., 2022) to assign each claim to one of three classes: whether the retrieved evidence supports the claim, refutes the claim, or if there is not enough information in the evidence to make this decision. Only claims that are supported by the evidence are kept." }, { "figure_ref": [], "heading": "Forming the Response", "publication_ref": [ "b32" ], "table_ref": [], "text": "The next step is to use the curated information to form an appealing response. Our experiments show that writing the final response in one go while satisfying all conversationality criteria is challenging with in-context learning, especially that the limited context length makes it difficult to provide enough multi-turn conversations as few-shot examples to cover all the necessary aspects. Thus, we use a two-step approach:\nStage 6. WikiChat generates a draft response from the given list of bullet points and the history of the conversation (Table 22).\nStage 7. It then generates feedback and refines the response based on relevance, naturalness, nonrepetitiveness, and temporal correctness (Table 23). The feedback contains the model's reasoning on each criterion and a score between 0 and 100 for each. Refinement is conditioned on this feedback and the scores as a chain of thought. Concurrent to this work, Madaan et al. (2023) have explored the idea of prompting LLMs to refine their own generations in other settings.\nAs discussed in Section 1.2, it is hard for LLMs to say \"I don't know\". In the special case where the curation stages return no relevant information, the draft prompt is skipped and instead a \"Sorry, I'm not sure\" is sent to the refinement prompt, which dresses it up to match the conversation." }, { "figure_ref": [], "heading": "Model Distillation", "publication_ref": [ "b41" ], "table_ref": [], "text": "To improve latency, cost and privacy, we distill WikiChat based on a teacher LLM into a smaller student model. We use WikiChat based on GPT-4 (i.e. WikiChat G4 ) as the teacher, and the publicly available LLaMA (Touvron et al., 2023) model as the student to obtain WikiChat L .\nEach few-shot prompt consists of an instruction I and several examples. We use a user simulator (described in Section 5.1) to talk to the teacher WikiChat about topics from Wikipedia, while recording the inputs the underlying teacher LLM sees, and the outputs it generates for those inputs. We use these input-output pairs and the instruction I (but not the few-shot examples) to fine-tune the student LLM. 
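To make the distillation recipe above concrete, the following is a minimal sketch of how logged teacher calls could be converted into fine-tuning records. The log format, field names, and the separator used to drop the few-shot demonstrations are hypothetical illustrations, not the released code.

```python
import json

def strip_fewshot_examples(prompt: str) -> str:
    """Hypothetical helper: keep only the live input of a teacher prompt,
    dropping the few-shot demonstrations that precede it."""
    # Assume the live input is whatever follows the last example separator.
    return prompt.split("=====")[-1].strip()

def build_distillation_set(teacher_log_path: str, out_path: str) -> None:
    """Convert logged (prompt, output) pairs from the teacher WikiChat into
    (instruction, input, output) tuples for fine-tuning the student LLaMA."""
    examples = []
    with open(teacher_log_path) as f:
        for line in f:
            record = json.loads(line)  # one logged LLM call per line
            examples.append({
                "instruction": record["instruction"],               # instruction I of the stage
                "input": strip_fewshot_examples(record["prompt"]),  # input without few-shot examples
                "output": record["output"],                         # what the teacher generated
            })
    with open(out_path, "w") as f:
        json.dump(examples, f, indent=2)
```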
We distill all 7 subtasks of WikiChat into the same student model in a multi-task setting. The LLaMA-based WikiChat calls LLaMA in each of the pipeline stages by specifying instruction I and the input.\nDistillation lowers the latency because the student LLM is many times smaller than the teacher LLM, and has a shorter input length as it sees no few-shot examples, similar to context distillation (Snell et al., 2022).\nFurthermore, we remove chains of thought from the outputs of stages 5 and 7 (verification and refinement), and merge stages 6 and 7. No drop in our metrics of factuality and conversationality is observed, suggesting that chain-of-thought prompting and refinement may only be necessary for incontext learning. Fine-tuned models can learn these tasks from just inputs and outputs given a big enough training set." }, { "figure_ref": [], "heading": "A Novel Evaluation Methodology", "publication_ref": [ "b24" ], "table_ref": [], "text": "Most existing conversational benchmarks are crowdsourced and static. As Komeili et al. (2022) says about their use of crowdsourcing, \"The intent . . . is that [crowdworkers] can choose a topic they . . . have enough knowledge of so that they can conduct a reasonable conversation.\" Since LLMs are already good conversants about familiar topics, testing them on these topics would lead to the false conclusion that no innovation is necessary.\nFurthermore, static benchmarks quickly lose their effectiveness in evaluating chatbots' use of up-to-date information whenever a new LLM is released. For example, Wizard of Wikipedia does not contain any topics that are not seen by GPT-3, GPT-4 or LLaMA during their pre-training.\nHere we propose a novel combination of simulated and real user conversations, as well as human and LLM-based evaluations, to understand the factuality and conversationality of modern chatbots." }, { "figure_ref": [], "heading": "Collecting Dialogues", "publication_ref": [ "b33", "b43", "b1", "b47" ], "table_ref": [], "text": "Conversation Topics. In our experiment, we pick an article from the knowledge corpus Wikipedia as a starter topic. We choose a diverse set of topics covering the space of head, tail, and recent knowledge. Similar to Mallen et al. (2022), we use the total number of views of a Wikipedia article as a proxy for how frequently that topic is likely to be discussed on the Web and therefore the pre-training data of LLMs, given that views in Wikipedia are to a large degree generated from other online sources.\nHead: These are articles with the highest total view count, up to the end of 2020, which is before the cut-off date of the pre-training data of all LLMs we evaluate. Example article titles include \"Sting (musician)\", \"Barack Obama\", and \"Gmail\". The view count ranges from 68M to 16M for the head topics.\nTail: These are the least viewed articles, with less than 1000 views. Examples are \"Amelia Gething\", \"Last Tycoon Stakes\", and \"2008 CONCA-CAF Women's Olympic Qualifying Tournament\".\nRecent: These are the most edited Wikipedia articles in the first four months of 2023, which is after the cut-off date of LLMs. Examples include big news stories of 2023 like \"2023 Speaker of the United States House of Representatives election\" and \"Yeti Airlines Flight 691\".\nWe manually remove topics that might be uncomfortable to talk about due to violence or explicit content, and ensure a diverse set of domains.\nDialogues. 
For cost-effectiveness, we use simulated conversations and validate them against a smaller real user study. Rule-based and neural user simulators have long been used to build and eval-uate task-oriented dialogue systems (Schatzmann et al., 2005;Zhu et al., 2020;Wan et al., 2022), and to generate training data for chatbots (Bao et al., 2023;Zheng et al., 2023). We use LLMs to simulate users in order to evaluate knowledge-based chatbots. LLMs are good, fast, and cost-effective at simulating users. Via prompting, we can control the personality and specify the conversation topic. We also make sure the simulator can continue the conversation by making relevant comments or asking interesting questions, and handle the case where the chatbot under evaluation gives inconsistent responses." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b28", "b4", "b5", "b15", "b23", "b10", "b44" ], "table_ref": [], "text": "Factuality Evaluated Manually. To evaluate factuality, we use a hybrid human-LLM approach. We first use a GPT-4 prompt to break down each chatbot response into small self-contained claims, and retrieve evidence for it using IR. We need to manually determine whether an extracted claim is backed by the retrieved evidence because LLMs underperform humans on this task (Section 2). We ask crowdworkers to determine whether each claim is supported, refuted, or there is not enough information in the retrieved paragraphs. See Appendix E for more details about human evaluation and our quality control.\nWhen the crowdsource workers classify a claim as \"not enough information\", we need to ensure that the information is truly not available in Wikipedia, and not because IR has not retrieved the right paragraphs. The authors of this paper double check these rare cases against the entire Wikipedia.\nConversationality Evaluated Automatically. Based on prior work on how chatbots should be human-like, natural, knowledgeable (Li et al., 2019) and non-repetitive (Roller et al., 2021), and our own observations of chatbots' weaknesses, we propose five response-level metrics to measure conversationality:\n1. Relevant: On-topic and directly addresses the user's question. 2. Informational: Provides a suitable amount of information (whether or not it is factual). 3. Natural: Uses appropriate and engaging language to create an enjoyable experience. 4. Non-Repetitive: Does not repeat previously mentioned information. 5. Temporally Correct: Provides up-to-date information and uses the appropriate tense.\nLLMs have been shown to be effective in evaluating soft qualities (Chiang and yi Lee, 2023;He et al., 2023;Kocmi and Federmann, 2023;Liu et al., 2023b;Finch et al., 2023), consistently better aligned with expert human evaluation than any other automatic metric. Thus, we use GPT-4 to evaluate these qualities in chatbot responses. The LLM is instructed to, given a conversation history and a chatbot response, \"think out loud\" (Wei et al., 2022) about its reasoning for each criterion and provide a score. For each turn, all metrics are rated from 1 to 5, except for temporal correctness which is converted to a binary score. We report the average of each metric over all simulated conversation turns. We find that the inter-annotator agreement between GPT-4 ratings and one of the authors is about the same as the agreement between two authors (Appendix B)." }, { "figure_ref": [], "heading": "Implementation", "publication_ref": [], "table_ref": [], "text": "WikiChat Using GPT. 
We create two versions of WikiChat based on GPT models: WikiChat G3.5 is based on GPT-3.5 (text-davinci-003); WikiChat G4 is based on GPT-4 (gpt-4-0314)2 .\nDistilling WikiChat to LLaMA. We use LLaMA as the target of distillation as it currently has the highest quality-to-size ratio among the publicly available language models. We generate training data from 750 Wikipedia articles covering head, tail and recent topics; these are disjoint from the set of topics we use for evaluation. Simulating a 10-turn conversation between the user simulator and WikiChat G4 for each topic results in a total of 37,499 (instruction, input, output) tuples. We hold out 5% of these conversations for validation, and fine-tune a 7B-parameter LLaMA on the rest.\nInformation Retrieval System. We use ColBERT v2 (Santhanam et al., 2022b) and PLAID (Santhanam et al., 2022a) over Wikipedia as our IR system. We use the WikiExtractor tool3 to extract the clean text from the English Wikipedia dump obtained on 4/28/2023. Like ColBERT, we divide each article (ignoring tables and information boxes) into disjoint text blocks referred to as passages and prepend them with their article title. We limit the combined length of the passage and title to 120 words. In WikiChat, we set N evidence = 2 and N IR = 3. These are chosen empirically to obtain a high recall on our development set.\nFactual Relevant Informational Natural Non-Repetitive Temporal WikiChat G4 98.8 5.0 ± 0.0 5.0 ± 0.2 5.0 ± 0.1 5.0 ± 0.0 99.0 GPT-4 94.9 5.0 ± 0.4 4.7 ± 0.6 5.0 ± 0.2 5.0 ± 0.2 99.0 WikiChat G3.5 97.1 4.9 ± 0.5 4.8 ± 0.6 5.0 ± 0.2 4.9 ± 0.5 94.0 Head GPT-3.5 91.9 5.0 ± 0.0 4.6 ± 0.7 5.0 ± 0. Other metrics are averages of integers between 1 and 5 (inclusive) and we report their mean and standard deviation. Factual accuracy is from human evaluation, other metrics are from few-shot GPT-4. Higher is better for all metrics. In the All section, values that are better than their comparable model (e.g. WikiChat G4 vs. GPT-4) in a statistically significant way with p ≤ 0.05 are underscored." }, { "figure_ref": [], "heading": "Simulated Dialogues Experiment", "publication_ref": [], "table_ref": [], "text": "Our first experiment analyzes how our systems perform under various scenarios using simulated users." }, { "figure_ref": [], "heading": "Experiment Setup", "publication_ref": [ "b20" ], "table_ref": [ "tab_8", "tab_5", "tab_0" ], "text": "We compare WikiChat to several LLM baselines: a 7B-parameter LLaMA, GPT-3.5, and GPT-4, all prompted to act as knowledge-oriented chatbots (See Table 19 for the prompt). We also compare with Atlas (Izacard et al., 2022), a fine-tuned retrieval-based model, which has the state-of-theart results on Wizard of Wikipedia and several other datasets in KILT (Petroni et al., 2021). We update its knowledge source and fine-tune its model to use the same Wikipedia dump as WikiChat (Appendix B). We simulate users using GPT-4 due to its higher quality. Each chatbot carries out conversations on 20 topics in each of the head, tail, and recent knowledge categories. For each conversation, we simu-late 10 turns (5 user turns and 5 chatbot turns) with the user starting the conversation. This is comparable with other datasets: Wizard of Wikipedia and Wizard of the Internet have 9 and 5 turns per dialogue on average, respectively. 
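As an illustration of this setup, here is a minimal sketch of the dialogue-collection loop; the UserSimulator and Chatbot interfaces are hypothetical stand-ins for the LLM-backed components rather than the actual implementation.

```python
def simulate_conversation(topic_title: str, first_sentence: str,
                          chatbot, user_simulator, num_turns: int = 10):
    """Collect one simulated dialogue: the (simulated) user speaks first,
    alternating with the chatbot for num_turns total turns."""
    history = []
    for _ in range(num_turns // 2):
        # The simulator only sees the article title and first sentence,
        # but may wander to related topics from its own general knowledge.
        user_utterance = user_simulator.next_utterance(
            topic_title, first_sentence, history)
        history.append(("user", user_utterance))
        history.append(("chatbot", chatbot.respond(history)))
    return history

# Usage sketch: 20 topics from each of the head, tail, and recent subsets.
# dialogues = [simulate_conversation(t.title, t.first_sentence, bot, simulator)
#              for t in head_topics + tail_topics + recent_topics]
```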
Examples of simulated conversations can be found in Appendix A.\nTo mimic how a curious user with limited knowledge may explore a topic, only the title and the first sentence of a Wikipedia article on a topic is shown to the user simulator (prompt in Table 16), but it is free to explore related topics from its own general knowledge. The chatbot's response is not limited to what is in the article.\nTable 1 summarizes our main results for Wiki-Chat and all baselines on simulated conversations." }, { "figure_ref": [], "heading": "Factuality", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "We define the factual accuracy of a chatbot to be the percentage of claims the bot makes in a given dialogue set, that are supported by the knowledge corpus. As mentioned in Section 5.2, this is done by obtaining per-claim judgments of factuality from crowdworkers. We obtain 3 judgements for each of the 5974 claims our chatbots output in total.\nThe first column of Table 1 shows the results of our human evaluation of factual accuracy. WikiChat G4 achieves an impressive factual accuracy of 98.8%, 94.6%, and 98.5% for head, tail, and recent topics, respectively. WikiChat G4 scores on average 8.1% and 6.2% higher than WikiChat G3.5 and WikiChat L . WikiChat's GPT-4, GPT-3.5, and LLaMA versions outperform their base LLMs with an average of 31.2%, 27.8%, and 50.2% respectively, with this gap increasing significantly in recent and tail knowledge. These results suggest that our pipeline is able to effectively mitigate the hallucination of LLMs.\nNote that GPT-4 scores lower in tail and recent knowledge compared to head, by 38.9% and 47.4%, respectively, and the score for recent knowledge would be much lower had the simulated user not occasionally talked about older background information in the conversations. This illustrates that the common practice of solely evaluating on head knowledge would not have properly shown this weakness of LLMs.\nAll three versions of WikiChat outperform the SOTA fine-tuned model Atlas on average in factuality," }, { "figure_ref": [], "heading": "Conversationality", "publication_ref": [], "table_ref": [], "text": "Each version of WikiChat improves factuality over its base model without sacrificing conversationality. Even WikiChat L is as good or better than LLaMA in all metrics, while scoring within 0.1 to 0.3 points of its teacher WikiChat G4 .\nAtlas sacrifices substantial conversationality for factuality. Our analysis of 100 sampled dialogue turns reveals how Atlas underperforms in each metric. Informationality: it often gives short onesentence answers when a detailed description is warranted. Relevance: it is less likely to address the user's questions. Naturalness: Atlas mainly copies from the retrieved passages instead of matching the tone of the conversation. Non-repetition: it sometimes generates repetitive, near duplicate responses. Temporal accuracy: it confuses different dates." }, { "figure_ref": [], "heading": "Latency", "publication_ref": [ "b6" ], "table_ref": [], "text": "We measure the end-to-end latency and cost of our various chatbots. Retrieval from Wikipedia only accounts for about 0.2 seconds per turn and is negligible compared to the time spent waiting for LLM outputs. All API calls are done in parallel when possible, e.g. the stages 1-2 and 3-4 are independent and therefore done in parallel. 
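A rough sketch of this concurrency pattern is shown below, assuming a hypothetical `stages` namespace that exposes one async callable per pipeline stage; overlapping the two independent branches reduces the time spent waiting on LLM outputs.

```python
import asyncio

async def curate(history, stages):
    """Run the two independent curation branches of the pipeline concurrently."""

    async def retrieval_branch():
        # Stages 1-2: generate a query, retrieve, then summarize and filter
        # the passages into bullet points.
        query = await stages.generate_query(history)
        passages = await stages.retrieve(query)
        return await stages.summarize_and_filter(passages, history)

    async def llm_branch():
        # Stages 3-5: generate an LLM response, split it into claims,
        # and keep only the claims supported by retrieved evidence.
        response = await stages.llm_response(history)
        claims = await stages.extract_claims(response, history)
        return await stages.verify(claims)

    # The branches do not depend on each other, so await them together.
    retrieved, verified = await asyncio.gather(retrieval_branch(), llm_branch())
    return retrieved + verified
```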
LLaMA models are served on a single local NVIDIA A100 GPU using HuggingFace's TGI library (HuggingFace, 2023).\nTable 2 shows the average cost and latency of various chatbots, LLaMA costs are negligible compared to the cost of other LLMs. Distilling WikiChat G4 into WikiChat L lowers its per-claim latency 3.2 times, bringing it in line with GPT-4. This makes WikiChat L a viable alternative to the baselines: has similarly low latency, costs less and is significantly more factual.\nWe note that a naive implementation of WikiChat L took about 15 seconds per output claim. We reduced the latency 6.5 times by (1) fusing stages 6-7, (2) removing chains of thought, (3) using TGI with FlashAttention (Dao et al., 2022), and other optimizations." }, { "figure_ref": [], "heading": "Analysis of Results", "publication_ref": [ "b39" ], "table_ref": [ "tab_13", "tab_11", "tab_11", "tab_10", "tab_4" ], "text": "WikiChat makes more claims than all baselines in all subsets (Table 13). WikiChat G4 , WikiChat G3.5 and WikiChat L make an average of 3.6, 3.5 and 3.3 claims per turn, compared to 2.5, 2.2, and 2.0 claims of their base LLMs, and only 1.4 claims of Atlas. Our chatbots make more claims in the head subset as more information is available.\nBoth information retrieval (Stages 1 and 2) and the underlying LLM (Stages 3 to 5) contribute to WikiChat (Table 12.) 27.0%, 32.2% and 24.5% of the claims in the final response of WikiChat G4 , WikiChat G3.5 and WikiChat L come from fact-checked LLM responses and the rest are from IR. This is the content that retrieve-thengenerate systems cannot produce.\nOn average, about one-third of the claims in LLM responses do not pass WikiChat's factchecking (Table 12). The number of rejections is much higher in tail and recent subsets. This matches our expectation on where LLMs make more factual errors. Removing these errors during the fact-checking stage is WikiChat's main weapon against hallucination. WikiChat L has the highest rejection rate on tail (54.0%) and recent (64.4%) subsets because the underlying LLaMA model hallucinates a lot more.\nWikiChat says \"I don't know\" when necessary (Table 11). Our pipeline is carefully designed to prevent hallucination when no relevant information is available. This is more prevalent for tail and recent knowledge, where the simulated user is likely to ask about the information that is not yet available in Wikipedia.\nWikiChat's refinement stage improves conversationality, especially in tail and recent topics (Table 15). Comparing the BLEU scores (Papineni et al., 2002) of the draft and the final responses (Table 14), we notice that WikiChat makes the most changes on tail and recent topics. Refinement improves naturalness, informationality and relevance of all versions of WikiChat by 0.1 to 0.4 points, and temporal correctness by 2.3% to 3.3%." }, { "figure_ref": [], "heading": "A Real User Study", "publication_ref": [], "table_ref": [], "text": "We conduct a study where participants are asked to chat about a recent topic of their choice, a scenario that is particularly challenging for LLMs. We use Prolific (Prolific, 2023) to recruit 40 participants (20 female) for our user study. Each person is randomly assigned to either WikiChat G4 or GPT-4 without telling them which, and chats for 5 user turns. After each turn, they are asked to rate the response on a scale of 1 to 5. Afterwards, we ask each participant to comment on their experience. 
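As a small illustration, the per-turn ratings collected this way can be aggregated and compared with the independent two-sample t-test reported below (Table 3); the data layout here is a made-up example, not the actual study data.

```python
from statistics import mean
from scipy import stats

# Hypothetical layout: one list of 1-5 ratings per system,
# with one entry per rated chatbot turn.
ratings = {
    "WikiChat_G4": [4, 5, 3, 4, 5],
    "GPT-4":       [3, 4, 3, 4, 2],
}

for system, scores in ratings.items():
    print(f"{system}: mean user rating = {mean(scores):.1f}")

# Independent two-sample t-test; the difference is considered
# significant if p <= 0.05, the criterion used throughout the paper.
t_stat, p_value = stats.ttest_ind(ratings["WikiChat_G4"], ratings["GPT-4"])
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
```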
More details are in Appendix D.\nTable 3 shows the average factual accuracy (human evaluated) and real user ratings. WikiChat achieves an accuracy of 97.9%, which is similar to that of simulated conversations. It outperforms GPT-4 in factuality by 55.0%, and achieves a statistically significantly higher rating of 3.8 vs. 3.4.\nMost participants who talked to WikiChat G4 reported a positive experience. Multiple participants praised its ability to provide \"accurate\" and \"up-to-date\" information, and several said that it is \"natural\", \"conversational\", \"detailed\" and \"direct\". They commented that it is \"fun to play with\" and \"impressive\" in finding information.\n5 participants complained about the latency of the system, caused mainly by fact-checking, as the study was conducted using the slower WikiChat G4.\nTable 3: Results from the user study (factual accuracy / user rating): WikiChat G4 97.9 / 3.8; GPT-4 42.9 / 3.4. The user rating difference is statistically significant with p ≤ 0.05 (t = 2.18, p = 0.03).\n6 complained that the chatbot did not give a direct answer to some of their requests. However, it turns out that WikiChat G4 was right in giving no information, because Wikipedia truly lacked the information they sought. We did not find a single valid complaint of hallucination. GPT-4 received favorable comments from 10 participants, noting that it is informational, that it gave \"reasonable\" answers, and that it seemed \"educated\". However, it received 10 complaints: 6 on a lack of specificity: \"did not include a key piece of information\", or that \"its responses were vague\", \"not nuanced\" and \"was generic and regurgitated surface level information\"; 2 on how it completely misunderstood them (due to obvious hallucination in its responses); and 2 on wrong or outdated responses.\nMore concerning, however, was that conversations that received favorable comments also contain numerous plausible but factually incorrect responses, which the users did not realize. This shows how easy it is to be misled by LLM-based chatbots.\nIn summary, our user study suggests that WikiChat G4 is successful as an engaging and factual chatbot, while GPT-4 frequently hallucinates." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper shows how we can create a conversational, factual, open-domain chatbot out of LLMs. The key insight is to properly combine retrieved data with the generative content from LLMs, with meticulous claim-by-claim fact-checking. We validate this methodology by creating WikiChat, which is grounded in Wikipedia, the largest hand-curated public text corpus.\nOur best system achieves 97.3% and 97.9% factual accuracy on simulated and real conversations respectively, while GPT-4 only achieves 66.1% and 42.9%. WikiChat resembles LLMs in conversationality and is preferred over GPT-4 by real users.\nWe also show that a distilled LLaMA model with just 7B parameters can perform like the 175B-parameter WikiChat G3.5 model and be as fast as, cheaper than, and more factual than GPT-4. This expands the applicability of this technology." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b38" ], "table_ref": [], "text": "Applications of today's LLM-based chatbots are constantly expanding. This work focuses only on knowledge-intensive dialogue. As such, other settings where chatbots are used to perform tasks (e.g. \"write an email for me\") or perform personalized chitchat (e.g. companion chatbots) might not benefit from WikiChat's pipeline. 
More targeted study and evaluation of these settings is required to properly balance the generative power of LLMs and the factual knowledge injected from an external corpus for these applications. Settings where a chatbot needs to exhibit initiatives (e.g. try to be persuasive) are also outside the scope of this paper. We also only consider one-hop information retrieval, because it is the most natural and prevalent form of users' information need. Multi-hop retrieval could improve information access for more complex queries.\nThis work has only been tested on open-domain conversations. Generalizability of WikiChat to specialized domains like medical or legal has not been evaluated.\nThis work has only been evaluated on English conversations. Extending this work to more languages will be limited by the quality of available LLMs and retrieval systems in those languages. Incorporating recent work on multi-lingual models (Scao et al., 2022) and information retrieval systems (Nair et al., 2022) is a promising future direction." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "This study was approved by our institution's IRB. When conducting crowdsourcing for factuality, we compensate each worker for $0.2 per task, and we have one task per (system, dialogue turn, claim). This means that since there are more claims in longer chatbot responses, workers are compensated more for longer responses. In the end, each worker receives about $12 per hour of work. For our user study on the Prolific platform, each participant is compensated $2 to participate in a 10-minute user study, for a compensation rate of $12 per hour of work. Prolific recommends a rate of $8-$16.\nWhen conducting the user study, participants are provided with information on what the study entails and what information is collected (only their conversations, ratings, and the comments they provide). They are asked not to share any personally identifiable information, and were able to contact the authors for questions.\nThe only dataset we collect in this paper is the conversations from the user study. The simulated conversations used for fine-tuning LLaMA and testing chatbots are based on topics from Wikipedia articles, and topics involving violence, sexual content or other potentially disturbing matters were excluded.\nWe release the code and the fine-tuned LLaMA-7B model in accordance with its original license agreement. We believe this will encourage research and deployment of more reliable and trustworthy chatbots, to the benefit of end-users. We do not anticipate harm resulting from the approaches and modifications proposed in this paper.\nOverall, the computation done for the paper includes about 120 GPU-hours on an NVIDIA A100 GPU for fine-tuning and running inference on a LLaMA-7B model. Additionally, we have made several thousand API calls to OpenAI for inference of GPT-3.5 and GPT-4, but it is not feasible to estimate the environmental cost of using these models due to the lack of public information on their hardware, architecture etc. " }, { "figure_ref": [], "heading": "A Example Conversations", "publication_ref": [], "table_ref": [ "tab_7", "tab_8" ], "text": "Tables 4, 5, 6, and 7 provide examples of conversations between our user simulator and four chatbots, on the same topic: the 2023 Australian Open Men's singles4 . Tables 8 and9 show two conversations between human users and GPT-4 and WikiChat G4 , from our user study. 
The simulations were done using a Wikipedia dump on 4/28/2023. The user study was conducted on 6/17/2023 using a Wikipedia dump of the same date.\nUser: Can you tell me about the 2023 Australian Open Men's singles final? WikiChat G4 : In the thrilling 2023 Australian Open Men's singles final, Novak Djokovic defeated Stefanos Tsitsipas with a score of 6-3, 7-6(4), 7-6(5). This amazing victory marked Djokovic's record-extending tenth Australian Open title and tied Rafael Nadal's all-time record of 22 men's singles titles. Djokovic also became the second man to win ten titles at a single major, joining Nadal at the French Open. With this win, he surpassed Nadal's tally for the fourth-most ATP Tour-level singles titles.\nUser: That's amazing! How did Djokovic perform throughout the tournament leading up to the final? WikiChat G4 : Novak Djokovic had an impressive performance throughout the 2023 Australian Open. Leading up to the final, he dropped only one set, showcasing his dominance and skill on the court. His exceptional performance culminated in his straight-sets victory over Stefanos Tsitsipas in the final. User: Wow, that's quite an achievement! On the other hand, how did Tsitsipas perform during the tournament before facing Djokovic in the final? WikiChat G4 : Stefanos Tsitsipas had a strong performance during the 2023 Australian Open before facing Djokovic in the final. He defeated Quentin Halys, Rinky Hijikata, Tallon Griekspoor, Jannik Sinner, Jiri Lehecka, and Karen Khachanov on his way to the final. This was his second Grand Slam final, following the 2021 French Open. In both finals, Tsitsipas lost to Novak Djokovic.\nUser: It's great to see a young player like Tsitsipas reaching the finals of Grand Slam tournaments. Are there any memorable moments or highlights from their match in the final that stood out? WikiChat G4 : The final was a thrilling match. Some memorable moments from the match included Djokovic reclaiming the world No. 1 ranking from Carlos Alcaraz. User: That's awesome! Speaking of Carlos Alcaraz, how did he perform in the 2023 Australian Open? Did he make it far in the tournament? WikiChat G4 : Carlos Alcaraz did not participate in the 2023 Australian Open. He withdrew from the tournament due to a hamstring injury in his right leg that he suffered while training. " }, { "figure_ref": [], "heading": "B Experiment Details", "publication_ref": [ "b16", "b16", "b20", "b31", "b44", "b40" ], "table_ref": [ "tab_9" ], "text": "Statistical significance tests For statistical significance tests throughout the paper, we use independent two-sample t-test and consider the difference significant if p ≤ 0.05.\nSimulation Topics. We obtain the number of visits and edits of each Wikipedia article using the Wikimedia API5 . As mentioned in the paper, to select the recent topics, we look at the most edited Wikipedia articles in the first four months of 2023. Filtering based on the creation date did not lead to meaningful articles as there are many articles about old topics that just received articles in Wikipedia. Instead, in our experience, most of the highly edited Wikipedia articles are about actual new topics.\nHyperparameters. We use temperature of 0 and greedy decoding for all experiments, except the user simulator which has a temparature of 1.0 and nucleus sampling (Holtzman et al., 2020) with p=0.5. We use no repetition penalty, except for a repetition penalty of 1.1 for the baseline LLaMA model, because we find that repetition penalty of 1.0 (i.e. 
no penalty) results in the model frequently degenerating (Holtzman et al., 2020) repetitions.\nIn most prompts of WikiChat, we include at most the last five turns of the dialogue history to reduce the chance of causing confusion for the few-shot models in longer conversations.\nBaseline Atlas. We use the 3B-parameter Atlas-XL and update its index to the same Wikipedia index as WikiChat for a fair comparison. We reproduce their best model on Wizard of Wikipedia, which is obtained by fine-tuning the Atlas pretrained model and its retriever using the train set, except that we update their index to the same Wikipedia dump as WikiChat. For this, we use the fine-tuning code in the Atlas code repository6 , and set learning rate to 4e-5, dropout to 0.1, weight decay to 0.01, and retriever number of contexts to 40. We use a target maximum length of 64 instead of 16, to accommodate longer outputs. After this, the resulting model matches the Wizard of Wikipedia validation score reported in Izacard et al. (2022).\nDistillation to LLaMA. When fine-tuning LLaMA-7B in the distillation experiments, we use hyperparameters from (Taori et al., 2023), namely learning rate of 2×10 -5 , cosine learning rate schedule, batch size of 128, and training for 3 epochs. Initial experiments with LLaMA-30B showed no significant improvements. Training is done on 4 NVIDIA A100 (80 GB) GPUs.\nAutomatic Evaluation. In order to verify that using GPT-4 for conversationality metrics (Section 5.2) is indeed reasonable, we compare its scores against two authors of this paper. Table 10 shows inter-annotator agreement for ratings on 50 randomly sampled conversation turns from all chatbots, measured by calculating Cohen's Kappa. The annotators are given the same instructions that is given to GPT-4 as prompt.\nAn Alternative to WikiChat's Verification Stage. For the verification stage (Stage 5), we initially experimented with two approaches for verification: the Kernel Graph Attention Network (KGAT) verifier (Liu et al., 2020) and a few-shot prompt-based verifier with chain-of-thought prompting (Wei et al., 2022). KGAT is a model specifically designed for fact-checking and fine-tuned on the FEVER dataset (Thorne et al., 2018).\nWhile KGAT performs effectively for FEVERstyle fact verification tasks, we found its performance lacking in our setting. FEVER claims are derived from edited Wikipedia sentences, leading to spurious correlations that do not exist when claims come from chatbots. In addition, we were able to incorporate user utterances and conversation history as context in the few-shot verifier, while KGAT only looks at the claim and the evidence. Hence, we decided to conduct our experiments using the prompt-based verifier." }, { "figure_ref": [], "heading": "C Analysis of WikiChat", "publication_ref": [], "table_ref": [ "tab_10", "tab_11", "tab_13" ], "text": "C.1 Saying \"I don't know\".\nAs mentioned earlier, WikiChat concedes that it does not know when none of the LLM-generated passes fact-checking and no relevant bullet points are retrieved. Table 11 contains the data on how frequently this happens. In this case, the draft prompt is skipped and instead a \"Sorry, I'm not sure\" is sent to the refinement prompt, which dresses it up to match the conversation. For example:\nUser: \"Are there any specific ethical dilemmas the characters face in the Table 12 contains the raw data on the contribution of IR vs. LLM, and Table 13 contains the raw data on the number of claims, both mentioned in Section 7.5." 
}, { "figure_ref": [], "heading": "C.3 Refinement improvements", "publication_ref": [], "table_ref": [ "tab_14", "tab_4" ], "text": "Tables 14 and15 have the raw data on the refinement stage, used in Section 7.5." }, { "figure_ref": [], "heading": "D User Study", "publication_ref": [], "table_ref": [], "text": "We use Prolific to conduct our user study. We select 40 participants (20 female) from the US who are fluent in English. Each participant is paid with the rate of $12 per hour. Figure 2 shows the user interface.\nFigure 3 shows the distribution of the user ratings from Table 3." }, { "figure_ref": [], "heading": "E Human Evaluation", "publication_ref": [], "table_ref": [], "text": "We conduct human evaluation for part of our evaluation of factual accuracy (as described in Section 5.2). We use the Scale Rapid 7 platform. Figure 4 shows the instruction and one of the examples we provide. Figure 5 shows the user interface for each annotation task. We present the human annotator with the last user utterance, the chatbot's response, and a claim extracted from the chatbot's response using GPT-4. The annotator is then tasked with reading the 5 evidence passages and determining whether the claim is correct, incorrect, or if there is insufficient information to verify the claim. We use a three-way consensus pipeline, where each claim is assessed by three graders independently, and the final label is determined based on the majority vote." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work is supported in part by the National Science Foundation, the Alfred P. Sloan Foundation, the Verdant Foundation, Microsoft Azure AI credit, KDDI, JPMorgan Chase, and the Stanford Human-Centered Artificial Intelligence (HAI) Institute. We also thank the reviewers for their valuable comments and suggestions." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Figure 3: User ratings for WikiChat G4 and GPT-4.\nOne author periodically audits their work, providing feedback and adding examples and tests for crowdworkers as needed.\nWe provide annotators with detailed instructions on the task, and 8 examples covering special cases. We provide 7 training tasks used for onboarding, and 22 evaluation tasks. Only crowdworkers who receive a score of 90% in this evaluation can move to the main task. We compensate each worker for $0.2 per task, and we have one task per (system, dialogue turn, claim). This means that since there are more claims in longer chatbot responses, workers are compensated more for longer responses. In the end, each worker receives about $12 per hour of work." }, { "figure_ref": [], "heading": "F All Prompts", "publication_ref": [], "table_ref": [], "text": "We provide the prompts mentioned in this paper. For brevity, we only show on of the few-shot examples used in each prompt. The full text of prompts can be obtained from our code repository. The syntax used is the Jinja2 template language, which supports Python-like loops ({% for %}{% endfor %}), conditions ({% if %}{% endif %}), variables ({{ var }}) and comments ({# #}). In all prompts, dlg is a python list, today is a string like 4/28/2023, current_year is a string like 2023, and location is set to U.S. The current year is {{ current_year } }. The following is a conversation between you and a chatbot on the topic of \"{{ title } }\" ({{ passage } }) -Do not assume that the chatbot is able to have physical experiences , like watching a movie . 
-Ask interesting follow -up questions when needed , and expand on the chatbot ' s responses using your life experiences . -Never volunteer information , and never correct chatbot ' s mistakes .\n-Continue the conversation for 15 turns . {# This is set to 15 , even though simulations end after 10 turns . If we set this to 10 turns , the simulator will start saying goodbye too early . #} {# The first two turns don ' t have any content and won ' t be sent to the Chatbot . The following is a conversation between a friendly , knowledgeable and truthful chatbot , called WikiChat , and a user . WikiChat can use search to get external knowledge when needed . In responding to the user , WikiChat carefully aggregates search results in at most one paragraph . Today ' s date is {{ today } }.\n{# Few -shot example 1 #} User : What new discoveries from James Webb Space telescope can I tell my 9 year old about ? WikiChat : Your 9-year old would be fascinated to hear that the James Webb Space Telescope has discovered farthest galaxies ever found . It has found galaxies so far away that it took 13.4 billion years for their lights to reach the telescope . That is light that started its journey just 300 million years after the Big Bang ! User : what is big bang ? WikiChat : The Big Bang is the prevailing cosmological model for the universe from the earliest known periods through its subsequent large -scale evolution . In layman ' s terms , it is the explosion that created our universe . User : what else can I tell my child about ? [ Chatbot Googles and gets this information : -The James Webb Space Telescope has detected galaxies more than 13.4 billion light years away , less than 300 million years after the Big Bang . -This is a new record and astronomers are excited to learn more about these distant galaxies . -The telescope is named after James E. Webb , who was the administrator of NASA from 1961 to 1968 during the Mercury , Gemini , and Apollo programs . ] WikiChat : The James Webb Space telescope is named after James E. Webb , who was the administrator of NASA from 1961 to 1968 during the Mercury , Gemini , and Apollo programs .\n. Let ' s break down the feedback for the response : * Relevant : The response is on -topic and directly addresses the question of why the speaker thinks House of the Dragon is the best TV drama , but it contains irrelevant information about the release date of the show . 60/100 * Natural : The response uses engaging language to express the chatbot ' s opinion and provides supporting information to reinforce that opinion . 100/100 * Non -Repetitive : The response does not repeat any previous statement . 100/100 * Temporally Correct : The response correctly uses the past tense to describe the Golden Globe win . 100/100\nUser : Why is that ? Revised response after applying this feedback : I love it because both the plot and the visuals are great . It actually won the Golden Globe Award for the best drama TV series . " } ]
This paper presents the first few-shot LLM-based chatbot that almost never hallucinates and has high conversationality and low latency. WikiChat is grounded on the English Wikipedia, the largest curated free-text corpus. WikiChat generates a response from an LLM, retains only the grounded facts, and combines them with additional information it retrieves from the corpus to form factual and engaging responses. We distill WikiChat based on GPT-4 into a 7B-parameter LLaMA model with minimal loss of quality, to significantly improve its latency, cost and privacy, and facilitate research and deployment. Using a novel hybrid human-and-LLM evaluation methodology, we show that our best system achieves 97.3% factual accuracy in simulated conversations. It significantly outperforms all retrieval-based and LLM-based baselines, beating GPT-4 by 3.9%, 38.6% and 51.0% on head, tail and recent knowledge respectively. Compared to previous state-of-the-art retrieval-based chatbots, WikiChat is also significantly more informative and engaging, just like an LLM. WikiChat achieves 97.9% factual accuracy in conversations with human users about recent topics, 55.0% better than GPT-4, while receiving significantly higher user ratings and more favorable comments.
WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia
[ { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Evaluation results of WikiChat and baselines on simulated conversations. Factual and Temporal accuracy are percentages.", "figure_data": "15.0 ± 0.296.0", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "A conversation between WikiChat G4 and the simulated user. There are no factual errors in this conversation. Throughout the 2023 Australian Open, Novak Djokovic performed exceptionally well. He defeated opponents in straight sets and only dropped one set throughout the entire tournament. As the top seed, he defeated Sebastian Korda in the 2023 Adelaide International. User: Wow, Djokovic's performance was indeed remarkable. What about Stefanos Tsitsipas, how did he fare in the tournament before the final? WikiChat L : Stefanos Tsitsipas had an impressive run at the 2023 Australian Open before the final. He defeated Quentin Halys, Rinky Hijikata, Tallon Griekspoor, Jannik Sinner, Jiri Lehecka, and Karen Khachanov on his way to the final. This was his first Australian Open men's singles final.", "figure_data": "User: Can you tell me about the 2023 AustralianOpen Men's singles?WikiChat L : In the 2023 Australian Open Men'ssingles, Novak Djokovic won the title, defeatingStefanos Tsitsipas with a score of 6-3, 7-6(4),7-6(5). This victory marked Djokovic's record-extending tenth Australian Open title and 22nd ma-jor title overall. He became the second man to winten titles at a single major, after Rafael Nadal at theFrench Open.User: That's impressive! How did Djokovic per-form throughout the tournament?WikiChat L : User: It's great to see young players like Tsitsipasmaking it to the finals of major tournaments. Howhas Tsitsipas's overall career been so far?WikiChat L : Stefanos Tsitsipas has had an impres-sive career so far. He has won nine ATP singlestitles, including two Masters 1000 championships,and has contested two major finals at the 2021French Open and 2023 Australian Open, finish-ing runner-up to Novak Djokovic both times. Hiscareer-high doubles ranking is No. 64, achieved on29 August 2022.User: It seems like Tsitsipas has a bright futureahead of him. Can you tell me more about hisplaying style and strengths on the court?WikiChat L : Stefanos Tsitsipas is an aggressivebaseliner with a strong all-court player style. Heaims to hit powerful groundstroke winners, witha particularly strong forehand using an Easterngrip. His forehand is particularly useful in finish-ing points off of the first serve, as combined withhis big serve, he uses his high topspin rate to pullopponents off the court for winners, or go to thenet more often than typical baseline players.", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "A conversation between WikiChat L and the simulated user. There are no factual errors in this conversation. However, the second chatbot utterance contains an irrelevant piece of information.", "figure_data": "User: Can you tell me about the 2023 AustralianOpen Men's singles final?GPT-4: The 2023 Australian Open Men's singlesfinal was an exciting match between Player A andPlayer B, with Player A emerging as the championafter a hard-fought battle.User: Oh, I think the 2023 Australian Open Men'ssingles final was between Novak Djokovic and Ste-fanos Tsitsipas.GPT-4: Ah, I see! 
The 2023 Australian OpenMen's singles final featured Novak Djokovic de-feating Stefanos Tsitsipas in a thrilling match toclaim the title.User: What was the final score of the match be-tween Djokovic and Tsitsipas?GPT-4: The final score of the match betweenDjokovic and Tsitsipas was 6-4, 6-2, 6-3, withDjokovic securing the victory.User: Actually, the final score was 6-3, 7-6(4), 7-6(5). It was a thrilling match, wasn't it?GPT-4: Yes, my apologies the error! The fi-nal score of 6-3, 7-6(4), 7-6(5) indeed made it athrilling match between Djokovic and Tsitsipas.User: No problem! How many Australian Opentitles has Djokovic won with this victory?GPT-4: With this victory, Novak Djokovic haswon a total of 11 Australian Open titles.", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "A conversation between GPT-4 and the simulated user. Factual errors are highlighted in yellow. Note that the user simulator has access to the first sentence of the Wikipedia article, and here it tries to correct the chatbot twice.", "figure_data": "User: I just heard about the 2023 Australian OpenMen's singles final. Can you tell me more about it?Atlas: It was held in Melbourne, Australia.User: That's great! Can you tell me more aboutthe match between Novak Djokovic and StefanosTsitsipas?Atlas: Novak Djokovic won his tenth AustralianOpen title and 22nd major title overall.User: Wow, that's impressive! How did the matchplay out? What were the scores?Atlas: It was a three-set match.User: Yes, I heard it was a three-set match. Canyou tell me the scores of each set?Atlas: The first set was 6-3, 7-6(4), 7-6(5).User: That's interesting! It seems like the last twosets were quite close. How did Djokovic manageto come out on top in those tight sets?Atlas: He won the third set tiebreak.", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "A conversation between Atlas and the simulated user. There are no factual errors in this conversation, but the chatbot scores low in conversationality.", "figure_data": "User: How excited should i be about indiana jonesand the dial of destinyWikiChat G4 : \"Indiana Jones and the Dial of Des-tiny\" had its world premiere at the 76th CannesFilm Festival on May 18, 2023, and received a five-minute standing ovation. The film is scheduledto be theatrically released in the United States onJune 30, 2023. Given the positive reception at itspremiere, you have good reason to be excited aboutthis final installment in the Indiana Jones series.User: I hear that it has more of a political agendathan the other moviesWikiChat G4 : I haven't heard about that aspect,but it's possible. It's always interesting to see howfilms incorporate different themes and messages.User: Will there be another indiana jones after thisWikiChat G4 : \"Indiana Jones and the Dial of Des-tiny\" is the final installment in the series, so thereare no plans for another Indiana Jones movie afterthis one.User: Will harrison ford be doing any more incom-ing movies in generalWikiChat G4 : Yes, Harrison Ford will be doingmore movies. 
He has been selected by Marvel Stu-dios to replace the late William Hurt as Thaddeus\"Thunderbolt\" Ross in the upcoming 2024 films\"Captain America: Brave New World\" and \"Thun-derbolts\", set in the Marvel Cinematic Universe.User: Any other big names coming to the MCU inthe future?WikiChat G4 : Some big actors joining the MCUin the future include Shea Couleé, Zoe Terakes,Regan Aliyah, Shakira Barrera, Rashida \"Sheedz\"Olayiwola, Sonia Denis, Paul Calderón, and CreeSummer who have joined the cast of \"Ironheart\"in undisclosed roles. Sacha Baron Cohen has alsojoined the MCU, with a role that would potentiallyfirst appear in the later episodes of \"Ironheart\" fol-lowed by appearances in other MCU projects.", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "A conversation between WikiChat G4 and a human user. There are no factual errors in this conversation.", "figure_data": "User: Hi there. I'm interested in learning about the2023 movie Reality on HBO.GPT-4: \"Reality\" is a thrilling 2023 movie onHBO, exploring the blurred lines between virtualand real worlds, featuring an immersive plot and atalented cast.User: What's the plot about? I haven't heard muchabout it, but I do like the lead Sydney Sweeney.GPT-4: The plot follows Sydney Sweeney's char-acter as she navigates a virtual reality game, onlyto discover sinister secrets and a dangerous conspir-acy that threatens both the virtual and real worlds.User: Interesting. I thought it was about her beinga whistleblower in some kind of political conspir-acy? Or is that all part of the game?GPT-4: Yes, that's correct! As a whistleblowerin the game, she uncovers a political conspiracy,which adds depth and intrigue to the story, blendingboth virtual and real-world elements.User: That does sound exciting. What about therest of the cast, who are the other leads?GPT-4: The other leads include Tom Holland,Tessa Thompson, and Rami Malek, who de-liver captivating performances alongside SydneySweeney, making for a dynamic and engaging en-semble.User: Oh cool, I love Tom Holland and RamiMalek. What else has Rami Malek been in re-cently? I feel like I haven't seen him in anything ina little while.GPT-4: Rami Malek recently starred in the 2021film \"No Time to Die\" as the villain Safin, and he'salso known for his award-winning role as FreddieMercury in \"Bohemian Rhapsody\" (2018).", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "A conversation between GPT-4 and a human user. Factual errors are highlighted in yellow.", "figure_data": "", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "[M3GAN] film?\" Cohen's Kappa between different annotators for conversationality metrics.WikiChat: \"Yes, the movie raises ethical questions about AI, but I don't want to spoil the plot by revealing specific dilemmas. You'll have to watch the film to find out!\"", "figure_data": "Relevant Inform. 
Natural Non-Rep.", "figure_id": "tab_9", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Percentage of turns in which WikiChat does not find relevant information in Wikipedia to retrieve or fact-check.", "figure_data": "IR LLM Verified | WikiChat G4: Head 4.3 2.8 80.1%; Tail 4.0 2.2 55.4%; Recent 3.6 2.1 43.8%; All 4.0 2.4 61.7% | WikiChat G3.5: Head 4.3 2.6 87.6%; Tail 3.1 2.2 57.7%; Recent 2.6 2.0 54.0%; All 3.3 2.3 68.0% | WikiChat L: Head 4.7 2.5 84.7%; Tail 3.8 2.2 46.0%; Recent 3.3 1.9 35.6%; All 3.9 2.2 57.5%", "figure_id": "tab_10", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "The average number of relevant bullet points that WikiChat obtains from information retrieval and LLM-generated responses, and the percentage of claims that pass the fact-checking stage.", "figure_data": "", "figure_id": "tab_11", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "The average number of claims per turn for each subset and chatbot.", "figure_data": "Head Tail Recent All | WikiChat G4: 84.1 66.0 69.7 73.3 | WikiChat G3.5: 88.9 76.9 80.7 82.2 | WikiChat L: 77.1 66.1 66.4 69.9", "figure_id": "tab_13", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "Analysis of WikiChat's response refinement. BLEU score with the refined response as the prediction and the response before refinement as the target.", "figure_data": "", "figure_id": "tab_14", "figure_label": "14", "figure_type": "table" } ]
Sina J Semnani; Violet Z Yao; Heidi C Zhang; Monica S Lam
[ { "authors": "Yejin Bang; Samuel Cahyawijaya; Nayeon Lee; Wenliang Dai; Dan Su; Bryan Wilie; Holy Lovenia; Ziwei Ji; Tiezheng Yu; Willy Chung; Quyet V Do; Yan Xu; Pascale Fung", "journal": "", "ref_id": "b0", "title": "A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity", "year": "2023" }, { "authors": "Jianzhu Bao; Rui Wang; Yasheng Wang; Aixin Sun; Yitong Li; Fei Mi; Ruifeng Xu", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "A synthetic data generation framework for grounded dialogues", "year": "2023" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b2", "title": "Language models are few-shot learners", "year": "2020-12-06" }, { "authors": "Jifan Chen; Aniruddh Sriram; Eunsol Choi; Greg Durrett", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Generating literal and implied subquestions to fact-check complex claims", "year": "2022" }, { "authors": "Moya Chen; Douwe Kiela; Mojtaba Komeili; Spencer Poff; Stephen Roller; Kurt Shuster; Arthur Szlam; Jason Weston; Jing Xu", "journal": "", "ref_id": "b4", "title": "Blenderbot 2.0: An open source chatbot that builds long-term memory and searches the internet", "year": "2021-05-20" }, { "authors": "Cheng-Han Chiang; Hung Yi; Lee ", "journal": "", "ref_id": "b5", "title": "Can large language models be an alternative to human evaluations?", "year": "2023" }, { "authors": "Tri Dao; Daniel Y Fu; Stefano Ermon; Atri Rudra; Christopher Ré", "journal": "", "ref_id": "b6", "title": "FlashAttention: Fast and memory-efficient exact attention with IO-awareness", "year": "2022" }, { "authors": "Nicola De Cao; Wilker Aziz; Ivan Titov", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Editing factual knowledge in language models", "year": "2021" }, { "authors": "Emily Dinan; Stephen Roller; Kurt Shuster; Angela Fan; Michael Auli; Jason Weston", "journal": "", "ref_id": "b8", "title": "Wizard of wikipedia: Knowledge-powered conversational agents", "year": "2019-05-06" }, { "authors": "Nouha Dziri; Hannah Rashkin; Tal Linzen; David Reitter", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b9", "title": "Evaluating attribution in dialogue systems: The BEGIN benchmark", "year": "2022" }, { "authors": "Sarah E Finch; Ellie S Paek; Jinho D Choi", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Leveraging large language models for automated dialogue analysis", "year": "2023" }, { "authors": "Luyu Gao; Zhuyun Dai; Panupong Pasupat; Anthony Chen; Arun Tejasvi Chaganty; Yicheng Fan; Vincent Zhao; Ni Lao; Hongrae Lee; Da-Cheng Juan; Kelvin Guu", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "RARR: Researching and revising what language models say, using language models", "year": "2023" }, { "authors": "Ben Goodrich; Vinay Rao; Peter J Liu; Mohammad Saleh", "journal": "", "ref_id": "b12", "title": "Assessing the factual accuracy of generated 
text", "year": "2019" }, { "authors": "Prakhar Gupta; Chien-Sheng Wu; Wenhao Liu; Caiming Xiong", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "DialFact: A benchmark for fact-checking in dialogue", "year": "2022" }, { "authors": "Kelvin Guu; Kenton Lee; Zora Tung; Panupong Pasupat; Ming-Wei Chang", "journal": "", "ref_id": "b14", "title": "Realm: Retrievalaugmented language model pre-training", "year": "2020" }, { "authors": "Xingwei He; Zhenghao Lin; Yeyun Gong; A-Long Jin; Hang Zhang; Chen Lin; Jian Jiao; Ming Siu; Nan Yiu; Weizhu Duan; Chen", "journal": "", "ref_id": "b15", "title": "Annollm: Making large language models to be better crowdsourced annotators", "year": "2023" }, { "authors": "Ari Holtzman; Jan Buys; Li Du; Maxwell Forbes; Yejin Choi", "journal": "", "ref_id": "b16", "title": "The curious case of neural text degeneration", "year": "2020-04-26" }, { "authors": "Or Honovich; Leshem Choshen; Roee Aharoni; Ella Neeman; Idan Szpektor; Omri Abend", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "q 2 : Evaluating factual consistency in knowledgegrounded dialogues via question generation and question answering", "year": "2021" }, { "authors": "Krystal Hu", "journal": "", "ref_id": "b18", "title": "Chatgpt sets record for fastestgrowing user base -analyst note", "year": "2023" }, { "authors": " Huggingface", "journal": "", "ref_id": "b19", "title": "Text generation inference", "year": "2023" }, { "authors": "Gautier Izacard; Patrick Lewis; Maria Lomeli; Lucas Hosseini; Fabio Petroni; Timo Schick; Jane Dwivedi-Yu; Armand Joulin; Sebastian Riedel; Edouard Grave", "journal": "", "ref_id": "b20", "title": "Atlas: Few-shot learning with retrieval augmented language models", "year": "2022" }, { "authors": "Joel Jang; Seonghyeon Ye; Changho Lee; Sohee Yang; Joongbo Shin; Janghoon Han; Gyeonghun Kim; Minjoon Seo", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "TemporalWiki: A lifelong benchmark for training and evaluating ever-evolving language models", "year": "2022" }, { "authors": "Zhengbao Jiang; Frank F Xu; Luyu Gao; Zhiqing Sun; Qian Liu; Jane Dwivedi-Yu; Yiming Yang; Jamie Callan; Graham Neubig", "journal": "", "ref_id": "b22", "title": "Active retrieval augmented generation", "year": "2023" }, { "authors": "Tom Kocmi; Christian Federmann", "journal": "", "ref_id": "b23", "title": "Large language models are state-of-the-art evaluators of translation quality", "year": "2023" }, { "authors": "Mojtaba Komeili; Kurt Shuster; Jason Weston", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Internet-augmented dialogue generation", "year": "2022" }, { "authors": "Amrith Krishna; Sebastian Riedel; Andreas Vlachos", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b25", "title": "ProoFVer: Natural logic theorem proving for fact verification", "year": "2022" }, { "authors": "S H Patrick; Ethan Lewis; Aleksandra Perez; Fabio Piktus; Vladimir Petroni; Naman Karpukhin; Heinrich Goyal; Mike Küttler; Wen-Tau Lewis; Tim Yih; Sebastian Rocktäschel; Douwe Riedel; Kiela", "journal": "", "ref_id": "b26", "title": "Retrieval-augmented generation for knowledge-intensive NLP tasks", "year": "2020-12-06" }, { "authors": "Huayang Li; Yixuan Su; Deng Cai; Yan Wang; Lemao Liu", "journal": "", "ref_id": "b27", "title": "A survey on retrieval-augmented text generation", "year": "2022" }, { "authors": "Margaret Li; Jason Weston; Stephen 
Roller", "journal": "", "ref_id": "b28", "title": "Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons", "year": "2019" }, { "authors": "Nelson F Liu; Tianyi Zhang; Percy Liang", "journal": "", "ref_id": "b29", "title": "Evaluating verifiability in generative search engines", "year": "2023" }, { "authors": "Yang Liu; Dan Iter; Yichong Xu; Shuohang Wang; Ruochen Xu; Chenguang Zhu", "journal": "", "ref_id": "b30", "title": "G-eval: Nlg evaluation using gpt-4 with better human alignment", "year": "2023" }, { "authors": "Zhenghao Liu; Chenyan Xiong; Maosong Sun; Zhiyuan Liu", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Fine-grained fact verification with kernel graph attention network", "year": "2020" }, { "authors": "Aman Madaan; Niket Tandon; Prakhar Gupta; Skyler Hallinan; Luyu Gao; Sarah Wiegreffe; Uri Alon; Nouha Dziri; Shrimai Prabhumoye; Yiming Yang; Sean Welleck; Prasad Bodhisattwa; Shashank Majumder; Amir Gupta; Peter Yazdanbakhsh; Clark", "journal": "", "ref_id": "b32", "title": "Self-refine: Iterative refinement with self-feedback", "year": "2023" }, { "authors": "Alex Mallen; Akari Asai; Victor Zhong; Rajarshi Das; Hannaneh Hajishirzi; Daniel Khashabi", "journal": "", "ref_id": "b33", "title": "When not to trust language models: Investigating effectiveness and limitations of parametric and nonparametric memories", "year": "2022" }, { "authors": "Kevin Meng; David Bau; Alex Andonian; Yonatan Belinkov", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b34", "title": "Locating and editing factual associations in GPT", "year": "2022" }, { "authors": "Sewon Min; Kalpesh Krishna; Xinxi Lyu; Mike Lewis; Wen Tau Yih; Pang Wei Koh; Mohit Iyyer; Luke Zettlemoyer; Hannaneh Hajishirzi", "journal": "", "ref_id": "b35", "title": "Factscore: Fine-grained atomic evaluation of factual precision in long form text generation", "year": "2023" }, { "authors": "Eric Mitchell; Charles Lin; Antoine Bosselut; Christopher D Manning; Chelsea Finn", "journal": "", "ref_id": "b36", "title": "Memorybased model editing at scale", "year": "2022-07" }, { "authors": " Pmlr", "journal": "", "ref_id": "b37", "title": "", "year": "" }, { "authors": "Suraj Nair; Eugene Yang; Dawn Lawrie; Kevin Duh; Paul Mcnamee; Kenton Murray; James Mayfield; Douglas W Oard", "journal": "Advances in Information Retrieval", "ref_id": "b38", "title": "Transfer learning approaches for building cross-language dense retrieval models", "year": "2022" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "James Thorne; Andreas Vlachos; Christos Christodoulopoulos; Arpit Mittal", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "FEVER: a large-scale dataset for fact extraction and VERification", "year": "2018" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b41", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Harsh Trivedi; Niranjan Balasubramanian; Tushar Khot; Ashish Sabharwal", "journal": "Association for Computational Linguistics", 
"ref_id": "b42", "title": "Interleaving retrieval with chain-of-thought reasoning for knowledgeintensive multi-step questions", "year": "2023" }, { "authors": "Dazhen Wan; Zheng Zhang; Qi Zhu; Lizi Liao; Minlie Huang", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "A unified dialogue user simulator for few-shot data augmentation", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed H Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b44", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin; Todor Mihaylov; Myle Ott; Sam Shleifer; Kurt Shuster; Daniel Simig; Punit Singh Koura; Anjali Sridhar; Tianlu Wang; Luke Zettlemoyer", "journal": "", "ref_id": "b45", "title": "Opt: Open pretrained transformer language models", "year": "2022" }, { "authors": "Zihan Zhang; Meng Fang; Ling Chen; Mohammad-Reza Namazi-Rad; Jun Wang", "journal": "", "ref_id": "b46", "title": "How do large language models capture the ever-changing world knowledge? a review of recent advances", "year": "2023" }, { "authors": "Ruochen Zhao; Xingxuan Li; Shafiq Joty; Chengwei Qin; Lidong Bing ; Chujie; Sahand Zheng; Jiaxin Sabour; Zheng Wen; Minlie Zhang; Huang", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "AugESC: Dialogue augmentation with large language models for emotional support conversation", "year": "2023" } ]
[]
10.18653/v1/2022.naacl-industry.24
2023-06-15
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b32", "b12", "b20", "b35", "b32", "b12", "b18", "b23", "b15", "b0", "b17", "b32", "b15", "b16", "b31" ], "table_ref": [], "text": "Information Extraction (IE) is the task of extracting structured information from unstructured text, usually in the form of triples <subject, relation, object>. It is essential for many Natural Language Processing applications such as knowledge base population, question answering, faithful summarisation, and fake news detection (Trisedya et al., 2019;Huguet Cabot and Navigli, 2021;Narayan et al., 2021;Whitehouse et al., 2022).\nTypically, two pieces of information are needed for training closed IE2 systems: (i) the entities mentioned in the text and (ii) the relations that exist between each pair of entities. Obtaining such information requires expensive annotations, therefore most existing IE datasets, such as WikiNRE (Trisedya et al., 2019) or REBEL (Huguet Cabot and Navigli, 2021), are built using Wikipedia, as entity information is available through hyperlinks and relation information can be automatically extracted via distant supervision (DS) approach (Mintz et al., 2009) using a knowledge base (KB) such as Wikidata. The DS approach assumes that if two entities are connected through a relation in a KB, then the sentences that mention both entities together express the relation.\nWhile models trained only on this fact-rich domain3 have shown to be useful for IE applications, they have limited capacity when applied to extracting information in other web domains, which often contains noisy text or text without any factual information. Take AllenAI's C4 dataset 4 , an open-sourced version of Google's C4 (Raffel et al., 2020) dataset based on Common Crawl, as an example. Our analysis using the DS approach reveals that less than 15% of the sentences contain triples ( §2.1), whereas we observe that a state-ofthe-art (SOTA) generative IE model, GenIE (Josifoski et al., 2022), which is trained on REBEL, the largest IE dataset to date (which includes only positive examples), tends to generate triples for every sentence, resulting in a high rate of false positives and issues with hallucination.\nTo address these issues and facilitate future work on IE on the web, we present WEBIE, the first large-scale, entity-linked closed IE dataset collected from web sources. The WEBIE dataset is Figure 1: Training strategies used in this paper. The blue and green text refer to mention span and its corresponding Wikipedia title (used as entity labels). For standard BART training, the target output is the linearised triples ( §3.1).\nFor ENTITY-PROMPT, the target is the EL output ( §3.2) concatenated with the linearised triples. In ARTIFICIAL-PROMPT, we prepend an artificial token to the input to indicate the desired output: EL (yellow box) or linearised triples. For 2LM-HEADS, we add an additional task-specific LM head to the decoder for the EL task (grey box).\ncollected from the 200 most frequent URL domains from the C4 dataset. First, we use ReFinED (Ayoola et al., 2022), a state-of-the-art Entity Linking (EL) model to identify mention spans of the entities and link them to Wikidata. We then apply the DS approach to extract triples and use a Natural Language Inference (NLI) model to filter out triples not expressed by the sentence. We also include negative examples, i.e., sentences without any factual information, to better reflect the data on the web. 
Our final dataset consists of 1.6M sentences, and we annotate a subset of ∼21K triples through crowdsourcing. The annotated set is exclusively used as part of the test set to allow more reliable evaluation. Finally, we introduce mWEBIE, which contains human-corrected translations of the annotated version of WEBIE in four languages: French, Spanish, Portuguese, and Hindi.\nPrevious works have shown that compared to discriminative pipelines which often suffer from accumulative errors due to separate Entity Linking and Relation Extraction (RE) steps (Mesquita et al., 2019;Trisedya et al., 2019;Josifoski et al., 2022), generative models achieve superior performance in many closed IE tasks. Therefore we primarily benchmark WEBIE with generative, transformerbased encoder-decoder models, BART (Lewis et al., 2020) and mBART (Tang et al., 2021). The latter is used to evaluate the zero-shot cross-lingual transfer performance on mWEBIE.\nWe further propose three training strategies ( §3.2) that use entity linking as an auxiliary task for generative IE, namely joint generation with the linked-entity prompt (ENTITY-PROMPT), multitask learning with distinguished artificial prompt tokens (ARTIFICIAL-PROMPT), and training with an additional task-specific language model (LM) head (2LM-HEADS). We find that training with EL as an auxiliary task overall leads to better and more faithful IE results. An illustration of these training strategies is provided in Figure 1.\nOur experiments show that compared to models trained only on Wikipedia datasets, models also trained on WEBIE are more robust and generalisable, achieving a new SOTA performance on REBEL ( §5) and competitive zero-shot performance on WikiNRE. We demonstrate that WE-BIE serves as a complementary dataset to existing datasets based on Wikipedia, and show that including negative examples is crucial for addressing false positives in generative IE.\nOur main contributions are as follows: (1) We present (m)WEBIE, the first large-scale, entitylinked IE dataset on the web, where a subset is further annotated by humans and translated into four other languages; (2) We propose and study the effectiveness of using entity linking as an auxiliary task for generative IE with various training strategies; (3) Our comprehensive experiments demonstrate that models trained on WEBIE exhibit better generalisability in Information Extraction on the web domain, including competitive zero-shot performance on IE tasks on Wikipedia. " }, { "figure_ref": [], "heading": "(m)WEBIE", "publication_ref": [], "table_ref": [], "text": "In this section, we provide a detailed explanation of the dataset collection process for (m)WEBIE." }, { "figure_ref": [], "heading": "Collecting WEBIE", "publication_ref": [ "b0", "b24", "b12", "b33", "b1", "b36", "b33" ], "table_ref": [ "tab_1", "tab_1" ], "text": "Data Preprocessing We start with the English portion of the AllenAI's C4 dataset and keep the most frequent 200 URL domains5 . We randomly sample 1M documents and use SpaCy6 for sentence segmentation. Sentences with fewer than 10 words are removed, resulting in ∼20M sentences.\nEntity Linking and DS Dataset Next, we run ReFinED (Ayoola et al., 2022), a state-of-the-art EL model on the sentences to identify entity spans and link them to their corresponding Wikidata ID.\nBesides named entities, ReFinED also extracts numerical entities that do not have Wikidata ID. 
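The Data Preprocessing step described above (SpaCy sentence segmentation, dropping sentences with fewer than 10 words) can be illustrated with a short sketch. This is not the authors' code; the spaCy model name and the exact word-counting rule are assumptions.

```python
import spacy

# Assumes a small English pipeline is installed: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def segment_and_filter(document_text, min_words=10):
    """Split a document into sentences and keep only those with at least `min_words` word tokens."""
    doc = nlp(document_text)
    kept = []
    for sent in doc.sents:
        n_words = sum(1 for tok in sent if not tok.is_punct and not tok.is_space)
        if n_words >= min_words:
            kept.append(sent.text.strip())
    return kept

sample = "Too short. This second sentence, on the other hand, clearly contains more than ten words overall."
print(segment_and_filter(sample))
```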
In this work, we only consider numerical entities that express dates, and map them to the corresponding year for simplicity 7 . Some examples of ReFinED processed output are included in Appendix B.\nAfter obtaining the entity-linked sentences, we apply the DS paradigm to retrieve the set of relations that exist between each pair of entities in each sentence using Wikidata (September 2022 dump) as our KB and build a DS dataset. After the above steps, we obtain WEBIE DS dataset consisting of 21.2M entities and 4.8M triples.\nEntailment Filtering One major drawback of the DS approach is that the triples extracted may or may not be expressed by the source sentence (Riedel et al., 2010). Following previous work on obtaining a cleaner version of the DS dataset (Huguet Cabot and Navigli, 2021;Vania et al., 2022), we apply an NLI model, nli-deberta-v3-large8 , that is trained on SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018), to filter out triples that do not entail the sentence.\nEach source sentence is treated as the premise and we use manually created templates (similar to Vania et al. (2022)) to convert a DS triple to one or more hypotheses. We then obtain the entailment probability score for each premise-hypothesis pair and take the maximum score for cases with multiple converted hypotheses. We set the threshold to be 0.7, similar to Huguet Cabot and Navigli (2021), and only keep triples with an entailment score above the threshold. We retain 2.1M triples (44% of the previous DS triples, see Table 1) after this filtering process.\nNegative Examples After the DS creation and NLI filtering steps, only less than 10% of the original sentences contain triples. To train models for extracting facts from the web and alleviate false positives, we include two kinds of negative examples in WEBIE: (i) sentences with one or zero entities, and (ii) sentences with two or more entities, but without any factual information (i.e., no relation between the entities). We randomly sample negative instances covering both cases evenly and add them to WEBIE. In the end, WEBIE consists of 1.6M sentences, where 50% are negative examples. A summary of the statistics of WEBIE with a comparison with other datasets is shown in Table 1. The dataset is randomly split into train/validation/test sets using a 90/5/5 split." }, { "figure_ref": [], "heading": "Human Annotation", "publication_ref": [], "table_ref": [], "text": "Existing IE datasets, such as REBEL, are often automatically annotated using the DS approach, hence the labels can be noisy. To allow more reliable evaluation of WEBIE, we randomly sample ∼21K triples from the most frequent 200 relations and annotate them with MTurk. Given a sentence, each HIT (Human Intelligence Task) is designed to verify if a DS triple is correctly expressed in the sentence 9 . First, the annotators are asked to verify if the head entity (subject) and tail entity (object) are linked correctly. For each entity, we provide its Wikipedia title and link to its Wikidata page as additional context. After that, the annotators are asked to verify if the triple relation is correctly inferred from the sentence. Here, we provide the relation descriptions and example use cases of each relation. We ask three MTurk workers to annotate each DS triple and take the majority vote as the final label for each triple. A triple is considered valid if both entities are linked to the correct Wikidata entities and the relation is inferred 10 by the sentence. 
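The entailment-filtering step above maps each DS triple to one or more templated hypotheses and keeps the triple only if the sentence entails one of them with probability at least 0.7. The sketch below shows one plausible realisation; the template text, the use of the sentence-transformers CrossEncoder wrapper, and the assumed label order (contradiction, entailment, neutral) for this checkpoint should be verified before reuse.

```python
from sentence_transformers import CrossEncoder
from scipy.special import softmax

# Cross-encoder NLI checkpoint named in the text; the output label order is assumed to be
# (contradiction, entailment, neutral), so index 1 is taken as the entailment probability.
nli = CrossEncoder("cross-encoder/nli-deberta-v3-large")

# Hypothetical relation-to-template mapping; the paper's actual templates are not reproduced here.
TEMPLATES = {"P17": ["{head} is located in the country {tail}."]}

def keep_triple(sentence, head, relation, tail, threshold=0.7):
    """Keep a DS triple only if some templated hypothesis is entailed by the sentence."""
    hypotheses = [t.format(head=head, tail=tail) for t in TEMPLATES[relation]]
    logits = nli.predict([(sentence, h) for h in hypotheses])   # shape: (n_hypotheses, 3)
    entail_prob = softmax(logits, axis=1)[:, 1].max()
    return float(entail_prob) >= threshold

print(keep_triple("The Eiffel Tower draws millions of visitors to Paris, France, every year.",
                  "Paris", "P17", "France"))
```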
An annotation interface is shown in Appendix C.\nTo ensure the annotation quality, we set qualifications with additional requirements for MTurk workers (see Appendix C for details). The agreement among the three annotators is high: 99.4% for the head entities, 99.2% for the tail entities, and 76.1% for the relations have all three annotators agreeing on the same label. After the majority vote, 92.1% of the triples are labelled as inferred and therefore kept as valid triples." }, { "figure_ref": [], "heading": "Multilingual WEBIE", "publication_ref": [], "table_ref": [], "text": "To enable zero-shot cross-lingual transfer evaluation on WEBIE, we further extend the annotated subset, with additional negative examples, to four other languages: French, Spanish, Portuguese, and Hindi. First, we use a neural machine translation model, the distilled 1.3B variant 11 of NLLB-200 (Costa-jussà et al., 2022), to translate the English sentences into the target languages. We then use MTurk to verify the translation and add entity span information in the translated sentences. We provide the English sentence (with the entity spans highlighted) and its translation, and first ask the annotators to correct the translation. After that, MTurk workers are asked to mark the corresponding entity spans in the target language. We ask two annotators to complete the aforementioned HIT, and an additional worker to select the better translation, which is used in our final dataset. To obtain translations with higher quality, we restrict the region of the workers to countries where the target language is the official language12 . The final mWEBIE consists of 9K instances in each language, which corresponds to roughly 90% of the 21K annotated triples.\n[Footnotes: 9 We ensure all DS triples in a selected sentence are annotated. 10 We ask for inferred instead of explicit expression since some relations may not be explicitly expressed in the sentence, e.g. \"located in\" (London, UK) or \"date of birth\" XX . 11 https://huggingface.co./facebook/nllb-200-distilled-1.3B]" }, { "figure_ref": [], "heading": "Generative Information Extraction", "publication_ref": [], "table_ref": [], "text": "This section describes the training strategies that we use for benchmarking (m)WEBIE." }, { "figure_ref": [], "heading": "Sentence-to-Triples Generation", "publication_ref": [ "b15", "b28" ], "table_ref": [], "text": "We use BART and mBART for all of our experiments. Given a sentence s as input, we train the model to autoregressively generate the linearised triples t as an output. Following the practice from Huguet Cabot and Navigli (2021) and Josifoski et al. (2022), we linearise a triple t i by converting it into \"<sub> head entity label <rel> relation <obj> tail entity label <et>\", where the tags in brackets represent subject, relation, object, and the end of triple, respectively. Head/tail entity label refers to the Wikipedia title that the mention span in the sentence is mapped to, which also has a one-to-one correspondence with the Wikidata ID 13 .\nFor each sentence, we order its linearised triples according to the order in which they appear in the input sentence; first by the order of the appearance of the head entity, and then by the order of the tail entity (for cases when the head entities are the same). The conditional probability of generating t is formulated as $p(t|s) = \prod_{i=0}^{N} p(t_i \mid t_{<i}, s)$. We use the standard cross-entropy loss and maximise the output sequence likelihood with teacher forcing (Sutskever et al., 2014). 
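The target-side linearisation of §3.1 (one "<sub> ... <rel> ... <obj> ... <et>" block per triple, ordered by where the head and then the tail mention first appear in the sentence) is easy to make concrete. The snippet below is a minimal sketch; the offset bookkeeping is an assumption about how the ordering could be implemented.

```python
def linearise(triples, head_offsets, tail_offsets):
    """Render triples as '<sub> head <rel> relation <obj> tail <et>', ordered first by the
    character offset of the head mention and then by the offset of the tail mention."""
    order = sorted(range(len(triples)), key=lambda i: (head_offsets[i], tail_offsets[i]))
    return " ".join(
        f"<sub> {triples[i][0]} <rel> {triples[i][1]} <obj> {triples[i][2]} <et>" for i in order
    )

# "Barack Obama was born in Honolulu, Hawaii."
triples = [("Honolulu", "located in the administrative territorial entity", "Hawaii"),
           ("Barack Obama", "place of birth", "Honolulu")]
print(linearise(triples, head_offsets=[25, 0], tail_offsets=[35, 25]))
```

A negative example (a sentence with no factual triples) simply maps to an empty target string.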
An example of input and output can be seen in the top left of Figure 1." }, { "figure_ref": [], "heading": "Entity-Linking as an Auxiliary Task", "publication_ref": [ "b20", "b19" ], "table_ref": [], "text": "The standard linearised triples output only contains the label of the entity and not the span. As a result, it may be difficult to trace back from which input span an entity is generated, especially in the case when the model hallucinates (e.g., by generating an entity that is not mentioned in the sentence). To encourage models to generate faithful and interpretable output, we also experiment with models that are jointly optimised for generating triples and EL. The goal of the EL task is to identify and extract entity spans from the input sentence and link them to their corresponding KB entities. We posit that adding the EL task as an additional training objective will teach the model to put attention to the input spans when generating the output. We experiment with the following three approaches. Narayan et al. (2021Narayan et al. ( , 2022) ) have shown that generation with entity-chain planning, i.e. generating the desired entities first before the actual output, is effective in improving the faithfulness and controlling hallucinations in text generation tasks such as abstractive summarisation. For generative IE tasks, EL can be used as an intermediate plan to ground the generation of the linearised triples. We define the Entity-Linking target in the format of \"Mention Span 1 # Entity Label 1 | Mention Span 2 # Entity Label 2 | ...\", where the entity spans are ordered as they appear in the text. We then prepend the EL target to the linearised triples target, using special symbols as separators, i.e., \"[ENTITY] Entity-Linking target [TRIPLE] Linearised Triples Target\", where \"[ENTITY]\" is the start symbol before generating the EL output, and \"[TRIPLE]\" is the start symbol before generating the linearised triples. Given an input sentence, we essentially train the decoder to first generate the EL chain and then generate the triples, conditioned on both the input sentence and the EL output14 ." }, { "figure_ref": [], "heading": "ENTITY-PROMPT", "publication_ref": [ "b14", "b34" ], "table_ref": [], "text": "ARTIFICIAL-PROMPT Artificial Prompt tokens are symbols placed in front of the input sequence, which has previously been explored in areas such as neural machine translation to distinguish the language of the target output translation (Johnson et al., 2017), visual question answering for joint answer and explanation generation (Whitehouse et al., 2023), etc. We adapt this approach for jointly training our models for Entity Linking and generative IE. Specifically, we use an artificial prompt token <#el#> at the beginning of the input sentence when training for the Entity-Linking target, and use <#tri#>15 for the linearised output target. Training instances for both tasks are mixed and randomly shuffled for training." }, { "figure_ref": [], "heading": "Inference with a Constraint Trie", "publication_ref": [ "b2", "b15", "b2" ], "table_ref": [], "text": "In addition to standard beam search decoding, we experiment with constraint decoding by restricting the generated output to be valid Wikipedia titles and Wikidata relations using a prefix Trie, following the ideas proposed in GENRE (Cao et al., 2021) and GenIE (Josifoski et al., 2022). We use two constraint Tries: an entity Trie and a relation Trie. 
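The two prompt-based variants described above can be sketched directly from the formats given in the text: ENTITY-PROMPT prepends an entity-linking plan of the form "Mention Span # Entity Label | ..." behind an [ENTITY] tag, and ARTIFICIAL-PROMPT marks the desired output with <#el#> or <#tri#> on the input side. The helper names below are mine, not the authors'.

```python
def entity_prompt_target(mentions, linearised_triples):
    """ENTITY-PROMPT: the decoder first generates the entity-linking chain, then the triples."""
    el_chain = " | ".join(f"{span} # {label}" for span, label in mentions)
    return f"[ENTITY] {el_chain} [TRIPLE] {linearised_triples}"

def artificial_prompt_input(sentence, task):
    """ARTIFICIAL-PROMPT: a task token on the input side selects the EL or triple target."""
    token = {"el": "<#el#>", "triples": "<#tri#>"}[task]
    return f"{token} {sentence}"

mentions = [("Obama", "Barack Obama"), ("Honolulu", "Honolulu")]
print(entity_prompt_target(mentions, "<sub> Barack Obama <rel> place of birth <obj> Honolulu <et>"))
print(artificial_prompt_input("Obama was born in Honolulu.", "el"))
```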
The entity Trie is built using all Wikipedia titles (as the entity labels), and the relation Trie is built using all Wikidata relation property labels. We refer the readers to Cao et al. (2021) for more details on constructing the Trie.\nWe use four special symbols, <sub>, <rel>, <obj> and <et> to define the state of the generation. We apply both constraint Tries as follows. We adopt the constraint Trie so that, in the very first decoding state, the model is allowed to either (i) return an empty string for a negative example, or (ii) generate <sub>, which is the start symbol for generating a triple. If the <sub> symbol is generated, then we generate the head entity using the entity Trie, i.e., only valid entities will be considered. Once the generation of the head entity is completed, the model proceeds to generate <rel> (i.e., the start symbol for generating relation string) and then subsequently generate allowed tokens from the relation Trie which is built from the relations in Wikidata. After that, the model generates <obj> and the tail entity, in the same manner, using the entity Trie. After generating the full triple (indicated by <et> generated after the tail entity), the decoder can either stop the generation or start a new iteration for generating the next triple.\nFor the ENTITY-PROMPT models, since the en- tity mention spans are text from the input sentences and usually are not the same as the entity labels in Wikidata, we propose a partial constraint generation approach. Specifically, we start the standard beam search for the EL target output and only activate the Trie constraints after that when generating the linearised triples." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we explain the datasets used in the experiments and the detailed modelling setup." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b32", "b4", "b8", "b10", "b12", "b15" ], "table_ref": [], "text": "In addition to our proposed WEBIE dataset, we also use the following datasets for our experiments.\nWikiNRE (Trisedya et al., 2019) is an IE dataset based on Wikipedia which is automatically constructed by aligning Wikipedia sentences to Wikidata triples using the DS approach. The authors apply a coreference resolution model (Clark and Manning, 2016) to obtain sentences with implicit entity names, and use a paraphrase detection model (Ganitkevitch et al., 2013;Grycner and Weikum, 2016) to filter out sentences that do not express the DS triples. In our experiments, we only use WikiNRE for zero-shot evaluation.\nREBEL (Huguet Cabot and Navigli, 2021) is a large-scale IE dataset constructed automatically from Wikipedia abstracts. Using the Wikipedia hyperlinks in the abstracts, as well as numerical values and dates, they map the entity spans to their corresponding Wikidata entities. They then use the DS approach to identify triples in each sentence.\nTo filter out false positives, the authors use an NLI model by concatenating the entities and the relation as the hypothesis. In our experiment, we use the REBEL dataset that is sub-sampled by Josifoski et al. (2022), where 857 relations are considered. Both WikiNRE and REBEL do not contain negative examples and are not annotated by humans." }, { "figure_ref": [], "heading": "Models", "publication_ref": [ "b15" ], "table_ref": [], "text": "We experiment with BART using two settings: BART PLM with the pre-trained weights from (R+W). 
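The constraint decoding described above relies on prefix tries over all Wikipedia titles and all Wikidata relation labels: at each step the beam search may only continue with tokens that keep the partial output inside the trie. The sketch below uses whole words instead of subword-token ids purely for readability; it illustrates the data structure, not the GenIE/WEBIE implementation.

```python
class PrefixTrie:
    """Minimal prefix trie over token sequences, with a lookup of the tokens allowed next."""
    def __init__(self, sequences):
        self.root = {}
        for seq in sequences:
            node = self.root
            for tok in seq:
                node = node.setdefault(tok, {})
            node["<end>"] = {}   # marks a complete entity/relation label

    def allowed(self, prefix):
        node = self.root
        for tok in prefix:
            if tok not in node:
                return set()
            node = node[tok]
        return set(node.keys())

# In practice the sequences are the subword ids of every Wikipedia title (entity trie) and
# every Wikidata relation label (relation trie).
entity_trie = PrefixTrie([["Barack", "Obama"], ["Barack", "Obama", "Sr."], ["Honolulu"]])
print(entity_trie.allowed(["Barack"]))           # {'Obama'}
print(entity_trie.allowed(["Barack", "Obama"]))  # {'Sr.', '<end>'}
```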
We evaluate the performance of the generated triples by parsing the linearised output to a list of triples and comparing it to the gold label to calculate precision, recall, and F1 scores. For WEBIE, we also calculate the accuracy of the prediction of negative instances, where a prediction is considered correct if the model accurately generates empty strings rather than hallucinating triples.\nFor training with EL as an auxiliary task, we primarily experiment with the BART RAND . We prepare the training instances as described in §3.2, and train separate models on REBEL and on WEBIE. For the 2LM-HEADS, we conduct experiments with different values of the α parameter in the combined loss function, specifically, we set it to 0.5 and 0.75.\nWe use 8 GPUs, each with 32G VRAM, for all experiments. We set the batch size to 8 and accumulate gradient batches to 32. We follow the hyperparameters settings from Josifoski et al. (2022) and set the learning rate to 3e -5 , weight decay to 0.01, and warmup steps to 5K18 . We train for up to 30 epochs with early stopping (patience 10), validate twice per epoch, and take the last checkpoint for evaluation. Training one epoch takes ∼1.5 hours for BART and ∼2 hours for mBART." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "We now present the main results of (m)WEBIE and compare different training strategies." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b15", "b15" ], "table_ref": [ "tab_2" ], "text": "Table 2 shows our benchmarking results on WE-BIE. We report results with the constraint Trie in decoding since it overall achieves better results 19 . Contrary to the findings from Josifoski et al. (2022), we find that BART models with pre-trained weights are better than initialised weights. Constraint Trie decoding benefits REBEL, WikiNRE, and the recall performance of WEBIE, but may compromise the precision since the models are also trained to handle negative examples.\nModels trained on both REBEL and WEBIE (R+W) obtain overall better F1 scores on the two datasets compared to models trained on each dataset separately. Similar performance can also be observed in the zero-shot performance on Wik-iNRE. Models trained solely on the REBEL dataset (Wikipedia-domain) show poor generalisability on WEBIE20 and always generate false positives thus resulting in 0% accuracy for negative instances in WEBIE. This indicates that Wikipedia-domain data only is not adequate for training robust models for the web, and the absence of negative examples in these datasets leads to a prominent issue of hallucination when applied to the web.\nBART PLM (R+W) also achieves a new state-ofthe-art F1 score of 71.87 on REBEL, surpassing the performance of 68.93 from GenIE (Josifoski et al., 2022) and 70.74 from KnowGL (Rossiello et al., 2023), the latter of which trains with additional information including entity type. The results demonstrate the benefit of WEBIE, which contributes to the generalisability of the models." }, { "figure_ref": [], "heading": "Cross-lingual Transfer with mBART", "publication_ref": [], "table_ref": [ "tab_3", "tab_2" ], "text": "We train mBART on the training set of WEBIE and evaluate the zero-shot cross-lingual transfer on mWEBIE. Similar to prior experiments, results in Table 3 show that constraint Trie decoding obtains higher performance than standard decoding21 .\nFor English, mBART achieves higher overall performance than BART PLM (see Table 2). 
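The evaluation described above (parse the generated linearised output back into triples, then score precision, recall and F1 against the gold triples, with an empty generation counting as a predicted negative) can be sketched as follows. The regular expression and the micro-averaging are my reading of the text, not the authors' exact scorer.

```python
import re

TRIPLE_RE = re.compile(r"<sub>\s*(.*?)\s*<rel>\s*(.*?)\s*<obj>\s*(.*?)\s*<et>")

def parse_triples(linearised):
    """An empty string (a predicted negative example) yields no triples."""
    return {match for match in TRIPLE_RE.findall(linearised)}

def micro_prf(predictions, references):
    tp = fp = fn = 0
    for pred, gold in zip(predictions, references):
        p, g = parse_triples(pred), parse_triples(gold)
        tp, fp, fn = tp + len(p & g), fp + len(p - g), fn + len(g - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

preds = ["<sub> Paris <rel> country <obj> France <et>", ""]
golds = ["<sub> Paris <rel> country <obj> France <et> <sub> Paris <rel> capital of <obj> France <et>", ""]
print(micro_prf(preds, golds))   # (1.0, 0.5, 0.666...)
```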
The zero-shot results reveal that Hindi has a significant decline in performance compared to the other three non-English languages, French, Spanish, and Portuguese. Since these three languages utilise the Latin script as in English, which may result in an overlap of entity surface forms. In contrast, the transfer is more difficult for Hindi as it employs a different writing system. Manual analysis indicates that mBART tends to produce a high rate of false negatives in Hindi examples, where the correct extraction mostly occurs when the entities in the sentences share similar surface forms with the English counterparts." }, { "figure_ref": [], "heading": "Results with Additional EL Training", "publication_ref": [ "b11", "b7", "b39", "b38", "b26", "b40", "b27", "b22", "b3", "b37", "b13", "b6", "b21" ], "table_ref": [ "tab_4" ], "text": "Table 4 shows the results of training with Entity-Linking as an auxiliary task. For REBEL, the best results are achieved with the 2LM-HEADS approach, where the α parameter is set to 0.75. For WEBIE with negative examples, all EL training models achieve better F1 performance than BART RAND , with ENTITY-PROMPT particularly resulting in better recall. This shows the benefit of joint training with EL to improve the faithfulness of web domain data. ARTIFICIAL-PROMPT achieves the best precision in WEBIE but does not show significant differences in performance compared to BART RAND . Nevertheless, all three approaches provide better interpretability, i.e., the information of the mention spans in the text that contributes to the IE prediction.\nENTITY-PROMPT and ARTIFICIAL-PROMPT do not require additional architectural adaptation over the standard model. ENTITY-PROMPT also does not introduce training overhead, whereas the other two models may require twice the training time. 2LM-HEADS offers the flexibility of adapting the weighted combination of the main task and the auxiliary task by adjusting α in the joint loss formula, which allows more emphasis on the main target.\n6 Related Work IE Datasets The term Information Extraction has been used for different tasks in the literature. Most existing IE datasets are collected from Wikipedia articles aligned with Wikidata, including sentence-level IE datasets such as REBEL, Wik-iNRE, FewRel (Han et al., 2018), T-REx (Elsahar et al., 2018); document-level Relation Extraction22 datasets, e.g., DocRED (Yao et al., 2019), CodRED (Yao et al., 2021). SMiLER (Seganti et al., 2021) is a multilingual sentence-level IE dataset that is also based on Wikipedia, covering 14 languages and 36 relations. These sentence-level IE datasets typically do not contain negative examples.\nDatasets such as TACRED (Zhang et al., 2017), RE-TACRED (Stoica et al., 2021), and WebRED (Ormandi et al., 2021) have negative relation examples but they are not linked to knowledge bases. Our proposed dataset WEBIE is distinct from the existing datasets in that it is on the web domain, entity-linked, and with negative examples.\nIE Approaches IE approaches can be classified into two categories: pipeline systems with discriminative models, and sequence-to-sequence systems with generative models. Pipeline models typically include separate modules for Named Entity Recognition (NER), Entity Linking and Relation Extraction (Chaganty et al., 2017;Yamada et al., 2020). 
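The dump does not reproduce the 2LM-HEADS joint loss formula that the α parameter refers to, so the weighting below (α times the triple-generation loss plus (1 - α) times the entity-linking loss, one cross-entropy term per LM head) is an assumption about one standard way to combine the two objectives; it is a sketch, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def joint_loss(triple_logits, triple_labels, el_logits, el_labels, alpha=0.75, pad_id=-100):
    """Weighted combination of the main (triple) loss and the auxiliary (entity-linking) loss
    for a decoder with two task-specific LM heads; alpha=0.75 emphasises the main target."""
    loss_triples = F.cross_entropy(triple_logits.flatten(0, 1), triple_labels.flatten(),
                                   ignore_index=pad_id)
    loss_el = F.cross_entropy(el_logits.flatten(0, 1), el_labels.flatten(),
                              ignore_index=pad_id)
    return alpha * loss_triples + (1.0 - alpha) * loss_el

# Toy shapes: batch of 2, target length 5, vocabulary of 11 tokens.
triple_logits, el_logits = torch.randn(2, 5, 11), torch.randn(2, 5, 11)
labels = torch.randint(0, 11, (2, 5))
print(joint_loss(triple_logits, labels, el_logits, labels).item())
```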
Systems that jointly train NER, EL, and RE, have also been explored, taking advantage of the information shared among the tasks (Ji et al., 2020;Eberts and Ulges, 2020).\nIn recent years, generative IE has gained a lot of attention. Nayak and Ng (2020) utilise an LTSM model and propose a pointer network-based decoding. More recent approaches, e.g. as introduced in REBEL and GenIE, train a transformer-based encoder-decoder model with standard maximumlikelihood objectives to convert sentences to linearised output. KnowGL (Rossiello et al., 2023) improves upon REBEL with additional entity type information added to the linearised output. Our work extends GenIE and experiments with three different approaches where we incorporate explicit EL information as an auxiliary task with adapted constraint Trie decoding." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "We present (m)WEBIE, the first large-scale, entitylinked closed IE dataset on the web. A subset of the dataset is further annotated by humans and translated into four other languages, French, Spanish, Portuguese, and Hindi, via crowdsourcing.\nWe benchmark WEBIE with generative models and compare the models trained on WEBIE and REBEL (Wikipedia-domain). Our results show that models trained on WEBIE have competitive zero-shot performance when applied to REBEL and WikiNRE, whereas models trained only on REBEL have 0% accuracy on the negative examples in WEBIE. This highlights the importance of including negative examples for training more robust models and reducing hallucination in generative IE on the web. Models trained on both REBEL and WEBIE achieve the best performance on both datasets, as well as zero-shot results on WikiNRE, showing that WEBIE serves as a complementary dataset to existing Wikipedia-domain datasets.\nInvestigating the approaches with Entity Linking as an auxiliary task, we find that adding an additional task-specific LM head achieves the overall best performance for REBEL, and the ENTITY-PROMPT approach shows the most significant improvement on WEBIE, particularly benefiting recall. We primarily benchmark transformer-based encoder-decoder models on WEBIE, but future work could also explore pipeline frameworks and larger language models for few-shot performance." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b30" ], "table_ref": [], "text": "We identify several limitations in this work: (i) False negatives. Our current automatic triple extraction pipeline is built using the DS approach followed by filtering using an NLI model. However, Wikidata is not complete (Tan et al., 2022). While some triples may not be completely available in WEBIE, we expect models trained on this dataset can still discover new triples that do not exist in Wikidata. (ii) Limited relations in annotation. The human annotation is only conducted on the most frequent 200 relations. (iii) Limited languages in mWEBIE. As discussed in §2.3 and Appendix C, the languages in mWEBIE are limited to official languages from geographical regions where there is a reasonable amount of MTurk workers to accept the job. An alternative solution would be to use professional translators, especially for low-resource languages. (iv) Fixed dataset. Facts might change in the world (and Wikidata). This can lead to a degraded real-world performance if a system relies exclusively on WebIE for evaluation when the dataset is not updated accordingly." 
}, { "figure_ref": [], "heading": "A Additional Results", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "We show the full results in Table 5 for BART RAND and BART PLM trained on REBEL and WEBIE, using both beam search with and without constraint Trie decoding.\nWe show in Table 6 the results for non-English languages for mWEBIE when specifying the source language and using the default (English) for the mBART tokenizer. These results are from beam search without constraint Trie. We can see that specifying the source language mostly harms the performance (except French), especially for Portuguese. We hypothesise that due to the model being trained solely on English as the source token, mBART may have difficulty handling other languages. Table 6: Comparison of the zero-shot performance on mWEBIE with mBART when specifying the source language (XX) and keeping the default setting as the source language (EN). Results are with standard beam search (without the constraint Trie)." }, { "figure_ref": [], "heading": "B Examples of ReFinED Output", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "We show examples of the sentences processed by ReFinED in Table 7.\nFor each input sentence, ReFinED identifies the set of entities in that sentence, and outputs mention span, Wikidata id, and Wikipedia title for each entity. For our experiments, we use the wikipedia_model_with_numbers model with wikipedia entity set." }, { "figure_ref": [], "heading": "C MTurk Annotation Details", "publication_ref": [], "table_ref": [], "text": "In this section, we describe the detailed settings for annotating (m)WEBIEwith MTurk." }, { "figure_ref": [], "heading": "C.1 WEBIE", "publication_ref": [], "table_ref": [], "text": "The first annotation task (HIT) is to verify the correctness of the triples automatically created from the DS approach and filtered by the NLI model. In each HIT, we provide a sentence with its entities highlighted (head entity in blue and tail entity in green) and the URL of the web page which the sentence is extracted from. For the first EL annotation job, we provide both links to the Wikipedia and Wikidata pages. Annotators are asked to choose if the highlighted spans are linked correctly to the KB. Next, the annotators are asked to verify if a relation (highlighted in orange) can be inferred from the sentence. We provide the description of the relation and an example use case to facilitate the annotation.\nEach triple is annotated by three workers, and we pay $0.2 per HIT. We hire MTurk workers with Masters Qualification and set additional requirements including (i) having done 2,000 HITs and (ii) having a job approval rate ≥99%." }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "C.2 mWEBIE", "publication_ref": [], "table_ref": [], "text": "Figure 4 and Figure 5 illustrates the interface for correcting machine-translated sentence and identifying corresponding entities in them. As it is challenging to find qualified crowd workers for the translation task23 , we set the geographical regions for each language to the countries where the language is one of the official languages. We find that and countries in America have an adequate number of MTurk workers, which highly restricts the options for our target languages. In the end, the countries we set for the target languages are as follows: Portuguese: AO, BR, CV, ST, GW, GQ, MZ; Spanish: ES, MX, CO, PE, CL; CA for French, and IN for Hindi 24 . 
It was also necessary to remove the Masters Qualification requirement for MTurk workers (except Hindi) to find adequate annotators. We then conduct pilot annotations, where we deliberately introduce errors in the reference machine translation to verify if the workers under our requirement settings are able to correct them. We provide the English sentence paired with the original machine-translated sentence for the actual HIT. The English sentence is highlighted with its entity spans, and we instruct the workers to correct the translation while ensuring that the entities are correctly translated. After confirming the translation, workers are then asked to highlight the corresponding entities in the target language (in green). For negative sentences without entity spans, the longest noun phrases were highlighted instead to prevent workers from simply copying the reference translations. We pay $0.35 per HIT for positive sentences and $0.25 for negative sentences (since most sentences in negative examples have only one highlighted entity/noun phrase and it is 24 For the mapping between country codes and countries, please refer to https://docs.aws.amazon. com/AWSMechTurk/latest/AWSMturkAPI/ ApiReference_LocaleDataStructureArticle. html considered an easier task).\nTwo MTurk workers are asked for the translation task, and an additional worker was asked to select the better translation, for which $0.10 per HIT was paid." }, { "figure_ref": [], "heading": "D Domains in WEBIE", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "The 200 URL domains included in WEBIE are shown in Table 8." }, { "figure_ref": [], "heading": "E Relations in the Annotated Set", "publication_ref": [], "table_ref": [], "text": "Table 9 shows the details of the 200 relations that are covered in the human-annotated set of WEBIE. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank Jens Lehmann for the helpful feedback on the paper draft, and Balkarn Hayre for helping with the MTurk experiments. We also thank the anonymous reviewers for their valuable comments that improved the paper." 
}, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "www.nytimes.com www.latimes.com www.theguardian.com www.businessinsider.com www.forbes.com www.chicagotribune.com www.foxnews.com www.aljazeera.com www.dailymail.co.uk www.express.co.uk www.cnet.com www.telegraph.co.uk www.rt.com www.zdnet.com www.foxbusiness.com www.reuters.com www.ibtimes.co.uk www.washingtonpost.com www.si.com www.bbc.com news.bbc.co.uk nypost.com www.marketwired.com www.baltimoresun.com www.npr.org www.fool.com www.bbc.co.uk mashable.com www.cnbc.com www.hindustantimes.com www.csmonitor.com www.yahoo.com www.thesun.co.uk www.nydailynews.com www.dailystar.co.uk www.kickstarter.com uk.reuters.com www.inquisitr.com www.straitstimes.com www.cbsnews.com deadline.com www.androidheadlines.com www.wired.com www.bustle.com www.pcworld.com www.fastcompany.com www.firstpost.com www.entrepreneur.com www.breitbart.com techcrunch.com www.nme.com www.ndtv.com finance.yahoo.com www.lonelyplanet.com www.ign.com www.barnesandnoble.com www.usatoday.com www.timeout.com apnews.com www.thisisinsider.com metro.co.uk gizmodo.com www.sacbee.com economictimes.indiatimes.com www.buzzfeed.com www.miamiherald.com www.espn.com www.washingtontimes.com www.pbs.org thenextweb.com www.aol.com timesofindia.indiatimes.com www.cbc.ca kotaku.com www.irishtimes.com www.military.com www.startribune.com www.deccanherald.com www.techradar.com www.thestar.com www.techrepublic.com slate.com www.pcmag.com www.hollywoodreporter.com www.marketwatch.com www.slideshare.net www.etonline.com in.reuters.com variety.com www.sfgate.com indianexpress.com www.abc.net.au theconversation.com www.eurekalert.org mic.com www.blogtalkradio.com www.thenation.com www.prnewswire.com www.barrons.com www.apnews.com www.newsmax.com www.theatlantic.com www.huffpost.com patents.google.com www.eventbrite.com link.springer.com www.ncbi.nlm.nih.gov www.prweb.com www.deviantart.com www.instructables.com www.booking.com www.etsy.com sites.google.com www.agreatertown.com lists.w3.org disneyparksmomspanel.disney.go.com homestars.com www.reference.com www.city-data.com app-wiringdiagram.herokuapp.com www.adweek.com docs.microsoft.com fineartamerica.com www.insiderpages.com lists.debian.org premium.wpmudev.org www.librarything.com mail-archives.apache.org scholars.duke.edu www.glassdoor.com www.shutterstock.com myemail.constantcontact.com www.eventbrite.co.uk archives.lib.state.ma.us www.gsmarena.com www.audible.com www.hotels.com www.statista.com www.alibaba.com lists.gnu.org ipfs.io www.socialbakers.com www.weddingwire.com rd.springer.com appadvice.com www.complex.com zapier.com www.foodnetwork.com www.kijiji.ca www.salon.com www.semanticscholar.org hubpages.com www.scribd.com www.cinemablend.com w3techs.com www.urbandictionary.com www.salespider.com www.angieslist.com stackoverflow.com www.dictionary.com www.zocdoc.com wordpress.org www.pcgamer.com www.chamberofcommerce.com www.worldcat.org s3.amazonaws.com www.tweaktown.com chroniclingamerica.loc.gov www.agoda.com www.showmelocal.com www.refinery29.com www.businessinsider.com.au www.healthgrades.com store.cdbaby.com oppositelock.kinja.com www.bedbathandbeyond.com www.radionz.co.nz www.ebay.com downloads.zdnet.com www.stitcher.com www.thestreet.com github.com www.youtube.com www.oreilly.com itunes.apple.com medium.com www.tripadvisor.com www.imdb.com forums.newtek.com forums.macrumors.com answers.sap.com forum.duolingo.com community.esri.com en.wikipedia.org en.m.wikipedia.org 
encyclopedia2.thefreedictionary.com simple.wikipedia.org www.encyclopedia.com www.britannica.com www.questia.com" } ]
Extracting structured and grounded fact triples from raw text is a fundamental task in Information Extraction (IE). Existing IE datasets are typically collected from Wikipedia articles, using hyperlinks to link entities to the Wikidata knowledge base. However, models trained only on Wikipedia have limitations when applied to web domains, which often contain noisy text or text that does not have any factual information. We present WEBIE, the first large-scale, entity-linked closed IE dataset consisting of 1.6M sentences automatically collected from the English Common Crawl corpus. WEBIE also includes negative examples, i.e. sentences without fact triples, to better reflect the data on the web. We annotate ∼21K triples from WEBIE through crowdsourcing and introduce mWEBIE, a translation of the annotated set in four other languages: French, Spanish, Portuguese, and Hindi. We evaluate the in-domain, out-of-domain, and zero-shot cross-lingual performance of generative IE models and find models trained on WEBIE show better generalisability. We also propose three training strategies that use entity linking as an auxiliary task. Our experiments show that adding Entity-Linking objectives improves the faithfulness of our generative IE models 1 .
WEBIE: Faithful and Robust Information Extraction on the Web
[ { "figure_caption": "The guidance and the interface are shown in Figure 2 and Figure 3, respectively.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: MTurk HIT guidance entity and relation labelling.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: MTurk HIT user interface for entity and relation labelling.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: MTurk HIT user interface for correcting the machine-translated text.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: MTurk HIT user interface for entity labelling in the target language.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Statistics of WEBIE and comparison with other sentence-level RE (top two rows) and IE datasets. We report the publicly available version of WebRED. † shows the number of examples in each split. ‡ corresponds to the number of URL domains. Annotated Triples show the number of human-annotated triples.", "figure_data": "DATASET DomainsEntityRelationSentencesTrain †Validation †Test †TriplesAnnotatedNegativeLanguagesLinkedTypesTriplesInstances(Test Set)TACREDWeb✗42106,26468,12422,631 15,509106,264106,26479.5%1WEBRED Web (120 ‡ )✗523117,717---117,717117,71765%1WIKINRE Wikipedia✓158255,654224,881988 29,785330,005001REBELWikipedia✓1146 3,059,894 2,754,387152,672 152,835 10,311,293001WEBIEWeb (200 ‡ )✓661 1,649,167 1,480,22382,914 86,030 1,905,20521,11350%5", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Experiment results with constraint Trie. BART RAND corresponds to models with BART configuration but randomly initialised weights. BART PLM are models with pretrained weights fromLewis et al. (2020). (R), (W), (R+W) refer to models trained on REBEL, WEBIE, and both datasets, respectively. For WEBIE we show the overall performance and the accuracy on negative samples. The blue shade indicates zero-shot performance.", "figure_data": "MODELWEBIE (ALL TEST) Precision Recall F1 Acc.-Neg. Precision Recall F1 Acc.-Neg. Precision Recall F1 Precision Recall F1 WEBIE (ANNO. TEST) REBEL WIKI-NREBART RAND (R)11.9318.91 14.630.00 11.8215.63 13.460.00 66.8970.37 68.58 27.61 66.73 39.06BART PLM (R)15.2439.30 21.960.00 15.9834.92 21.930.00 66.2876.78 71.14 25.39 77.45 38.24BART RAND (W)55.4757.25 56.3590.07 52.9546.60 49.5795.04 27.4723.13 25.12 18.98 43.75 26.48BART PLM (W)57.9274.19 64.9187.99 57.0065.91 61.1394.18 35.8143.00 39.08 24.30 78.01 37.06BART RAND (R+W) 52.7964.15 57.9287.45 51.8954.28 53.0693.71 66.8772.24 69.45 29.02 82.35 42.91BART PLM (R+W)54.6378.43 64.4076.43 55.2271.25 62.2282.59 66.4278.29 71.87 29.25 86.38 43.70LANGUAGEUNCONSTRAINED DECODER Precision Recall F1 Empty-Pos.% Accuracy-Neg. 
Precision RecallCONSTRAINT TRIE F1 Empty-Pos.% Accuracy-Neg.ENGLISH57.72 61.26 59.432.4895.6960.29 64.29 62.222.6396.11FRENCH43.27 36.13 39.3811.8996.1946.52 40.26 43.1612.6396.64SPANISH41.93 34.63 37.9312.3496.7445.13 38.89 41.7812.8096.97PORTUGUESE41.17 32.37 36.2414.0796.9144.15 36.61 40.0214.8297.22HINDI4.281.62 2.3567.3898.644.231.67 2.4067.5598.64", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison of various training with entity linking as an auxiliary task, and beam search with and without constraint Trie decoding. WEBIE results are on the annotated test set. All models use BART configuration with randomly initialised weights. We show in bold the best F1 scores among the training objectives.", "figure_data": "Lewis", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Additional results using beam search with and without constraint Trie for each dataset. Results in blue shades are zero-shot performance.", "figure_data": "MODELWEBIE (ALL TEST) Precision Recall F1 Acc.-Neg. Precision Recall F1 Acc.-Neg. Precision Recall F1 Precision Recall F1 WEBIE (ANNO. TEST) REBEL WIKI-NREBART RAND (R)10.8316.00 12.920.00 10.7013.26 11.840.00 64.3467.90 66.07 15.83 52.09 24.28UNCONSTRAINEDBART PLM (R) BART RAND (W) BART PLM (W) BART RAND (R+W) 51.34 17.58 55.06 54.81 BART PLM (R+W) 53.0434.20 23.23 54.90 54.98 70.29 61.59 61.22 55.85 75.29 62.232.28 17.95 89.67 51.64 87.59 53.40 86.80 49.64 76.66 53.1830.02 22.47 44.46 47.78 62.36 57.53 51.62 50.61 68.41 59.841.97 63.83 94.74 22.45 93.58 28.05 93.15 64.38 82.96 63.4976.66 69.66 18.34 65.04 28.62 20.42 21.39 10.95 31.49 16.25 37.28 32.01 15.55 60.45 24.73 69.57 66.87 17.68 65.96 27.89 75.30 68.89 18.93 73.52 30.11CONSTRAINT TRIEBART RAND (R) BART PLM (R) BART RAND (W) BART PLM (W) BART RAND (R+W) 52.79 11.93 15.24 55.47 57.92 BART PLM (R+W) 54.6318.91 14.63 39.30 21.96 57.25 56.35 74.19 64.91 64.15 57.92 78.43 64.400.00 11.82 0.00 15.98 90.07 52.95 87.99 57.00 87.45 51.89 76.43 55.2215.63 13.46 34.92 21.93 46.60 49.57 65.91 61.13 54.28 53.06 71.25 62.220.00 66.89 0.00 66.28 95.04 27.47 94.18 35.81 93.71 66.87 82.59 66.4270.37 68.58 27.61 66.73 39.06 76.78 71.14 25.39 77.45 38.24 23.13 25.12 18.98 43.75 26.48 43.00 39.08 24.30 78.01 37.06 72.24 69.45 29.02 82.35 42.91 78.29 71.87 29.25 86.38 43.70LANGUAGEEN as Source Language in mBART Tokenizer Precision Recall F1 Empty-Pos.% Accuracy-Neg. Precision Recall XX as Source Language in mBART Tokenizer F1 Empty-Pos.% Accuracy-Neg.FRENCH43.27 36.13 39.3811.8996.1941.29 37.73 39.438.5694.87SPANISH41.93 34.63 37.9312.3496.7440.47 36.57 38.428.5695.82PORTUGUESE41.17 32.37 36.2414.0796.9113.811.77 3.1486.3398.21HINDI4.281.62 2.3567.3898.643.691.69 2.3160.6298.43", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "ReFinED outputs on WEBIE validation examples.", "figure_data": "", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "URL domains of the sentences included in WEBIE. located on the territory of the following administrative entity. Use P276 for specifying locations that are non-administrative places and for items about events. Use P1382 if the item falls only partially into the administrative entity. in which a place is located or is part of. 
If a municipality or county is split into or part of several regions: add several values 73 P2032 work period (end) end of period during which a person or group flourished in their professional activity 73 P3842 located in the presentday administrative territorial entity the item was located in the territory of this present-day administrative unit; however the two did not at any point coexist in time , resting place, place of ash-scattering, etc. (e.g., town/city or cemetery) for a person or animal. There may be several places: e.g., re-burials, parts of body buried separately.", "figure_data": "COUNT PIDRELATIONDESCRIPTION1359P17countrysovereign state of this item (not to be used for human beings)910 the item is 776 P131 located in the administra-tive territorial entity P530 diplomatic relation diplomatic relations of the country684P47shares border withcountries or administrative subdivisions, of equal level, that this item borders, either by land orwater. A single common point is enough.655P27country of citizenshipthe object is a country that recognizes the subject as its citizen588P161cast memberactor in the subject production .use \"character role\" (P453) and/or \"name of the character role\"(P4633) as qualifiers, use \"voice actor\" (P725) for voice-only role580P577publication datedate or point in time when a work was first published or released546P527has part(s)part of this subject480P54member of sports teamsports teams or clubs that the subject represents or represented438P800notable worknotable scientific, artistic or literary work, or other work of significance among subject's works437P463member oforganization, club or musical group to which the subject belongs. Do not use for membershipin ethnic or social groups, nor for holding a political position, such as a member of parliament(use P39 for that).430P108employerperson or organization for which the subject works or worked426P127owned byowner of the subject400P361part ofobject of which the subject is a part (if this subject is already part of object A which is a part ofobject B, then please only make the subject part of object A)378P1830 owner ofentities owned by the subject370P102member of political party the political party of which a person is or has been a member or otherwise affiliated364P150contains the administra-(list of) direct subdivisions of an administrative territorial entitytive territorial entity359P749parent organizationparent organization of an organization, opposite of subsidiaries340P178developerorganization or person that developed the item314P159headquarters locationcity, where an organization's headquarters is or has been situated. 
Use (P276) qualifier forspecific building310P57directordirector(s) of film, TV-series, stageplay, video game or similar299P118leagueleague in which team or player plays or has played in297P1376 capital ofcountry, state, department, canton or other administrative division of which the municipality isthe governmental seat296P449original broadcasternetwork(s) or service(s) that originally broadcast a radio or television program293P36capitalseat of government of a country, province, state or other type of administrative territorial entity285P2936 language usedlanguage widely used (spoken or written) in this place or at this event280P355has subsidiarysubsidiary of a company or organization; generally a fully owned separate corporation279P175performeractor, musician, band or other performer associated with this role or musical work267P166award receivedaward or recognition received by a person, organization or creative work267P569date of birthdate on which the subject was born262P641sportsport that the subject participates or participated in or is associated with258P26spousethe subject has the object as their spouse (husband, wife, partner, etc.). Use \"unmarried partner\"(P451)) for non-married companions247P571inceptiontime when an entity begins to exist; for date of official opening use P1619241P176manufacturermanufacturer or producer of this product234P40childsubject has object as child. Do not use for stepchildren233P170creatormaker of this creative work or other object (where no more specific property exists)227P3373 siblingthe subject and the object have at least one common parent (brother, sister, etc. includinghalf-siblings); use \"relative\" (P1038) for siblings-in-law (brother-in-law, sister-in-law, etc.) andstep-siblings (step-brothers, step-sisters, etc.)227P50authormain creator(s) of a written work (use on works, not humans); use P2093 when Wikidata itemis unknown or does not exist226P570date of deathdate on which the subject died224P276locationlocation of the object, structure or event. In the case of an administrative entity as containingitem use P131. For statistical entities use P8138. In the case of a geographic entity use P706.Use P7153 for locations associated with the object.204P674characterscharacters which appear in this item (like plays, operas, operettas, books, comics, films, TVseries, video games)203P1412 languages spoken, writ-language(s) that a person or a people speaks, writes or signs, including the native language(s)ten or signed201P1441 present in workthis (fictional or fictionalized) entity or person appears in that work as part of the narration (useP2860 for works citing other works, :P361/P1433 for works being part of other works, P1343for entities described in non-fictional accounts)201P945allegiancecountry (or other power) that the person or group serves197P58screenwriterperson(s) who wrote the script for subject item197P37official languagelanguage designated as official by this item193P137operatorperson, profession, or organization that operates the equipment, facility, or service193P162producerperson(s) who produced the film, musical work, theatrical production, etc. (for film, this doesnot include executive producers, associate producers, etc.)185P1411 nominated foraward nomination received by a person, organisation or creative work", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" } ]
Chenxi Whitehouse; Clara Vania; Alham Fikri; Christos Christodoulopoulos; Andrea Pierleoni
[ { "authors": "Tom Ayoola; Shubhi Tyagi; Joseph Fisher; Christos Christodoulopoulos; Andrea Pierleoni", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Re-FinED: An efficient zero-shot-capable approach to end-to-end entity linking", "year": "2022" }, { "authors": "R Samuel; Gabor Bowman; Christopher Angeli; Christopher D Potts; Manning", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "A large annotated corpus for learning natural language inference", "year": "2015" }, { "authors": "Nicola De Cao; Gautier Izacard; Sebastian Riedel; Fabio Petroni", "journal": "", "ref_id": "b2", "title": "Autoregressive entity retrieval", "year": "2021" }, { "authors": "Arun Chaganty; Ashwin Paranjape; Percy Liang; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Importance sampling for unbiased on-demand evaluation of knowledge base population", "year": "2017" }, { "authors": "Kevin Clark; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Improving coreference resolution by learning entitylevel distributed representations", "year": "2016" }, { "authors": "James Marta R Costa-Jussà; Onur Cross; Maha Çelebi; Kenneth Elbayad; Kevin Heafield; Elahe Heffernan; Janice Kalbassi; Daniel Lam; Jean Licht; Maillard", "journal": "", "ref_id": "b5", "title": "No language left behind: Scaling human-centered machine translation", "year": "2022" }, { "authors": "Markus Eberts; Adrian Ulges", "journal": "", "ref_id": "b6", "title": "Span-based joint entity and relation extraction with transformer pre-training", "year": "2020" }, { "authors": "Hady Elsahar; Pavlos Vougiouklis; Arslen Remaci; Christophe Gravier; Jonathon Hare; Frederique Laforest; Elena Simperl", "journal": "European Language Resources Association (ELRA", "ref_id": "b7", "title": "T-REx: A large scale alignment of natural language with knowledge base triples", "year": "2018" }, { "authors": "Juri Ganitkevitch; Benjamin Van Durme; Chris Callison-Burch", "journal": "", "ref_id": "b8", "title": "PPDB: The paraphrase database", "year": "2013" }, { "authors": "Nicolas Gontier; Siva Reddy; Christopher Pal", "journal": "", "ref_id": "b9", "title": "Does entity abstraction help generative transformers reason? 
Transactions on Machine Learning Research", "year": "2022" }, { "authors": "Adam Grycner; Gerhard Weikum", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "POLY: Mining relational paraphrases from multilingual sentences", "year": "2016" }, { "authors": "Xu Han; Hao Zhu; Pengfei Yu; Ziyun Wang; Yuan Yao; Zhiyuan Liu; Maosong Sun", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation", "year": "2018" }, { "authors": "Pere-Lluís Huguet; Cabot ; Roberto Navigli", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "REBEL: Relation extraction by end-to-end language generation", "year": "2021" }, { "authors": "Bin Ji; Jie Yu; Shasha Li; Jun Ma; Qingbo Wu; Yusong Tan; Huijun Liu", "journal": "International Committee on Computational Linguistics", "ref_id": "b13", "title": "Span-based joint entity and relation extraction with attention-based spanspecific and contextual semantic representations", "year": "2020" }, { "authors": "Melvin Johnson; Mike Schuster; Quoc V Le; Maxim Krikun; Yonghui Wu; Zhifeng Chen; Nikhil Thorat; Fernanda Viégas; Martin Wattenberg; Greg Corrado; Macduff Hughes; Jeffrey Dean", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b14", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "year": "2017" }, { "authors": "Martin Josifoski; Nicola De Cao; Maxime Peyrard; Fabio Petroni; Robert West", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "GenIE: Generative information extraction", "year": "2022" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Filipe Mesquita; Matteo Cannaviccio; Jordan Schmidek; Paramita Mirza; Denilson Barbosa", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Knowl-edgeNet: A benchmark dataset for knowledge base population", "year": "2019" }, { "authors": "Mike Mintz; Steven Bills; Rion Snow; Daniel Jurafsky", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Distant supervision for relation extraction without labeled data", "year": "2009" }, { "authors": "Shashi Narayan; Gonçalo Simões; Yao Zhao; Joshua Maynez; Dipanjan Das; Michael Collins; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "A well-composed text is half done! 
composition sampling for diverse conditional generation", "year": "2022" }, { "authors": "Shashi Narayan; Yao Zhao; Joshua Maynez; Gonçalo Simões; Vitaly Nikolaev; Ryan Mcdonald", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b20", "title": "Planning with learned entity prompts for abstractive summarization", "year": "2021" }, { "authors": "Tapas Nayak; Hwee Tou Ng", "journal": "", "ref_id": "b21", "title": "Effective modeling of encoder-decoder architecture for joint entity and relation extraction", "year": "2020" }, { "authors": "Robert Ormandi; Mohammad Saleh; Erin Winter; Vinay Rao", "journal": "", "ref_id": "b22", "title": "Webred: Effective pretraining and finetuning for relation extraction on the web", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b23", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Sebastian Riedel; Limin Yao; Andrew Mccallum", "journal": "Springer", "ref_id": "b24", "title": "Modeling relations and their mentions without labeled text", "year": "2010" }, { "authors": "Md Gaetano Rossiello; Nandana Mahbub Chowdhury; Owen Mihindukulasooriya; Alfio Cornec; Gliozzo", "journal": "", "ref_id": "b25", "title": "Knowgl: Knowledge generation and linking from text", "year": "2023" }, { "authors": "Alessandro Seganti; Klaudia Firl Ąg; Helena Skowronska; Michał Satława; Piotr Andruszkiewicz", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Multilingual entity and relation extraction dataset and model", "year": "2021" }, { "authors": "George Stoica; Emmanouil Antonios Platanios; Barnabas Poczos", "journal": "", "ref_id": "b27", "title": "Re-tacred: Addressing shortcomings of the tacred dataset", "year": "2021" }, { "authors": "Ilya Sutskever; Oriol Vinyals; Quoc V Le", "journal": "", "ref_id": "b28", "title": "Sequence to sequence learning with neural networks", "year": "2014" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b29", "title": "", "year": "" }, { "authors": "Qingyu Tan; Lu Xu; Lidong Bing; Hwee Tou Ng; Sharifah Mahani; Aljunied ", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Revisiting Do-cRED -addressing the false negative problem in relation extraction", "year": "2022" }, { "authors": "Yuqing Tang; Chau Tran; Xian Li; Peng-Jen Chen; Naman Goyal; Vishrav Chaudhary; Jiatao Gu; Angela Fan", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Multilingual translation from denoising pre-training", "year": "2021" }, { "authors": "Gerhard Bayu Distiawan Trisedya; Jianzhong Weikum; Rui Qi; Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Neural relation extraction for knowledge base enrichment", "year": "2019" }, { "authors": "Clara Vania; Grace Lee; Andrea Pierleoni", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Improving distantly supervised document-level relation extraction through natural language inference", "year": "2022" }, { "authors": "Chenxi Whitehouse; Tillman Weyde; Pranava Madhyastha", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Towards a unified model for generating answers and explanations in visual question answering", "year": 
"2023" }, { "authors": "Chenxi Whitehouse; Tillman Weyde; Pranava Madhyastha; Nikos Komninos", "journal": "", "ref_id": "b35", "title": "Evaluation of fake news detection with knowledge-enhanced language models", "year": "2022" }, { "authors": "Adina Williams; Nikita Nangia; Samuel Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "year": "2018" }, { "authors": "Ikuya Yamada; Akari Asai; Hiroyuki Shindo; Hideaki Takeda; Yuji Matsumoto", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "LUKE: Deep contextualized entity representations with entityaware self-attention", "year": "2020" }, { "authors": "Yuan Yao; Jiaju Du; Yankai Lin; Peng Li; Zhiyuan Liu; Jie Zhou; Maosong Sun", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "CodRED: A cross-document relation extraction dataset for acquiring knowledge in the wild", "year": "2021" }, { "authors": "Yuan Yao; Deming Ye; Peng Li; Xu Han; Yankai Lin; Zhenghao Liu; Zhiyuan Liu; Lixin Huang; Jie Zhou; Maosong Sun", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "DocRED: A large-scale document-level relation extraction dataset", "year": "2019" }, { "authors": "Yuhao Zhang; Victor Zhong; Danqi Chen; Gabor Angeli; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Position-aware attention and supervised data improve slot filling", "year": "2017" } ]
[]
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b35", "b11", "b27", "b33", "b36", "b41", "b40", "b27", "b21", "b39", "b8", "b6", "b42", "b11", "b11", "b31", "b22", "b29" ], "table_ref": [], "text": "Due to its broad applications like autonomous driving and robotic navigation, multi-object tracking (MOT) is gaining increasing attention from the research community [4,36]. Early MOT methods mostly adopt the tracking-by-detection paradigm, which first recognizes targets using detection networks [12,28] and then associates them based on appearance similarity [34,37,42] or box Intersection-over-Union (IoU) [41]. Although some of these methods achieve promising performance, all them demand troublesome post-processing operations, e.g., non-maximum suppression [28].\nIn recent years, notable efforts have been paid to remove these post-processing operations [22]. Among them, MOTR [40] is a milestone, because it unifies the detection and association parts of MOT into a Transformer-based architecture and realizes end-to-end tracking without post-processing. Specifically, as shown in Fig. 5 (a), MOTR first employs detect queries to recognize targets like DETR [9]. When a target is located by a detect query, a track query is generated based on this detect query. The generated track query is responsible for continuously detecting this target in the following frames. Summarily, the detect queries are used to detect newly appeared targets and the track queries are for association in a implicit way. Although the MOTR architecture is elegant, it suffers from the optimization conflict between detection and association critically, which results in the poor detection precision. To alleviate this problem, significant efforts have been paid by many Locked GTs are the labels that are assigned to track queries and free GTs are the ones used to train detect queries. researchers [7,43]. For example, as illustrated in Fig. 5 (b), MOTRv2 employs an independently trained 2D object detector like YOLOX [12] to detect targets and provide the detection results to the tracking network. Then, the tracking network can concentrate on association, and thus the conflict is alleviated. Nevertheless, MOTRv2 demands a previously well-trained detector, and it makes the tracking process not in an end-to-end fashion.\nWe argue that MOTRv2 is not elegant and does not reveal the essence of the conflict between detection and association in MOTR. In this work, we aim to explore the dark secret of the conflict and provide strategies to tackle it. To this end, we conduct numerous experiments to analyze the training dynamics of MOTR and observe that the activation times of detect queries are relatively small compared with the total number of annotated boxes. This is because when a detect query matches well with a box annotation, this box annotation will be fixedly assigned to the track query generated from that detect query. We call this annotation a locked ground truth (locked GT). In other words, if a target appears in multiple frames, only its box label in the first frame (called free GT) is assigned to train the detection part, and the labels in all the remaining frames are used to train the track queries. This issue causes the detection part of MOTR is not sufficiently trained.\nTo tackle this issue, we propose a strategy named Release-Fetch Supervision (RFS), which first releases box labels to train the MOTR detection part and then automatically fetches these labels back for training the association part. 
Specifically, in this strategy, the one-to-one matching in the MOTR detection part is conducted between all box labels and all queries (including detect queries and track queries) in the first 5 decoders, and only the matching strategy of the last decoder remains unchanged. In this way, the detection part of MOTR obtains abundant training supervision without altering the end-to-end mechanism.\nBesides, another two strategies, namely pseudo label distillation (PLD) and track group denoising (TGD), are proposed in this work to further improve the detection and association supervision, respectively. Specifically, PLD uses a previously trained 2D object detector like YOLOX [12] or Sparse RCNN [32] to produce pseudo labels and apply auxiliary supervision to MOTR. The distribution of pseudo labels provided by the pre-trained detector is diverse, so the MOTR detection part receives more sufficient training. TGD augments track queries into multiple groups, and every group consists of the same number of track queries as the original ones. Random noise is added to the reference points of each track group during training. TGD stabilizes the training of the MOTR association part and thus improves the overall tracking performance.\nIn summary, in this work we reveal the underlying reason for the poor detection performance of MOTR, which was previously attributed simply to the conflict between detection and association.\nBased on this observation, we propose three strategies that boost the performance of MOTR by a large margin while avoiding the independently trained 2D object detector required by MOTRv2.\nCombining the developed techniques, we propose MOTRv3, which achieves impressive performance across multiple benchmarks including MOT Challenge [23] and DanceTrack [30]. We hope this work can inspire researchers on how to tackle optimization conflicts among various subtasks." }, { "figure_ref": [], "heading": "MOTR", "publication_ref": [ "b39", "b42" ], "table_ref": [], "text": "MOTRv3 is implemented based on MOTR rather than MOTRv2, since the latter requires an extra 2D object detector, which makes the tracker not end-to-end. Since not all readers are familiar with the design of MOTR, we first elaborate on its architecture in this section; refer to the original papers [40,43] for more details. Afterwards, we describe how we uncover the essential cause of the conflict between detection and association in MOTR." }, { "figure_ref": [], "heading": "MOTR Architecture", "publication_ref": [], "table_ref": [], "text": "MOTR consists of a backbone, 6 encoders, and 6 decoders. It realizes end-to-end tracking by applying simple modifications to DETR. Specifically, when a target appears in a video, MOTR employs a detect query to recognize it in the same process as DETR. After recognizing it, MOTR uses a lightweight network block to generate a track query based on this detect query. Then, in the following frames of this video, this track query should continuously localize the positions of this target until it disappears. In a nutshell, the detect queries are utilized to detect newly appeared targets and track queries are for localizing previously detected targets.\nIn MOT datasets, every target in a frame is annotated with a 2D box and an identity. To enable the MOTR detection part to recognize newly appeared targets, the 2D boxes of these new targets are assigned to train detect queries during training. By contrast, if a target exists in previous frames, its box is used to train the track queries."
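To make the detect-then-track mechanism above concrete, the following is a minimal sketch, not the authors' implementation, of how track queries could be carried across frames: detect queries whose confidence exceeds a threshold spawn new track queries, which are kept alongside the existing ones for the next frame. The tensor shapes and the omitted refinement step are simplifying assumptions.

import torch

def update_queries(detect_queries, detect_scores, track_queries, conf_thresh=0.5):
    """Spawn a track query for every detect query that located a new target.

    detect_queries: [M, C] detect-query embeddings output by the decoder.
    detect_scores:  [M]    per-query confidence for the current frame.
    track_queries:  [N, C] track queries carried over from previous frames.
    Returns the track queries to feed into the decoder for the next frame.
    """
    newborn = detect_queries[detect_scores > conf_thresh]  # detect queries that found new targets
    # MOTR additionally refines newborn queries with a lightweight block; omitted here.
    return torch.cat([track_queries, newborn], dim=0)

During inference the same routine runs frame by frame, with disappeared targets dropped from the track-query set; during training, the assignment rules described next decide which labels supervise which queries.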
}, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "A Closer Look at Label Assignment", "publication_ref": [ "b42" ], "table_ref": [], "text": "Although the aforementioned MOTR architecture is simple and presents promising association accuracy, its detection precision is poor. Previous literature [43] commonly believes that this is due to the conflict between detection and association, but no one reveals where this conflict arises from. To shed light on this problem, we conduct an extensive analysis. As suggested in Fig. 2 (a), the activation numbers of detect queries with different IDs are limited. We further compare the numbers of 2D box labels that are released to train the detect and track queries (see Fig. 2 (c)). It can be observed that in the first epoch over 60% labels are used to train the track queries while only 40% are for the detect queries. In the following epochs, the percentage of labels assigned to track queries gradually grows, and the detect queries constantly cannot receive sufficient supervision. To alleviate this problem, we propose the RFS strategy. As shown in Fig. 2 (b,d), RFS boosts the activation times and received 2D box label numbers significantly. Interestingly, although RFS enhances the percentages of labels assigned to detect queries in the initial epochs by large margins, the percentages in the final epochs are similar to the dynamics without RFS. This phenomenon implies that RFS automatically returns the labels back to the MOTR association part after the detection part is sufficiently trained." }, { "figure_ref": [], "heading": "MOTRv3", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2", "fig_1" ], "heading": "Overview", "publication_ref": [ "b17" ], "table_ref": [], "text": "As mentioned before, MOTRv3 is the same as MOTR except the three contributions, i.e., RFS, PLD, and TGD, which are illustrated in Fig. 3. In this section, we elaborate on the details of them one by one. Among them, RFS conducts one-to-one matching between all GTs and all queries to train the detection capability of the MOTRv3, which is different from MOTR that performs matching between only free GTs and detect queries. As shown in Fig. 2 (d), RFS releases the labels originally used for training track queries in MOTR to train the detect queries and gradually fetches them back with the progress of the training process. In PLD, a pre-trained detector is employed to produce more pseudo GTs to train the MOTR detection part more sufficiently. TGD improves the training dynamics stability of the association part by expanding track queries into several groups and then conducting the one-to-one assignment. The entire tracker is optimized with a multi-frame loss function the same as MOTR. The loss function for each frame is formulated as:\nL = λ cls L cls + λ l1 L l1 + λ giou L giou ,\nwhere L cls , L l1 , and L giou are the focal loss [18], L 1 loss and IoU loss. λ cls , λ l1 , λ giou are the corresponding hyper-parameters." }, { "figure_ref": [ "fig_1" ], "heading": "Release-Fetch Supervision", "publication_ref": [ "b15", "b0" ], "table_ref": [], "text": "The step of matching labels with various queries for computing loss is critical for DETR-like models. For the i th frame in a video, assume there are\nK labels ŷi = {ŷ i j } K j=1 , M detect queries q d = {q d j } M j=1\n, and N track queries q t = {q t j } N j=1 (usually M + N > K). There are two matching strategies in MOTR, one for detect queries and the other for track queries. 
In the first one, the labels $\hat{y}^i_d$ of newly appeared targets are assigned to detect queries $q_d$ based on Hungarian matching [16]. Mathematically, for the $l$-th decoder layer ($l = 1, ..., L$), this process is formulated as\n$$\hat{\sigma}^{(i,l)}_d = \arg\min_{\sigma^{(i,l)}_d \in S^{(i,l)}_d} \sum_{j=1}^{M} \mathcal{L}\big(d^{(i,l)}_j, \hat{y}^{(i,l)}_{\sigma^{(i,l)}_d(j)}\big), \quad (1)$$\nwhere $S^{(i,l)}_d$, $\sigma^{(i,l)}_d$, $d^{(i,l)}_j$, and $\mathcal{L}(\cdot)$ denote the matching space containing all possible matching combinations between $q_d$ and $\hat{y}^i_d$, a sampled matching combination, the detection result decoded from the detect query $q^d_j$, and the matching loss, respectively. $\hat{\sigma}^{(i,l)}_d$ represents the obtained optimal matching result. In the second matching strategy, the labels $\hat{y}^i_t$ that belong to targets appearing in previous frames are distributed to track queries with respect to the matching result in the previous frame, which is given as\n$$\sigma^i_t = \Psi\big(\sigma^{i-1}_t, \hat{\sigma}^{(i-1,L)}_d\big), \quad (2)$$\nwhere $\sigma^i_t$ and $\hat{\sigma}^{(i-1,L)}_d$ represent the matching result between $\hat{y}^i_t$ and $q_t$ in the $i$-th frame, and the matching pairs between $\hat{y}^{i-1}_d$ and $q_d$ in the final decoder layer of the $(i-1)$-th frame. $\Psi(\cdot)$ represents the process of generating $\sigma^i_t$ based on $\sigma^{i-1}_t$ and $\hat{\sigma}^{(i-1,L)}_d$, and this process includes operations such as removing track queries corresponding to instances that have disappeared for several consecutive frames.\nWhen $\sigma^i_t$ is obtained as in Eq. (2), the labels matching with $d^{(i,l)}_j$ are removed from $\hat{y}^i_d$ and added to $\hat{y}^i_t$. As this process repeats over iterations, the MOTR tracking part gradually takes supervision labels away from the detection part, which causes poor detection performance.\nAs discussed in Section 2.2, the matching strategies allocate too few supervision labels to $q_d$. To address this problem, we propose a new matching strategy to replace the one in Eq. (1). Specifically, the matching strategy of the final decoder layer remains unchanged. For the first $L-1$ decoder layers, we modify the detection matching space $S_d$ into $S_a$. Specifically, $S_a$ contains all possible matching combinations between all queries ($q_d$ and $q_t$) and all labels ($\hat{y}^i$). In this way, all labels are adopted to train both detect and track queries in the detection loss part. Whether a label is used to train $q_d$ or $q_t$ is determined by the similarity between the label and the boxes decoded from the queries. Therefore, as illustrated in Fig. 2 (d), since $q_t$ cannot precisely follow the locations of targets at the beginning of training, the labels are mostly released to train $q_d$. Then, as $q_t$ gradually becomes able to correctly recognize the locations of the corresponding targets, the labels are fetched back to train $q_t$ automatically. Notably, we only change the detection matching strategy in RFS; the matching strategy of the association part in Eq. (2) remains the same as in MOTR." }, { "figure_ref": [], "heading": "Pseudo Label Distillation", "publication_ref": [], "table_ref": [], "text": "RFS releases more supervision labels to the detection part by changing the matching strategy. In PLD, we further enhance the supervision applied to the detection part by generating pseudo labels using a previously trained 2D object detector like YOLOX. Notably, this detector is only adopted in training and abandoned during inference, which is different from MOTRv2, which demands this detector during inference.\nIn PLD, we use the pretrained 2D object detector to generate detection boxes and employ a confidence threshold (such as 0.05) to select precise ones from these boxes. The selected boxes $\hat{y}^i_e$ are used as pseudo labels to train the queries of all 6 decoders.
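Both the matching in Eq. (1), which RFS extends to all queries and all labels, and the pseudo-label supervision introduced here rely on the same one-to-one assignment step. A minimal sketch of that step is given below; it is not the authors' implementation, and the classification/L1 cost terms and their weights are illustrative assumptions.

import torch
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred_boxes, pred_logits, gt_boxes, gt_classes):
    """One-to-one assignment between decoded queries and labels.

    pred_boxes:  [Q, 4] boxes decoded from the queries (normalized cxcywh).
    pred_logits: [Q, num_classes] classification logits per query.
    gt_boxes:    [K, 4] ground-truth (or pseudo-label) boxes.
    gt_classes:  [K]    ground-truth class indices.
    Returns a list of matched (query_index, label_index) pairs.
    """
    prob = pred_logits.sigmoid()                        # [Q, num_classes]
    cls_cost = -prob[:, gt_classes]                     # [Q, K]: higher probability, lower cost
    box_cost = torch.cdist(pred_boxes, gt_boxes, p=1)   # [Q, K]: L1 distance between boxes
    cost = 2.0 * cls_cost + 5.0 * box_cost              # placeholder weights
    q_idx, k_idx = linear_sum_assignment(cost.detach().cpu().numpy())
    return list(zip(q_idx.tolist(), k_idx.tolist()))

Under RFS this routine would be called with the concatenation of detect and track queries for the first L-1 decoder layers, so that labels can flow to whichever queries currently predict them best.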
Besides the training process in RFS, we conduct one-to-one matching between all queries ($q_d$ and $q_t$) and $\hat{y}^i_e$ to compute the detection loss. In this way, $q_d$ obtains more supervision.\nAlthough the aforementioned process increases the number of labels for training $q_d$, the problem is that $\hat{y}^i_e$ is often noisy. To alleviate this problem, we propose to reweight the detection loss based on the detection confidence $c_e$ produced by the 2D object detector. Specifically, if a query matches with a label, the loss is multiplied by the confidence value. If no label is matched, the query computes its loss with the background class (the same as DETR) and the loss is reweighted by a factor of 0.5. Mathematically, this process is formulated as\n$$\mathcal{L}_{\sigma^i_p} = \sum_{j=1}^{P} \omega_j \cdot \mathcal{L}\big(d^i_j, \tilde{y}^i_{\sigma^i_p(j)}\big), \quad \omega_j = \begin{cases} c_e, & \text{if } \tilde{y}^i_{\sigma^i_p(j)} \neq \emptyset \\ 0.5, & \text{otherwise}, \end{cases} \quad (3)$$\nwhere $\sigma^i_p$ denotes the matching results between the outputs $d$ and the pseudo labels $\tilde{y}$, $P$ is the number of pseudo labels, and $\omega_j$ and $c_e$ denote the reweighting factor and the classification score, respectively." }, { "figure_ref": [], "heading": "Track Group Denoising", "publication_ref": [ "b9", "b45", "b16" ], "table_ref": [], "text": "The two aforementioned strategies, RFS and PLD, improve the detection capability of MOTR. In this part, we develop a strategy, TGD, to boost the association performance. Specifically, inspired by Group DETR [10], we first augment every track query into a track query group consisting of multiple queries. Notably, the assignment between each track query group and the GTs is the same as for the original track queries. By conducting one-to-one matching between track query groups and labels and then computing the loss, the track queries obtain more sufficient supervision.\nBesides, we note that the tracking performance is significantly influenced by the quality of the initial reference points [46]. To boost the robustness of the model, we propose to add random noise to the reference point of every element in a track query group. In this way, the model becomes less dependent on promising initial reference points and the association becomes more robust.\nFurthermore, an attention mask is used to prevent information leakage [17] between the original track queries and the augmented queries. Mathematically, we use $A = [a_{ij}]_{S \times S}$ to denote the attention mask for the decoders, where $S = G \cdot N + M$. Then, the values in the attention mask are defined as\n$$a_{ij} = \begin{cases} 1, & \text{if } i < M+N \text{ and } j > M+N; \\ 1, & \text{if } i \geq M+N \text{ and } \left\lfloor \frac{i-(M+N)}{N} \right\rfloor \neq \left\lfloor \frac{j-(M+N)}{N} \right\rfloor; \\ 0, & \text{otherwise}, \end{cases} \quad (4)$$\nwhere $i$ and $j$ denote the IDs of two queries and $a_{ij}$ defines whether there should exist information communication between these two queries. After expanding the original matching space through the proposed RFS, PLD and TGD strategies, we then calculate the overall clip loss $\mathcal{L}_{clip}$ according to the matching results. Mathematically, it is formulated as\n$$\mathcal{L}_{clip} = \sum_{i=1}^{T} \big(\mathcal{L}_{\sigma^i_r} + \mathcal{L}_{\sigma^i_p} + \mathcal{L}_{\sigma^i_g}\big) / O_i, \quad (5)$$\nwhere $\sigma^i_r$, $\sigma^i_p$ and $\sigma^i_g$ denote the matching results in the $i$-th frame obtained by RFS, PLD and TGD, respectively. The corresponding $\mathcal{L}$ represents the loss based on the matching results. $T$ is the length of the video clip and $O_i$ is the number of objects in the $i$-th frame."
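The mask in Eq. (4) can be materialized directly. The sketch below is not the authors' code; it assumes 0-indexed queries ordered as the M detect queries, then the N original track queries, then the remaining G-1 augmented groups of N queries each, and it treats entries set to True as blocked attention. The boundary conventions of Eq. (4) are ambiguous between 0- and 1-based indexing, so this reading is an assumption.

import torch

def build_tgd_attention_mask(M, N, G):
    """Return a [S, S] boolean mask, True where attention is blocked (Eq. (4)).

    M: number of detect queries, N: track queries per group, G: number of track groups
    (the original group plus G-1 augmented copies), so S = M + G * N.
    """
    S = M + G * N
    mask = torch.zeros(S, S, dtype=torch.bool)
    for i in range(S):
        for j in range(S):
            if i < M + N and j >= M + N:
                # detect queries and the original track group cannot see augmented groups
                mask[i, j] = True
            elif i >= M + N and (i - (M + N)) // N != (j - (M + N)) // N:
                # an augmented query may only attend to queries within its own group
                mask[i, j] = True
    return mask

In practice such a mask would be passed to the decoder self-attention so that each augmented group behaves as an independent denoising replica of the original track queries.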
}, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets and Metrics", "publication_ref": [ "b29", "b22", "b29", "b22", "b20", "b2" ], "table_ref": [], "text": "We conduct extensive experiments on three public datasets, including DanceTrack [30] and MOT17 [23], to evaluate the superiority of MOTRv3. In this part, we introduce the adopted datasets and corresponding evaluation metrics.\nDanceTrack [30] is a large-scale multiple object tracking dataset with 100 video sequences in dancing scenarios. The 100 sequences are divided into 40, 25, and 35 sequences for training, validation, and testing, respectively. The targets in DanceTrack are often highly similar in appearance but present various dancing movements. This characteristic causes huge challenge to the association in MOT. In addition, the video sequences in DanceTrack are quite long (52.9 seconds on average for a sequence), which further enhances the tracking difficulty.\nMOT17 [23] consists of 14 video sequences. Among them, 7 sequences are for training and the other 7 sequences are used to validate models. These sequences cover various scenarios and weather conditions, which include indoor and outdoor, day and night, etc. The targets in these video sequences are usually pedestrians moving in simple patterns, such as walking straight.\nMetrics. The metrics adopted in the aforementioned datasets include the HOTA [21] and CLEAR-MOT Metrics [3]. Specifically, HOTA consists of higher order tracking accuracy (HOTA), association accuracy score (AssA), and detection accuracy score (DetA). CLEAR-MOT Metrics include ID F1 score (IDF1), multiple object tracking accuracy (MOTA) and identity switches (IDS)." }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b39", "b42", "b38", "b45", "b18", "b19", "b14", "b29", "b22", "b11", "b31" ], "table_ref": [], "text": "Following MOTR and MOTRv2 [40,43,39], MOTRv3 is implemented based on Deformable-DETR [46], which is pre-trained on COCO [19] and employs ConvNext-Base [20] as the vision backbone. During the training process, the batch size is set to 8. For the experiments in DanceTrack and MOT17, each batch is a video clip including 5 frames, which are selected from a video with a random sampling interval between 1 to 10. Following MOTRv2, track queries are generated based on detect queries when the confidences of these detect queries are above the threshold 0.5. Adam [15] optimizer is employed and the initial learning rate is set to 2 × 10 -4 .\nFor the experiments in DanceTrack [30], the models are trained for 5 epochs and the learning rate is dropped by a factor of 10 at the 4 th epoch. In MOT17 [23], we train models for 50 epochs and the learning rate drops at the 40 th epoch. λ cls , λ l1 and λ giou are set to 2, 5 and 2, respectively. For the implementation of PLD, the auxiliary boxes from pre-trained detectors are obtained in an offline manner. Two common 2D object detectors are adopted, which include YOLOX [12] and Sparse RCNN [32]. The generated 2D box predictions with confidence scores below 0.05 are removed. In the implementation of TGD, we expand the original track query to 4 track query groups." }, { "figure_ref": [], "heading": "Comparison with State-of-the-art Methods", "publication_ref": [ "b39" ], "table_ref": [], "text": "In this part, we compare MOTRv3 with preceding state-of-the-art methods on the two aforementioned MOT benchmarks, i.e., DanceTrack and MOT17. 
The results on these two benchmarks are reported in Tab. 1-2, respectively. Without bells and whistles, MOTRv3 outperforms all compared end-to-end methods.\nDanceTrack. The results on the DanceTrack test set are presented in Tab. 1. As reported, MOTRv3 outperforms the baseline method MOTR [40] by more than 16 HOTA points on the test set (70.4% vs. 54.2% HOTA). Furthermore, the tracking performance of MOTRv3 is better than that of MOTRv2 (70.4% vs. 69.9% HOTA) without using an independent 2D object detector, which is trained on numerous extra 2D object detection data. Meanwhile, MOTRv3 achieves better detection precision than MOTRv2 according to the detection metric MOTA (92.9% vs. 91.9% MOTA), which confirms the effectiveness of the proposed strategies, RFS and PLD.\nMOT17. The results on the MOT17 test set are presented in Tab. 2. Furthermore, we find that the performance of MOTRv2 relies heavily on the adopted post-processing operations. If these operations are removed, the performance of MOTRv2 drops sharply to 57.6% HOTA and 70.1% MOTA. By contrast, MOTRv3 does not use any extra post-processing operations and still achieves competitive tracking accuracy. Additionally, it can be observed that ByteTrack, a CNN-based method, behaves promisingly on MOT17, although it performs worse than MOTRv3 on DanceTrack. We infer that this is because the target movement trajectories in MOT17 are simple. Therefore, the targets in MOT17 can be tracked well by combining a strong 2D object detector like YOLOX and hand-crafted post-processing rules." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "In this part, we perform extensive ablation study experiments using the DanceTrack validation set to analyze the effectiveness of the various proposed strategies in MOTRv3. The baseline method is MOTR with anchor queries. All the models are trained using the DanceTrack training set for 5 epochs.\nOverall ablation study. In this part, we study the overall influence of the three proposed strategies (RFS, PLD, and TGD) on the MOTRv3 performance. The results are reported in Tab. 3. According to the results, all three strategies boost the tracking performance significantly. Among these strategies, both RFS (row #2) and PLD (row #3) enhance the tracking precision by a large margin. Specifically, RFS improves the MOTA score by 10.2% and the DetA score by 7.4%. PLD boosts the MOTA score by 9.6% and the DetA score by 7.1%. The results indicate that both the fair assignment strategy and the auxiliary supervision improve the detection capability of MOTR quite effectively. Additionally, combining them further improves the tracking performance by a large margin (row #5). This is because RFS guarantees that a proper ratio of labels is released to train the detect queries, and PLD helps generate more detection labels. Combining them enables the MOTR detection part to be sufficiently trained. Moreover, it can be observed that TGD improves the AssA score by 2.7% and the IDF1 score by 2.1%. This observation indicates that the ability of track queries to represent targets is improved, and thus the produced trajectories become more continuous and robust.\nIncorporating all these strategies, MOTRv3 (row 6) outperforms the baseline (row 1) by 7.3% on HOTA and 11.5% on MOTA. In summary, the experimental results demonstrate that the proposed strategies effectively address the conflict between detection and association in end-to-end trackers and that MOTRv3 is an efficient end-to-end tracker.\nPLD. PLD is responsible for producing more training labels. In this part, we study how different pseudo label generation strategies affect tracking performance.
Specifically, we compare 4 strategies, i.e., directly copying GTs, generating pseudo labels with YOLOX or Sparse RCNN, and combining pseudo labels (concat or parallel) from YOLOX and Sparse RCNN. The results are presented in Tab. 4. Two observations can be drawn. First of all, using the pseudo labels produced by detectors brings better performance than directly employing the real labels. We infer that this is because the boxes produced by an extra detector are more diverse than GTs and these boxes contain detection confidence values. Secondly, employing either YOLOX or Sparse RCNN leads to a promising performance improvement, and combining them further improves the detection performance. However, the association performance tends to decrease when combining in parallel. We speculate that this is because the distributions of boxes generated by the two detectors are different, and this issue confuses the learning of association during training.\nTGD. In this experiment, we study how the number of track queries contained in a group affects performance, as well as the influence of noise added to the track query reference points. The results are reported in Tab. 5. As shown in the 1st∼4th rows of results, augmenting every query into a group improves the performance significantly, and setting the query number to 4 leads to the best result. Augmenting a query into too many or too few queries harms the final tracking performance. Besides, adding noise to the reference points also boosts the tracking precision significantly, as given in the 5th row.\nThe results suggest that the developed TGD strategy enhances the association accuracy of MOTR significantly, which we believe is because the stability of the training process is improved.\nModel scaling up. In this experiment, we study how scaling up backbones affects the tracking performance of MOTRv2 and MOTRv3. Specifically, we replace their original ResNet-50 backbone with ConvNeXt-tiny, ConvNeXt-small, and ConvNeXt-base, respectively. The results are illustrated in Fig. 4. It can be observed that the performance of MOTRv3 is continuously boosted as the backbone is scaled up. However, scaling up the backbone harms the tracking precision of MOTRv2. We speculate that this is because MOTRv2 needs an extra detector and is not end-to-end, so replacing only the backbone of the association part does not improve the overall tracking performance. By contrast, MOTRv3 is fully end-to-end and thus enjoys the benefits of scaling up the model." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b27", "b44", "b11", "b3", "b35", "b1", "b25", "b23", "b34", "b40", "b7", "b4", "b0", "b10", "b1", "b12", "b43", "b30", "b28", "b33", "b41", "b37", "b36", "b27", "b32", "b8", "b39", "b6", "b38", "b26", "b42", "b11" ], "table_ref": [], "text": "Tracking by detection. Thanks to the fast development of object detection techniques [28,45,12], existing MOT methods mainly follow the tracking-by-detection (TBD) paradigm [4,36,2,26,24,35], which first uses detectors to localize targets and then associates them to obtain tracklets. According to the association strategy, MOT methods can be further divided into motion-based trackers and appearance-based trackers. Specifically, motion-based trackers [41,8] perform the association step based on motion prediction algorithms, such as the Kalman Filter [5] and optical flow [1]. Some motion-based trackers [11,2,13,44,31,29] directly predict the future tracklets or displacements in future frames compared with the current frame.
In contrast to the motion-based methods, the appearance-based trackers [34,42,38,37] usually use a Re-ID network or an appearance sub-network to extract the appearance representation of targets and match them based on representation similarity.\nEnd-to-end MOT. Although the performance of TBD methods is promising, they all demand troublesome post-processing operations, e.g., non-maximum suppression (NMS) [28] and box association.\nRecently, the Transformer architecture [33], originally designed for natural language processing (NLP), has been applied to computer vision. For instance, DETR [9] turns 2D object detection into a set prediction problem and realizes end-to-end detection. Inspired by DETR, MOTR [40] transforms MOT into a sequence prediction problem by representing each tracklet through a track query and dynamically updating track queries during tracking. In this way, the tracking process can be achieved in an end-to-end fashion. However, although MOTR enjoys the merits of simplicity and elegance, it suffers from poor detection performance compared to the TBD methods. To improve MOTR, MeMOT [7] builds short-term and long-term memory banks to capture temporal information. LTrack [39] introduces natural language representations obtained by CLIP [27] to generalize MOTR to unseen domains. MOTRv2 [43] incorporates the YOLOX [12] object detector to generate proposals as object anchors, providing detection priors to MOTR." }, { "figure_ref": [ "fig_4" ], "heading": "Limitation and Conclusion", "publication_ref": [ "b40", "b29", "b5", "b11", "b13" ], "table_ref": [], "text": "In this work, we reveal the real reason for the conflict between detection and association in MOTR, which results in poor detection. Based on this observation, we propose RFS, which improves the detection and overall tracking performance by a large margin. However, while RFS helps mitigate this conflict in terms of supervision, the trade-off between detection and association remains unresolved. How to disentangle the two sub-tasks still deserves further study. Besides, we have proposed another two strategies, PLD and TGD, to further improve the detection and query parts of MOTR. Combining all three strategies, the developed tracker, MOTRv3, achieves impressive performance across multiple benchmarks. We hope this work can inspire more solid work on MOT in the future.\nYOLOX. We employ the YOLOX detector with model weights from ByteTrack [41] and DanceTrack [30]. The hyper-parameters and data augmentation techniques, including Mosaic [6] and Mixup, remain consistent with ByteTrack. YOLOX-X [12] is adopted as the backbone. For the results on MOT17, the model is trained for 80 epochs on the combined data from the MOT17, Crowdhuman, Cityperson, and ETHZ datasets. Regarding DanceTrack, we directly use the YOLOX weights provided by the DanceTrack official GitHub repository3 .\nSparse RCNN. We utilize the original Sparse RCNN implemented in the official repository 4 , with the ResNet-50 backbone [14] initialized from a COCO-pretrained model. The number of learnable anchors is set to 500. To train on the MOT17 dataset, we initially train Sparse RCNN on Crowdhuman for 50 epochs. Subsequently, we further fine-tune it on MOT17 for an additional 30 epochs. Similarly, for DanceTrack, we also first pre-train Sparse RCNN on Crowdhuman for 50 epochs and then fine-tune it on DanceTrack for 20 epochs.\nD Additional Experiments. Ablation study on RFS.
By applying RFS to MOTR, we allow detect queries and track queries to compete for the supervision labels fairly in the first 5 decoders. In this way, the detect queries of MOTRv3 obtain more sufficient supervision compared with those in MOTR. Nevertheless, RFS could result in inconsistent learned label assignment patterns between the first 5 decoders and the last decoder due to the different assignment strategies during training, which may be harmful to the final tracking performance. In this part, we study this issue by visualizing the diversity between the first 5 decoder layers and the last decoder layer during training.\nSpecifically, for a detect query, if the matched label is different between the last decoder layer and one of the first 5 decoder layers, we call the label assignment misaligned. In this experiment, we count the percentages of misaligned labels relative to the total labels for trackers with and without RFS. The misalignment percentages of the two trackers over various epochs are visualized in Fig. 6. The graph clearly indicates that the usage of RFS amplifies the misalignment of label matching during the initial training epochs, but over time, the percentages gradually decrease and eventually reach the same level as those without RFS. This observation suggests that the high matching diversity introduced by RFS in the early training stage does not hinder the convergence of the label matching process. In fact, the increased matching diversity allows more queries to participate in the learning process, which ultimately benefits the detection part.\nAnalysis on the inference speed. As mentioned in the main paper, our proposed strategies, namely RFS, PLD, and TGD, are exclusively employed during training and do not introduce any additional network blocks. Consequently, the inference speed of MOTRv3 remains competitive. As depicted in Tab. 6, we compare the inference speeds of MOTR, MOTRv2, and MOTRv3 on the DanceTrack test set. It can be observed that our MOTRv3, with the ResNet-50 backbone, achieves the highest inference speed compared to MOTR and MOTRv2. Moreover, MOTRv3 with the ConvNext-Base backbone achieves superior performance while still maintaining a competitive inference speed. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "* This work was performed when En" }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "In this appendix, we provide more details of MOTRv3 due to the 9-page limitation on paper length. Specifically, Section B presents more details about the proposed track group denoising (TGD) strategy. Section C elaborates on the auxiliary 2D object detectors employed in the pseudo label generation strategy. Section D provides additional experiments to analyze the characteristics of MOTRv3." }, { "figure_ref": [], "heading": "B More Details about Track Group Denoising", "publication_ref": [ "b16" ], "table_ref": [], "text": "This section provides more details about the implementation of TGD. As depicted in Fig. 5, TGD initially expands a single group of track queries into K groups, and then concatenates them before feeding them into the decoder. Moreover, the reference boxes of the track queries are also replicated K times. Subsequently, random noise is applied to the augmented reference boxes. Notably, we only scale the reference boxes (alter the width and height) rather than changing their box centers.
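A minimal sketch of this width/height-only perturbation is given below; it is not the authors' code, and the normalized (cx, cy, w, h) box format, the uniform noise range, and the clamping bounds are illustrative assumptions.

import torch

def expand_and_perturb_reference_boxes(ref_boxes, K, scale_range=0.2):
    """Replicate reference boxes for K track-query groups and jitter only width/height.

    ref_boxes: [N, 4] reference boxes in normalized (cx, cy, w, h) format.
    Returns [K * N, 4] boxes whose centers are identical to the originals.
    """
    boxes = ref_boxes.repeat(K, 1).clone()                      # one copy per group
    scale = 1.0 + (torch.rand_like(boxes[:, 2:]) * 2.0 - 1.0) * scale_range
    boxes[:, 2:] = (boxes[:, 2:] * scale).clamp(1e-4, 1.0)      # perturb w, h; keep cx, cy
    return boxes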
In this way, the matching results of the different track query groups remain the same as those of the original track queries. Thus, we do not need to recalculate the matching results.\nIn addition, the attention mask used for preventing information leakage [17] is crucial for TGD. Specifically, there are two types of information leakage that need to be addressed: the first is between the original track queries and the augmented track queries, and the other is between different augmented track query groups. As shown in Fig. 5, the attention mask is used to address both types of information leakage. We only illustrate the process for one decoder layer as an example; the other decoders share the same procedure. First of all, the original track queries are expanded into K track query groups. Subsequently, the decoder takes in all these query groups to perform one-to-one matching. Besides expanding the track queries, the reference boxes of the queries are also expanded into K groups, and random noise is added to these reference boxes. To prevent information leakage between the original track queries and the expanded track query groups, an attention mask is applied." }, { "figure_ref": [], "heading": "C Auxiliary Detectors used in Pseudo Label Generation", "publication_ref": [ "b11", "b31" ], "table_ref": [], "text": "In this work, we mainly use the YOLOX [12] and Sparse RCNN [32] detectors to generate pseudo labels." } ]
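To make the offline pseudo-label generation concrete, the following is a minimal sketch, not the authors' pipeline: run_detector stands in for any pre-trained detector (for example a YOLOX or Sparse RCNN wrapper) and is a hypothetical callable returning boxes and scores; only detections with confidence of at least 0.05 are kept, matching the threshold reported in the experimental settings, and the output format is an illustrative choice.

import json

def dump_pseudo_labels(frames, run_detector, out_path, conf_thresh=0.05):
    """Run a pre-trained detector offline and store filtered boxes as pseudo labels.

    frames:       iterable of (frame_id, image) pairs.
    run_detector: hypothetical callable image -> (boxes, scores).
    """
    records = []
    for frame_id, image in frames:
        boxes, scores = run_detector(image)
        kept = [(b, s) for b, s in zip(boxes, scores) if s >= conf_thresh]
        records.append({
            "frame": frame_id,
            "boxes": [list(map(float, b)) for b, _ in kept],
            "scores": [float(s) for _, s in kept],
        })
    with open(out_path, "w") as f:
        json.dump(records, f)

During MOTRv3 training, these stored boxes and their confidences would then feed the PLD loss in Eq. (3), with the confidence reused as the reweighting factor.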
Although end-to-end multi-object trackers like MOTR [40] enjoy the merits of simplicity, they suffer seriously from the conflict between detection and association, resulting in unsatisfactory convergence dynamics. While MOTRv2 [43] partly addresses this problem, it demands an additional detection network for assistance. In this work, we are the first to reveal that this conflict arises from the unfair label assignment between detect queries and track queries during training, where detect queries recognize targets and track queries associate them. Based on this observation, we propose MOTRv3, which balances the label assignment process using the developed release-fetch supervision strategy. In this strategy, labels are first released for detection and gradually fetched back for association. Besides, another two strategies named pseudo label distillation and track group denoising are designed to further improve the supervision for detection and association. Without the assistance of an extra detection network during inference, MOTRv3 achieves impressive performance across diverse benchmarks, e.g., MOT17 and DanceTrack.
MOTRv3: Release-Fetch Supervision for End-to-End Multi-Object Tracking
[ { "figure_caption": "Figure 1 :1Figure1: Comparison among MOTR, MOTRv2, and MOTRv3 (ours). The differences in MOTRv2 and MOTRv3 compared with MOTR are marked in red brown. Locked GTs are the labels that are assigned to track queries and free GTs are the ones used to train detect queries. researchers[7,43]. For example, as illustrated in Fig.5(b), MOTRv2 employs an independently trained 2D object detector like YOLOX[12] to detect targets and provide the detection results to the tracking network. Then, the tracking network can concentrate on association, and thus the conflict is alleviated. Nevertheless, MOTRv2 demands a previously well-trained detector, and it makes the tracking process not in an end-to-end fashion.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Figure (a) and (b) show the activation number of different detect queries with and without the proposed RFS strategy during the training process.Figure (c) and (d) illustrate the dynamics of 2D box label percentages assigned to the detection and association parts in the conditions with and without RFS.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Overview of the MOTRv3 training pipeline. We primarily illustrate the three proposed strategies (RFS, PLD and TGD) in this figure.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "FigureFigure Model scaling up. Res50, Conv-T, Conv-S and Conv-B denote ResNet-50, ConvNext-Tiny, ConvNext-Small and ConvNext-Base, respectively.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Matching results diversity training.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Tracking results on DanceTrack test set. ↑/↓ indicates that a higher/lower score is better.", "figure_data": "MethodEnd to end HOTA↑ AssA↑ DetA↑ MOTA↑ IDF1↑CNN-basedQDTrack [25]✗54.236.8 80.187.750.4FairMOT [42]✗59.358.0 60.973.772.3CenterTrack [44]✗41.822.6 78.186.835.7ByteTrack [41]✗47.732.1 71.089.653.9OC-SORT [8]✗55.138.3 80.392.054.6Transformer-basedTransTrack [31]✗45.527.5 75.988.445.2MOTR [40]✓54.240.2 73.579.751.5MOTRv2 [43]✗69.959.0 83.091.971.7MOTRv3✓70.459.3 83.892.972.3", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Tracking results on the MOT17 test set. Notably, MOTRv2 uses extra post-processing operations for MOT17, and we remove them here for fair comparison. * denotes MOTRv2 without post-processing operations.", "figure_data": "MethodEnd to end HOTA↑ AssA↑ DetA↑ MOTA↑ IDF1↑ IDS↓CNN-basedQDTrack [25]✗53.952.7 55.668.766.3 3,378FairMOT [42]✗59.358.0 60.973.772.3 3,303CenterTrack [44]✗52.251.0 53.867.864.7 3,039ByteTrack [41]✗63.162.0 64.580.377.3 2,196Transformer-basedTransTrack [31]✗54.147.9 61.674.563.9 3,663MOTR [40]✓57.855.7 60.373.468.6 2,439MOTRv2 [43]✗62.060.6 63.878.675.0-MOTRv2 * [43]✗57.657.5 58.170.170.3 3,225MOTRv3✓60.258.7 62.175.972.4 2,403than MOTRv2 according to the detection metric MOTA (92.9% vs. 91.9% MOTA), which confirmsthe effectiveness of the proposed strategies, RFS and PLD.", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Overall ablation study of the proposed strategies. 
The performance of the tracker employing all the developed strategies is highlighted in gray .", "figure_data": "ComponentsM etricsMethodRFS PLD TGD HOTA↑ AssA↑ DetA↑ MOTA↑ IDF1↑ IDS↓1 Base56.647.0 68.475.360.0 1,662260.949.4 75.885.563.7 1,139359.246.5 75.584.961.7 1,284459.649.7 71.980.062.1 1,804561.750.0 76.386.064.8 1,3506 MOTRv363.953.5 76.786.867.2 1,151", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The tracking results of using different pseudo label generation strategies.", "figure_data": "Pretrained detectorHOTA↑ AssA↑ DetA↑ MOTA↑ IDF1↑ IDS↓YOLOX [12]61.649.8 76.486.163.4 1,408Sparse RCNN [32]61.750.0 76.386.064.8 1,350Parallel (YOLOX & Sparse RCNN) 60.447.4 77.186.762.5 1,551Ground Truth60.348.3 75.685.663.5 1,411", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation Study on how TGD affects the tracking performance.", "figure_data": "MethodGroup Num HOTA↑ AssA↑ DetA↑ MOTA↑ IDF1↑ IDS↓base61.750.0 76.386.064.8 1,350+ track query group363.252.4 76.686.165.9 1,116463.753.3 76.886.766.6 1,027563.352.6 76.686.366.5 1,106+ reference point noise463.953.5 76.786.867.2 1,151", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Inference speed comparison on DanceTrack test set among MOTR series.", "figure_data": "MethodBackboneHOTA↑MOTA↑IDF1↑FPS↑MOTRResNet-5054.279.751.59.5MOTRv2ResNet-5069.991.971.76.9MOTRv3ResNet-5068.391.770.110.6MOTRv3ConvNeXt-B70.492.972.39.8", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
En Yu; Tiancai Wang; Zhuoling Li; Yuang Zhang; Xiangyu Zhang; Wenbing Tao
[ { "authors": "S Baker; I Matthews", "journal": "Int J Comput Vis", "ref_id": "b0", "title": "Lucas-kanade 20 years on: A unifying framework", "year": "2004" }, { "authors": "P Bergmann; T Meinhardt; L Leal-Taixe", "journal": "", "ref_id": "b1", "title": "Tracking without bells and whistles", "year": "2019" }, { "authors": "K Bernardin; R Stiefelhagen", "journal": "EURASIP Journal on Image and Video Processing", "ref_id": "b2", "title": "Evaluating multiple object tracking performance: the clear mot metrics", "year": "2008" }, { "authors": "A Bewley; Z Ge; L Ott; F Ramos; B Upcroft", "journal": "IEEE international conference on image processing", "ref_id": "b3", "title": "Simple online and realtime tracking", "year": "2016" }, { "authors": "G Bishop; G Welch", "journal": "", "ref_id": "b4", "title": "An introduction to the kalman filter", "year": "2001" }, { "authors": "A Bochkovskiy; C Y Wang; H Y M Liao", "journal": "", "ref_id": "b5", "title": "Yolov4: Optimal speed and accuracy of object detection", "year": "2020" }, { "authors": "J Cai; M Xu; W Li; Y Xiong; W Xia; Z Tu; S Soatto", "journal": "", "ref_id": "b6", "title": "Memot: multi-object tracking with memory", "year": "2022" }, { "authors": "J Cao; X Weng; R Khirodkar; J Pang; K Kitani", "journal": "", "ref_id": "b7", "title": "Observation-centric sort: Rethinking sort for robust multi-object tracking", "year": "2022" }, { "authors": "N Carion; F Massa; G Synnaeve; N Usunier; A Kirillov; S Zagoruyko", "journal": "", "ref_id": "b8", "title": "End-to-end object detection with transformers", "year": "2020" }, { "authors": "Q Chen; X Chen; G Zeng; J Wang", "journal": "", "ref_id": "b9", "title": "Group detr: Fast training convergence with decoupled one-to-many label assignment", "year": "2022" }, { "authors": "C Feichtenhofer; A Pinz; A Zisserman", "journal": "", "ref_id": "b10", "title": "Detect to track and track to detect", "year": "2017" }, { "authors": "Z Ge; S Liu; F Wang; Z Li; J Sun", "journal": "", "ref_id": "b11", "title": "Yolox: Exceeding yolo series in", "year": "2021" }, { "authors": "S Han; P Huang; H Wang; E Yu; D Liu; X Pan", "journal": "Neurocomputing", "ref_id": "b12", "title": "Mat: Motion-aware multi-object tracking", "year": "2022" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b13", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b14", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "H W Kuhn", "journal": "Nav. Res. 
Logist", "ref_id": "b15", "title": "The hungarian method for the assignment problem", "year": "1955" }, { "authors": "F Li; H Zhang; S Liu; J Guo; L M Ni; L Zhang", "journal": "", "ref_id": "b16", "title": "Dn-detr: Accelerate detr training by introducing query denoising", "year": "2022" }, { "authors": "T Y Lin; P Goyal; R Girshick; K He; P Dollár", "journal": "", "ref_id": "b17", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "T Y Lin; M Maire; S Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "", "ref_id": "b18", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Z Liu; H Mao; C Y Wu; C Feichtenhofer; T Darrell; S Xie", "journal": "", "ref_id": "b19", "title": "A convnet for the 2020s", "year": "2022" }, { "authors": "J Luiten; A Osep; P Dendorfer; P Torr; A Geiger; L Leal-Taixé; B Leibe", "journal": "International journal of computer vision", "ref_id": "b20", "title": "Hota: A higher order metric for evaluating multi-object tracking", "year": "2021" }, { "authors": "T Meinhardt; A Kirillov; L Leal-Taixe; C Feichtenhofer", "journal": "", "ref_id": "b21", "title": "Trackformer: Multi-object tracking with transformers", "year": "2021" }, { "authors": "A Milan; L Leal-Taixé; I Reid; S Roth; K Schindler", "journal": "", "ref_id": "b22", "title": "Mot16: A benchmark for multi-object tracking", "year": "2016" }, { "authors": "B Pang; Y Li; Y Zhang; M Li; C Lu", "journal": "", "ref_id": "b23", "title": "Tubetk: Adopting tubes to track multi-object in a one-step training model", "year": "2020" }, { "authors": "J Pang; L Qiu; X Li; H Chen; Q Li; T Darrell; F Yu", "journal": "", "ref_id": "b24", "title": "Quasi-dense similarity learning for multiple object tracking", "year": "2021" }, { "authors": "J Peng; C Wang; F Wan; Y Wu; Y Wang; Y Tai; C Wang; J Li; F Huang; Y Fu", "journal": "", "ref_id": "b25", "title": "Chained-tracker: Chaining paired attentive regression results for end-to-end joint multipleobject detection and tracking", "year": "2020" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "", "ref_id": "b26", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "S Ren; K He; R Girshick; J Sun", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "B Shuai; A Berneshawi; X Li; D Modolo; J Tighe", "journal": "", "ref_id": "b28", "title": "Siammot: Siamese multi-object tracking", "year": "2021" }, { "authors": "P Sun; J Cao; Y Jiang; Z Yuan; S Bai; K Kitani; P Luo", "journal": "", "ref_id": "b29", "title": "Dancetrack: Multi-object tracking in uniform appearance and diverse motion", "year": "2022" }, { "authors": "P Sun; Y Jiang; R Zhang; E Xie; J Cao; X Hu; T Kong; Z Yuan; C Wang; P Luo", "journal": "", "ref_id": "b30", "title": "Transtrack: Multiple-object tracking with transformer", "year": "2020" }, { "authors": "P Sun; R Zhang; Y Jiang; T Kong; C Xu; W Zhan; M Tomizuka; L Li; Z Yuan; C Wang", "journal": "", "ref_id": "b31", "title": "Sparse r-cnn: End-to-end object detection with learnable proposals", "year": "2021" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b32", "title": 
"Attention is all you need", "year": "2017" }, { "authors": "Y Wang; X Weng; K Kitani", "journal": "", "ref_id": "b33", "title": "Joint detection and multi-object tracking with graph neural networks", "year": "2020" }, { "authors": "Z Wang; L Zheng; Y Liu; Y Li; S Wang", "journal": "", "ref_id": "b34", "title": "Towards real-time multi-object tracking", "year": "2020" }, { "authors": "N Wojke; A Bewley; D Paulus", "journal": "", "ref_id": "b35", "title": "Simple online and realtime tracking with a deep association metric", "year": "2017" }, { "authors": "E Yu; Z Li; S Han", "journal": "", "ref_id": "b36", "title": "Towards discriminative representation: Multi-view trajectory contrastive learning for online multi-object tracking", "year": "2022" }, { "authors": "E Yu; Z Li; S Han; H Wang", "journal": "IEEE Transactions on Multimedia", "ref_id": "b37", "title": "Relationtrack: Relation-aware multiple object tracking with decoupled representation", "year": "2022" }, { "authors": "E Yu; S Liu; Z Li; J Yang; S Han; W Tao", "journal": "", "ref_id": "b38", "title": "Generalizing multiple object tracking to unseen domains by introducing natural language representation", "year": "2022" }, { "authors": "F Zeng; B Dong; Y Zhang; T Wang; X Zhang; Y Wei", "journal": "", "ref_id": "b39", "title": "Motr: End-to-end multiple-object tracking with transformer", "year": "2009" }, { "authors": "Y Zhang; P Sun; Y Jiang; D Yu; F Weng; Z Yuan; P Luo; W Liu; X Wang", "journal": "", "ref_id": "b40", "title": "Bytetrack: Multi-object tracking by associating every detection box", "year": "2022" }, { "authors": "Y Zhang; C Wang; X Wang; W Zeng; W Liu", "journal": "Int J Comput Vis", "ref_id": "b41", "title": "Fairmot: On the fairness of detection and re-identification in multiple object tracking", "year": "2021" }, { "authors": "Y Zhang; T Wang; X Zhang", "journal": "", "ref_id": "b42", "title": "Motrv2: Bootstrapping end-to-end multi-object tracking by pretrained object detectors", "year": "2009" }, { "authors": "X Zhou; V Koltun; P Krähenbühl", "journal": "", "ref_id": "b43", "title": "Tracking objects as points", "year": "2020" }, { "authors": "X Zhou; D Wang; P Krähenbühl", "journal": "", "ref_id": "b44", "title": "Objects as points", "year": "2019" }, { "authors": "X Zhu; W Su; L Lu; B Li; X Wang; J Dai", "journal": "", "ref_id": "b45", "title": "Deformable detr: Deformable transformers for end-to-end object detection", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 353.45, 293.99, 151.79, 9.65 ], "formula_id": "formula_0", "formula_text": "L = λ cls L cls + λ l1 L l1 + λ giou L giou ," }, { "formula_coordinates": [ 4, 108, 369.56, 396, 25.73 ], "formula_id": "formula_1", "formula_text": "K labels ŷi = {ŷ i j } K j=1 , M detect queries q d = {q d j } M j=1" }, { "formula_coordinates": [ 4, 216.55, 435.15, 288.12, 31.58 ], "formula_id": "formula_2", "formula_text": "σ(i,l) d = arg min σ (i,l) d ∈S (i,l) d M j=1 L d (i,l) j , ŷ(i,l) σ (i,l) d (j) ,(1)" }, { "formula_coordinates": [ 4, 157.92, 475.15, 38.96, 14.3 ], "formula_id": "formula_3", "formula_text": "(i,l) d , d (i,l) j" }, { "formula_coordinates": [ 4, 256.73, 559.36, 247.93, 13.38 ], "formula_id": "formula_4", "formula_text": "σ i t = Ψ(σ i-1 t , σ(i-1,L) d ),(2)" }, { "formula_coordinates": [ 4, 173.78, 588.23, 4.15, 6.12 ], "formula_id": "formula_5", "formula_text": "d" }, { "formula_coordinates": [ 5, 178.91, 376.26, 325.76, 30.32 ], "formula_id": "formula_6", "formula_text": "L σ i p = P j=1 ω j • L(d i j , ỹi σ i p (j) ), ω j = c e , if ỹi σ i p (j) ̸ = ∅ 0.5, else ,(3)" }, { "formula_coordinates": [ 5, 187.59, 615.42, 313.53, 31.34 ], "formula_id": "formula_7", "formula_text": "aij =    1, if i < M + N and j > M + N ; 1, if i ≥ M + N and ⌊ i-(M +N ) N ⌋ ̸ = ⌊ j-(M +N ) N ⌋; 0, otherwise. (4" }, { "formula_coordinates": [ 5, 501.12, 628.34, 3.48, 7.77 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 5, 237.97, 698.31, 266.63, 26.84 ], "formula_id": "formula_9", "formula_text": "L clip = T i=1 (L σ i r + L σ i p + L σ i g )/Oi,(5)" } ]
10.18653/v1/2020.acl-main.708
2023-11-07
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b19", "b36", "b66", "b46", "b13", "b36", "b38", "b35", "b21", "b29", "b45", "b9", "b51", "b62", "b52", "b65", "b36", "b14", "b63", "b22", "b40", "b8", "b18", "b61", "b50", "b32", "b35", "b21", "b29", "b45", "b35", "b21", "b29", "b45", "b35" ], "table_ref": [ "tab_3" ], "text": "In the era of data-driven decision-making, tabular data plays a crucial role in facilitating data analysis, serving as a concise and structured representation of information (Kukich, 1983;Pasupat and Liang, 2015;Chen et al., 2020c;Zhu et al., 2021;Zhao et al., 2022a;Tang et al., 2023). People often consult tables to extract valuable insights and make informed decisions. For example, sales managers typically explore large tables with specific business questions to gain insights about clients and processes. Sports coaches will analyze performance Figure 1: An example of QTSUMM. Given the numerous data points in the table, different users may be interested in various aspects for their own informationseeking or decision-making purposes. The system needs to perform human-like reasoning and analysis over relevant table regions to generate a tailored table summary.\ntables containing various statistics to develop game strategies and make team adjustments. However, effectively accessing and comprehending the information contained within a large and complex table can be time-consuming for users (Hurst, 2000;Pasupat and Liang, 2015;Pujara et al., 2021;Nan et al., 2022a). Text generation systems that can accurately summarize a provided table according to users' information needs have the potential to greatly enhance data analysis and expedite the process of obtaining data insights.\nExisting work and datasets on table-to-text generation (Parikh et al., 2020;Chen et al., 2020a;Cheng et al., 2022b;Lebret et al., 2016;Moosavi et al., 2021;Suadaa et al., 2021) have mainly focused on converting tabular data into coherent statements, aiming to present the structured data in a humanreadable format. However, these approaches have overlooked the fundamental goal of addressing users' information-seeking purposes. Table -to-text generation systems should adopt a more flexible and interactive approach that allows people to obtain a user-customized summary tailored to their information needs (Dang, 2006;Xu and Lapata, 2020;Zhong et al., 2021;Xu and Lapata, 2022;Zhou et al., 2023), as illustrated in Figure 1. While table question answering (QA) (Pasupat and Liang, 2015;Iyyer et al., 2017;Zhong et al., 2018;Chen et al., 2020c;Nan et al., 2022b) has made significant progress in answering fact-based questions, the primary focus of their approaches is on extracting relevant facts or entities from the table and composing short-form answers. Nevertheless, in real-world scenarios, users often have more complex and diverse information needs that extend beyond simple fact retrieval. They expect models to perform human-like reasoning and provide trustworthy explanations or analyses that accompany the extracted insights.\nWith comprehensive consideration of the realworld information needs of users when consulting tabular data, we propose a new task, query-focused table summarization. In this task, the model is required to generate a user-customized summary given the table and user query. 
To enable research in this area, we construct a human-annotated table-to-text generation dataset named QTSUMM1 , which contains 7,111 query-summary pairs over 2,934 Wikipedia tables covering diverse topics. Table 1 compares QTSUMM with previous table-to-text generation datasets. To the best of our knowledge, QTSUMM is the first dataset that tackles the task of generating user-customized table summaries based on real-world scenarios.
We provide a comprehensive evaluation of current state-of-the-art models, including text generation (Lewis et al., 2020;Raffel et al., 2020;Chung et al., 2022), table-to-text generation (Liu et al., 2022b;Zhao et al., 2022b;Jiang et al., 2022), and large language models (Touvron et al., 2023a,b;Zheng et al., 2023;Jiang et al., 2023a;Xu et al., 2023;OpenAI, 2023). Our results and analysis from different perspectives reveal that existing models struggle to solve this new task, highlighting the challenges the models face when performing human-like reasoning and analysis to generate summaries tailored to users' information needs.
To improve text generation systems for QTSUMM, we propose REFACTOR. Given a user query, REFACTOR can retrieve and reason over query-relevant facts from the source table to generate multiple data insights in natural language sentences. Our results illustrate that directly concatenating the original input sequence with REFACTOR's generation can bring effective improvements to state-of-the-art baseline systems.
We conclude our main contributions as follows:
• We propose a new query-focused table summarization task, and construct a large-scale benchmark, QTSUMM, comprising 7,111 query-summary pairs collected in real-world situations. Strict quality control measures are employed to ascertain the high quality of the dataset.
• We conduct a systematic study of state-of-the-art models on QTSUMM, and illustrate that they are still far behind expert performance, motivating future research on this new table-to-text task.
• We present REFACTOR for the efficient retrieval and reasoning of query-relevant facts from tables. It delivers significant improvements over state-of-the-art text generation baselines.
Table-to-Text Generation Existing table-to-text generation work typically formulates the problem as either a single-sentence generation task (Chen et al., 2020a;Parikh et al., 2020;Cheng et al., 2022b;Liu et al., 2022a), or a generic summarization task (Lebret et al., 2016;Moosavi et al., 2021;Suadaa et al., 2021). In the single-sentence generation task (Parikh et al., 2020;Chen et al., 2020a;Cheng et al., 2022b), the focus is on generating fluent and faithful descriptions using provided table regions as a control for text generation. Nevertheless, using table regions for controlling text generation does not align with real-world scenarios, where people refer to tabular data for information-seeking purposes. The generic table summarization tasks (Lebret et al., 2016;Moosavi et al., 2021;Suadaa et al., 2021) aim to create concise and informative summaries based on the content of a given domain-specific table (i.e., sports or scientific). In contrast, the tables in QTSUMM cover diverse topics. Furthermore, considering the numerous data points in a table, various users may be interested in different aspects for their own information-seeking purposes. The most closely related dataset, FeTaQA (Nan et al., 2022b), rephrases TOTTO (Parikh et al., 2020) statements into questions and uses the same statements as the answers. In comparison with FeTaQA, the queries in QTSUMM were annotated under real-world scenarios, making them more natural and better reflecting users' actual information needs."
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b36", "b14", "b63", "b12", "b18", "b10", "b49", "b64", "b27", "b9", "b51", "b44", "b52", "b65", "b62" ], "table_ref": [], "text": "Reasoning Over Tabular Data Enhancing the table reasoning capabilities of models is essential for a variety of table-related tasks, such as table question answering (Pasupat and Liang, 2015;Iyyer et al., 2017;Zhong et al., 2018;Zhao et al., 2023d), table fact verification (Chen et al., 2020b), and table-to-text generation (Chen et al., 2020a;Cheng et al., 2022b). One prevalent approach is pre-training models with table-text joint reasoning data (Herzig et al., 2020;Liu et al., 2022b;Zhao et al., 2022b;Liu et al., 2022a;Jiang et al., 2022;Dong et al., 2022;Cheng et al., 2022a;Xie et al., 2022). Nevertheless, these models generate text in an end-to-end manner, resulting in reduced explainability and difficulties in handling more complex reasoning, such as arithmetic calculation. Therefore, we propose REFACTOR, which can retrieve and generate query-relevant facts from tables as intermediate results for the model input (Zhou et al., 2022;Zhao et al., 2023b), mitigating the implicit reasoning processes of text generation models.
Query-Focused Summarization Initially formulated as a document summarization task, QFS aims to generate summaries from documents that are tailored to specific user queries (Dang, 2006). Despite its potential real-world applications, QFS remains a challenging task due to the lack of large-scale training data. Existing works have attempted to address this issue by leveraging distant NLP resources, including question answering (Xu and Lapata, 2020), paraphrase identification (Su et al., 2020), and generic summarization (Xu and Lapata, 2022;Zhou et al., 2023). Recently, Zhong et al. (2021) adopted QFS for meeting summarization and proposed a human-annotated benchmark over meeting transcripts. As with text, effectively accessing and comprehending the information contained within a large and complex table can be time-consuming for users, yet QFS remains unexplored in table-to-text generation. In this work, we extend QFS to this new modality for more effective information-seeking and decision-making purposes.
3 Query-Focused Table Summarization
Problem Formulation. Given a user query Q and a source table T, the model should generate a paragraph-long summary Y = (y_1, ..., y_n) tailored to the query:
Y = argmax ∏_{i=1}^{n} P(y_i | y_{<i}, Q, T; θ),  (1)
where θ denotes the parameters of a neural text generation model, and y_i denotes the i-th token in the generated summary." }, { "figure_ref": [], "heading": "Data Collection Principles", "publication_ref": [], "table_ref": [], "text": "At a high level, the goal of the data collection process is to obtain high-quality user queries and corresponding paragraph-long summaries grounded on the tabular data. We outline our key criteria for designing a benchmark that thoroughly evaluates the table-to-text summarization capabilities of models.
To achieve this, we first design three principles for annotating a good query-summary pair:
• Comprehensiveness: The tailored summary should provide enough details and analysis of the source table to respond to the user query, fulfilling the user's information need.
• Attributability & Faithfulness: The query should be answerable using only information from the source table. The summary should be grounded on the source table, and not contain any unfaithful or nonsensical text.
• Fluency: Both the user query and its corresponding table summary should be coherent and fluent."
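To make the formulation in Eq. (1) above concrete, the following minimal sketch serializes a user query together with a flattened table and generates a summary with an off-the-shelf sequence-to-sequence model. The checkpoint name, separator strings, and generation settings are illustrative assumptions; in practice the model would be fine-tuned on QTSUMM query-summary pairs.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def flatten_table(header, rows):
    # Flatten the table into one string: header first, then each row,
    # with cells in a row separated by a vertical bar.
    parts = ["[HEADER]: " + " | ".join(header)]
    parts += [f"[ROW] {i}: " + " | ".join(map(str, row)) for i, row in enumerate(rows, 1)]
    return " ".join(parts)

def generate_summary(query, header, rows, model_name="google/flan-t5-large"):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    source = f"query: {query} table: {flatten_table(header, rows)}"
    inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=1024)
    output_ids = model.generate(**inputs, max_new_tokens=256, num_beams=5)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```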
}, { "figure_ref": [], "heading": "QTSUMM Annotation Pipeline", "publication_ref": [ "b35" ], "table_ref": [], "text": "To ensure that QTSUMM annotation fulfills the aforementioned principles, we carefully design an annotation pipeline consisting of following steps:\nSource Table Collection QTSUMM uses tables from LOGICNLG (Chen et al., 2020a) and TOTTO (Parikh et al., 2020) datasets as source tables, as these tables are crwaled from Wikipedia and covers diverse domains and topics. We filter out tables that are 1) too large or too small, 2) with only string-type columns, or 3) with hierarchical structures (e.g., containing more than one table header). Then we randomly sample 2,000 candidate tables from LOGICNLG and TOTTO, respectively, for the query-summary annotation.\nUser Query Annotation Given a table, the annotators are required to read its content, and determine whether the table is informative and intelligible to common web users. Then they were asked to come up with two or three queries, assuming they are users seeking certain information from the table. We require each query to be answerable using information only from the as query responses, we avoid queries that can be answered in a short sentence (e.g., \"Which country held the 2022 FIFA World Cup?\").\nQuery-Focused Summary Annotation Given a table and user query, we ask another annotator to use only information from the source table to write a paragraph-long summary that satisfies the user's information need. We encourage annotators to produce sophisticated summaries that 1) contain as much information from the table as possible, and 2) involve more types of reasoning over multiple relevant table regions. To further encourage high quality annotations, we adopt the \"two channel collection\" design (Chen et al., 2020b), in which the annotators would be paid 60% more if their summaries are manually verified to exhibit adequate complexity. We also require the annotators to annotate the row indices of relevant table regions that are referenced in the written summary, allowing future researchers to quantify how well the summaries are grounded in the table in their work." }, { "figure_ref": [], "heading": "Multi-Round Validation", "publication_ref": [], "table_ref": [], "text": "We conduct a multiround validation protocol to ensure that the annotated data fulfills the aforementioned annotation principles. We first assign query annotators to validate each summary against their corresponding queries, and fix the mistakes if there are any. Then we check 1) whether a query-summary pair contain adequate information and complex aggregation by examining the length of the summary, and 2) whether the information in summary is essential in responding to the user query. We manually revise pairs that do not meet the above standard." }, { "figure_ref": [], "heading": "Annotation Quality Control", "publication_ref": [], "table_ref": [ "tab_6", "tab_8" ], "text": "Table 2 describes the basic statistics of QTSUMM.\nIn addition to the multi-round validation, we carefully design several quality control approaches, comprising expert annotation and numerous annotation de-biasing designs, to ensure the high quality of QTSUMM annotations.\nExpert Annotators To help improve the annotation process, five experts with professional experience in the text summarization tasks are invited to conduct the internal annotation. 
They are asked to provide feedback regarding the task instructions and the user experience of the annotation interface, based on which we iteratively modify the annotation guideline and interface design. In the stage of external annotation, we enroll 17 graduate students majoring in STEM fields (10 females, and 7 males).\nWe do not use the crowd-source annotation platform such as Mechanical Turk as our preliminary study indicates that annotators on MTurk fail to annotate high-quality query-summary data. Before starting the official annotation process, each annotator is given a two-hour training session to learn the annotation requirements and interface.\nAnnotation De-biasing We observed several kinds of annotation bias during our internal annotation, and we proposed countermeasures as follows for annotation de-biasing: Source Table Diversity: During internal annotation, we found that many tables in LOGICNLG have similar content. For example, there are around 200 tables describing the results of football games, with identical table headers. To ensure the diversity of source tables, we keep only one table for each unique table header.\nQuery Diversity: When annotating queries, annotators may prefer simpler ones, resulting in low query diversity. Therefore, we frequently monitor the diversity of queries for each annotator. Annotators are also encouraged to craft queries that are either creative or require complex reasoning in summarization, resulting in a doubled payment to compensate them for the extra time.\nSupporting Fact Position: We found that annotators prefer to raise queries regarding the first few rows of each table. To deal with such bias regarding supporting fact positions, we randomly highlight certain rows for each table in the annotation interface. We require the annotators to write queries whose summaries should cover at least two rows of the highlighted regions.\nWe also report the human evaluation scores and inter-evaluator agreements over 200 sampled querysummary pairs. QTSUMM has a high annotation quality and inter-annotator agreement (Table 3)." }, { "figure_ref": [], "heading": "QTSUMM Evaluation", "publication_ref": [ "b33", "b37", "b23", "b0", "b54", "b12", "b12" ], "table_ref": [], "text": "We develop a comprehensive approach for evaluating QTSumm, incorporating both automated and human evaluation. We adopt following popular automated evaluation metrics:\nBLEU (Papineni et al., 2002) computes the geometric average of the precision over output text's ngrams. We used SacreBLEU (Post, 2018) that produces comparable and reproducible BLEU scores.\nROUGE (Lin and Hovy, 2003) measures the word overlap between the candidate and reference summaries. We reported F1 score for ROUGE-L (longest common subsequences).\nMETEOR (Banerjee and Lavie, 2005) is based on a generalized concept of unigram matching between the machine-produced translation and human-produced reference translations.\nBERTScore (Zhang et al., 2020) computes the sim-ilarity between the reference and generated summary using contextual word embeddings.\nTAPAS-Acc (Herzig et al., 2020;Liu et al., 2022a) is a reference-free metric that uses TAPAS (Herzig et al., 2020) fine-tuned on the Tab-Fact dataset (Chen et al., 2020b) as the backbone to evaluate the faithfulness of generation.\nAutoACU (Liu et al., 2023a) is an interpretable and reference-based summarization evaluation system that exhibits better alignment with human judgements. 
The A2CU first extracts atomic content units (ACUs) from the generated summary and then evaluates them against the reference. A3CU is an accelerated version of A2CU that directly computes the similarity between two texts without extracting ACUs, while targeting the same evaluation goal. We use the F1 score of A3CU for evaluation.
For human evaluation, the summaries from different models were evaluated by experts on three criteria (i.e., comprehensiveness, faithfulness, and fluency) that have been discussed in Section 3.2. Each summary was scored from 1 (worst) to 5 (best) for each criterion, with the final score averaged across different evaluators." }, { "figure_ref": [ "fig_0" ], "heading": "REFACTOR", "publication_ref": [ "b64", "b27" ], "table_ref": [], "text": "QTSUMM requires models to perform human-like reasoning in generating summaries that provide comprehensive and precise analysis of the source table to fulfill the user's information need. However, existing end-to-end text generation models rely on error-prone implicit reasoning processes for generating text, leading to diminished explainability and challenges in addressing user queries that necessitate complex human-like reasoning (Zhou et al., 2022;Zhao et al., 2023b). To address this, we present REFACTOR, which retrieves and reasons over query-relevant information from tabular data to generate several NL data insights (i.e., facts) as explicit reasoning results. As shown in Figure 3, the generated facts are concatenated to the model input to mitigate the implicit reasoning issues, enhancing the comprehensiveness and faithfulness of the generated summary. We next discuss the implementation of REFACTOR.
[Figure 3 example] Query: Which company earns the highest profit in the Oil and Gas industry, and how does it compare to the most profitable company overall? A baseline model (e.g., Flan-T5, PLOG) relying on error-prone implicit reasoning outputs: \"National Petroleum earns the highest profit in the Oil and Gas industry, amounting to $4,575 million dollars. However, the most profitable company overall, Walmart, earns $7,306 million more profit than Sinopec Group.\" With the explicit and faithful reasoning of REFACTOR, the output becomes: \"Within the Oil and Gas industry, Sinopec Group earns the highest profit -$6,205 million. However, compared to the most profitable company overall, Apple, the profit earned by Sinopec Group is much lower. In fact, Apple earns $51,306 million more profit than Sinopec Group.\"" }, { "figure_ref": [], "heading": "Fact Generation", "publication_ref": [], "table_ref": [], "text": "Given the user query and source table, REFACTOR generates several candidate facts by executing various forms of human-like reasoning over the table. Specifically, we define 6 types of table reasoning operations (e.g., numerical operation, counting, and conjunction) that are necessary for the QTSUMM task, as shown in Table 7 in the Appendix.
For each reasoning operation, the fact generator (adopted from Zhao et al. (2022b)) takes a table and a query as input. It produces multiple facts based on the fact template. Each fact template includes several placeholders that need to be filled with information retrieved from the table. Specifically, column col and cell value val are indexed to specify the column and cell name, respectively. 
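As a simplified sketch of this template-based fact generation (the remaining template details continue below), the snippet instantiates two of the six operation types, numerical operation and numerical ordering, over a pandas table. The operation set, condition handling, and output phrasing are heavily reduced relative to the full REFACTOR fact generators.

```python
import pandas as pd

def numerical_operation_facts(table: pd.DataFrame, num_col: str, cond_col: str):
    """Instantiate: 'The OPERATOR of <num_col> with <cond_col> is <value> is <result>.'"""
    facts = []
    for value, group in table.groupby(cond_col):
        col = pd.to_numeric(group[num_col], errors="coerce").dropna()
        if col.empty:
            continue
        facts.append(f"The sum of {num_col} with {cond_col} is {value} is {col.sum():g}.")
        facts.append(f"The average of {num_col} with {cond_col} is {value} is {col.mean():g}.")
    return facts

def ordering_facts(table: pd.DataFrame, name_col: str, sort_col: str):
    """Instantiate: 'The <name_col> ordered by <sort_col> are <values>.'"""
    ordered = (table.assign(_key=pd.to_numeric(table[sort_col], errors="coerce"))
                    .sort_values("_key", ascending=False))
    names = ", ".join(map(str, ordered[name_col].tolist()))
    return [f"The {name_col} ordered by {sort_col} are {names}."]
```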
Some templates also regulate that the selected column and cell value must be date or number type.\nOPERATOR corresponds to operators that are instantiated according to the specific reasoning reasoning. And CONDITION:i can be 1) a cell value from the i-th column; or 2) a number/temporal comparison statement if the i-th column is date or number type. After substituting all the placeholders in the provided template, the fact generator will programmatically return the executed_results and form one fact. Once facts for a {table, query} pair are collected from different fact generators, we pass them to the Fact Ranking process." }, { "figure_ref": [], "heading": "Fact Ranking", "publication_ref": [ "b42" ], "table_ref": [], "text": "Given the query and source table, each fact generator will be utilized to generate several queryrelevant facts, resulting in a large number of candidate facts in total. Therefore, we need to rank the generated facts to select the most relevant ones.\nWe use the QA encoding model (Reimers and Gurevych, 2019) to obtain the embedding of the query and each generated fact. Then, we select the top-n generated facts with the highest cosine similarity to the query embedding. In practice, we assign n as max( row num × column num 2 , 5), and ensure that the number of selected facts from each type of reasoning operation does not exceed 3. The selected facts, which are handy and readily available for end-to-end text generation systems, are then concatenated into the model input." }, { "figure_ref": [], "heading": "QTSUMM Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Baseline Systems", "publication_ref": [], "table_ref": [], "text": "We evaluate the following three types of state-ofthe-art baseline systems2 on QTSUMM:" }, { "figure_ref": [], "heading": "Text Generation Models", "publication_ref": [ "b22", "b40", "b1", "b32" ], "table_ref": [], "text": "BART (Lewis et al., 2020) is a pre-trained denoising autoencoder with transformer-based architecture and shows effectiveness in NLG tasks.\nT5 (Raffel et al., 2020) demonstrates effectiveness in NLG tasks by treating all NL problems as textto-text tasks during pre-training stage.\nFlan-T5 (Chung et al., 2022) enhances T5 by scaling instruction fine-tuning and demonstrates better human-like reasoning abilities than the T5. GPT (Brown et al., 2020;OpenAI, 2023) is a powerful large language model which is capable of generating human-like text and performing a wide range of NLP tasks in a few-shot setting." }, { "figure_ref": [], "heading": "5.1.2", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b20" ], "table_ref": [], "text": "The specifics of input data serialization and LLM prompting examples are discussed in Appendix A. All experiments were conducted on an 8 NVIDIA RTX A6000 48GB cluster. We selected the large version for all fine-tuned baseline models, whose weights are publicly available at HuggingFace. For each fine-tuning experiment, we ran 15 epochs with a batch size of 128. The best fine-tuning checkpoints were selected according to the validation loss.\nThe experiments for open-sourced LLMs were conducted using vLLM framework (Kwon et al., 2023). We used gpt-3.5-turbo-0613 for GPT-3.5 and gpt-4-0613 for GPT-4 via the OpenAI APIs7 . For LLM hyperparameter settings, we set temperature as 1.0, Top P as 1.0, and maximum output length as 256." 
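Returning to the REFACTOR pipeline described earlier in this section, here is a rough sketch of the fact-ranking step, using a publicly available sentence-embedding model as a stand-in for the QA encoding model. The model name, the cap of three facts per operation type, and n = max(rows × columns / 2, 5) follow the description above; everything else (tie-breaking, batching) is illustrative.

```python
from collections import defaultdict
from sentence_transformers import SentenceTransformer, util

def rank_facts(query, facts_by_op, num_rows, num_cols,
               model_name="multi-qa-MiniLM-L6-cos-v1", per_op_cap=3):
    """facts_by_op maps an operation name (e.g., 'counting') to its generated facts."""
    model = SentenceTransformer(model_name)
    all_facts = [(op, f) for op, facts in facts_by_op.items() for f in facts]
    query_emb = model.encode(query, convert_to_tensor=True)
    fact_embs = model.encode([f for _, f in all_facts], convert_to_tensor=True)
    scores = util.cos_sim(query_emb, fact_embs)[0]          # cosine similarity to the query

    n = max(num_rows * num_cols // 2, 5)
    selected, per_op = [], defaultdict(int)
    for idx in scores.argsort(descending=True).tolist():     # most query-relevant first
        op, fact = all_facts[idx]
        if per_op[op] < per_op_cap:                          # at most 3 facts per operation
            selected.append(fact)
            per_op[op] += 1
        if len(selected) == n:
            break
    return selected                                          # concatenated into the model input
```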
}, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_13", "tab_15" ], "text": "We draw following conclusions based on the automated and human evaluation results (Table 4 &6). Analyze the correlation between the size of the geographical area of a Gmina type and its population?" }, { "figure_ref": [], "heading": "Importance of table structure understanding", "publication_ref": [ "b53", "b17" ], "table_ref": [], "text": "REFACTOR employs the QA encoding model for fact ranking. However, it struggles to understand complex information needs from users, such as the \"correlation between A and B\", and might consequently rank irrelevant facts higher. backbones, demonstrating the importance of considering table structure for the QTSUMM task.\nImportance of reasoning and analysis Among text generation models, Flan-T5, which enhances T5 through scaled instruction fine-tuning, outperforms T5. Moreover, LLMs with improved reasoning capabilities (i.e., Llama-2-70B and GPT-4) also achieve better performance. These findings indicate the significance of reasoning and analytical skills in handling the QTSUMM task.\nMismatch between automated and human evaluation Despite receiving low scores in popular automated evaluation metrics such as BLEU and ROUGE, GPT-* exhibit better performance than state-of-the-art fine-tuned models in human evaluation. This finding underscores the need for future research to investigate the development of automated evaluation metrics for the QTSUMM task that better align with human judgments (Zhang and Bansal, 2021;Liu et al., 2023a;Jiang et al., 2023b).\nEffectiveness of REFACTOR As assessed by human evaluation, baseline systems employing REFACTOR typically yield better performance, especially in faithfulness-level. This suggests the efficacy of REFACTOR in enhancing the reasoning process in text generation." }, { "figure_ref": [], "heading": "Error Analysis", "publication_ref": [], "table_ref": [ "tab_19" ], "text": "For a deeper understanding of the query-focused table summarization task on QTSUMM, we conduct an error analysis to illustrate existing challenges.\nWe identify four common mistakes that current text generation models are likely to make (i.e., hallucination, factual incorrectness, user intent misunderstanding, and repetition), providing detailed examples and explanations for each type of common mistake in Table 8 in the Appendix." }, { "figure_ref": [], "heading": "REFACTOR Analysis", "publication_ref": [], "table_ref": [ "tab_14" ], "text": "We also undertake a human evaluation to examine the efficacy of REFACTOR in generating queryrelevant facts from tabular data. Specifically, we randomly sample 200 examples from QTSUMM validation set, and ask two human evaluators to evaluate each fact generated by REFACTOR, determining its relevance to the query. 56.4% generated facts (528 out of 937) are labeled as \"relevant\", suggesting an adequate coverage of REFACTOR. To delve deeper into this, we also conduct a case study examining the failure cases, specifically those examples where less than two facts were annotated as \"relevant\". We identified three kinds of common failure cases: (1) difficulty in parsing cell values via rule-based methods, (2) complex user query causes difficulty in ranking related facts, and (3) unsupported reasoning operations. We provide detailed examples and explanations in Table 5." 
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper defines a new query-focused table summarization task, and constructs a large-scale benchmark, QTSUMM. We investigate a set of strong baselines, including text generation, table-to-text generation, and large language models. Experimental results and manual analysis reveal that the new task presents significant challenges in tableto-text generation. Moreover, we propose a novel approach named REFACTOR, to retrieve and reason over query-relevant information from tables, improving the faithfulness of generated summary." }, { "figure_ref": [], "heading": "Limitations and Future Work", "publication_ref": [ "b12", "b43", "b28", "b34", "b11", "b39", "b53", "b17" ], "table_ref": [], "text": "The baseline systems provided have a restricted maximum number of tokens they can accommodate (e.g., 1024 for all examined fine-tuned models), which prevents them from generating summaries for large tables that, when converted into a sequence, exceed the maximum number of tokens. To handle large tables (e.g., with more than 300 table cells), future work can apply neural models (Herzig et al., 2020;Liu et al., 2022b) to first filter out those query-irrelevant rows or columns. Moreover, this paper demonstrates the effectiveness of using intermediate results obtained from explicit reasoning operations to mitigate the implicit reasoning issues. However, the proposed REFAC-TOR utilizes template-based method to generate facts. Although such template-based approach can ensure the factual correctness of generated facts, as discussed in Section 5.5, it might not cover all crucial facts for some complex user query. We believe following directions warrant further exploration: (1) Complex query decomposition. Our case study reveals that the TAPEX-based fact ranking module struggles with comprehending complex questions. To address this, future research could investigate LLM chain-of-thought methods to break down complex questions into more understandable and actionable sub-questions. (2) Tool usage. The predefined and template-based execution modules in the REFACTOR fact generation phase have their limitations. Recent studies (Schick et al., 2023;Lu et al., 2023;Paranjape et al., 2023;Gou et al., 2023;Qiao et al., 2023) highlight the impressive abilities of LLMs in making and utilizing tools for problem-solving. It would be intriguing to explore if LLMs can produce executable programs from scratch to derive query-relevant insights. (3) Explainable automated evaluation. In Section 5.3, a discrepancy between automated and human evaluation results is observed. Such discrepancies are concerning, as developers might opt for suboptimal systems for real-world applications if they solely rely on automatic metrics for comparing and ranking different text generation systems. Therefore, a more reliable and explainable automated evaluation system is required (Zhang and Bansal, 2021;Liu et al., 2023a,b;Jiang et al., 2023b)." }, { "figure_ref": [], "heading": "Ethical Consideration", "publication_ref": [ "b35" ], "table_ref": [], "text": "The source tables in QTSUMM were collected from LOGICNLG (Chen et al., 2020a) and TOTTO (Parikh et al., 2020) datasets, which are publicly available under the MIT license 8 and CC BY-SA 3.0 license 9 , respectively. 
They both permit us to compose, modify, publish, and distribute additional annotations upon the original dataset.\nFor the external annotation of QTSUMM, we hired 17 graduate students majoring in STEM majors. We regard 1) creating three queries for one table, and validating the corresponding summaries annotated by others, and 2) composing a queryfocused summary response as a unit task. And we paid around $1.5 for each unit task. For creative annotation rewards, we paid additional $0.5 for a query, and $1.5 for a summary. Averagely, an annotator can finish 7 unit tasks per hour after training and practicing. And the hourly rates are in the range of $9 and $13 based on the different working speed (above the local average wage of similar jobs). We recommended that annotators complete a maximum of 30 unit tasks per day in order to reduce pressure and maintain a comfortable pace. In total, the approximate working hours to annotate QTSUMM dataset was 1,400 hours. The whole annotation work lasted about 40 days." }, { "figure_ref": [], "heading": "A Implementation Details", "publication_ref": [ "b49" ], "table_ref": [], "text": "Input Data Serialization The input contains a user query, and corresponding table data. For text generation and large language models (Section 5.1.1 & 5.1.3), we followed recent works on table-to-text generation (Liu et al., 2022b;Xie et al., 2022;Zhao et al., 2023c,a) to flatten the table data as T=[HEADER]:h, [ROW]1:r 1 ,..., [ROW]n:r n , where h is table header, r i is the i-th table row. For text generation models, [HEADER] and [ROW] are special tokens indicating the region of table headers and rows respectively; while for LLMs, we set them as empty strings. We also separated headers or cells in different columns using a vertical bar |. In this way, the flattened table input can be fed directly into text generation models. For table-to-text generation models (Section 5.1.2), we followed their original data processing methods to input the query and table data. The difference between val:1 and val:2 in col is executed_results.\nThe difference between China and Canada in Gold is 16.\nTable 7: 6 reasoning operations, along with fact template and examples, defined for the fact generation process of REFACTOR. Variable names indicate permissible instantiations. col denotes a column name, val denotes a cell value, and executed_results denotes the execution results of the function. OPERATOR is instantiated according to the specific reasoning operation, e.g., for \"Numerical Operation\", OPERATOR is replaced with \"sum\" or \"average\"; CONDITION can be 1) a cell value from the i-th column, or 2) number/temporal comparison statement (e.g. \"later than 1967\") if the i-th column is of number or date type. The countries in East Asia with an HDI higher than 0.8 are Hong Kong (PRC), Japan, and South Korea. Hong Kong has an HDI of 0.898 and a population density per square kilometer of 6390. Japan has an HDI of 0.901 and a population density per square kilometer of 337. South Korea has an HDI of 0.897 and a population density per square kilometer of 500. All three countries have a comparatively high population density which is likely to be a factor in their high HDI.\nError Type: User Intent Misunderstanding" }, { "figure_ref": [], "heading": "Explanation:", "publication_ref": [], "table_ref": [], "text": "Include information that is irrelevant to the user question." 
}, { "figure_ref": [], "heading": "Analysis:", "publication_ref": [], "table_ref": [], "text": "The query does not ask for country with lowest HDI, or any country with HDI lower than 0.8. Error Type: Repetition Explanation: Generate repetitive information." }, { "figure_ref": [], "heading": "Analysis:", "publication_ref": [], "table_ref": [], "text": "The information of these buildings being the tallest in Portland, Oregon has been mentioned repetitively throughout the system output, while the system fail to also distinguish them (until which year each of them was the tallest respectively). " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to dedicate this paper to the memory of Dr. Dragomir Radev. Dr. Radev's leadership, guidance, and expertise were instrumental in shaping the direction and quality of this project. We appreciate the efforts of all annotators in constructing QTSUMM and conducting human evaluation. We are grateful to the Google TRC program for their support. We would also like to thank the anonymous reviewers and action editors for constructive discussions and feedback." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Our data and code are publicly available at https: //github.com/" } ]
People primarily consult tables to conduct data analysis or answer specific questions. Text generation systems that can provide accurate table summaries tailored to users' information needs can facilitate more efficient access to relevant data insights. Motivated by this, we define a new query-focused table summarization task, where text generation models have to perform human-like reasoning and analysis over the given table to generate a tailored summary. We introduce a new benchmark named QTSUMM for this task, which contains 7,111 human-annotated query-summary pairs over 2,934 tables covering diverse topics. We investigate a set of strong baselines on QTSUMM, including text generation, table-to-text generation, and large language models. Experimental results and manual analysis reveal that the new task presents significant challenges in table-to-text generation for future research. Moreover, we propose a new approach named REFACTOR, to retrieve and reason over query-relevant information from tabular data to generate several natural language facts. Experimental results demonstrate that REFACTOR can bring improvements to baselines by concatenating the generated facts to the model input.
QTSUMM: Query-Focused Summarization over Tabular Data
[ { "figure_caption": "Figure 3 :3Figure 3: Enhancing fine-tuned models with the proposed REFACTOR. After generating and selecting the top-n query-relevant facts obtained through various reasoning operations (e.g., numerical comparison, counting), these facts are concatenated with query and table data as the model input in both fine-tuning and inference stage. REFACTOR can mitigate the error-prone implicit reasoning issues of end-to-end text generation systems.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: An example of LLM zero-shot prompt prefix wo. REFACTOR for the QTSUMM task.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: An example of LLM zero-shot prompt prefix w. REFACTOR for the QTSUMM task.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "# Tables / Statements# Words / StatementExplicit ControlRich in Analysis & Reasoning?Single-sentence Table-to-TextTOTTO (Parikh et al., 2020)Wikipedia83,141 / 83,14117.4 Table region✗LOGICNLG (Chen et al., 2020a)Wikipedia7,392 / 36,96014.2 Table regions✓HiTab (Cheng et al., 2022b)Statistics web3,597 / 10,67216.4 Table regions & reasoning operator✓Generic Table SummarizationROTOWIRE (Lebret et al., 2016)NBA games4,953 / 4,953337.1 ✗✗SciGen (Moosavi et al., 2021)Sci-Paper1,338 / 1,338116.0 ✗✓NumericNLG (Suadaa et al., 2021) Sci-Paper1,355 / 1,35594.2 ✗✓Table Question AnsweringFeTaQA (Nan et al., 2022b)Wikipedia10,330 / 10,33018.9 Queries rewritten from TOTTO✗Query-Focused Table SummarizationQTSUMMWikipedia2,934/ 7,11168.0 Queries from real-world scenarios✓", "figure_id": "tab_2", "figure_label": "Source", "figure_type": "table" }, { "figure_caption": "Comparison between QTSUMM and existing table-to-text generation datasets.", "figure_data": "", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Basic statistics of QTSUMM dataset.", "figure_data": "", "figure_id": "tab_6", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_8", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Amazon, ... 2. The Company Name, with Industry is Oil and gas, ordered by Profit ($ Million) is Sinopec Group, National Petroleum. 3. The sum of Profit with Industry is Oil and gas is 10780. 4. The difference between Apple and Sinopec Group in Profit is 51306. 5. 
....", "figure_data": "REFACTORFact GenerationFact RankingInput data serialization", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_11", "figure_label": "Title", "figure_type": "table" }, { "figure_caption": "", "figure_data": "5.1.3 Large Language ModelsLlama-2 3 (Touvron et al., 2023a,b) is an open-source large language model trained on large-scaleand publicly available datasets.Vicuna 4 (Zheng et al., 2023) is tuned from Llama-1with instruction-following data, exhibiting betterinstruction-following capabilities.Mistral 5 (Jiang et al., 2023a) is a 7-billion-parameter LLM that outperforms Llama-2-13Bacross most of popular evaluated benchmarks.Lemur 6 (Xu et al., 2023) is tuned from Llama-2with instruction-following data, exhibiting compet-itive natural language and coding capabilities.to-Text Generation ModelsTAPEX (Liu et al., 2022b) continues pre-trainingthe BART model by using a large-scale corpus ofsynthetic SQL query execution data. It shows bettertable understanding and reasoning abilities.ReasTAP (Zhao et al., 2022b) enhances the tableunderstanding and reasoning abilities of BART bypre-training on a synthetic Table QA corpus.OmniTab (Jiang et al., 2022) uses the same back-bone as TAPEX, and is further pre-trained on col-lected natural and synthetic Table QA examples.", "figure_id": "tab_12", "figure_label": "-", "figure_type": "table" }, { "figure_caption": "Automated evaluation results on the QTSUMM test set, involving three types of baseline systems with and without REFACTOR. We used chat or instruct version for each type of LLMs. Within each experimental setting, we used A3CU (F-score) as the ranking indicator of model performance. Due to the budget constraints, for all LLM w. REFACTOR experiments, we randomly selected 200 samples.", "figure_data": "Table-to-text generation models achieve better per-formance than their corresponding text-generation", "figure_id": "tab_13", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Case study on REFACTOR's failure cases.", "figure_data": "13 / 200 Unsupported rea-Who are the top three coachesThe table only contains \"wins\" and \"overall games\"soning operationswith the highest win percent-columns. Models must compute the winning per-ages? Analyze their perfor-centages independently. However, REFACTOR doesmance in the 2019-2020 season.not support such rate calculations5 / 200 Other errors141 / 200 Successful cases", "figure_id": "tab_14", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Human evaluation results (Likert Scale Scoring) of selected baselines on the test set. Five experts are enrolled to evaluate 50 predictions for each model.", "figure_data": "", "figure_id": "tab_15", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Game have Attendance greater than 10,235. The sum of Earning with Point is greater than 140 is 430,027.", "figure_data": "ReasoningExample of Fact TemplatesExample of FactConjunctionThe col that have CONDTION areThe Player Name that have Country is Canadaexecuted_results.areCorey Conners, Nick Taylor,Adam Svensson.Counting 2 Temporal or executed_results col:1 have col:2 CONDITION:2. 
The col:1 ordered by col:3 are The Company ordered by Sales are Apple,Numericalexecuted_results.Nvidia, Google, (...abbreviate...)OrderThe col:1, with col:2 CONDITION:2,ordered by col:3 are executed_results.Temporal orThe col:1 that col:2 CONDITION:2 areThe institutions that Founded year is ear-Numericalexecuted_results.lier than 1860 are Adrian College,ComparisonMichigan State University.NumericalThe OPERATOR of col:1 with col:2OperationCONDITION:2 is executed_results.(Sum, Avg)NumericalOperation(Diff)", "figure_id": "tab_16", "figure_label": "", "figure_type": "table" }, { "figure_caption": "This was the only race which was decided by less than 10 percentage points, with Hudson winning 54.1% of the vote to Kissell's 45.9%. The close margin of the race demonstrates that both candidates were competitive, and it was an intense contest until the very end.The race in the North Carolina 7th district was the most competitive, as the Democratic incumbent Mike McIntyre won by a slim margin, with only a 0.2% difference between him and his Republican challenger David Rouzer. Furthermore, this election was the only one among all North Carolina districts in 2012 that resulted in a margin of victory within less than 1%.", "figure_data": "QuerySystem outputReferenceError analysisQ: Which driver per-Paul Tracy performed betterPaul Tracy is more competitive.Error Type:formed better, Tracy orthan Mario Haberfeld, as ev-Tracy finished the race in sec-HallucinationHaberfeld, and why isidenced in their results fromond place with a time of +7.446that?the 2004 Centrix Financialseconds, whereas Haberfeld fin-Explanation:Grand Prix of Denver. In theished in eighth place with aMention facts that are notrace, Tracy finished in secondtime of +24.141 seconds. Thisshown in the table.place, while Haberfeld placed indemonstrates that Tracy waseighth. Additionally, Tracy be-able to navigate the track muchAnalysis:longs to Forsythe Racing, whichfaster than Haberfeld and placeThere is no information indicat-has had a more successful trackhigher in the race. Additionally,ing that Forsythe Racing has arecord compared to Haberfeld'sTracy earned 28 points whilemore successful track recordWalker Racing. Additionally,Haberfeld earned only 15 points,compared to Walker Racing,Tracy was able to maintain hiswhich confirms that Tracy isand the model incorrectly statesposition in the race for the fullstronger competitor. In conclu-that Haberfeld completed only90 laps, whereas Haberfeld wassion, Tracy is the more competi-86 laps instead of the correctonly able to make it to 86 laps.tive driver.90.Based on these results, it is clearthat Tracy was more competitivecompared to Mario Haberfeld.Q: According to the vot-ing result, which rep-resentative election in North Carolina districts was the most competi-tive, and why?The most competitive represen-tative election in North Carolina districts was the one for the 8th district, in which incumbent Larry Kissell (D) lost re-election to Richard Hudson (R). Error Type: Factual Incorrectness Explanation: Present information that contra-dicts the source table.Analysis:The most competitive repre-sentative election should bethe North Carolina 7th district,rather than the 8th district, inwhich the difference betweenthe two rivals was only 0.2Q: Analyse the coun-The countries in East Asiatries in East Asia withwith Human Development IndexHuman Development In-(HDI) higher than 0.8 are Hongdex (HDI) higher thanKong, Japan, Macau, Mongolia,0.8.and South Korea. 
Hong Konghas the highest HDI at 0.898, fol-lowed by Japan at 0.901. Macauhas the lowest HDI at 0.653, andNorth Korea has the highest at0.897. All of these countrieshave populations greater than1.8 million, with the highest pop-ulation density being in Japan at337 people per km square.", "figure_id": "tab_17", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Case study for common errors made by Flan-T5-large wo. REFACTOR. The colored text highlights problematic parts of the system output.", "figure_data": "", "figure_id": "tab_19", "figure_label": "8", "figure_type": "table" } ]
Yilun Zhao; Zhenting Qi; Linyong Nan; Boyu Mi; Yixin Liu; Weijin Zou; Simeng Han; Ruizhe Chen; Xiangru Tang; Yumo Xu; Dragomir Radev; Arman Cohan
[ { "authors": "Satanjeev Banerjee; Alon Lavie", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "year": "2005" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b2", "title": "", "year": "" }, { "authors": "Wenhu Chen; Jianshu Chen; Yu Su; Zhiyu Chen; William Yang; Wang ", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Logical natural language generation from open-domain tables", "year": "2020" }, { "authors": "Wenhu Chen; Hongmin Wang; Jianshu Chen; Yunkai Zhang; Hong Wang; Shiyang Li; Xiyou Zhou; William Yang; Wang ", "journal": "", "ref_id": "b4", "title": "Tabfact : A large-scale dataset for table-based fact verification", "year": "2020" }, { "authors": "Wenhu Chen; Hanwen Zha; Zhiyu Chen; Wenhan Xiong; Hong Wang; William Wang", "journal": "", "ref_id": "b5", "title": "Hybridqa: A dataset of multi-hop question answering over tabular and textual data", "year": "2020" }, { "authors": "Zhoujun Cheng; Haoyu Dong; Ran Jia; Pengfei Wu; Shi Han; Fan Cheng; Dongmei Zhang; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "FORTAP: Using formulas for numerical-reasoningaware table pretraining", "year": "2022" }, { "authors": "Zhoujun Cheng; Haoyu Dong; Zhiruo Wang; Ran Jia; Jiaqi Guo; Yan Gao; Shi Han; Jian-Guang Lou; Dongmei Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "HiTab: A hierarchical table dataset for question answering and natural language generation", "year": "2022" }, { "authors": "Chung Hyung Won; Le Hou; S Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Dasha Chowdhery; Sharan Valter; Gaurav Narang; Adams Wei Mishra; Vincent Yu; Yanping Zhao; Andrew M Huang; Hongkun Dai; Slav Yu; Ed Petrov; Jeff Huai Hsin Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b8", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Trang Hoa; Dang", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "DUC 2005: Evaluation of question-focused summarization systems", "year": "2006" }, { "authors": "Haoyu Dong; Zhoujun Cheng; Xinyi He; Mengyu Zhou; Anda Zhou; Fan Zhou; Ao Liu; Shi Han; Dongmei Zhang", "journal": "International Joint Conferences on Artificial Intelligence Organization. 
Survey Track", "ref_id": "b10", "title": "Table pre-training: A survey on model architectures, pre-training objectives, and downstream tasks", "year": "2022" }, { "authors": "Zhibin Gou; Zhihong Shao; Yeyun Gong; Yelong Shen; Yujiu Yang; Nan Duan; Weizhu Chen", "journal": "", "ref_id": "b11", "title": "Critic: Large language models can self-correct with tool-interactive critiquing", "year": "2023" }, { "authors": "Jonathan Herzig; Krzysztof Pawel; Thomas Nowak; Francesco Müller; Julian Piccinno; Eisenschlos", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "TaPas: Weakly supervised table parsing via pre-training", "year": "2020" }, { "authors": "F Matthew; Hurst", "journal": "", "ref_id": "b13", "title": "The interpretation of tables in texts", "year": "2000" }, { "authors": "Mohit Iyyer; Wen-Tau Yih; Ming-Wei Chang", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Search-based neural structured learning for sequential question answering", "year": "2017" }, { "authors": "Alon Jacovi; Avi Caciularu; Omer Goldman; Yoav Goldberg", "journal": "", "ref_id": "b15", "title": "Stop uploading test data in plain text: Practical strategies for mitigating data contamination by evaluation benchmarks", "year": "2023" }, { "authors": "Alexandre Albert Q Jiang; Arthur Sablayrolles; Chris Mensch; Devendra Bamford; Diego Singh Chaplot; Florian De Las Casas; Gianna Bressand; Guillaume Lengyel; Lucile Lample; Saulnier", "journal": "Mistral", "ref_id": "b16", "title": "", "year": "2023" }, { "authors": "Dongfu Jiang; Yishan Li; Ge Zhang; Wenhao Huang; Bill Yuchen Lin; Wenhu Chen", "journal": "", "ref_id": "b17", "title": "Tigerscore: Towards building explainable metric for all text generation tasks", "year": "2023" }, { "authors": "Zhengbao Jiang; Yi Mao; Pengcheng He; Graham Neubig; Weizhu Chen", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "OmniTab: Pretraining with natural and synthetic data for few-shot tablebased question answering", "year": "2022" }, { "authors": "Karen Kukich", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Design of a knowledge-based report generator", "year": "1983" }, { "authors": "Woosuk Kwon; Zhuohan Li; Siyuan Zhuang; Ying Sheng; Lianmin Zheng; Cody Hao Yu; Joseph E Gonzalez; Hao Zhang; Ion Stoica", "journal": "", "ref_id": "b20", "title": "Efficient memory management for large language model serving with pagedattention", "year": "2023" }, { "authors": "Rémi Lebret; David Grangier; Michael Auli", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Neural text generation from structured data with application to the biography domain", "year": "2016" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Chin-Yew Lin; Eduard Hovy", "journal": "", "ref_id": "b23", "title": "Automatic evaluation of summaries using n-gram co-occurrence statistics", "year": "2003" }, { "authors": "Ao Liu; Haoyu Dong; Naoaki Okazaki; Shi Han; Dongmei Zhang; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "PLOG: Table-to-logic pretraining for logical table-to-text generation", 
"year": "2022" }, { "authors": "Qian Liu; Bei Chen; Jiaqi Guo; Morteza Ziyadi; Zeqi Lin; Weizhu Chen; Jian-Guang Lou", "journal": "", "ref_id": "b25", "title": "TAPEX: Table pre-training via learning a neural SQL executor", "year": "2022" }, { "authors": "Yixin Liu; Alexander R Fabbri; Pengfei Liu; Yilun Zhao; Linyong Nan; Ruilin Han; Simeng Han; Shafiq Joty; Chien-Sheng Wu; Caiming Xiong; Dragomir Radev", "journal": "", "ref_id": "b26", "title": "Revisiting the gold standard: Grounding summarization evaluation with robust human evaluation", "year": "2023" }, { "authors": "Yixin Liu; Alexander R Fabbri; Yilun Zhao; Pengfei Liu; Shafiq Joty; Chien-Sheng Wu; Caiming Xiong; Dragomir Radev", "journal": "", "ref_id": "b27", "title": "Towards interpretable and efficient automatic reference-based summarization evaluation", "year": "2023" }, { "authors": "Pan Lu; Baolin Peng; Hao Cheng; Michel Galley; Kai-Wei Chang; Ying Nian Wu; Song-Chun Zhu; Jianfeng Gao", "journal": "", "ref_id": "b28", "title": "Chameleon: Plug-and-play compositional reasoning with large language models", "year": "2023" }, { "authors": "Nafise Sadat Moosavi; Andreas Rücklé; Dan Roth; Iryna Gurevych", "journal": "", "ref_id": "b29", "title": "Scigen: a dataset for reasoning-aware text generation from scientific tables", "year": "2021" }, { "authors": "Linyong Nan; Lorenzo Jaime Flores; Yilun Zhao; Yixin Liu; Luke Benson; Weijin Zou; Dragomir Radev", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "a. R2D2: Robust data-to-text with replacement detection", "year": "2022" }, { "authors": "Linyong Nan; Chiachun Hsieh; Ziming Mao; Xi Victoria Lin; Neha Verma; Rui Zhang; Wojciech Kryściński; Hailey Schoelkopf; Riley Kong; Xiangru Tang; Mutethia Mutuma; Ben Rosand; Isabel Trindade; Renusree Bandaru; Jacob Cunningham; Caiming Xiong; Dragomir Radev; Dragomir Radev", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b31", "title": "FeTaQA: Free-form table question answering", "year": "2022" }, { "authors": " Openai", "journal": "", "ref_id": "b32", "title": "Gpt-4 technical report", "year": "2023" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Bhargavi Paranjape; Scott Lundberg; Sameer Singh; Hannaneh Hajishirzi; Luke Zettlemoyer; Marco Tulio; Ribeiro ", "journal": "", "ref_id": "b34", "title": "Art: Automatic multistep reasoning and tool-use for large language models", "year": "2023" }, { "authors": "Ankur Parikh; Xuezhi Wang; Sebastian Gehrmann; Manaal Faruqui; Bhuwan Dhingra; Diyi Yang; Dipanjan Das", "journal": "", "ref_id": "b35", "title": "ToTTo: A controlled table-to-text generation dataset", "year": "2020" }, { "authors": "Panupong Pasupat; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "Compositional semantic parsing on semi-structured tables", "year": "2015" }, { "authors": "Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "A call for clarity in reporting BLEU scores", "year": "2018" }, { "authors": "Jay Pujara; Pedro Szekely; Huan Sun; Muhao Chen", "journal": "Association for Computing Machinery", "ref_id": "b38", "title": "From tables to knowledge: Recent advances in table understanding", "year": "2021" }, { "authors": "Shuofei Qiao; Honghao Gui; Huajun Chen; 
Ningyu Zhang", "journal": "", "ref_id": "b39", "title": "Making language models better tool learners with execution feedback", "year": "2023" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b40", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Randolph Justus", "journal": "Advances in Data Analysis and Classification", "ref_id": "b41", "title": "Free-marginal multirater kappa (multirater k [free]): An alternative to fleiss' fixed-marginal multirater kappa", "year": "2005" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", "year": "2019" }, { "authors": "Timo Schick; Jane Dwivedi-Yu; Roberto Dessì; Roberta Raileanu; Maria Lomeli; Luke Zettlemoyer; Nicola Cancedda; Thomas Scialom", "journal": "", "ref_id": "b43", "title": "Toolformer: Language models can teach themselves to use tools", "year": "2023" }, { "authors": "Dan Su; Yan Xu; Tiezheng Yu; Farhad Bin Siddique; Elham Barezi; Pascale Fung", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "CAiRE-COVID: A question answering and query-focused multi-document summarization system for COVID-19 scholarly information management", "year": "2020" }, { "authors": "Lya Hulliyyatus Suadaa; Hidetaka Kamigaito; Kotaro Funakoshi; Manabu Okumura; Hiroya Takamura", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "Towards table-to-text generation with numerical reasoning", "year": "2021" }, { "authors": "Xiangru Tang; Yiming Zong; Jason Phang; Yilun Zhao; Wangchunshu Zhou; Arman Cohan; Mark Gerstein", "journal": "", "ref_id": "b46", "title": "Struc-bench: Are large language models really good at generating complex structured data?", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Armand Aur'elien Rodriguez; Edouard Joulin; Guillaume Grave; Lample", "journal": "", "ref_id": "b47", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Hugo Touvron; Louis Martin; Kevin R Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale; Daniel M Bikel; Lukas Blecher; Cristian Canton Ferrer; Moya Chen; Guillem Cucurull; David Esiobu; Jude Fernandes; Jeremy Fu; Wenyin Fu; Brian Fuller; Cynthia Gao; Vedanuj Goswami; Naman Goyal; Anthony S Hartshorn; Saghar Hosseini; Rui Hou; Hakan Inan; Marcin Kardas; Viktor Kerkez; Madian Khabsa; Isabel M Kloumann; A V Korenev; Punit Singh Koura; Marie-Anne Lachaux; Thibaut Lavril; Jenya Lee; Diana Liskovich; Yinghai Lu; Yuning Mao; Xavier Martinet; Todor Mihaylov; Pushkar Mishra; Igor Molybog; Yixin Nie; Andrew Poulton; Jeremy Reizenstein; Rashi Rungta; Kalyan Saladi; Alan Schelten; Ruan Silva; Eric Michael Smith; R Subramanian; Xia Tan; Binh Tang; Ross Taylor; Adina Williams; Jian Xiang Kuan; Puxin Xu; Zhengxu Yan; Iliyan Zarov; Yuchen Zhang; Angela Fan; Melanie Kambadur; Sharan Narang; Aurelien Rodriguez; Robert Stojnic; Sergey Edunov; Thomas Scialom", "journal": "", "ref_id": "b48", "title": "Llama 2: Open foundation and fine-tuned chat models", 
"year": "2023" }, { "authors": "Tianbao Xie; Chen Henry Wu; Peng Shi; Ruiqi Zhong; Torsten Scholak; Michihiro Yasunaga; Chien-Sheng Wu; Ming Zhong; Pengcheng Yin; I Sida; Victor Wang; Bailin Zhong; Chengzu Wang; Connor Li; Ansong Boyle; Ziyu Ni; Dragomir Yao; Caiming Radev; Lingpeng Xiong; Rui Kong; Noah A Zhang; Luke Smith; Tao Zettlemoyer; Yu", "journal": "", "ref_id": "b49", "title": "Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models", "year": "2022" }, { "authors": "Yiheng Xu; Hongjin Su; Chen Xing; Boyu Mi; Qian Liu; Weijia Shi; Binyuan Hui; Fan Zhou; Yitao Liu; Tianbao Xie; Zhoujun Cheng; Siheng Zhao; Lingpeng Kong; Bailin Wang; Caiming Xiong; Tao Yu", "journal": "", "ref_id": "b50", "title": "Lemur: Harmonizing natural language and code for language agents", "year": "2023" }, { "authors": "Yumo Xu; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b51", "title": "Coarse-to-fine query focused multi-document summarization", "year": "2020" }, { "authors": "Yumo Xu; Mirella Lapata", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b52", "title": "Document summarization with latent queries", "year": "2022" }, { "authors": "Shiyue Zhang; Mohit Bansal", "journal": "Association for Computational Linguistics", "ref_id": "b53", "title": "Finding a balanced degree of automation for summary evaluation", "year": "2021" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b54", "title": "Bertscore: Evaluating text generation with bert", "year": "2020" }, { "authors": "Yilun Zhao; Yunxiang Li; Chenying Li; Rui Zhang; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b55", "title": "MultiHiertt: Numerical reasoning over multi hierarchical tabular and textual data", "year": "2022" }, { "authors": "Yilun Zhao; Boyu Mi; Zhenting Qi; Linyong Nan; Minghao Guo; Arman Cohan; Dragomir Radev; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b56", "title": "OpenRT: An open-source framework for reasoning over tabular data", "year": "2023" }, { "authors": "Yilun Zhao; Linyong Nan; Zhenting Qi; Rui Zhang; Dragomir Radev", "journal": "Association for Computational Linguistics", "ref_id": "b57", "title": "ReasTAP: Injecting table reasoning skills during pre-training via synthetic reasoning examples", "year": "2022" }, { "authors": "Yilun Zhao; Zhenting Qi; Linyong Nan; Lorenzo Jaime Flores; Dragomir Radev", "journal": "Association for Computational Linguistics", "ref_id": "b58", "title": "LoFT: Enhancing faithfulness and diversity for table-to-text generation via logic form control", "year": "2023" }, { "authors": "Yilun Zhao; Haowei Zhang; Shengyun Si; Linyong Nan; Xiangru Tang; Arman Cohan", "journal": "", "ref_id": "b59", "title": "Large language models are effective table-to-text generators, evaluators, and feedback providers", "year": "2023" }, { "authors": "Yilun Zhao; Chen Zhao; Linyong Nan; Zhenting Qi; Wenlin Zhang; Xiangru Tang; Boyu Mi; Dragomir Radev", "journal": "Association for Computational Linguistics", "ref_id": "b60", "title": "RobuT: A systematic study of table QA robustness against human-annotated adversarial perturbations", "year": "2023" }, { "authors": "Lianmin Zheng; Wei-Lin Chiang; Ying Sheng; Siyuan Zhuang; Zhanghao Wu; Yonghao Zhuang; Zi Lin; Zhuohan Li; Dacheng Li; Eric Xing", "journal": "", "ref_id": "b61", "title": "Judging llm-as-a-judge with mt-bench 
and chatbot arena", "year": "2023" }, { "authors": "Ming Zhong; Da Yin; Tao Yu; Ahmad Zaidi; Mutethia Mutuma; Rahul Jha; Ahmed Hassan Awadallah; Asli Celikyilmaz; Yang Liu; Xipeng Qiu; Dragomir Radev", "journal": "Association for Computational Linguistics", "ref_id": "b62", "title": "QMSum: A new benchmark for querybased multi-domain meeting summarization", "year": "2021" }, { "authors": "Victor Zhong; Caiming Xiong; Richard Socher", "journal": "", "ref_id": "b63", "title": "Seq2SQL: Generating structured queries from natural language using reinforcement learning", "year": "2018" }, { "authors": "Fan Zhou; Mengkang Hu; Haoyu Dong; Zhoujun Cheng; Fan Cheng; Shi Han; Dongmei Zhang", "journal": "", "ref_id": "b64", "title": "TaCube: Pre-computing data cubes for answering numerical-reasoning questions over tabular data", "year": "2022" }, { "authors": "Yijie Zhou; Kejian Shi; Wencai Zhang; Yixin Liu; Yilun Zhao; Arman Cohan", "journal": "", "ref_id": "b65", "title": "Odsum: New benchmarks for open domain multi-document summarization", "year": "2023" }, { "authors": "Fengbin Zhu; Wenqiang Lei; Youcheng Huang; Chao Wang; Shuo Zhang; Jiancheng Lv; Fuli Feng; Tat-Seng Chua", "journal": "Association for Computational Linguistics", "ref_id": "b66", "title": "TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 333.78, 743.25, 191.36, 33.71 ], "formula_id": "formula_0", "formula_text": "Y = argmax n i=1 P (y i |y <i , Q, T ; θ),(1)" } ]
2023-05-23
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b22", "b7", "b45", "b19", "b42", "b6", "b18", "b30", "b27" ], "table_ref": [], "text": "Point cloud learning has attracted increasing attention since it benefits 3D scene understanding. It can be divided into grid-based methods and point-based methods based on the different representations. This work aims to alleviate the applicability limitations of point-based neural architectures by presenting a lightweight yet powerful point sampler.\nSampling is a critical operation in most existing pointbased networks. Existing samplers can be summarized into learning-based methods [49, 23,18,46,9,7,20] and learning-free methods [30,43,27,19,45]. It keeps a subset with fewer surviving points as the network goes deeper and wider to reduce memory and computation. Since a subset of preceding layer is fed to successor layer by layer, sampling has a significant impact on the final task performance. So how to extract a representative subset from raw point clouds while maintaining high efficiency is a key problem, especially for real-time applications of large-scale point clouds.\nAlthough remarkable progress has been made in sampling, it is hard to balance the efficiency and effectiveness when it comes to large-scale point clouds, especially for the first downsampling layer. For the sake of higher performance, FPS is the primary method used in many point cloud tasks such as detection [35] and segmentation [30]. However, it takes up to half the inference time when dealing with 10 5 points as shown in Figure 1. And for 10 6 points, the output of a 32-beam LiDAR is infeasible due to its quadratic complexity and iterative algorithm. Therefore, low-latency random point sampling (RPS) is adopted in scenarios where efficiency is critical but suffers from low sampling quality. The learning-based approaches cannot be used in the first downsampling layer due to the lack of sufficient features and the huge sampling probability tensor.\nTo tackle this problem, we first explore how FPS can benefit the task network to achieve better final performance and then design a more economical algorithm to approximate it. By comparing various algorithms, we discover that the even spacing between points in the subset is the key to the success of FPS, rather than the uniform density of points presented in previous work [30,31], as inverse density sampling [28] (IDS) also results in poor performance. The uniform distance protects the points in the subset from scattering and clustering with each other, making it less messy (see Section 4.6). This feature allows the geometric information of the original scene to be preserved as much as possible, which is important for downstream detection and segmentation tasks that more focus on shapes. It can be imagined that if we can obtain this property in an efficient way, the performance should be close to FPS. And fortunately, the grid naturally satisfies that because the distance between cells in the grid is constant.\nGiven the observation mentioned above, we design an adaptive voxel-guided point sampler that hierarchically selects points closest to their grid centers. Our approach has linear time complexity, can be run in parallel, and is both permutation invariant and deterministic. 
In summary, our main contributions are as follows:\n• We revisit several existing sampling techniques and reveal that the key to achieving high-quality sampling is to obtain a uniformly distributed subset.\n• Our proposed HAVSampler has a linear time complexity and is parallelizable, making it suitable for realtime applications. Additionally, it is easy to integrate into existing models, making them run up to 5× faster.\n• Our experimental results on various datasets and tasks demonstrate that our method is comparable to or even better than the state-of-the-art method FPS at 10 2 ∼ 10 4 × faster. This breaks the efficiency bottleneck when using point-based models in large point clouds." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b4", "b46", "b39", "b0", "b30", "b31", "b23", "b42", "b15", "b49", "b20", "b9", "b50", "b14", "b35", "b13", "b30", "b46", "b27", "b22", "b4", "b18", "b14", "b27", "b22", "b6", "b5", "b7", "b45", "b19", "b19" ], "table_ref": [], "text": "Point-based Neural Architectures aim to learn permutation invariant features from unordered point clouds. Serving as feature extractors, point-based models are widely adopted in many downstream tasks, i.e., point cloud detection [5,35,47,49], motion segmentation [38], tracking [52,40] and registration [1]. The pioneering work PointNet [30] proposes using pointwise MLPs and max pooling to extract robust global features. Succeeding work PointNet++ [31] designs a canonical paradigm that hierarchically samples subsets, groups neighboring points and aggregates contexts to obtain local features. Due to the success of this paradigm, most of the following works keep this process, such as pointwise models [32,24], graph-based method [43,16,50,41], point convolution [42, 21,22] and recent transformer-based methods [10,51]. Obviously, as a first step, sampling has a significant impact on the performance of following feature extraction and task-specific network. This work benefits point cloud applications by presenting an effective and efficient sampling strategy.\nLearning-free Point Cloud Samplers can be used for filtering, pre-processing or as a basic component of pointbased networks. RPS [15,33,36] is mainly adopted in large-scale point cloud or real-time applications attributed to its high efficiency. IDS [14,45] tends to probabilistically select the points with lower densities in order to preserve sparse points. Normal space sampling [8] can preserve the edge of point clouds well. Random voxel sampling (RVS) divides points into voxels with a fixed grid and preserves the centroid of the voxels. As one of the most advanced samplers, FPS [30,31] is the first choice for many state-of-theart models. It iteratively selects the point furthest away from the sampled point and updates subset, providing more complete scene coverage. Subsequent works enhance FPS by considering features [47], density [28,23] and semantic [5].\nAlthough great progress has been made in this area, the trade-off between latency and performance is still a challenge. For object-level point clouds, FPS is the best choice to minimize performance degradation caused by sampling. When confronted with scene-level points, its latency is non-negligible and even dominant, making it unacceptable. Some works [19,49] attempt to alleviate this computationally inefficient problem by sorting points or dividing scenes into partitions. However, they do not change the mechanics of the sequential FPS and still cannot be executed in parallel. 
While employing RPS [15] can significantly reduce time consumption, it has a poor performance. Because RPS misses most of the points in the sparse region, it does not capture enough information. IDS [28,45] can preserve sparse points but conducts time-consuming k-nearest neighbor (KNN) to estimate density. Efficient voxel sampling can smooth uneven density, but the performance is also not ideal. This work solves this difficult balance that hinders the use of point-based networks in many applications.\nLearnable Point Cloud Samplers. Recent research has explored how to adaptively sample the points that are critical for downstream tasks in a learnable way to improve performance. Some works elaborately enhance learning-free sampler with learnable weights [49, 23,27] or task-oriented loss [6]. Some methods [9,18] generate subsets directly from MLPs. Others [46,20] predict a probability matrix with shape of target number × source number, and make sampling step differentiable by Gumbel-Softmax trick.\nHowever, learnable fashion is not suitable for us. Careful initialisation [20] of these samplers is required, otherwise all points are invisible to subsequent networks at the early training. And the huge probability matrix will occupy all memories when intake large-scale point cloud. More importantly, we want to address the latency/performance tradeoff, especially for first-layer sampling in large point clouds. But we lack sufficient features to sample from the source because we sample exactly for extracting finer features. " }, { "figure_ref": [], "heading": "HAVSampler", "publication_ref": [], "table_ref": [], "text": "We first formulate point sampler and revisit previous work to explore the key for sampling. Next, we describe the overall design of HAVSampler, including the center-closest sampling strategy and adaptive voxel searching module." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Problem Formulation and Revisiting Study", "publication_ref": [], "table_ref": [], "text": "Point sampling is an essential component of point cloud reasoning, combined with grouping and aggregation to form a basic block. It progressively downsamples source points to build deeper networks and acts as a group center to aggregate local features. For a given set of source points\nP = {p i ∈ R 3 } n i=1\n, where each point consists of three coordinates, the point sampler S is expected to extract a subset P s = {p j ∈ R 3 } m j=1 from P. The low-quality sampled result will damage the following feature extractor and block all the other downstream task heads. Therefore, we aim to design an efficient yet powerful sampler for the real-time application of large-scale point clouds.\nWe revisit the superior FPS and make an in-depth qualitative analysis to find what property makes it stronger. FPS divides the source set P into two subsets, unselected points P us and selected points P s . In the beginning, a point is selected at random from the source set to initialize the selected set, and the rest of the source points are considered to be the unselected part. Next step, the point-to-set distance d k = min {∥p k -p j ∥ | p j ∈ P s } is calculated for each unselected point p k ∈ P us to the selected set P s , and the point with the furthest distance is transferred from the unselected set to the selected set. The above procedure is repeated until the selected subset contains enough m points. 
Let P_us^(i) denote the unselected set at the i-th iteration; then P_s can be formulated as

P_s = { argmax_{p_k} d_k | p_k ∈ P_us^(i) }_{i=1}^{m}.    (1)

Since we select the furthest points in P to form P_s by Eq. 1, the distances among points in the subset P_s have a lower bound (see supplementary materials for proof), which can be seen in Figure 3. Besides, the point spacing of FPS results has a higher mean value. Larger mean spacing means more scattered sampling results and wider coverage, which maintains a more complete geometry of the raw scene. This complete geometry is critical for the performance of detection and segmentation. RVS has a similar distribution to FPS, except for a smaller lower bound. In contrast, the spacing distribution of the points in RPS results mostly falls in smaller values, making sampled points cluster together and breaking the raw scene.

Overall, more scattered sampling leads to higher performance. If a similar spacing distribution of points in the subset could be achieved in a more efficient way, we expect it to reach the same high performance as FPS. Fortunately, the spacing between grid cells is naturally constant. As shown in Figure 3, grid-based RVS has a more even spacing than RPS." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Voxel-guided Sampling", "publication_ref": [ "b24" ], "table_ref": [], "text": "Inspired by the above analysis, we design an efficient voxel-guided sampler with even point spacing and high quality. Since point clouds scanned in 3D space are very sparse, with typically only 0.1% of the voxels containing points [25], we use a sparse approach to voxelize the point cloud to reduce overhead. After that, we select one point from each non-empty voxel to form the final result.

Sparse Voxelization via Voxel-Hashing. Formally, for a given input point cloud P, expected sampling number m and voxel size v = (v_x, v_y, v_z), we compute the voxel coordinate g_i for each p_i = (x_i, y_i, z_i) ∈ P as

g_i = (u, v, w) = ( ⌊x_i / v_x⌋, ⌊y_i / v_y⌋, ⌊z_i / v_z⌋ ).    (2)

After that, we need to find all non-empty voxels to perform sampling. Some works [39] filter out the duplicate elements in {g_i} with a unique operation, which has a computational complexity of at least O(n log n). Instead of the unique operation, we use a hash table T to de-duplicate the integer coordinates with linear complexity: for each p_i, the table is filled at slot c_i = hash(g_i).

Finally, we obtain all non-empty voxels by traversing T. Note that the above steps can be executed in parallel.

Center-closest Point Sampling. One problem remains to be solved here: which point in a non-empty voxel should we choose to represent that voxel? Previous works mainly select the average, center, or random points of a voxel, as shown in Figure 4. We observe three drawbacks to these: (1) the average value or the center of the voxel is a fake point that does not belong to the original set; (2) random selection or average points will reduce the spacing between points; (3) the use of centers introduces quantization errors. Taking these three aspects into account, we propose selecting the point closest to the voxel center as the representative. Formally, we extend T to store pairs of point index and distance (i, d_i). For each p_i ∈ P, its distance to the corresponding voxel center is computed by d_i = ∥p_i - v ⊙ (g_i + 0.5)∥, where ⊙ is element-wise multiplication.
To figure out the closest points, we update T(c_i) with the pair (i, d_i) if d_i is smaller than the existing distance value at slot c_i, which means p_i is closer to the center than the other points in the same voxel. As illustrated in Figure 4, the points sampled by our strategy are included in the source set and have no quantization errors. Besides, by being closest to the grid center, the spacing between sampled points is kept as scattered as possible. Therefore, our strategy can approximate the point distribution of FPS in an efficient manner and achieves high performance." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Adaptive Voxel Initialization Strategy", "publication_ref": [ "b43", "b47" ], "table_ref": [], "text": "Although we can extract a subset with a scattered point distribution, there is still a hyper-parameter, the voxel size v, that needs to be tuned. Most previous works [44,17,48] use a fixed voxel size across different scenes. However, the scale of different point clouds can vary greatly, resulting in too many or too few non-empty voxels and sampled points. As shown in Figure 5(a), the most suitable voxel size, i.e., the one for which the number of non-empty voxels matches the expected sampling number, can range widely between 0.2∼0.5 m for different scenes. When sampling with the mean of the optimal voxel sizes, Figure 5(b) shows that about 20% of scenes suffer a 25% undersampling rate and up to 50% of scenes suffer a 50% oversampling rate. Therefore, the sampled points need to be resampled or cut off, resulting in subsets containing redundant points or missing information.

We recognize the need to select the voxel size for each scene adaptively. This strategy helps to retain more of the information present in the raw scene. Typically, the smaller the voxel size, the more non-empty voxels in a scene, so more points are sampled. Thanks to this monotonic relationship, we can determine the most suitable voxel size for a given scene by performing a binary search. To this end, we design an adaptive voxel searching (AVS) module to find a suitable voxel size v = AVS(P, m) for the given point cloud P and expected sampling number m. We compute the number of non-empty voxels m_s for a given searched voxel size v_s by implementing a voxel counter m_s = Cnt(P, v_s) in a voxel-hashing manner similar to Section 3.2. During the binary search, the output voxel size is updated iteratively until the number of non-empty voxels falls into the interval [m, (1 + σ)m] with tolerance factor σ, or the number of iterations exceeds t_max." }, { "figure_ref": [ "fig_1" ], "heading": "Hierarchical Paradigm", "publication_ref": [], "table_ref": [], "text": "So far, we can obtain high-quality sampling results with the above components. However, sampling with only a single granularity lacks the ability to capture various scene structures differently. For example, some simple geometries like floors and walls only require coarse-grained sampling, whereas pedestrians and cyclists require finer grain.

Therefore, we introduce a hierarchical point cloud sampling framework that samples the scene at different scales from coarse to fine. The overall pipeline is illustrated in Figure 2. Our framework has k layers, each consisting of an adaptive voxel searching module AVS and center-closest point sampling S. For layer ℓ, we obtain the ℓ-th subset by

P′_ℓ = S(P_ℓ, AVS(P_ℓ, m_ℓ)),    (3)

where P_ℓ and m_ℓ are the input and sampling number of layer ℓ, respectively.
Then we use P′_ℓ to mask the points in P_ℓ as sampled, forming the input of the next layer P_{ℓ+1}. As for m_ℓ, we set it to 4m_{ℓ-1} and require m = Σ_{ℓ=1}^{k} m_ℓ. Finally, we gather all subsets P′_ℓ to generate the output P′." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We conduct extensive experiments for the proposed HAVSampler in this section. We first introduce the experimental settings in Section 4.1. Then the runtime comparison in Section 4.2 demonstrates the efficiency of our method. We further evaluate the proposed method on various tasks, including large-scale detection (Section 4.3) and segmentation (Section 4.4) in both indoor and outdoor scenes, to show its effectiveness. Finally, we conduct an ablation study in Section 4.5 to evaluate each component." }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b12", "b3", "b36", "b2" ], "table_ref": [], "text": "We compare outdoor point cloud object detection based on the OpenPCDet [39] toolbox. For scene segmentation and indoor object detection, we adapt our sampler to the official implementation of each model. All experiments are run on an i7-6700 CPU and a single RTX 2080Ti GPU.

Datasets. The large-scale KITTI [13] and nuScenes [4] datasets are used for efficiency comparison and outdoor point cloud object detection. The SUN RGB-D [37] dataset is used to compare point cloud object detection in indoor scenes. We conduct outdoor and indoor scene semantic segmentation experiments based on the SemanticKITTI [3] and S3DIS [2] datasets, respectively. The splits, setup and pre-processing of each dataset follow previous work.

Implementation Details. For all experiments except the ablation study, we employ a 2-layer structure for the proposed HAVSampler. For AVS, the tolerance σ and max iteration times t_max are set to 0.05 and 20, respectively. We implement parallel voxel-hashing on the GPU via cuHash [12].

Method | RPS  | RVS  | FPS   | Ours
Point  | O(N) | O(N) | O(N²) | O(N)
Space  | O(1) | O(N) | O(N)  | O(N)

Table 1. The time and space complexity of different methods. We sample from N points with a typical ratio of 4." }, { "figure_ref": [], "heading": "Efficiency", "publication_ref": [], "table_ref": [], "text": "In this section, we empirically compare the proposed HAVSampler with RPS, RVS and FPS to get an intuitive feel for the efficiency of our method." }, { "figure_ref": [], "heading": "Complexity Analysis", "publication_ref": [], "table_ref": [], "text": "We first analyze the complexity of these methods and summarize them in Table 1. There is no doubt that RPS has the lowest time and space complexity. Conversely, the best-performing FPS has the highest time complexity. For ours, sparse voxelization based on voxel-hashing has O(N) complexity. The center-closest sampling strategy accesses and updates non-empty voxels individually for each point with O(1) complexity. Similarly, it takes O(N) to run the voxel counter Cnt(·, ·) for one iteration of AVS. While it takes O(log N) iterations to find an exact value by binary search, we only need at most ⌈log(1/σ)⌉ iterations to converge within a given tolerance σ. In summary, our method has a time complexity of O(N).

Latency Comparison. Figure 6 shows the results of our latency comparison.
We conduct experiments on 3 different input scales to compare the runtimes in different application scenarios: (1) small-scale scenarios with 2^14 points, such as the KITTI dataset; (2) 2^16-point medium-sized scenes like the nuScenes dataset; (3) large-scale 2^18 points, the same order of magnitude as the output of a 64-beam LiDAR. All methods run in parallel on the GPU.

Unsurprisingly, RPS is the fastest method, while the heavy FPS has the highest latency due to its quadratic complexity and difficulty of parallelism. FPS takes more than 9 seconds to sample from 2^18 points down to 2^17 points. This means that it cannot be used to directly process the original point cloud output from LiDAR. It can be noticed that for all experimental configurations, our method runs within 1.47 ms. Moreover, our HAVSampler is even faster than RVS, which is also voxel-based. Experimental results show that our algorithm has efficiency on the same level as RPS and RVS, and is 10^2 to 10^4 times faster than FPS." }, { "figure_ref": [], "heading": "Point Cloud Object Detection", "publication_ref": [ "b8" ], "table_ref": [ "tab_1", "tab_1", "tab_1", "tab_2" ], "text": "In this section, we evaluate the impact of the proposed HAVSampler on the 3D object detection task. For a fair comparison, we test all detectors on the same platform.

Outdoor Scenes. We first evaluate our HAVSampler on the KITTI val set and report the results in Table 2. To compare the effect of samplers on detector performance, we replace the first-layer sampler in the original model with different sampling methods. Comparing the first and second rows in Table 2, when we directly replace FPS with RPS in the models and test without retraining, the 3D AP at both R11 and R40 drops significantly for all models. In particular, SASA suffers a performance degradation of 6.73 AP and 6.90 AP at R11 and R40, respectively. However, we observe that the detection performance is unchanged or even slightly higher than with FPS when tested directly with our HAVSampler. It should be noted that the networks following a sampler are sensitive to the distribution of sampled points after training, so performance commonly degrades when testing with a different sampler. The similar performance between our method and FPS means that the proposed HAVSampler has a distribution close to that of FPS, which verifies our observations in Section 3.1. In addition, our method can greatly reduce detector runtimes by up to 56.7%, meaning that our method is highly efficient. We also retrain two lightweight models, SASA and IA-SSD, with our HAVSampler. Comparing their first and last rows, performance improves after retraining. Most surprisingly, our approach makes IA-SSD improve by a large margin of 5.05% AP@R11 while inferring at a speed of 50 Hz.

We also experiment on the larger-scene nuScenes dataset and report the results at the bottom of Table 2. Similar to the results on the KITTI dataset, RPS has the worst performance. Our method yields a slight performance gain after retraining. Notably, the efficiency advantage of our method is even more remarkable when dealing with larger point clouds, saving 80.7% of the time consumption. In summary, our method can be adapted to large-scale outdoor 3D object detection tasks. As well as saving time, HAVSampler has the potential to boost performance.

Indoor Scenes. Instead of LiDAR, indoor datasets are captured by RGB-D cameras.
We evaluate our method on the SUN RGB-D dataset to verify its applicability to different sensor data and show the results in Table 3. We report the stricter mAP calculated at an IoU threshold of 0.5. In VoteNet [29], our sampling strategy causes a performance degradation of 0.5 mAP. However, an increase of 1.28 mAP is obtained for the transformer-based 3D detector 3DETR [26]. This subsection again shows that our sampler performs similarly to FPS on detection tasks." }, { "figure_ref": [], "heading": "Recall Analysis", "publication_ref": [], "table_ref": [], "text": "We downsample the points to 1/4 of the input and analyze the recall of different samplers on the KITTI dataset. Point recall is the ratio of points belonging to foreground objects that exist in the subset. Instance recall refers to the percentage of instances that contain more than 1 point in the subset. Although RPS retains the most foreground points, it has the lowest instance recall due to the loss of distant points, which can be visualized in Figure 7. Compared to FPS, our method has a higher point recall while still keeping more instances after downsampling, which is important for detection tasks." }, { "figure_ref": [], "heading": "Point Cloud Scene Segmentation", "publication_ref": [ "b35" ], "table_ref": [ "tab_4", "tab_4" ], "text": "The scene segmentation task densely predicts per-point semantics for large-scale point clouds, requiring greater efficiency and sampling quality than detection.

Outdoor Scenes. The results at the top of Table 5 show the advantages of our method in scene segmentation. Our sampling strategy outperforms both RPS and FPS by a large margin, improving by 4.1% and 3.1% mIoU after retraining, respectively. For BAF-LAC, however, there is no noticeable change compared to RPS. We guess this is mainly because the backward attention fusing mechanism in BAF-LAC [36] complements the points lost by RPS with skip connections.

Indoor Scenes. The rest of Table 5 shows the indoor segmentation results. We find that our sampler brings less performance improvement indoors. In all models, it brings no more than a 0.3% mIoU change. We suspect that the main reason is that RGB-D captures denser point clouds indoors than outdoor LiDAR, so there is little change when replacing either FPS or RPS." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "We use the IA-SSD detector and the KITTI dataset for quick experiments. The results of car detection are shown in Table 6. From groups 1-4, we can see that the random selection of a representative point in voxels has the worst performance. Besides, when using adaptive voxels, our center-closest point selection helps the detector obtain higher APs. For all point selection strategies, adaptive voxels bring major performance improvements. Group 5 investigates how the number of layers in our architecture affects performance. The experimental results indicate that a 2-layer structure achieves the best performance." }, { "figure_ref": [], "heading": "Qualitative Results and Discussion", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Figure 7 shows examples of point cloud subsets from different samplers. In line with Table 4, RPS retains most of the points close to the sensor but loses most points further away. In contrast, RVS can prevent the distant points from being abandoned but has messy sampling results at near range.
From the last two columns, it can be observed that both FPS and our HAVSampler deal well with the point cloud at all distances and preserve the skeleton of the source point clouds when downsampling." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The performance of the efficient HAVSampler is competitive with, or even higher than, the state-of-the-art FPS on object detection and scene segmentation. However, we find that HAVSampler performs better on outdoor LiDAR point clouds than on indoor RGB-D point clouds. In addition, our experiments on point cloud classification and part segmentation show a counter-intuitive drop in performance when replacing FPS with ours, which can be seen in the supplementary material. This performance degradation is still much smaller than that of RPS. However, the sampler latency in these object-level tasks is negligible and so is not a concern for us." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We present HAVSampler, a highly efficient yet powerful sampler for real-time applications in large-scale point clouds. We reveal that evenly spacing the points in the subset is crucial for optimizing the performance of downstream tasks. Based on this conclusion, our design of a grid-guided paradigm, adaptive voxelization, and hierarchical architecture results in outstanding performance while reducing inference time by 20-80%. This breakthrough in efficiency addresses the bottleneck of the sampling step in real-time applications. We release our source code and hope that it can serve as a solid component to inspire and encourage the point cloud learning community." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "//github.com/OuyangJunyuan/" } ]
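To make the sampling procedure in the sections above concrete, the following is a minimal, hedged sketch of sparse voxelization with center-closest point selection (Eq. 2 plus the per-voxel distance update). It is written in plain NumPy with a Python dict standing in for the parallel GPU hash table T; the function name `center_closest_sample` and its arguments are illustrative choices, not the authors' cuHash-based implementation.

```python
import numpy as np

def center_closest_sample(points: np.ndarray, voxel_size: np.ndarray) -> np.ndarray:
    """Return indices of one point per non-empty voxel: the point closest to the voxel center.

    points     : (N, 3) array of xyz coordinates.
    voxel_size : (3,) array (vx, vy, vz).
    """
    grid = np.floor(points / voxel_size).astype(np.int64)   # integer voxel coordinates g_i (Eq. 2)
    centers = (grid + 0.5) * voxel_size                      # metric center of each point's voxel
    dist = np.linalg.norm(points - centers, axis=1)          # d_i = ||p_i - v * (g_i + 0.5)||

    best = {}                                                # voxel key -> (best distance, point index)
    for i, (key, d) in enumerate(zip(map(tuple, grid), dist)):
        if key not in best or d < best[key][0]:              # keep the center-closest point per voxel
            best[key] = (d, i)
    return np.asarray([idx for _, idx in best.values()], dtype=np.int64)
```

On a GPU, the per-point loop becomes a parallel atomic-min (or compare-and-swap) update into a hash table keyed by the voxel coordinate, which is what keeps the whole sampler at O(N) time and makes it fully parallelizable.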
While point-based neural architectures have demonstrated their efficacy, the time-consuming sampler currently prevents them from performing real-time reasoning on scene-level point clouds. Existing methods attempt to overcome this issue by using random sampling strategy instead of the commonly-adopted farthest point sampling (FPS), but at the expense of lower performance. So the effectiveness/efficiency trade-off remains under-explored. In this paper, we reveal the key to high-quality sampling is ensuring an even spacing between points in the subset, which can be naturally obtained through a grid. Based on this insight, we propose a hierarchical adaptive voxel-guided point sampler with linear complexity and high parallelization for real-time applications. Extensive experiments on large-scale point cloud detection and segmentation tasks demonstrate that our method achieves competitive performance with the most powerful FPS, at an amazing speed that is more than 100 times faster. This breakthrough in efficiency addresses the bottleneck of the sampling step when handling scene-level point clouds. Furthermore, our sampler can be easily integrated into existing models and achieves a 20∼80% reduction in runtime with minimal effort.
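As a companion to the abstract above, here is a hedged sketch of the adaptive voxel searching (AVS) step: a binary search over the voxel edge length until the number of non-empty voxels falls into [m, (1 + σ)m]. The helper `count_nonempty` stands in for the voxel counter Cnt(·, ·) described in the paper (a hash-table counter in the reference design); the search bounds `lo`/`hi` and all names here are assumptions made for illustration only.

```python
import numpy as np

def count_nonempty(points: np.ndarray, edge: float) -> int:
    """Number of occupied voxels for a cubic voxel with the given edge length."""
    grid = np.floor(points / edge).astype(np.int64)
    return len(np.unique(grid, axis=0))        # a hash table makes this O(N) in practice

def adaptive_voxel_search(points, m, sigma=0.05, t_max=20, lo=1e-3, hi=10.0):
    """Binary-search a voxel edge so the non-empty voxel count lands in [m, (1 + sigma) * m]."""
    edge = 0.5 * (lo + hi)
    for _ in range(t_max):
        n = count_nonempty(points, edge)
        if m <= n <= int((1 + sigma) * m):
            break                              # enough (but not too many) candidate voxels
        if n < m:
            hi = edge                          # too few voxels: shrink them to get more samples
        else:
            lo = edge                          # too many voxels: enlarge them
        edge = 0.5 * (lo + hi)
    return edge
```

Because the number of non-empty voxels decreases monotonically as the voxel grows, this loop converges quickly (the paper bounds it by ⌈log(1/σ)⌉ iterations for tolerance σ), so the adaptive step does not break the sampler's linear overall complexity.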
Hierarchical Adaptive Voxel-guided Sampling for Real-time Applications in Large-scale Point Clouds
[ { "figure_caption": "Figure 1 .1Figure 1. Statistical results of sampling latency on the KITTI and nuScenes datasets for detection tasks. Blue bars indicate that the sampler dominates most of the runtime. Green bars show that our approach can significantly break this efficiency bottleneck.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Illustration of the overall architecture. HAVSampler contains k layers with different voxel sizes, each consisting of an adaptive voxel searching (AVS) module (see Section 3.2) and center-closest selection strategy (see Section 3.3). The leftmost is the original point cloud of a car. The point cloud on the rightmost side is the result of sampling. We show a 2D slice of a car for easier understanding.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. The distance distributions of points in different sampled subsets. The red line marks the minimum spacing of FPS results.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Comparison of different point selections. Deep and light blue grids indicate non-empty and empty voxels, respectively. Black dots is grid center. Orange circles highlights the selections.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Statistical results on KITTI dataset. (a) Distribution of optimum voxel size when sampling 4096 points. (b) Red curve shows the distribution of the number of non-empty voxels, i.e., the number of sampled points, when fixing the voxel size to [0.34, 0.34, 0.30] m, mean of the optimum voxel in (a); The orange curve is the normalized area under the red curve.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "3d siamese tracking: A motion-centric paradigm for 3d single object tracking in point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8111-8120, 2022. 2", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Performance of outdoor 3D object detection. The effect of samplers is evaluated by replacing the first layer sampler of the original models. We report the 3D AP calculated by both 11 and 40 recall positions for KITTI. For nuScenes, we follow the standard protocol[4] by reporting NDS and mAP. Besides, the latency measured with test with different batch size is also shown.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance of indoor 3D object detection on SUN RGN-D dataset. We replace the first layer sampler of the original models. The reported mAP is calculated at the more strict IoU threshold of 0.5. 
*: the original configuration.", "figure_data": "DetectorSampler train [email protected] tablessofachair toilet desk dresser nightstand bookshelf bathtubVoteNet [29] * FPS FPS33.3247.98 19.44 42.35 53.78 63.975.2915.735.864.5445.24FPS ours32.8246.87 20.25 42.93 53.17 56.845.1214.9337.184.8946.04improve-0.50-1.12 +0.81 +0.58 -0.61 -6.12 -0.17-0.80+1.32+0.35+0.813DETR [26] * FPS FPS30.7348.13 18.23 40.74 44.65 66.897.9312.3331.134.9632.32FPS ours32.0151.50 20.00 39.17 45.61 64.485.9014.6734.525.6638.88improve+1.28+3.37 +1.77 -1.57 +0.96 -2.41 -2.03+2.34+3.39+0.70+6.56", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "show the advantages of our method in scene segmentation. Our Recall analysis on KITTI dataset.", "figure_data": "MethodRPSRVSFPSOursPoint recall 24.99% 19.87% 18.16% 18.56%Instance recall 98.18% 99.90% 99.94% 99.96%", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Performance of both outdoor and indoor point cloud scene segmentation. We report the mean IoU on SemanticKITTI val set. As for S3DIS, we compute the mean IoU by the 6-fold cross validation.", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Effects of different strategies of layer, point selection and voxel initialisation for HAVSampler. ME: mean. RA: random. CE: centre. CC: center-closest. Fix: Fixed voxel size. Ada: Adaptive voxel size. The red row is baseline RVS. The row in blue is the best configuration for the proposed method.", "figure_data": "PedestrianCyclistCar(a) source(b) RPS(c) RVS(d) FPS(e) Ours# LPoint section strategy ME RA CE CC Fix Ada AP@R11 AP@R40 Voxel Moderate Car11 1✓ ✓✓✓79.29 83.8183.04 84.4921 1✓ ✓✓✓79.03 79.4582.69 83.2431 1✓ ✓✓✓79.27 83.8182.99 84.4941 1✓ ✓✓✓79.24 84.0882.93 84.6752 3✓ ✓✓ ✓84.27 79.2085.04 83.04", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
Junyuan Ouyang; Xiao Liu; Haoyao Chen
[ { "authors": "Yasuhiro Aoki; Hunter Goforth; Rangaprasad Arun Srivatsan; Simon Lucey", "journal": "", "ref_id": "b0", "title": "Pointnetlk: Robust & efficient point cloud registration using pointnet", "year": "2019" }, { "authors": "Iro Armeni; Sasha Sax; Silvio Amir R Zamir; Savarese", "journal": "", "ref_id": "b1", "title": "Joint 2d-3d-semantic data for indoor scene understanding", "year": "2017" }, { "authors": "Jens Behley; Martin Garbade; Andres Milioto; Jan Quenzel; Sven Behnke; Cyrill Stachniss; Jurgen Gall", "journal": "", "ref_id": "b2", "title": "Semantickitti: A dataset for semantic scene understanding of lidar sequences", "year": "2019" }, { "authors": "Holger Caesar; Varun Bankiti; Alex H Lang; Sourabh Vora; Venice Erin Liong; Qiang Xu; Anush Krishnan; Yu Pan; Giancarlo Baldan; Oscar Beijbom", "journal": "", "ref_id": "b3", "title": "nuscenes: A multimodal dataset for autonomous driving", "year": "2020" }, { "authors": "Chen Chen; Zhe Chen; Jing Zhang; Dacheng Tao", "journal": "", "ref_id": "b4", "title": "Sasa: Semantics-augmented set abstraction for point-based 3d object detection", "year": "2022" }, { "authors": "Huixian Cheng; Xianfeng Han; Hang Jiang; Dehong He; Guoqiang Xiao", "journal": "", "ref_id": "b5", "title": "Pcb-randnet: Rethinking random sampling for lidar semantic segmentation in autonomous driving scene", "year": "2022" }, { "authors": "Ta-Ying Cheng; Qingyong Hu; Qian Xie; Niki Trigoni; Andrew Markham", "journal": "Springer", "ref_id": "b6", "title": "Meta-sampler: Almost-universal yet task-oriented sampling for point clouds", "year": "2022" }, { "authors": "Yago Diez; Joan Martí; Joaquim Salvi", "journal": "Pattern Recognition Letters", "ref_id": "b7", "title": "Hierarchical normal space sampling to speed up point cloud coarse matching", "year": "2012" }, { "authors": "Oren Dovrat; Itai Lang; Shai Avidan", "journal": "", "ref_id": "b8", "title": "Learning to sample", "year": "2019" }, { "authors": "Nico Engel; Vasileios Belagiannis; Klaus Dietmayer", "journal": "IEEE Access", "ref_id": "b9", "title": "Point transformer", "year": "2021" }, { "authors": "Qiulei Siqi Fan; Fenghua Dong; Yisheng Zhu; Peijun Lv; Fei-Yue Ye; Wang", "journal": "", "ref_id": "b10", "title": "Scf-net: Learning spatial contextual features for large-scale point cloud segmentation", "year": "2021" }, { "authors": "David Farrell", "journal": "", "ref_id": "b11", "title": "Simplegpuhashtable", "year": "2020" }, { "authors": "Andreas Geiger; Philip Lenz; Raquel Urtasun", "journal": "IEEE", "ref_id": "b12", "title": "Are we ready for autonomous driving? 
the kitti vision benchmark suite", "year": "2012" }, { "authors": "Fabian Groh; Patrick Wieschollek; Hendrik Pa Lensch", "journal": "Springer", "ref_id": "b13", "title": "Flex-convolution: Million-scale point-cloud learning beyond grid-worlds", "year": "2018" }, { "authors": "Qingyong Hu; Bo Yang; Linhai Xie; Stefano Rosa; Yulan Guo; Zhihua Wang; Niki Trigoni; Andrew Markham", "journal": "", "ref_id": "b14", "title": "Randla-net: Efficient semantic segmentation of large-scale point clouds", "year": "2020" }, { "authors": "Loic Landrieu; Martin Simonovsky", "journal": "", "ref_id": "b15", "title": "Large-scale point cloud semantic segmentation with superpoint graphs", "year": "2018" }, { "authors": "Alex H Lang; Sourabh Vora; Holger Caesar; Lubing Zhou; Jiong Yang; Oscar Beijbom", "journal": "", "ref_id": "b16", "title": "Pointpillars: Fast encoders for object detection from point clouds", "year": "2019" }, { "authors": "Itai Lang; Asaf Manor; Shai Avidan", "journal": "", "ref_id": "b17", "title": "Samplenet: Differentiable point cloud sampling", "year": "2020" }, { "authors": "Jingtao Li; Jian Zhou; Yan Xiong; Xing Chen; Chaitali Chakrabarti", "journal": "IEEE", "ref_id": "b18", "title": "An adjustable farthest point sampling method for approximately-sorted point cloud data", "year": "2022" }, { "authors": "Luyang Li; Ligang He; Jinjin Gao; Xie Han", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b19", "title": "Psnet: Fast data structuring for hierarchical deep learning on point cloud", "year": "2022" }, { "authors": "Yangyan Li; Rui Bu; Mingchao Sun; Wei Wu; Xinhan Di; Baoquan Chen", "journal": "Advances in neural information processing systems", "ref_id": "b20", "title": "Pointcnn: Convolution on x-transformed points", "year": "2018" }, { "authors": "Yongcheng Liu; Bin Fan; Shiming Xiang; Chunhong Pan", "journal": "", "ref_id": "b21", "title": "Relation-shape convolutional neural network for point cloud analysis", "year": "2019-06" }, { "authors": "Ze Liu; Zheng Zhang; Yue Cao; Han Hu; Xin Tong", "journal": "", "ref_id": "b22", "title": "Group-free 3d object detection via transformers", "year": "2021" }, { "authors": "Xu Ma; Can Qin; Haoxuan You; Yun Haoxi Ran; Fu", "journal": "", "ref_id": "b23", "title": "Rethinking network design and local geometry in point cloud: A simple residual mlp framework", "year": "2022" }, { "authors": "Jiageng Mao; Yujing Xue; Minzhe Niu; Haoyue Bai; Jiashi Feng; Xiaodan Liang; Hang Xu; Chunjing Xu", "journal": "", "ref_id": "b24", "title": "Voxel transformer for 3d object detection", "year": "2021-10" }, { "authors": "Ishan Misra; Rohit Girdhar; Armand Joulin", "journal": "", "ref_id": "b25", "title": "An end-toend transformer model for 3d object detection", "year": "2021" }, { "authors": "Ehsan Nezhadarya; Ehsan Taghavi; Ryan Razani; Bingbing Liu; Jun Luo", "journal": "", "ref_id": "b26", "title": "Adaptive hierarchical down-sampling for point cloud classification", "year": "2020" }, { "authors": "Jingmei Ning; Feipeng Da; Shaoyan Gai", "journal": "IEEE Sensors Journal", "ref_id": "b27", "title": "Density aware 3d object single stage detector", "year": "2021" }, { "authors": "Or Charles R Qi; Kaiming Litany; Leonidas J He; Guibas", "journal": "", "ref_id": "b28", "title": "Deep hough voting for 3d object detection in point clouds", "year": "2019" }, { "authors": "Hao Charles R Qi; Kaichun Su; Leonidas J Mo; Guibas", "journal": "", "ref_id": "b29", "title": "Pointnet: Deep learning on point sets for 3d 
classification and segmentation", "year": "2017" }, { "authors": "Charles Ruizhongtai; Qi ; Li Yi; Hao Su; Leonidas J Guibas", "journal": "Advances in neural information processing systems", "ref_id": "b30", "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Guocheng Qian; Yuchen Li; Houwen Peng; Jinjie Mai; Hasan Abed; Al Kader Hammoud; Mohamed Elhoseiny; Bernard Ghanem", "journal": "", "ref_id": "b31", "title": "Pointnext: Revisiting pointnet++ with improved training and scaling strategies", "year": "2022" }, { "authors": "Saeed Shi Qiu; Nick Anwar; Barnes", "journal": "", "ref_id": "b32", "title": "Semantic segmentation for real point cloud scenes via bilateral augmentation and adaptive fusion", "year": "2021" }, { "authors": "Shaoshuai Shi; Chaoxu Guo; Li Jiang; Zhe Wang; Jianping Shi; Xiaogang Wang; Hongsheng Li", "journal": "", "ref_id": "b33", "title": "Pv-rcnn: Pointvoxel feature set abstraction for 3d object detection", "year": "2020" }, { "authors": "Shaoshuai Shi; Xiaogang Wang; Hongsheng Li", "journal": "", "ref_id": "b34", "title": "Pointrcnn: 3d object proposal generation and detection from point cloud", "year": "2019" }, { "authors": "Hui Shuai; Xiang Xu; Qingshan Liu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b35", "title": "Backward attentive fusing network with local aggregation classifier for 3d point cloud semantic segmentation", "year": "2021" }, { "authors": "Shuran Song; Jianxiong Samuel P Lichtenberg; Xiao", "journal": "", "ref_id": "b36", "title": "Sun rgb-d: A rgb-d scene understanding benchmark suite", "year": "2015" }, { "authors": "Yuxiang Sun; Weixun Zuo; Huaiyang Huang; Peide Cai; Ming Liu", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b37", "title": "Pointmoseg: Sparse tensor-based end-to-end moving-obstacle segmentation in 3-d lidar point clouds for autonomous driving", "year": "2020" }, { "authors": "", "journal": "", "ref_id": "b38", "title": "Openpcdet: An opensource toolbox for 3d object detection from point clouds", "year": "2020" }, { "authors": "Sukai Wang; Yuxiang Sun; Chengju Liu; Ming Liu", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b39", "title": "Pointtracknet: An end-to-end network for 3-d object detection and tracking from point clouds", "year": "2020" }, { "authors": "Yue Wang; Yongbin Sun; Ziwei Liu; Sanjay E Sarma; Michael M Bronstein; Justin M Solomon", "journal": "Acm Transactions On Graphics (tog)", "ref_id": "b40", "title": "Dynamic graph cnn for learning on point clouds", "year": "2019" }, { "authors": "Wenxuan Wu; Zhongang Qi; Li Fuxin", "journal": "", "ref_id": "b41", "title": "Pointconv: Deep convolutional networks on 3d point clouds", "year": "2019-06" }, { "authors": "Qiangeng Xu; Xudong Sun; Cho-Ying Wu; Panqu Wang; Ulrich Neumann", "journal": "", "ref_id": "b42", "title": "Grid-gcn for fast and scalable point cloud learning", "year": "2020" }, { "authors": "Yan Yan; Yuxing Mao; Bo Li", "journal": "Sensors", "ref_id": "b43", "title": "Second: Sparsely embedded convolutional detection", "year": "2018" }, { "authors": "Honghui Yang; Zili Liu; Xiaopei Wu; Wenxiao Wang; Wei Qian; Xiaofei He; Deng Cai", "journal": "Springer", "ref_id": "b44", "title": "Graph r-cnn: Towards accurate 3d object detection with semantic-decorated local graph", "year": "2022" }, { "authors": "Jiancheng Yang; Qiang Zhang; Bingbing Ni; Linguo Li; Jinxian Liu; Mengdie Zhou; Qi Tian", "journal": "", "ref_id": "b45", "title": 
"Modeling point clouds with self-attention and gumbel subset sampling", "year": "2019" }, { "authors": "Zetong Yang; Yanan Sun; Shu Liu; Jiaya Jia", "journal": "", "ref_id": "b46", "title": "3dssd: Point-based 3d single stage object detector", "year": "2020" }, { "authors": "Xingyi Tianwei Yin; Philipp Zhou; Krahenbuhl", "journal": "", "ref_id": "b47", "title": "Centerbased 3d object detection and tracking", "year": "2021" }, { "authors": "Yifan Zhang; Qingyong Hu; Guoquan Xu; Yanxin Ma; Jianwei Wan; Yulan Guo", "journal": "", "ref_id": "b48", "title": "Not all points are equal: Learning highly efficient point-based detectors for 3d lidar point clouds", "year": "2022" }, { "authors": "Hengshuang Zhao; Li Jiang; Chi-Wing Fu; Jiaya Jia", "journal": "", "ref_id": "b49", "title": "Pointweb: Enhancing local neighborhood features for point cloud processing", "year": "2019-06" }, { "authors": "Hengshuang Zhao; Li Jiang; Jiaya Jia; Vladlen Philip Hs Torr; Koltun", "journal": "", "ref_id": "b50", "title": "Point transformer", "year": "2021" }, { "authors": "Chaoda Zheng; Xu Yan; Haiming Zhang; Baoyuan Wang; Shenghui Cheng; Shuguang Cui; Zhen Li", "journal": "", "ref_id": "b51", "title": "Beyond", "year": "" } ]
[ { "formula_coordinates": [ 3, 50.11, 398.72, 78.86, 12.32 ], "formula_id": "formula_0", "formula_text": "P = {p i ∈ R 3 } n i=1" }, { "formula_coordinates": [ 3, 82.75, 655.84, 156.09, 29.08 ], "formula_id": "formula_1", "formula_text": "P s = argmax p k d k | p k ∈ P (i) us m i=1" }, { "formula_coordinates": [ 3, 278.62, 666.59, 7.74, 8.64 ], "formula_id": "formula_2", "formula_text": ")1" }, { "formula_coordinates": [ 4, 87.7, 259.41, 198.67, 23.22 ], "formula_id": "formula_3", "formula_text": "g i = (u, v, w) = ⌊ x i v x ⌋, ⌊ y i v y ⌋, ⌊ z i v z ⌋ .(2)" }, { "formula_coordinates": [ 5, 111.05, 194.46, 175.31, 12.69 ], "formula_id": "formula_4", "formula_text": "P ′ ℓ = S (P ℓ , AVS (P ℓ , m ℓ )) ,(3)" }, { "formula_coordinates": [ 5, 340.15, 78.05, 173.68, 32.07 ], "formula_id": "formula_5", "formula_text": "Method RPS RVS FPS Ours Point O (N ) O (N ) O N 2 O (N ) Space O (1) O (N ) O (N ) O (N )" } ]
10.18653/v1/S17-2006
2024-03-24
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b21", "b31", "b37", "b12", "b24", "b28", "b11", "b29" ], "table_ref": [], "text": "Instruction fine-tuning (Ouyang et al., 2022) has facilitated transfer learning for Large Language Models (LLMs) to unseen tasks at scale. To leverage LLMs as versatile natural language processors, there is an immediate effort to ascertain their zeroshot performance on challenging tasks. Social media analysis is an active area of research with a number of complex, domain-specific tasks which can be utilised for harm reduction (Waseem et al., 2017) and preventing the spread of misinformation (Zubiaga et al., 2018). LLMs have great potential to assist with such computational social science (CSS) tasks, both in automatic data annotation and social media analysis (Kuzman et al., 2023;Reiss, 2023;Törnberg, 2023). Hence, it is important to understand the capabilities and limitations of the latest instruction fine-tuned LLMs for addressing such CSS tasks. In this paper, we are primarily focusing on answering the following research questions (RQ): 1\n• (RQ 1) What level of zero-shot performance can LLMs achieve in social media classification tasks? How does zero-shot LLM performance compare against smaller state-of-theart language models fine-tuned to the specific analysis task?\n• (RQ 2) What are the most effective LLM 1 Accepted at LREC-COLING 2024.\nprompt strategies for social media classification tasks in a zero-shot setting?\n• (RQ 3) Was the pre-training corpus of the large model already inclusive of these datasets prior to the experiment (i.e., data leakage issues)?\nTo answer those research questions, we conduct a series of controlled experiments to investigate the zero-shot performance of two off-the-shelf instruction fine-tuned large language models using different prompting strategies. Namely, we experiment with GPT-3.5-turbo (GPT),2 the most widely used proprietary instruction fine-tuned large language model; and OpenAssistant-LLaMA (LLaMA-OA) (Köpf et al., 2023), an open source LLM instruction fine-tuned based LLaMA (Touvron et al., 2023). We use six social media analysis NLP tasks to evaluate the classification performance of LLMs using different prompt complexity levels (including providing few-shots examples and publication information of benchmark datasets in the prompt). The findings are also compared against baselines employing standard techniques such as fine-tuning BERT.\nIt must be noted that the scope of this paper is on evaluating the performance of off-the-shelf, instruction fine-tuned language models on social media classification tasks, in a zero-shot setting. The evaluation of foundation language models without instruction fine-tuning is out of the scope of this paper.\nOur main findings are:\n• (i) Task-specific fine-tuned models still generally tend to outperform LLMs in most zeroshot settings, even when the fully fine-tuned model (e.g., BERT-large model) is significantly smaller.\n• (ii) Using prompting ensemble methods (e.g., on synonyms) can increase the performance and robustness of LLMs.\n• (iii) Detailed and complex prompting strategies are not necessary." 
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b21", "b33", "b25", "b15", "b36", "b14", "b12", "b8", "b13", "b28", "b35", "b16", "b26", "b36" ], "table_ref": [], "text": "Both models evaluated in this work, GPT (also referred to as ChatGPT) and LLaMA-OA, have been trained using Reinforcement Learning with Human Feedback (RLHF) in conjunction with instruction tuning, as first explored in Ouyang et al. (2022).\nInstruction tuning is the fine-tuning of language models on NLP tasks rephrased as instructions and prior work has shown that it is an effective way of training LLMs to perform zero-shot on unseen tasks. (Wei et al., 2021;Sanh et al., 2021) Longpre et al. (2023) carried out a detailed ablation study on non-RLHF instruction tuning methods across the general NLP tasks in the Flan 2022 collection and found that T5 instruction tuned on the Flan performed surprisingly well on heldout tasks when compared to models directly finetuned on said task. Tuning with human feedback could be the next step in improving instruction tuning in this area. Ziems et al. (2023) sets a roadmap for employing LLMs as data annotators by establishing prompting best practices and an evaluation of the zero-shot performance of 13 language models on 24 tasks in computational social sciences. In the financial domain, (Li et al., 2023) reveal that Chat-GPT and GPT-4 outperform the performance of supervised models, which have been fine-tuned with domain-specific data, in several financial benchmarks.\nTo evaluate the zero-shot performance of Chat-GPT for text classification, Kuzman et al. (2023) compares against a fine-tuned XLM-RoBERTa model for the task of automatic genre classification in English and Slovenian. They show that Chat-GPT outperforms the baseline on unseen datasets and that there is no drop in performance when provided with Slovenian examples. Ganesan et al. (2023) use Facebook posts to classify user personality traits, based on openness, conscientiousness, extroversion, agreeableness, and neuroticism. They find that GPT-3 performs poorly on binary and worse yet on tertiary ranking for each trait.\nLLMs have also been applied in mental health applications. Lamichhane (2023) evaluate Chat-GPT's ability to classify stress, depression, and suicidal inclination from Reddit posts. Although ChatGPT significantly outperforms their baseline, the baseline consisted of a simple prediction of the majority class.\nFor toxicity detection, Wang and Chang (2022) analyse GPT-3's generative and discriminative zero-shot capabilities, finding that performance is only slightly better than a random baseline. However, the authors argue that the generative task allows for nuanced distinction of toxicity in the, somewhat subjective, binary setting. Törnberg (2023) find that ChatGPT-4 outperforms non-expert annotators in identifying the political affiliation of Democratic or Republican party members based on their tweets during the 2020 US election. Wu et al. (2023) use ChatGPT to rank the conservatism of representatives in the 116th US Congress through a series of pairwise match ups, showing a high correlation with DW-NOMINATE scores.\nAs LLMs improve their performance on language generation tasks, the risk of misinformation and propaganda increases. Mitchell et al. (2023) propose DetectGPT, a perturbation-based zeroshot method for identifying machine-generated passages. 
(Su et al., 2023) further develop this approach with DetectLLM-LRR and -NPR, achieving improved efficiency and improved performance respectively.\nNote that our work is distinct from previous research (Ziems et al., 2023); we evaluate Large Language Models (LLMs) on a different set of benchmarks and experiment with various prompt modification strategies, including replacing original labels with synonyms and incorporating arXiv paper titles." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Prompting Strategies", "publication_ref": [ "b1", "b36" ], "table_ref": [ "tab_0" ], "text": "Following the prompting approaches described by Child et al. (2019); Ziems et al. (2023), we develop prompts by (i) adding instructions after the context (e.g., task description) and (ii) using constraints (e.g., 'Only reply with Bragging' or 'Not Bragging.') at the end. We observe that using constraints can effectively avoid cases of model uncertainty (e.g., 'As an AI model, I cannot answer this question.') and guide models to generate the expected outputs.\nFor consistency, we use the same prompts for both GPT and LLaMA-OA. Examples of different 1. To examine the zero-shot predictive performance of LLMs, we carry out a comprehensive set of experiments using four different prompting strategies." }, { "figure_ref": [], "heading": "Basic Instruction (Basic):", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "We only provide a basic instruction without including detailed task and label descriptions. For example, for the bragging detection task, our prompt is: 'Identify whether or not a tweet includes a bragging statement. + Constraints + Text'. Two possible configurations are tested, namely adding the prompt before or after the text respectively.\nTask and Label Description (T/L Desc): Building upon the Basic Instruction Round, we provide additional information in the prompt by including task and label descriptions (see Table 1). Note that we use the labels and task descriptions detailed in the original papers on the respective datasets.\nThe format of the prompts used for the Task and Label Description Round is: 'Basic Instruction + Task and Label Descriptions + Constraints + Text'." }, { "figure_ref": [], "heading": "Few-sample Prompting (Few-sample):", "publication_ref": [], "table_ref": [], "text": "We also test a few-sample prompting strategy by adding one example selected from the training set for each label. The prompt designed for the few-sample experiments is: 'Basic Instruction + Few-shot Examples + Constraints + Text'. Note that using few-sample as input is still a type of zero-shot setup, as we do not fine-tune the model." }, { "figure_ref": [], "heading": "Memory Recall (Recall):", "publication_ref": [], "table_ref": [], "text": "We observe that both GPT and LLaMA-OA can recall papers published before September 2021. Since arXiv papers are part of the training data of the LLMs, we also include the title of the source paper in the prompt when evaluating the model's zero-shot performance. For example, we include paper information by using this prompt: 'Recall this paper [Paper Title] + Basic Instruction + Constraints + Text'. For such recall prompts, we only perform experiments on datasets published before September 2021. For reference, we examine the variations in performance across different checkpoints to assess whether instruction fine-tuning might influence the efficacy of the classification task." 
}, { "figure_ref": [], "heading": "Synonyms", "publication_ref": [], "table_ref": [], "text": "LLMs might generate different outputs when using prompts which are semantically similar (e.g., synonyms3 ). To test the generalisability of LLMs, we substitute the names of each class with words that have the same or similar meaning. For example, we test the synonyms 'hateful', 'toxic', and 'abusive' to replace the original category 'offensive'. We also use two ensemble learning approaches to improve predictive performance by combining the outputs from all synonyms settings for each dataset:\n• Ensemble Majority: We select the category that has been selected the most times across all synonym experiments.\n• Ensemble All Agreed: We also experiment with a stricter setting that considers only model outputs that are in the same category (i.e., Complaint, Criticism, dissatisfaction, etc.) using all synonyms. For example, we consider the LLM that uses all synonyms predicted as complaints, otherwise they are considered non-complaints. We only report this metric for datasets with binary classes." }, { "figure_ref": [], "heading": "Data", "publication_ref": [ "b18", "b2", "b17", "b2", "b10", "b3", "b7", "b32" ], "table_ref": [ "tab_2" ], "text": "In order to ensure a comprehensive evaluation of LLM performance, we select six datasets that cover a wide range of computational social science tasks and different time spans. In particular, some of them were created before September 2021, while others were collected after the release of the LLMs used in this paper. All datasets are in English with manually annotated class labels. We detail dataset specifications and statistics in Table 2:\n• Complaint This task aims to identify whether a tweet expresses a complaint, which is defined as 'a negative mismatch between reality and expectations in a particular situation' (e.g., customer complaints on Twitter) (Olshtain and Weinbach, 1987). We use a dataset developed by Preoţiuc-Pietro et al.\n(2019) consisting of 3,449 English tweets annotated with one of two categories, i.e., complaints or not complaints.\n• Vaccine Stance This task aims to automatically predict the stance of tweets towards COVID-19 vaccination (Cotfas et al., 2021;Mu et al., 2023). The dataset developed by (Cotfas et al., 2021) provides 2,792 tweets belonging to one of three stance categories: provaccine, anti-vaccine, or neutral.\n• Bragging This task aims to classify whether a tweet is bragging or not bragging. We evaluate on a dataset developed by Jin et al. (2022) which contains 6,696 tweets labelled as either bragging or not bragging.\n• Rumour Stance We use the RumorEval 2017 dataset which is developed by Derczynski et al. (2017).\nHere, we use the dataset for 4-way rumour stance classification, i.e., determining the stance of a reply towards a given source post (i.e. rumour) as either supporting, denying, questioning, or commenting.\n• Sarcasm The sarcasm detection task is to identify whether a given tweet is intended to be sarcastic or not. We evaluate the task on the Semeval-2022 Task 6 dataset (Farha et al., 2022), which contains 4,868 tweets labelled as either sarcasm or nonsarcasm.\n• Hate Speech The task of hate speech detection aims to study anti-social behaviours, e.g., racism and sexism in social media. We evaluate on a dataset developed by Waseem and Hovy (2016) with a binary classification setup, i.e., offensive or non-offensive." 
}, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Large Language Models", "publication_ref": [ "b29", "b11", "b5" ], "table_ref": [], "text": "Our experiments are conducted using two publicly accessible large language models: GPT-3.5-turbo (GPT) 4 is an enhanced version of the GPT-3 language model with instruction finetuning. GPT can be employed for a wide range of NLP tasks, including machine translation, common sense reasoning, and question answering.\nThe experiments use the GPT model via the official OpenAI API.5 \nLLaMA-OA We employ the LLaMA-OA model developed by LAIONAI,6 which fine-tunes the vanilla LLaMA (Touvron et al., 2023) 30B model using the OpenAssistant dataset (Köpf et al., 2023). Since the original LLaMA models are not allowed to be shared by individuals, LAIONAI could not release the weights for LLaMA-OA on huggingface but released xor (i.e., 'Exclusive Or') weights7 applied to the original LLaMA weights and the check sum calculations performed to validate the conversion. In order to be able to run the experiments locally under hardware constraints, we applied 8-bit quantisation at model load time via Bit-sAndBytes (Dettmers et al., 2021) to decrease the inference memory requirements." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [], "table_ref": [], "text": "The zero-shot classification performance of the two LLMs is compared against a weak Logistic Regression baseline and a strong fully fine-tuned BERT-large baseline:" }, { "figure_ref": [], "heading": "Logistic Regression", "publication_ref": [], "table_ref": [], "text": "We represent the text using TF-IDF and consider tokens that appear more than 5 times. " }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "BERT-large", "publication_ref": [ "b6" ], "table_ref": [], "text": "We fine-tune BERT-large 8 (Devlin et al., 2019) by adding a linear classifier on top of the 24-layer transformer blocks. The special token '[CLS]' is used as the representation of each text." }, { "figure_ref": [], "heading": "Data Splits", "publication_ref": [], "table_ref": [], "text": "For each benchmark task, we divide the dataset into training (80%) and test (20%) sets using stratified random splits 9 . The training set is used for supervised fine-tuning, and is further sub-divided into a training and a validation subsets (in a 3:1 ratio) for hyperparameter tuning (e.g., early stopping) purposes. Subsequently, we evaluate the performance of the fine-tuned baselines and zero-shot LLMs on the 20% test set." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "Performance results are reported using two evaluation metrics: 1) Accuracy which consists of a direct comparison between the model predictions and the ground truth label; and 2) F1-macro scores are reported for situations where accuracy may not provide an adequate representation of performance, particularly for certain imbalanced datasets, such as Bragging and Rumour Stance." }, { "figure_ref": [], "heading": "Hyper-parameters", "publication_ref": [], "table_ref": [], "text": "During initial explorations, we observed that using a higher temperature (e.g., 0.8 for GPT and 2 for LLaMA-OA) results in inadequate classification performance, as it introduces more randomness in the model outputs. 
This suggests that higher temperature settings can cause the model outputs to be non-reproducible. Therefore in this study, we use a low temperature (i.e., 0.2) 10 for GPT to make the model more focused and deterministic.\nFor LLaMA-OA, we follow the 'precise hyperparameter setup' 11 For BERT-large, we set the learning rate as 2e-5, the batch size as 16, and the maximum sequence length as 256. We run all baseline models three times with different random seeds and report average results. We fine-tune BERT-large on an Nvidia RTX Titan GPU with 24GB memory and run LLaMA-OA on an Nvidia A100 GPU with 40GB memory. The inference rates of LLaMA-OA and GPT are approximately 1,200 and 3,000 samples per hour respectively." }, { "figure_ref": [], "heading": "Reproducibility of LLM Output", "publication_ref": [], "table_ref": [], "text": "As noted above, to ensure a consistent output, we utilise low temperature values of 0.2 and 0.1 for both GPT and LLaMA-OA. To evaluate the reproducibility of the models' output, we execute the basic prompt setting of the Complaint dataset five times for each language model. Our observations reveal that LLaMA-OA consistently generates identical outputs, whereas GPT achieves approximately 99% similarity in its outputs. Note that we consistently run LLaMA-OA on our own servers with identical hardware described in Section 5.5." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_6", "tab_5" ], "text": "The experimental results are shown in Table 3 andTable 4. Next we discuss them in relation to each of our three research questions." }, { "figure_ref": [], "heading": "(RQ 1) What level of zero-shot performance can LLMs achieve on social media classification tasks? How does zero-shot LLM performance compare against smaller state-of-the-art language models fine-tuned on the specific analysis task?", "publication_ref": [ "b22" ], "table_ref": [ "tab_5" ], "text": "In general, LLMs (GPT and LLaMA-OA) with zero-shot settings are able to achieve better results than the simple supervised Logistic Regression model. However, the traditional smaller finetuned language model (BERT-large) still outperforms the two LLMs on the majority of the tasks (4 out of 6 tasks). Furthermore, we observe that GPT consistently outperforms LLaMA-OA across all prompt settings and tasks when considering only the F1-macro measure. However, our results show that the accuracy of LLaMA-OA is better than that of GPT on some imbalanced datasets, such as 'Bragging' and 'Sarcasm'. This may be due to LLaMA-OA defaulting to the neutral class (labels without any specific speech act, such as 'Not Bragging' and 'Not Sarcastic').\nGPT achieves the best predictive performance on two speech act detection downstream tasks, namely Complaint (89.7 accuracy and 88.7 F1macro) and Sarcasm (62.1 F1-macro). This suggests that LLMs can be employed as strong baseline models for zero-shot classification tasks.\nWith respect to prompts, when the results of T/L Desc and Memory Recall are compared against Basic Instruction, it is observed that using a more complex prompt (e.g., adding label and paper information) does not necessarily improve model performance and may even introduce additional noise, leading to a degradation in performance. 
This indicates that adding complexity to the prompt might lead to the LLM not fully focusing on the human instructions.\nFor speech act detection tasks such as Complaint and Bragging, the accuracy of LLMs exceeds 85%, indicating that LLMs can potentially be used for data annotation as a way to reduce human annotation costs. Standard data annotation tasks typically rely on at least two annotators in the first round, so one of them could be replaced by an LLM. According to the annotation details 12 the vaccine stance task (Poddar et al., 2022), the agreement rate between the two annotators is approximately 62%.\n(RQ 2) What are the most effective LLM prompt strategies for social media classification tasks in a zero-shot setting? prompts add additional noise to the model. We also note that adding a few examples to the prompt actually damages classification performance, for both GPT and LLaMA-OA. We hypothesise that the longer prompt is affecting the model interpretation of instructions.\nTable 4 shows all zero-shot results when synonyms are used in prompts for all six datasets. We observe that revising prompts with synonyms can substantially improve the zero-shot performance of LLaMA-OA, except for the Bragging dataset. It is worth noting that the Sarcasm dataset is the only one where the prompt using the original categories performs worse. This suggests that replacing original labels with synonyms allows the LLaMA-OA model to better understand the task requirements. The variation in the training example distribution for both GPT and LLaMA-OA could account for the observed behaviours of the models. For example, the LLaMA-OA model might be fine-tuned on a dataset like: '[Text including offensive language] + [Category: Abusive]'. Therefore, we believe that it is important to test similar words in place of the original labels when designing instructions as well as use ensemble methods." }, { "figure_ref": [], "heading": "(RQ 3) Was the pre-training corpus of the large model already inclusive of these datasets prior to the experiment (i.e., data leakage issues)?", "publication_ref": [ "b3" ], "table_ref": [], "text": "To answer this question, we test different prompting strategies (e.g., by asking about the authors and task details of each paper) to explore whether the LLMs have been exposed to the dataset beforehand. In Table 7, we present two examples of our testing approach by directly incorporating the titles of the RumourEval (Derczynski et al., 2017) and Sarcasm (Farha et al., 2022) datasets into the prompts. Considering that LLMs are capable of recalling task details when provided with the title of an arXiv paper (i.e., memory recall), we speculate that these LLMs might be trained on these source papers, incorporating some examples alongside their corresponding labels. However, due to the opaque nature of the training corpus utilised for these LLMs, it is uncertain to what extent these datasets were included in the training data." }, { "figure_ref": [], "heading": "Error Analysis", "publication_ref": [ "b36" ], "table_ref": [ "tab_8" ], "text": "better understand the limitations of LLMs, we conduct an error analysis focusing on shared errors across all synonym settings following (Ziems et al., 2023). We manually check these wrong predictions and observe that some unanimous errors (Ziems et al., 2023) (i.e., when the model agreed on an incorrect answer using different synonyms) are caused by incorrect or controversial ground truth labels. 
We summarise the number of wrong predictions from the synonyms experiments on GPT in Table 5.\nOn the other hand, we observe that LLaMA-OA often defaults to the majority category, such as 'not a bragging' and 'not sarcasm', which leads to higher accuracy but a lower macro-F1 measure. However, considering the high accuracy of LLM zero-shot classification performance, LLMs can still be utilised as data annotation tools (combined with human efforts) for NLP downstream tasks in CSS. We can utilise LLMs for data annotation and also to identify incorrect annotations." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper explored a number of prompting strategies for the application of Large Language Models (LLMs) in computational social science tasks. It presented a range of controlled experiments that establish the efficacy of different prompt strategies on six publicly available datasets. Our main findings are summarised as follows:\n• Task-specific fine-tuned models generally tend to outperform LLMs in zero-shot settings.\n• More detailed and complex prompts (e.g, by adding arXiv paper title and few-samples) do not necessarily enhance classification performance.\n• The selection of specific words or phrases as the class label can considerably affect classification outcomes." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research is supported by a UKRI grant EP/W011212/1 (\"XAIvsDisinfo: eXplainable AI Methods for Categorisation and Analysis of COVID-19 Vaccine Disinformation and Online Debates\" 15 ) and an EU Horizon 2020 grant (agreement no.871042) (\"So-BigData++: European Integrated Infrastructure for Social Mining and BigData Analytics\" 16 )." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "The cost of running GPT-3.5 for each task.\nWe therefore argue that developing prompts for zero-shot classification presents a significant challenge and recommend testing different prompt configurations before proceeding with experiments, while keeping in mind the time constraints 13 and financial costs associated with LLMs (see Table 6)." }, { "figure_ref": [], "heading": "Limitations and Future Work", "publication_ref": [ "b9", "b0", "b34", "b27", "b36" ], "table_ref": [], "text": "In this paper, we assess the zero-shot text classification performance of popular LLMs (i.e., GPT and LLaMA-OA) on six downstream tasks in CSS. We acknowledge that further experiments on other fine-grained CSS tasks (such as Ques-tion&Answering and Named Entity Recognition) would be beneficial in future work. Currently, several state-of-the-art LLMs, such as GPT-4 14 and Mistral (Jiang et al., 2023), achieve better predictive performance than those evaluated in our work. Therefore, we plan to extend our experiments in the future to include more state-of-the-art LLMs of various sizes. Since our focus is primarily on out-of-the-box performance, we experiment with simple alterations of the prompts. Other research, e.g. Arora et al. (2022), has considered prompt agas well as using LLMs to auto-generate prompts. 
We also do not explore advanced methods such as chain-of-thought prompting, which improves LM performance by encouraging it to output its intermediate reasoning steps (Wei et al., 2022;Suzgun et al., 2022).\nWe also tried to explore potential data leakage issues (Ziems et al., 2023) by testing various prompts to verify whether our test sets have been exposed to GPT and OA. However, due to the black box nature of the training datasets of these two LLMs, we are unable to confirm the presence of data leakage. Yes, I'm familiar with the paper titled SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours. This paper is related to the SemEval-2017 workshop, specifically Task 8, which focused on rumour detection and veracity classification on social media platforms. Participants were provided with a dataset containing rumours along with associated tweets that either supported or denied the rumours. The task involved classifying the veracity of the rumours as true, false, or unverified, and also identifying the tweets that provided support or denial for the rumours." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "LLaMA-OA (Rumour) Yes, I am aware of the paper titled SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours. This paper presents the results of the RumourEval task held as part of the 2017 edition of the SEMEVAL workshop. The task involved developing algorithms for determining the veracity of rumors and identifying sources that supported or refuted them using Twitter data." }, { "figure_ref": [], "heading": "GPT (Sarcasm)", "publication_ref": [], "table_ref": [], "text": "I apologise for the confusion, but as an AI language model, my training only goes up until September 2021, and I do not have access to specific details about SemEval-2022 or any subsequent tasks. The last SemEval workshop I'm aware of is SemEval-2021.\nLLaMA-OA (Sarcasm) Semeval-2022 Task 6: ISARCASMEVAL is a subtask focusing specifically on the identification of ironic and sarcastic expressions in English and Arabic text. Irony and sarcasm are common forms of expression found in both written and spoken communication, but they pose significant challenges for automatic sentiment analysis due to their subtle nature and dependence on contextual cues. " }, { "figure_ref": [], "heading": "Tasks Prompt with Task Description", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Complaint", "publication_ref": [], "table_ref": [], "text": "Complaining is a basic speech act used to express a negative mismatch between reality and expectations towards a state of affairs, product, organization or event. Key to the definition of complaints is the expression of the breach of expectations." }, { "figure_ref": [], "heading": "Vaccine Stance", "publication_ref": [], "table_ref": [], "text": "Pro-vaccine tweets express a positive opinion regarding vaccination. Anti-vaccine tweets express a negative opinion towards COVID-19 vaccination. Neutral includes news related to vaccine development, questions about the vaccine, or informative tweets concerning vaccination without a clear opinion." }, { "figure_ref": [], "heading": "Rumour Stance", "publication_ref": [], "table_ref": [], "text": "Support: the author of the response supports the veracity of the rumour. Deny: the author of the response denies the veracity of the rumour. 
Query: the author of the response asks for additional evidence in relation to the veracity of the rumour. Comment: the author of the response makes their own comment without a clear contribution to assessing the veracity of the rumour." }, { "figure_ref": [], "heading": "Hate Speech", "publication_ref": [], "table_ref": [], "text": "A tweet is offensive if it: 1. uses a sexist or racial slur. 2. attacks a minority. 3. seeks to silence a minority. 4. criticizes a minority (without a well founded argument). 5. promotes, but does not directly use, hate speech or violent crime. 6. criticizes a minority and uses a straw man argument. 7. blatantly misrepresents truth or seeks to distort views on a minority with unfounded claims. 8. shows support of problematic hash tags. E.g. \"#BanIslam\", \"#whoriental\", \"#whitegenocide\". 9. negatively stereotypes a minority. 10. defends xenophobia or sexism. 11. contains a screen name that is offensive, as per the previous criteria, the tweet is ambiguous (at best), and the tweet is on a topic that satisfies any of the above criteria." }, { "figure_ref": [], "heading": "Sarcasm", "publication_ref": [], "table_ref": [], "text": "Sarcasm is a form of verbal irony that occurs when there is a discrepancy between the literal and intended meanings of an utterance. Through this discrepancy, the speaker expresses their position towards a prior proposition, often in the form of surface contempt or derogation." }, { "figure_ref": [], "heading": "Bragging", "publication_ref": [], "table_ref": [], "text": "Bragging is a speech act which explicitly or implicitly attributes credit to the speaker for some 'good' (possession, accomplishment, skill, etc.) which is positively valued by the speaker and the potential audience.\nAs such, bragging includes announcements of accomplishments, and explicit positive evaluations of some aspect of self. A bragging statement should clearly express what the author is bragging about (i.e. the target of bragging). Support: @USER @USER @USER @USER yeah i feel really sorry for them Deny: @USER I never called uber PT . Everyone is having a go at Uber but not PT ... We own it , we shouldn't have to pay in desperate times Query: @USER @USER Ironic since all the i witnesses say the officer was white . Now it is the black officer Darren Wilson who shot ? ? Comment: @USER @USER Uber is covering the cost of all rides , Uber is still paying drivers higher fares to encourage them to do pickups." }, { "figure_ref": [], "heading": "Hate Speech", "publication_ref": [], "table_ref": [], "text": "Hateful: @USER Tell it to the 120 million Africans that Islam murdered. URL Not Hateful: @USER @USER doesn't look like I am." }, { "figure_ref": [], "heading": "Sarcasm", "publication_ref": [], "table_ref": [], "text": "Sarcastic: I love days when Rob works short call and is only at the hospital for *checks watch* 13 hours. Not Sarcastic: I got stop putting on glitter flowers I'd like to ad red." }, { "figure_ref": [], "heading": "Bragging", "publication_ref": [], "table_ref": [], "text": "Bragging: Come watch me and @USER face off in 2K best of 3 series #braggingrights @USER you next boiiii :flushed_face: :hot_face:. Not Bragging: I have completed survey on NaMo App. " } ]
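The four prompt settings described above (Basic, Task/Label Description, Few-sample, Memory Recall) all follow the same concatenation templates, e.g. 'Basic Instruction + Task and Label Descriptions + Constraints + Text'. A minimal sketch of that assembly is given below; the function name and signature are my own illustration, not the authors' code, and only the template ordering is taken from the paper.

```python
def build_prompt(text, instruction, constraint,
                 task_label_desc=None, few_shot_examples=None, paper_title=None):
    """Assemble a zero-shot prompt following the templates described above.
    Setting exactly one optional argument gives the T/L Desc, Few-sample,
    or Memory Recall variant; leaving all unset gives the Basic setting."""
    parts = []
    if paper_title:                      # Memory Recall
        parts.append(f"Recall this paper: {paper_title}.")
    parts.append(instruction)            # Basic Instruction
    if task_label_desc:                  # Task and Label Description
        parts.append(task_label_desc)
    if few_shot_examples:                # Few-sample (one example per label)
        parts.extend(few_shot_examples)
    parts.append(constraint)             # e.g. "Only reply (bragging) or (not bragging)."
    parts.append(f"Tweet: {text}")
    return "\n".join(parts)

# Basic setting for the bragging task
prompt = build_prompt(
    text="Come watch me and @USER face off in 2K best of 3 series #braggingrights",
    instruction="Identify whether or not a tweet includes a bragging statement.",
    constraint="Only reply (bragging) or (not bragging).",
)
```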
Instruction-tuned Large Language Models (LLMs) have exhibited impressive language understanding and the capacity to generate responses that follow specific prompts. However, due to the computational demands associated with training these models, their applications often adopt a zero-shot setting. In this paper, we evaluate the zero-shot performance of two publicly accessible LLMs, ChatGPT and OpenAssistant, in the context of six Computational Social Science classification tasks, while also investigating the effects of various prompting strategies. Our experiments investigate the impact of prompt complexity, including the effect of incorporating label definitions into the prompt; use of synonyms for label names; and the influence of integrating past memories during foundation model training. The findings indicate that in a zero-shot setting, current LLMs are unable to match the performance of smaller, fine-tuned baseline transformer models (such as BERT-large). Additionally, we find that different prompting strategies can significantly affect classification accuracy, with variations in accuracy and F1 scores exceeding 10%.
Navigating Prompt Complexity for Zero-Shot Classification: A Study of Large Language Models in Computational Social Science
[ { "figure_caption": "Desc Tweets that have been assigned to the class 'pro vaccine' express a positive opinion regarding the vaccination. Tweets belonging to the 'anti vaccine' class express a negative opinion towards COVID-19 vaccination. The 'neutral' class mainly includes news related to the development of vaccines, tweets that do not express a clear opinion, such as questions regarding the vaccine, informative tweets concerning vaccination. Prompt examples across different settings.", "figure_data": "TaskBasicBasic Instruction (i.e., Identify whether or not a tweet includes a bragging statement.)Bragging+ Constraints (i.e., Only reply (bragging) or (not bragging).) + Text (e.g., Tweet: Come watch me and @USER face off in 2K best of 3 series #braggingrights @USERyou next boiiii.)TaskBasic + T/L DescBasic InstructionVaccine+ T/L + Constraints + TextTaskFew-sampleBasic Instruction+ Few-samples (e.g., (i) Complaint: @USER @USER give the timeline by which I'll receive my cashbackComplaintwhich I should have received by 15th October 2017. (ii) Not Complaint: I just gave 5 stars to Nancy at @USERfor the great service I received!)+ Constraints + TextTaskMemory RecallBasic InstructionHate Speech+ arXiv Paper Title (i.e., Recall this paper: Hateful symbols or hateful people? predictive features for hate speech detection on twitter.)+ Constraints + Text", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Dataset Specifications.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "LLMs zero-shot classification results across all prompt settings. All datasets are evaluated with accuracy and macro-F1 scores. Green highlighted cells denote prompt settings where zero-shot LLMs beat the supervised baseline (i.e., Bert-large model fine-tuned on the training set). Bold text denotes the best result per task. OA 7 denotes the 'OpenAssistant/oasst-sft-7-llama-30b-xor' model.", "figure_data": "ModelComplaint Accuracy F1Vaccine Stance Accuracy F1Bragging AccuracyF1Logistic Regression81.479.772.873.188.658.8BERT-large89.488.681.581.391.376.1GPT Basic After84.984.165.565.881.162.7GPT Basic Before89.788.772.473.684.366.2GPT T/L Desc89.088.073.373.784.967.4GPT Memory Recall87.186.466.266.979.864.6GPT Few-sample85.685.268.269.477.361.8LLaMA-OA Basic After65.565.460.557.857.850.1LLaMA-OA Basic Before80.179.964.263.782.862.6LLaMA-OA Basic (OAT 7)83.983.466.465.964.142.0LLaMA-OA T/L Desc65.365.273.773.688.448.2LLaMA-OA Memory Recall82.682.164.263.888.146.8LLaMA-OA Memory Recall (OA 7)76.476.367.867.967.943.0OA Few-sample87.786.966.567.375.459.8ModelRumor Stance Accuracy F1Sarcasm AccuracyF1Hate Speech Accuracy F1Logistic Regression68.540.976.153.583.279.2BERT-large73.248.278.958.484.581.2GPT Basic After53.036.274.365.872.977.0GPT Basic Before51.533.362.959.770.469.1GPT T/L Desc59.245.761.357.976.972.1GPT Memory Recall40.230.952.851.771.769.6GPT Few-sample40.830.668.964.974.871.8LLaMA-OA Basic After61.729.341.641.656.055.9LLaMA-OA Basic Before46.127.964.454.869.868.2LLaMA-OA Basic (OAT 7)63.135.461.438.858.158.1LLaMA-OA T/L Desc56.229.075.949.975.573.3LLaMA-OA Memory Recall52.434.678.143.955.455.4LLaMA-OA Memory Recall (OA 7)48.833.171.942.958.758.7LLaMA-OA Few-sample28.320.771.342.670.068.4", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "LLMs zero-shot classification results using synonyms across all tasks. Green highlights are the original class names. 
Light grey highlighted cells denote where synonyms prompt settings beat the original label. Bold text denotes the best result per model per task.", "figure_data": "of", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "", "figure_data": "reasonably well. For GPT, adding task and la-bel descriptions typically achieves better results,i.e. these prompts achieved the best results on4 out of 6 datasets as compared to other GPTprompt strategies. On the other hand, LLaMA-OA achieves mixed results. On average, forcompares different prompt complexity,LLaMA-OA, simple prompts outperform complex counterparts. This may happen because complexand shows that the simple prompt strategy works", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Complaint6908943Vaxx Stance55914582Bragging1,340201160Rumor Stance1,114557475Sarcasm97419458Hate Speech3,380845302", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "We conduct further error analysis on the model outputs across all datasets. # of Unanimous Error denotes cases in which the LLM unanimously agrees on an incorrect answer while using different synonyms.", "figure_data": "", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" } ]
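The synonym results in the table above are combined with the two ensembling rules defined in the Synonyms section (Ensemble Majority and Ensemble All Agreed). A small sketch of those two rules, with illustrative function names and an invented toy example, assuming each synonym run's output has already been mapped back to the original classes:

```python
from collections import Counter

def ensemble_majority(labels):
    """Majority vote over the per-synonym predictions for one example."""
    return Counter(labels).most_common(1)[0][0]

def ensemble_all_agreed(labels, positive, negative):
    """Stricter binary variant: predict the positive class only when every
    synonym prompt agreed on it; otherwise fall back to the negative class."""
    return positive if all(l == positive for l in labels) else negative

# e.g. predictions from prompts using 'offensive', 'hateful', 'toxic', 'abusive'
runs = ["offensive", "offensive", "not offensive", "offensive"]
print(ensemble_majority(runs))                                  # -> offensive
print(ensemble_all_agreed(runs, "offensive", "not offensive"))  # -> not offensive
```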
Yida Mu; Ben P Wu; William Thorne; Ambrose Robinson; Nikolaos Aletras; Carolina Scarton; Kalina Bontcheva; Xingyi Song
[ { "authors": "Simran Arora; Avanika Narayan; Laurel J Mayee F Chen; Neel Orr; Kush Guha; Ines Bhatia; Frederic Chami; Christopher Sala; Ré", "journal": "", "ref_id": "b0", "title": "Ask me anything: A simple strategy for prompting language models", "year": "2022" }, { "authors": "Rewon Child; Scott Gray; Alec Radford; Ilya Sutskever", "journal": "", "ref_id": "b1", "title": "Generating long sequences with sparse transformers", "year": "2019" }, { "authors": "Liviu-Adrian Cotfas; Camelia Delcea; Ioan Roxin; Corina Ioanăş; Dana Simona Gherai; Federico Tajariol", "journal": "Ieee Access", "ref_id": "b2", "title": "The longest month: analyzing covid-19 vaccination opinions dynamics from tweets in the month following the first vaccine announcement", "year": "2021" }, { "authors": "Leon Derczynski; Kalina Bontcheva; Maria Liakata; Rob Procter; Geraldine Wong ; Sak Hoi; Arkaitz Zubiaga", "journal": "", "ref_id": "b3", "title": "SemEval-2017 task 8: RumourEval: Determining rumour veraci 15", "year": "2017" }, { "authors": "", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)", "year": "" }, { "authors": "Tim Dettmers; Mike Lewis; Sam Shleifer; Luke Zettlemoyer", "journal": "", "ref_id": "b5", "title": "8-bit optimizers via block-wise quantization", "year": "2021" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Ibrahim Abu Farha; Silviu Vlad Oprea; Steven Wilson; Walid Magdy", "journal": "", "ref_id": "b7", "title": "Semeval-2022 task 6: isarcasmeval, intended sarcasm detection in english and arabic", "year": "2022" }, { "authors": "Yash Adithya V Ganesan; August Kumar Lal; H Andrew Håkan Nilsson; Schwartz", "journal": "", "ref_id": "b8", "title": "Systematic evaluation of gpt-3 for zeroshot personality estimation", "year": "2023" }, { "authors": "Alexandre Albert Q Jiang; Arthur Sablayrolles; Chris Mensch; Devendra Bamford; Diego Singh Chaplot; Florian De Las Casas; Gianna Bressand; Guillaume Lengyel; Lucile Lample; Saulnier", "journal": "", "ref_id": "b9", "title": "Mistral 7b", "year": "2023" }, { "authors": "Mali Jin; Daniel Preoţiuc-Pietro; Nikolaos Doğruöz; Aletras", "journal": "", "ref_id": "b10", "title": "Automatic identification and classification of bragging in social media", "year": "2022" }, { "authors": "Andreas Köpf; Yannic Kilcher; Sotiris Dimitri Von Rütte; Zhi-Rui Anagnostidis; Keith Tam; Abdullah Stevens; Barhoum; Minh Nguyen; Oliver Duc; Richárd Stanley; Nagyfi", "journal": "", "ref_id": "b11", "title": "Openassistant conversations-democratizing large language model alignment", "year": "2023" }, { "authors": "Taja Kuzman; Nikola Ljubešić; Igor Mozetič", "journal": "", "ref_id": "b12", "title": "Chatgpt: Beginning of an end of manual annotation? use case of automatic genre identification", "year": "2023" }, { "authors": "Bishal Lamichhane", "journal": "", "ref_id": "b13", "title": "Evaluation of chatgpt for nlp-based mental health applications", "year": "2023" }, { "authors": "Xianzhi Li; Samuel Chan; Xiaodan Zhu; Yulong Pei; Zhiqiang Ma; Xiaomo Liu; Sameena Shah", "journal": "", "ref_id": "b14", "title": "Are chatgpt and gpt-4 generalpurpose solvers for financial text analytics? 
a study on several typical tasks", "year": "2023" }, { "authors": "Shayne Longpre; Le Hou; Tu Vu; Albert Webson; Hyung Won Chung; Yi Tay; Denny Zhou; V Quoc; Barret Le; Jason Zoph; Wei", "journal": "", "ref_id": "b15", "title": "The flan collection: Designing data and methods for effective instruction tuning", "year": "2023" }, { "authors": "Eric Mitchell; Yoonho Lee; Alexander Khazatsky; Christopher D Manning; Chelsea Finn", "journal": "", "ref_id": "b16", "title": "Detectgpt: Zero-shot machine-generated text detection using probability curvature", "year": "2023" }, { "authors": "Yida Mu; Mali Jin; Charlie Grimshaw; Carolina Scarton; Kalina Bontcheva; Xingyi Song", "journal": "", "ref_id": "b17", "title": "Vaxxhesitancy: A dataset for studying hesitancy towards covid-19 vaccination on twitter", "year": "2023" }, { "authors": "Elite Olshtain; Liora Weinbach", "journal": "", "ref_id": "b18", "title": "", "year": "1987" }, { "authors": "", "journal": "", "ref_id": "b19", "title": "complaints: A study of speech act behavior among native and non-native speakers of hebrew", "year": "" }, { "authors": "John Benjamins", "journal": "", "ref_id": "b20", "title": "", "year": "" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Soham Poddar; Mainack Mondal; Janardan Misra; Niloy Ganguly; Saptarshi Ghosh", "journal": "", "ref_id": "b22", "title": "Winds of change: Impact of covid-19 on vaccinerelated opinions of twitter users", "year": "2022" }, { "authors": "Daniel Preoţiuc-Pietro; Mihaela Gaman; Nikolaos Aletras", "journal": "", "ref_id": "b23", "title": "Automatically identifying complaints in social media", "year": "2019" }, { "authors": "V Michael; Reiss", "journal": "", "ref_id": "b24", "title": "Testing the reliability of chatgpt for text annotation and classification: A cautionary remark", "year": "2023" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen H Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Teven Le Scao; Arun Raja", "journal": "", "ref_id": "b25", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2021" }, { "authors": "Jinyan Su; Terry Yue Zhuo; Di Wang; Preslav Nakov", "journal": "", "ref_id": "b26", "title": "Detectllm: Leveraging log rank information for zero-shot detection of machinegenerated text", "year": "2023" }, { "authors": "Mirac Suzgun; Nathan Scales; Nathanael Schärli; Sebastian Gehrmann; Yi Tay; Hyung Won Chung; Aakanksha Chowdhery; V Quoc; Ed H Le; Denny Chi; Zhou", "journal": "", "ref_id": "b27", "title": "Challenging big-bench tasks and whether chainof-thought can solve them", "year": "2022" }, { "authors": "Petter Törnberg", "journal": "", "ref_id": "b28", "title": "Chatgpt-4 outperforms experts and crowd workers in annotating political twitter messages with zero-shot learning", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b29", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Yau-Shian Wang; Yingshan Chang", "journal": "", "ref_id": "b30", "title": "Toxicity detection 
with generative prompt-based inference", "year": "2022" }, { "authors": "Zeerak Waseem; Thomas Davidson; Dana Warmsley; Ingmar Weber", "journal": "", "ref_id": "b31", "title": "Understanding abuse: A typology of abusive language detection subtasks", "year": "2017" }, { "authors": "Zeerak Waseem; Dirk Hovy", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter", "year": "2016" }, { "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b33", "title": "Finetuned language models are zero-shot learners", "year": "2021" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Fei Xia; Ed H Chi; Denny Quoc V Le; Zhou", "journal": "", "ref_id": "b34", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Joshua A Patrick Y Wu; Jonathan Tucker; Solomon Nagler; Messing", "journal": "", "ref_id": "b35", "title": "Large language models can be used to estimate the ideologies of politicians in a zero-shot learning setting", "year": "2023" }, { "authors": "Caleb Ziems; William Held; Omar Shaikh; Jiaao Chen; Zhehao Zhang; Diyi Yang", "journal": "", "ref_id": "b36", "title": "Can large language models transform computational social science?", "year": "2023" }, { "authors": "Arkaitz Zubiaga; Ahmet Aker; Kalina Bontcheva; Maria Liakata; Rob Procter", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b37", "title": "Detection and resolution of rumours in social media: A survey", "year": "2018" } ]
[]
2023-10-20
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Our world is inherently three-dimensional, where this nature highlights the importance of 3D applications in various fields, including architecture, product design, and scientific simulation. The capability of 3D content generation helps bridge the gap between physical and virtual domains, providing an engaging interaction within digital media. Furthermore, realistic 3D humans have vast practical value, especially in gaming, film, and animation. Despite enhancing the user experience, the customization of the character is crucial for creativity and scalability. Language is the most direct way of communication." }, { "figure_ref": [], "heading": "If a system follows the description and establishes", "publication_ref": [ "b36", "b19", "b32", "b60", "b48", "b29", "b2", "b41", "b13" ], "table_ref": [], "text": "Project website: https://text-3dh.github.io the 3D human model, it will significantly improve controllability and meet the considerable demand.\nWe thus introduce Text-guided 3D Human Generation (T3H) to generate a 3D human with the customized outfit, guided via the fashion description. Previous works (Kolotouros et al., 2019;Gao et al., 2022) depend on multi-view videos to learn 3D human modeling, but these data are difficult to obtain and are not language controllable. Text-to-3D (Jain et al., 2022;Poole et al., 2023) has shown attractive 3D generation results through the success of neural rendering (Mildenhall et al., 2020). However, these methods apply iterative inference optimization by external guidance, which is inefficient for usage.\nTo tackle these above issues, we propose Compositional Cross-modal Human (CCH) to learn T3H from 2D collections. CCH divides the human body into different parts and employs individual volume rendering, inspired by EVA3D (Hong et al., 2023).\nWe extract the fashion semantics from the description and adopt cross-modal attention to fuse body volumes with textual features, where each part can learn to perceive its correlated fashion patterns. To support various angles of view, CCH leverages the human prior (Bogo et al., 2016) to guide the geometry transformation for concrete human architecture. Then these compositional volumes can jointly render a 3D human with the desired fashion efficiently. The semantic discrimination further considers compositional distinguishment over each human part, which improves the fine-grained alignment with its description through adversarial training.\nWe perform experiments on DeepFashion (Liu et al., 2016;Jiang et al., 2022) and SHHQ (Fu et al., 2022a), which contain human images with diverse fashion descriptions. The patterns include various types of shapes (sleeveless, medium short, long, etc.), fabrics (denim, cotton, furry, etc.), and colors (floral, graphic, pure color, etc.) for the upper and lower clothing. To study the performance of T3H, we conduct a thorough evaluation from both visual and semantic aspects. We treat overall realism, geometry measure, and pose correctness to assess the quality of generated 3D humans. For the alignment with the assigned fashion, we apply text-visual relevance from CLIP and fine-grained accuracy by a trained fashion classifier.\nThe experiments indicate that language is necessary to make 3D human generation controllable. 
Our proposed CCH adopts cross-modal attention to fuse compositional neural rendering with textual fashion as 3D humans, and semantic discrimination further helps fine-grained consistency. In summary, our contributions are three-fold: " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b21", "b65", "b81", "b10", "b14", "b44", "b11", "b63", "b9", "b17", "b28", "b62", "b67", "b12", "b18", "b52", "b25", "b76", "b39", "b83", "b43", "b7", "b55", "b89", "b48", "b1", "b50", "b68", "b5", "b72", "b85", "b87", "b32", "b34", "b47", "b30", "b51", "b60", "b46", "b75", "b69", "b8", "b24", "b74", "b82", "b20", "b70", "b86", "b73", "b80", "b6", "b4", "b23", "b53", "b84", "b29" ], "table_ref": [], "text": "Text-guided Visual Generation. Using humanunderstandable language to guide visual generation can enhance controllability and benefit creative visual design. Previous works built upon adversarial training (Goodfellow et al., 2015;Reed et al., 2016) to produce images (Xu et al., 2018;El-Nouby et al., 2019;Fu et al., 2020Fu et al., , 2022c) ) or videos (Marwah et al., 2017;Li et al., 2018b;Fu et al., 2022b) conditioned on given descriptions. With sequential modeling from Transformer, vector quantization (Esser et al., 2021) can generate high-quality visual content as discrete tokens (Ramesh et al., 2021;Ding et al., 2021;Fu et al., 2023). The denoising diffusion framework (Ho et al., 2020;Ramesh et al., 2022;Saharia et al., 2022;Feng et al., 2023) gains much attention as its diversity and scalability via large-scale text-visual pre-training. Beyond images and videos, 3D content creation is more challenging due to the increasing complexity of the depth dimension and spatial consistency. In this paper, we consider text-guided 3D human generation (T3H), which has vast applications in animated characters and virtual assistants.\n3D Generation. Different representations have been explored for 3D shapes, such as mesh (Gao et al., 2019;Nash et al., 2020;Henderson et al., 2020), voxel grid (Tatarchenko et al., 2017;Li et al., 2017), point cloud (Li et al., 2018a;Yang et al., 2019;Luo et al., 2021), and implicit field (Chen and Zhang, 2019;Park et al., 2019;Zheng et al., 2022). Neural Radiance Field (NeRF) (Mildenhall et al., 2020;Barron et al., 2022;Muller et al., 2022) has shown remarkable results in novel view synthesis (Schwarz et al., 2021;Chan et al., 2021;Skorokhodov et al., 2023) and 3D reconstruction (Yariv et al., 2021;Zhang et al., 2021). With the differentiable neural rendering, NeRF can be guided by various objectives. Text-to-3D draws appreciable attraction these days, which adopts external textvisual alignments (Jain et al., 2022;Khalid et al., 2022;Michel et al., 2022;Wang et al., 2022a;Hong et al., 2022) and pre-trained text-to-image (Wang et al., 2022b;Nam et al., 2022;Poole et al., 2023;Metzer et al., 2023;Tang et al., 2023;Seo et al., 2023). However, existing methods take numerous iterations to optimize a NeRF model, which is timeconsuming for practical usage. Our CCH learns to extract fashion semantics with NeRF rendering and incorporates the human prior for a concrete human body, achieving effective and efficient T3H.\n3D Human Representation. To reconstruct a 3D human, early works (Collet et al., 2015;Guo et al., 2019;Su et al., 2020) count on off-the-shelf tools to predict the camera depth. As mitigating the costly hardware requirement, they estimate a 3D human texture (Xu and Loy, 2021;Gomes et al., 2022) via the UV mapping (Shysheya et al., 2019;Yoon et al., 2021). 
With the promising success of NeRF, recent works (Peng et al., 2021b,a; Su et al., 2021) adopt volume rendering for 3D humans from multi-view videos (Weng et al., 2022; Chen et al., 2022). Since such data are difficult to collect, 3D-aware generation (Chan et al., 2022; Gu et al., 2022; Noguchi et al., 2022) learns 3D modeling from collections of human images (Yang et al., 2022; Hong et al., 2023). In place of arbitrary outputs, we introduce the first controllable 3D human generation that also learns from a 2D collection, where the presented fashion patterns align with the description.
3 Text-guided 3D Human Generation" }, { "figure_ref": [], "heading": "Task Definition", "publication_ref": [], "table_ref": [], "text": "We present text-guided 3D human generation (T3H) to create 3D humans via fashion descriptions. For data efficiency, a 2D collection D = {V, T} is provided, where V is the human image and T is its fashion description. Our goal is to learn the neural rendering that maps T into an articulate 3D human with the fashion patterns of V." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b48", "b2" ], "table_ref": [], "text": "Neural Radiance Field (NeRF) (Mildenhall et al., 2020) defines implicit 3D as {c, σ} = F(x, d). The query point x in the viewing direction d holds the emitted radiance c and the volume density σ. To get the RGB value C(r) of a ray r(t), volume rendering is calculated along r from the near bound t_n to the far bound t_f:

T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(r(s))\, ds\right), \qquad C(r) = \int_{t_n}^{t_f} T(t)\, \sigma(r(t))\, c(r(t), d)\, dt,  (1)

where T(t) is the accumulated transmittance. StyleSDF (Or-El et al., 2022) then replaces σ with a signed distance field (SDF) d(x) for a better surface, where σ(x) = α^{-1} \mathrm{sigmoid}(-d(x)/α) and α is a learnable scalar that controls the tightness of the density around the surface boundary.
SMPL (Bogo et al., 2016) defines the human body as {β, θ}, where β ∈ R^{10} and θ ∈ R^{3×23} control its shape and pose. We consider Linear Blend Skinning (LBS) as the transformation from the canonical into the observation space, which maps a point x to \sum_{k=1}^{K} h_k H_k(θ, J) x, where h_k ∈ R is the scalar blend weight and H_k ∈ R^{4×4} is the transformation matrix of the kth joint. Inverse LBS transforms the observation back to the canonical space with the same form but inverted H_k." }, { "figure_ref": [ "fig_1" ], "heading": "Compositional Cross-modal Human", "publication_ref": [ "b29", "b90", "b71", "b42", "b45", "b22" ], "table_ref": [], "text": "Following EVA3D (Hong et al., 2023), we split the human body into 16 parts. As shown in Fig. 2, each body part holds its own bounding box {o^b_{min}, o^b_{max}}. To leverage the human prior for a target pose θ, we transform these pre-defined bounding boxes with SMPL's transformation matrices H_k. A ray r(t) is sampled for each pixel on the canvas. For a ray that intersects a bounding box, we pick its near and far bounds (t_n and t_f) and sample N points as follows:

t_i \sim U\!\left[\, t_n + \tfrac{i-1}{N}(t_f - t_n),\; t_n + \tfrac{i}{N}(t_f - t_n) \right].

We then transform these sampled points back to the canonical space with inverse LBS. For shape generalization, we consider not only the pose transformation but also the blend shapes (B^P(θ) and B^S(β)) (Zheng et al., 2021). 
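To make the per-ray sampling above concrete, here is a minimal, hypothetical PyTorch sketch of the stratified sampling t_i ~ U[...] for rays that intersect a pose-transformed body-part bounding box; it is not the authors' code, and names such as `sample_ray_points` are illustrative. These sampled depths are the points that the inverse LBS step below maps back to the canonical space.

```python
import torch

def sample_ray_points(t_near: torch.Tensor, t_far: torch.Tensor, n_samples: int) -> torch.Tensor:
    """Stratified sampling: one uniform draw inside each of N equal sub-intervals.

    t_near, t_far: [num_rays] depths where each ray enters / exits a body-part box.
    Returns t: [num_rays, n_samples] sample depths along each ray.
    """
    num_rays = t_near.shape[0]
    # Lower edge of the i-th bin: t_n + (i-1)/N * (t_f - t_n), for i = 1..N.
    bins = torch.linspace(0.0, 1.0, n_samples + 1)[:-1]              # [N]
    lower = t_near[:, None] + bins[None, :] * (t_far - t_near)[:, None]
    width = ((t_far - t_near) / n_samples)[:, None]                  # bin width per ray
    return lower + torch.rand(num_rays, n_samples) * width           # uniform inside each bin

# Example: 4 rays, 28 samples each (the paper samples N = 28 points per ray).
t_near, t_far = torch.full((4,), 0.5), torch.full((4,), 2.0)
t_i = sample_ray_points(t_near, t_far, n_samples=28)
origins = torch.zeros(4, 3)
dirs = torch.tensor([[0.0, 0.0, 1.0]]).expand(4, 3)
points = origins[:, None, :] + t_i[..., None] * dirs[:, None, :]     # r(t) = o + t * d, [4, 28, 3]
```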
The set N contains the K nearest vertices v of the target SMPL mesh for the sampled point r(t_i):

g_k = \frac{1}{\lVert r(t_i) - v_k \rVert}, \quad M_k = \sum_{k=1}^{K} g_k H_k \begin{bmatrix} I & B^P_k + B^S_k \\ 0 & I \end{bmatrix}, \quad \begin{bmatrix} x_i \\ 1 \end{bmatrix} = \frac{1}{\sum_{v_k \in N} g_k} \sum_{v_k \in N} g_k (M_k)^{-1} \begin{bmatrix} r(t_i) \\ 1 \end{bmatrix},  (2)

where g_k ∈ R is the inverse-distance weight of the vertex v_k and M_k ∈ R^{4×4} is the transformation matrix. The resulting x_i can be used for further volume rendering.
Cross-modal Attention. During rendering, if the canonical point x_i with the viewing direction d_i is inside the bth bounding box, it is treated as:

\hat{x}^b_i = \frac{2 x_i - (o^b_{max} + o^b_{min})}{o^b_{max} - o^b_{min}}, \quad f^b_i = \mathrm{Linear}(\hat{x}^b_i, d_i),  (3)

where a linear mapping is applied to acquire the preliminary features f^b ∈ R^{16×8×128}. To exhibit the desired fashion in the final rendering, we extract the word features {w_l ∈ R^{512}} from T with the text encoder. We then fuse the textual features with f^b_i via cross-modal attention:

p_l = \frac{\exp(f^b_i W^b w^T_l)}{\sum_{\iota=1}^{L} \exp(f^b_i W^b w^T_\iota)}, \quad \mathrm{CA}(f^b_i \mid \{w\}) = \sum_{l=1}^{L} p_l w_l,  (4)

where L is the length of T and W^b is a learnable matrix. In this way, each point can learn to perceive the relevant textual guidance for the bth human body part and depict the corresponding fashion patterns.
Each body part has its individual volume rendering F^b, which consists of stacked multilayer perceptrons (MLPs) with the SIREN activation (Sitzmann et al., 2020). Since the point x_i may fall into multiple boxes B_i, we follow EVA3D and apply the mixture function (Lombardi et al., 2021):

\{c^b_i, σ^b_i\} = F^b(\mathrm{CA}(\hat{x}^b_i, d_i \mid \{w\})), \quad u_b = \exp\!\big(-m(\hat{x}^b_i(x)^n + \hat{x}^b_i(y)^n + \hat{x}^b_i(z)^n)\big), \quad \{c_i, σ_i\} = \frac{1}{\sum_{b \in B} u_b} \sum_{b \in B} u_b \{c^b_i, σ^b_i\},  (5)

where m and n are hyperparameters. With {c_i, σ_i}, we adopt Eq. 1 to render the RGB value of the ray r(t). Through all sampled rays r, we then obtain our final human rendering R, and the overall process can be simplified as R = G(β, θ | T). To summarize, CCH leverages the human prior and adopts inverse LBS to acquire the canonical space for the target pose. The human body is divided into 16 parts, and each of them fuses its correlated fashion semantics via cross-modal attention. Finally, the compositional bodies jointly render the target 3D human.
Algorithm 1 (learning of CCH):
  {w_l} ← extracted textual features of T
  CA ← fusion via cross-modal attention (Eq. 4)
  {c_i, σ_i} ← mixture radiance / density (Eq. 5)
  R ← final rendered human (Eq. 1)
  S ← segmentation map of V
  Q ← fashion map between S and T (Eq. 6)
  L_adv ← adversarial loss from D (Eq. 8)
  L_off, L_eik ← offset and derivative losses (Eq. 9)
  L_all ← overall training loss (Eq. 10)
  Update G by minimizing L_all
  Update D by maximizing L_all
end while
Semantic Discrimination. With the SMPL prior, our CCH contains a robust geometry transformation for humans and can learn from 2D images without actual 3D guidance. For a ground-truth {V, T}, we parse the 2D human image into the segmentation S (MMHuman3D, 2021), which provides a reliable body architecture. To obtain its fashion map Q, we apply cross-modal attention between S and T:

\{e_{i,j}\} = \mathrm{Conv}(S), \quad Q_{i,j} = \sum_{l=1}^{L} \frac{\exp(e_{i,j} W w^T_l)}{\sum_{\iota=1}^{L} \exp(e_{i,j} W w^T_\iota)}\, w_l,  (6)

where e has the same dimension as f, W is the learnable attention matrix, and Q perceives which human body part should showcase what fashion patterns. 
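For readers who prefer code, the following is a small, self-contained PyTorch sketch of the cross-modal attention in Eqs. 4 and 6; it is an illustration rather than the released implementation, and the class name, feature dimensions, and initialization are assumptions. The same operator serves both the per-point features of Eq. 4 and the segmentation features of Eq. 6, only the learnable matrix differs.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """A query feature attends over the L word embeddings {w_l} and returns their weighted sum."""
    def __init__(self, feat_dim: int = 128, word_dim: int = 512):
        super().__init__()
        # W^b in Eq. 4 (or W in Eq. 6): one learnable bilinear map.
        self.W = nn.Parameter(torch.randn(feat_dim, word_dim) * 0.02)

    def forward(self, f: torch.Tensor, words: torch.Tensor) -> torch.Tensor:
        """f: [num_points, feat_dim] point (or segmentation) features.
        words: [L, word_dim] text-encoder features of the description T.
        Returns: [num_points, word_dim] fused features CA(f | {w})."""
        logits = f @ self.W @ words.T          # [num_points, L]
        p = torch.softmax(logits, dim=-1)      # attention weights over the L words
        return p @ words                       # sum_l p_l * w_l

# Example: 1024 sampled points inside one body part, a 16-word description.
ca = CrossModalAttention()
fused = ca(torch.randn(1024, 128), torch.randn(16, 512))
print(fused.shape)  # torch.Size([1024, 512])
```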
We concatenate the rendered human R (or the ground-truth V) with Q and feed them into our discriminator D to perform binary classification:

D(R \mid T) = \mathrm{BC}([\mathrm{Conv}(R), Q]).  (7)

In consequence, D can provide alignments of both the human pose and the fashion semantics, which improves the fine-grained consistency of our CCH.
Learning of CCH. We include the non-saturating loss with R1 regularization (Mescheder et al., 2018) for adversarial learning over the ground-truth {V}:

U(u) = -\log(1 + \exp(-u)), \quad L_{adv} = U\big(D(G(β, θ \mid T) \mid T)\big) + U\big(-D(V \mid T)\big) + λ \lvert \nabla D(V \mid T) \rvert^2.  (8)

Following EVA3D, we also append the minimum offset loss L_off to maintain a plausible human shape close to the template mesh. L_eik penalizes the derivatives of the delta SDFs toward zero and keeps the estimated SDF physically valid (Gropp et al., 2020):

L_{off} = \lVert \Delta d(x) \rVert_2^2, \quad L_{eik} = \lVert \nabla(\Delta d(x)) \rVert_2^2.  (9)

The learning process of our CCH is also illustrated in Algo. 1, where the overall optimization can be written as:

L_{all} = L_{adv} + 1.5 \cdot L_{off} + 0.5 \cdot L_{eik}, \quad \min_G \max_D L_{all}.  (10)
" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b13", "b3", "b57", "b27", "b64", "b0", "b26", "b61", "b46", "b66", "b30", "b29", "b82", "b13", "b54", "b56" ], "table_ref": [], "text": "Datasets. We conduct experiments on DeepFashion (Jiang et al., 2022) and SHHQ (Fu et al., 2022a) for T3H. DeepFashion contains 12K human images with upper and lower clothing descriptions. Since there are no annotations in SHHQ, we first fine-tune GIT (Wang et al., 2022c) on DeepFashion and then label 40K text-human pairs. We follow OpenPose (Cao et al., 2019) and SMPLify-X (Pavlakos et al., 2019) to estimate the human keypoints and the SMPL parameters. Images are resized to 512x256 in our experiments. Note that all faces in the datasets are blurred prior to training, and the model is not able to generate human faces.
Evaluation Metrics. We apply metrics from both visual and semantic perspectives. Following EVA3D, we adopt Frechet Inception Distance (FID) (Heusel et al., 2017) and Depth (Ranftl et al., 2020) to calculate visual and geometric similarity, compared to the ground-truth image. We treat the Percentage of Correct Keypoints ([email protected]) (Andriluka et al., 2014) as the correctness of the generated pose. To investigate the textual relevance of T3H results, we follow CLIP Score (CLIP-S) (Hessel et al., 2021) for the text-visual similarity. We fine-tune CLIP (Radford et al., 2021) on DeepFashion for a more accurate alignment in this specific domain. For a fine-grained evaluation, we train a fashion classifier on DeepFashion labels and assess the Fashion Accuracy (FA) of the generated human.
Baselines. As a new task, we consider the following methods as the compared baselines.
• Latent-NeRF (Metzer et al., 2023) brings NeRF to the latent space and guides its generation by the given object and a text-to-image prior.
• TEXTure (Richardson et al., 2023) paints a 3D object from different viewpoints by leveraging a pre-trained depth-to-image diffusion model.
• CLIP-O is inspired by AvatarCLIP (Hong et al., 2022), which customizes a human avatar from the description with CLIP text-visual alignment. We apply the guided loss to optimize a pre-trained EVA3D (Hong et al., 2023) for faster inference.
• Texformer (Xu and Loy, 2021) estimates the human texture from an image. 
Text2Human (Jiang et al., 2022) predicts the target human image, and we treat Texformer to further build its 3D model. For a fair comparison, all baselines are re-trained on face-blurred datasets and cannot produce identifiable human faces.\nImplementation Detail. We divide a human body into 16 parts and deploy individual StyleSDF (Or-El et al., 2022) for each volume rendering, and two following MLPs then estimate SDF and RGB values. We adopt the same discriminator as StyleSDF over fashion maps to distinguish between fake rendered humans and real images. We sample N =28 points for each ray and set (m, n) to (4, 8) for mix- ture rendering. The text encoder is initialized from CLIP and subsequently trained with CCH. We treat Adam (Kingma and Ba, 2015) with a batch size of 1, where the learning rates are 2e-5 for G and 2e-4 for D. We apply visual augmentations by randomly panning, scaling, and rotating within small ranges. All trainings are done using PyTorch (Paszke et al., 2017) on 8 NVIDIA A100 GPUs for 1M iterations." }, { "figure_ref": [], "heading": "Quantitative Results", "publication_ref": [], "table_ref": [ "tab_2", "tab_3" ], "text": "Table 1 shows the pose-guided T3H results on Deep-Fashion and SHHQ, where we feed the estimated human mesh as the input object into Latent-NeRF and TEXTure. Although Latent-NeRF can portray body shapes in multiple angles from its latent NeRF space, the rendering is clearly counterfeit (higher FID and Depth). For TEXTure, the human architecture is constructed well by the given mesh (higher PCK). However, the estimated texture is still spatially inconsistent and contains inevitable artifacts (still higher FID). From the semantic aspect, Latent-NeRF and TEXTure borrow those trained diffusion models and depict the assigned appearance in the description (higher CLIP-S than CLIP-O). CLIP-O relies on EVA3D to produce feasible 3D humans (lower FID). While the external CLIP loss attempts to guide the fashion, the global alignment is insufficient to demonstrate detailed patterns (lower FA). Without those above drawbacks, our CCH learns to extract fashion semantics along with the compositional human generation, leading to comprehensive superiority across all metrics.\nA similar trend can be found on SHHQ. Latent-NeRF and TEXTure exhibit related fashion patterns but are hard to present realistic humans (higher FID and Depth). CLIP-O produces a sharp human body with the correct pose, but not the assigned fashion (lower CLIP-S and FA) by the inexplicit alignment from CLIP. Table 2 presents the pose-free results.\nWith the guided 2D image, Texformer contains the assigned clothing in the text (higher FA than CLIP-O). But the 3D reconstruction is unable to handle spatial rendering, resulting in low-quality humans (higher FID). With cross-modal attention and semantic discrimination, CCH exceeds baselines in both visual and textual relevance, making concrete human rendering with the corresponding fashion." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b65", "b26", "b31" ], "table_ref": [ "tab_4", "tab_5", "tab_6", "tab_7" ], "text": "We study each component effect of CCH in Table 3. Without the guided description, the model lacks the target fashion and results in a poor FA. This further highlights the importance of textual guidance for controllable human generation. When applying the traditional training (Reed et al., 2016), conditional GAN is insufficient to extract fashion semantics for effective T3H (not good enough CLIP-S). 
On the other hand, our cross-modal attention constructs a better fusion between fashion patterns and volume rendering, facilitating a significant improvement in depicting the desired human appearance. Moreover, semantic discrimination benefits fine-grained alignment and leads to comprehensive advancement.\nFine-tune CLIP-S as Evaluator. CLIP has shown promising text-visual alignment, which can calculate feature similarity between the generated human and the given text as CLIP-S (Hessel et al., 2021). Since our T3H is in a specific fashion domain, we consider the larger-scaled trained checkpoint from OpenCLIP (Ilharco et al., 2021) and fine-tune it as a more precise evaluator. Table 4 presents text-tofashion retrieval results, where a higher recall leads to a better alignment. Whether the original CLIP or OpenCLIP, both result in poor performance and is insufficient for our evaluation. By perceiving Deep-Fashion, fine-tuning helps bring reliable alignment and is treated as the final evaluator.\nHuman Evaluation. Apart from automatic metrics, we conduct the human evaluation with aspects of 3D quality and fashion relevance. We randomly sample 75 T3H results and consider MTurk 2 to rank over baselines and our CCH. To avoid the potential ranking bias, we hire 3 MTurkers for each example.\nTable 5 shows the mean ranking score (from 1 to 4, the higher is the better). CLIP-O and CCH are built upon EVA3D, which provides an articulate human body for superior 3D quality. Even if Latent-NeRF and TEXTure take pre-trained diffusion models to acquire visual guidance, CCH exhibits more corresponding fashion via cross-modal fusion. This performance trend is similar to our evaluation, which supports the usage of CLIP-S and FA as metrics.\nInference Efficiency. In addition to T3H quality, 2 Amazon MTurk: https://www.mturk.com our CCH also contains a higher efficiency. Table 6 shows the inference time and GPU cost on a single NVIDIA TITAN RTX. All baselines take more than 100 seconds since they require multiple iterations to optimize the 3D model from an external alignment.\nIn contrast, we extract fashion semantics and carry out T3H in one shot. Without updating the model, we save the most GPU memory. In summary, CCH surpasses baselines on both quality and efficiency, leading to an effective and practical T3H." }, { "figure_ref": [ "fig_3", "fig_4", "fig_6", "fig_5" ], "heading": "Qualitative Results.", "publication_ref": [ "b88" ], "table_ref": [], "text": "We demonstrate the qualitative comparison of poseguided T3H in Fig. 3. Although Latent-NeRF can portray the 3D human based on the given mesh, it only presents inauthentic rendering. TEXTure generates concrete humans, but there are still obvious cracks and inconsistent textures from different angles of view. Moreover, both of them fail to capture \"three-point\", where the rendered lower clothing is incorrectly depicted as long pants. Because CLIP provides an overall but inexplicit alignment to the description, CLIP-O is limited and exhibits vague \"denim\" or \"long-sleeved\". This observation further indicates the flaw of CLIP in detailed fashion patterns, even if it has been fine-tuned on the target dataset. In contrast, our CCH adopts cross-modal attention with NeRF, contributing to high-quality T3H with fine-grained fashion controllability. Fig. 4 shows the pose-free results. Texformer relies on a 2D image to estimate its 3D texture. 
Despite containing the assigned fashion, it is still restricted by the capability of 3D reconstruction, resulting in a low-resolution rendering. By learning text-to-3D directly, CCH can produce textual-related humans from random poses with clear visual patterns.\nPose-control T3H. Since our CCH is generating 3D humans from given SMPL parameters, as illustrated in Fig. 6, we can further control T3H with a specific pose. Different fashion descriptions make a human body present diverse appearances; different poses then guide the character to express rich body language. This flexibility in controlling appearance and pose allows for better practical customization.\nAnimatable T3H. In addition to static poses, CCH can benefit from dynamic motions to achieve animatable T3H. Fig. 5 adopts MotionDiffuse (Zhang et al., 2022) to create the assigned action also from the text and apply it to our produced 3D models. In this way, we prompt them to \"raise arms\" or \"walk\" for favorable dynamic scenarios." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We present text-guided 3D human generation (T3H) to create a 3D human by a fashion description. To learn this from 2D collections, we introduce Compositional Cross-modal Human (CCH). With crossmodal attention, CCH fuses compositional human rendering and textual semantics to build a concrete body architecture with the corresponding fashion. Experiments across various fashion attributes show that CCH effectively carries out T3H with high efficiency. We believe T3H helps advance a new field toward vision-and-language research.\nEthics Discussion and Limitation. Our work enhances the controllability of 3D human generation.\nTo prevent identity leakage, we blur out faces prior to training and avoid risks similar to DeepFake (Korshunov and Marcel, 2018). Because we depend on SMPL parameters, an inaccurate estimation causes a distribution shift and quality degradation. For the datasets, they reveal narrow viewing angles, which results in visible artifacts of 3D consistency." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements. We appreciate the anonymous reviewers for constructive feedback. The research presented in this work was funded by Meta AI. The views expressed are those of the authors and do not reflect the official policy or position of the funding agency." } ]
3D human modeling has been widely used for engaging interaction in gaming, film, and animation. The customization of these characters is crucial for creativity and scalability, which highlights the importance of controllability. In this work, we introduce Text-guided 3D Human Generation (T3H), where a model is to generate a 3D human, guided by the fashion description. There are two goals: 1) the 3D human should render articulately, and 2) its outfit is controlled by the given text. To address this T3H task, we propose Compositional Cross-modal Human (CCH). CCH adopts cross-modal attention to fuse compositional human rendering with the extracted fashion semantics. Each human body part perceives relevant textual guidance as its visual patterns. We incorporate the human prior and semantic discrimination to enhance 3D geometry transformation and fine-grained consistency, enabling this to learn from 2D collections for data efficiency. We conduct evaluations on DeepFashion and SHHQ with diverse fashion attributes covering the shape, fabric, and color of upper and lower clothing. Extensive experiments demonstrate that CCH achieves superior results for T3H with high efficiency.
Text-guided 3D Human Generation from 2D Collections
[ { "figure_caption": "Figure 1 :1Figure 1: Text-guided 3D Human Generation (T3H).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Compositional Cross-modal Human (CCH). CCH extracts fashion semantics from the description and adopts cross-modal attention in compositional body volumes for controllable 3D human rendering. The human prior (SMPL) provides robust geometry transformation, enabling CCH to learn from 2D collections for data efficiency. The semantic discrimination further helps find-grained consistency through adversarial training.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Qualitative comparison of pose-guided T3H.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Qualitative comparison of pose-free T3H.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Qualitative examples of animatable T3H, where the motion is also controlled by the text.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Qualitative examples of pose-control T3H.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Qualitative examples of our fashion classifier, which provides fine-grained labels for real/fake humans.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Overall results of pose-guided T3H.", "figure_data": "DeepFashionSHHQMethodFID↓ Depth↓ PCK↑ CLIP-S↑FA↑FID↓ Depth↓ PCK↑ CLIP-S↑FA↑Latent-NeRF69.654 0.0298 74.21122.50065.88372.256 0.0381 73.40122.21067.427TEXTure37.058 0.0165 86.35423.38567.50848.618 0.0216 85.50224.45668.233CLIP-O25.488 0.0133 87.89221.88761.96434.212 0.0164 87.31221.40166.808CCH21.136 0.0121 88.35525.02372.03832.858 0.0165 87.62427.85576.194", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Overall results of pose-free T3H.", "figure_data": "DeepFashionMethodFID↓ CLIP-S↑FA↑Texformer45.84420.54666.679CLIP-O25.57920.11261.298CCH21.35524.92070.771Ablation SettingsDeepFashionText CA SDFID↓ CLIP-S↑FA↑✗✗✗25.6719.63236.634✓✗✗24.62421.07969.173✓✓✗21.96624.10380.028✓✓✓21.27525.21180.776", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study (256x128) of Cross-modal Attention (CA) and Semantic Discrimination (SD).", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Text-to-Fashion retrieval (sample 500 pairs) by CLIP with different fine-tunings (FT.).", "figure_data": "DeepFashionTrainFT.R1R5 R10OpenAI-400M ✗4.2 20.4 28.0LAION-2B✗13.4 33.4 46.4LAION-2B✓45.0 83.0 93.8DeepFashionMethodQuality RelevanceLatent-NeRF1.822.37TEXTure2.382.51CLIP-O2.932.20CCH2.872.92", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Human evaluation for T3H with aspects of 3D quality and fashion relevance.", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Time and GPU cost to perform T3H.", "figure_data": "", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
Tsu-Jui Fu; Wenhan Xiong; Yixin Nie; Jingyu Liu; Barlas Oguz; William Yang Wang
[ { "authors": "Mykhaylo Andriluka; Leonid Pishchulin; Peter Gehler; Bernt Schiele", "journal": "", "ref_id": "b0", "title": "2D Human Pose Estimation: New Benchmark and State of the Art Analysis", "year": "2014" }, { "authors": "Jonathan T Barron; Ben Mildenhall; Matthew Tancik; Peter Hedman; Ricardo Martin-Brualla; Pratul P Srinivasan", "journal": "", "ref_id": "b1", "title": "Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields", "year": "2022" }, { "authors": "Federica Bogo; Angjoo Kanazawa; Christoph Lassner; Peter Gehler; Javier Romero; Michael J Black", "journal": "", "ref_id": "b2", "title": "Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image", "year": "2016" }, { "authors": "Zhe Cao; Gines Hidalgo; Tomas Simon; Shih-En Wei; Yaser Sheikh", "journal": "", "ref_id": "b3", "title": "OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields", "year": "2019" }, { "authors": "Eric R Chan; Connor Z Lin; Matthew A Chan; Koki Nagano; Boxiao Pan; Shalini De Mello; Orazio Gallo; Leonidas Guibas; Jonathan Tremblay; Sameh Khamis; Tero Karras; Gordon Wetzstein", "journal": "", "ref_id": "b4", "title": "Efficient Geometry-aware 3D Generative Adversarial Networks", "year": "2022" }, { "authors": "Eric R Chan; Marco Monteiro; Petr Kellnhofer; Jiajun Wu; Gordon Wetzstein", "journal": "", "ref_id": "b5", "title": "pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis", "year": "2021" }, { "authors": "Mingfei Chen; Jianfeng Zhang; Xiangyu Xu; Lijuan Liu; Yujun Cai; Jiashi Feng; Shuicheng Yan", "journal": "", "ref_id": "b6", "title": "Geometry-Guided Progressive NeRF for Generalizable and Efficient Neural Human Rendering", "year": "2022" }, { "authors": "Zhiqin Chen; Hao Zhang", "journal": "", "ref_id": "b7", "title": "Learning Implicit Fields for Generative Shape Modeling", "year": "2019" }, { "authors": "Alvaro Collet; Ming Chuang; Pat Sweeney; Don Gillett; Dennis Evseev; David Calabrese; Hugues Hoppe; Adam Kirk; Steve Sullivan", "journal": "", "ref_id": "b8", "title": "High-quality Streamable Free-viewpoint Video", "year": "2015" }, { "authors": "Ming Ding; Zhuoyi Yang; Wenyi Hong; Wendi Zheng; Chang Zhou; Da Yin; Junyang Lin; Xu Zou; Zhou Shao; Hongxia Yang; Jie Tang", "journal": "", "ref_id": "b9", "title": "CogView: Mastering Text-to-Image Generation via Transformers", "year": "2021" }, { "authors": "Alaaeldin El-Nouby; Shikhar Sharma; Hannes Schulz; Devon Hjelm; Layla El Asri; Samira Ebrahimi Kahou; Yoshua Bengio; Graham W Taylor", "journal": "", "ref_id": "b10", "title": "Tell, Draw, and Repeat: Generating and Modifying Images Based on Continual Linguistic Instruction", "year": "2019" }, { "authors": "Patrick Esser; Robin Rombach; Björn Ommer", "journal": "", "ref_id": "b11", "title": "Taming Transformers for High-Resolution Image Synthesis", "year": "2021" }, { "authors": "Weixi Feng; Xuehai He; Tsu-Jui Fu; Varun Jampani; Arjun Reddy Akula; Pradyumna Narayana; Sugato Basu; Xin ; Eric Wang; William Yang; Wang ", "journal": "", "ref_id": "b12", "title": "Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis", "year": "2023" }, { "authors": "Jianglin Fu; Shikai Li; Yuming Jiang; Kwan-Yee Lin; Chen Qian; Chen Change Loy; Wayne Wu; Ziwei Liu", "journal": "", "ref_id": "b13", "title": "a. 
StyleGAN-Human: A Data-Centric Odyssey of Human Generation", "year": "2022" }, { "authors": "Tsu-Jui Fu; Xin ; Eric Wang; Scott Grafton; Miguel Eckstein; William Yang; Wang ", "journal": "", "ref_id": "b14", "title": "SSCR: Iterative Language-Based Image Editing via Self-Supervised Counterfactual Reasoning", "year": "2020" }, { "authors": "Tsu-Jui Fu; Xin ; Eric Wang; Scott Grafton; Miguel Eckstein; William Yang; Wang ", "journal": "", "ref_id": "b15", "title": "M 3 L: Language-based Video Editing via Multi-Modal Multi-Level Transformer", "year": "2022" }, { "authors": "Tsu-Jui Fu; Xin ; Eric Wang; William Yang; Wang ", "journal": "", "ref_id": "b16", "title": "Language-Driven Artistic Style Transfer", "year": "2022" }, { "authors": "Tsu-Jui Fu; Licheng Yu; Ning Zhang; Cheng-Yang Fu; Jong-Chyi Su; William Yang; Wang ; Sean Bell", "journal": "", "ref_id": "b17", "title": "Tell Me What Happened: Unifying Textguided Video Completion via Multimodal Masked Video Generation", "year": "2023" }, { "authors": "Lin Gao; Jie Yang; Tong Wu; Yu-Jie Yuan; Hongbo Fu; Yu-Kun Lai; Hao Zhang", "journal": "", "ref_id": "b18", "title": "SDM-NET: Deep Generative Network for Structured Deformable Mesh", "year": "2019" }, { "authors": "Xiangjun Gao; Jiaolong Yang; Jongyoo Kim; Sida Peng; Zicheng Liu; Xin Tong", "journal": "", "ref_id": "b19", "title": "MPS-NeRF: Generalizable 3D Human Rendering from Multiview Images", "year": "2022" }, { "authors": "L Thiago; Thiago M Gomes; Rafael Coutinho; Renato Azevedo; Erickson R Martins; Nascimento", "journal": "", "ref_id": "b20", "title": "Creating and Reenacting Controllable 3D Humans with Differentiable Rendering", "year": "2022" }, { "authors": "Ian J Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "", "ref_id": "b21", "title": "Generative Adversarial Networks", "year": "2015" }, { "authors": "Amos Gropp; Lior Yariv; Niv Haim; Matan Atzmon; Yaron Lipman", "journal": "", "ref_id": "b22", "title": "Implicit Geometric Regularization for Learning Shapes", "year": "2020" }, { "authors": "Jiatao Gu; Lingjie Liu; Peng Wang; Christian Theobalt", "journal": "", "ref_id": "b23", "title": "StyleNeRF: A Style-based 3D-Aware Generator for High-resolution Image Synthesis", "year": "2022" }, { "authors": "Kaiwen Guo; Peter Lincoln; Philip Davidson; Jay Busch; Xueming Yu; Matt Whalen; Geoff Harvey; Sergio Orts-Escolano; Rohit Pandey; Jason Dourgarian; Matthew Duvall; Danhang Tang; Anastasia Tkach; Adarsh Kowdle; Emily Cooper; Mingsong Dou; Sean Fanellov; Graham Fyffe; Christoph Rhemannv; Jonathan Taylor; Paul Debevec; Shahram Izadi", "journal": "", "ref_id": "b24", "title": "The Relightables: Volumetric Performance Capture of Humans with Realistic Relighting", "year": "2019" }, { "authors": "Paul Henderson; Vagia Tsiminaki; Christoph H Lampert", "journal": "", "ref_id": "b25", "title": "Leveraging 2D Data to Learn Textured 3D Mesh Generation", "year": "2020" }, { "authors": "Jack Hessel; Ari Holtzman; Maxwell Forbes; Ronan Le Bras; Yejin Choi", "journal": "", "ref_id": "b26", "title": "CLIPScore: A Reference-free Evaluation Metric for Image Captioning", "year": "2021" }, { "authors": "Martin Heusel; Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "", "ref_id": "b27", "title": "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium", "year": "2017" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "", "ref_id": "b28", 
"title": "Denoising Diffusion Probabilistic Models", "year": "2020" }, { "authors": "Fangzhou Hong; Zhaoxi Chen; Yushi Lan; Liang Pan; Ziwei Liu", "journal": "", "ref_id": "b29", "title": "EVA3D: Compositional 3D Human Generation from 2D Image Collections", "year": "2023" }, { "authors": "Fangzhou Hong; Mingyuan Zhang; Liang Pan; Zhongang Cai; Lei Yang; Ziwei Liu", "journal": "", "ref_id": "b30", "title": "Avatar-CLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars", "year": "2022" }, { "authors": "Gabriel Ilharco; Mitchell Wortsman; Ross Wightman; Cade Gordon; Nicholas Carlini; Rohan Taori; Achal Dave; Vaishaal Shankar; Hongseok Namkoong; John Miller; Hannaneh Hajishirzi; Ali Farhadi; Ludwig Schmidt", "journal": "", "ref_id": "b31", "title": "OpenCLIP", "year": "2021" }, { "authors": "Ajay Jain; Ben Mildenhall; Jonathan T Barron; Pieter Abbeel; Ben Poole", "journal": "", "ref_id": "b32", "title": "Zero-Shot Text-Guided Object Generation with Dream Fields", "year": "2022" }, { "authors": "Yuming Jiang; Shuai Yang; Haonan Qiu; Wayne Wu; Chen Change Loy; Ziwei Liu", "journal": "", "ref_id": "b33", "title": "Text2Human: Text-Driven Controllable Human Image Generation", "year": "2022" }, { "authors": "Mohammad Nasir; Tianhao Khalid; Eugene Xie; Tiberiu Belilovsky; Popa", "journal": "", "ref_id": "b34", "title": "CLIP-Mesh: Generating textured meshes from text using pretrained image-text models", "year": "2022" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b35", "title": "Adam: A Method for Stochastic Optimization", "year": "2015" }, { "authors": "Nikos Kolotouros; Georgios Pavlakos; Michael J Black; Kostas Daniilidis", "journal": "", "ref_id": "b36", "title": "Learning to Reconstruct 3D Human Pose and Shape via Model-fitting in the Loop", "year": "2019" }, { "authors": "Pavel Korshunov; Sebastien Marcel", "journal": "", "ref_id": "b37", "title": "Deep-Fakes: a New Threat to Face Recognition? 
Assessment and Detection", "year": "2018" }, { "authors": "Chun-Liang Li; Manzil Zaheer; Yang Zhang; Barnabas Poczos; Ruslan Salakhutdinov", "journal": "", "ref_id": "b38", "title": "Point Cloud GAN", "year": "2018" }, { "authors": "Jun Li; Kai Xu; Siddhartha Chaudhuri; Ersin Yumer; Hao Zhang; Leonidas Guibas", "journal": "", "ref_id": "b39", "title": "GRASS: Generative Recursive Autoencoders for Shape Structures", "year": "2017" }, { "authors": "Yitong Li; Martin Renqiang Min; Dinghan Shen; David Carlson; Lawrence Carin", "journal": "", "ref_id": "b40", "title": "Video Generation From Text", "year": "2018" }, { "authors": "Ziwei Liu; Ping Luo; Shi Qiu; Xiaogang Wang; Xiaoou Tang", "journal": "", "ref_id": "b41", "title": "DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations", "year": "2016" }, { "authors": "Stephen Lombardi; Tomas Simon; Gabriel Schwartz; Michael Zollhoefer; Yaser Sheikh; Jason Saragih", "journal": "", "ref_id": "b42", "title": "Mixture of Volumetric Primitives for Efficient Neural Rendering", "year": "2021" }, { "authors": "Andrew Luo; Tianqin Li; Wen-Hao Zhang; Tai Sing; Lee ", "journal": "", "ref_id": "b43", "title": "SurfGen: Adversarial 3D Shape Synthesis with Explicit Surface Discriminators", "year": "2021" }, { "authors": "Tanya Marwah; Gaurav Mittal; N Vineeth", "journal": "", "ref_id": "b44", "title": "Attentive Semantic Video Generation using Captions", "year": "2017" }, { "authors": "Lars Mescheder; Andreas Geiger; Sebastian Nowozin", "journal": "", "ref_id": "b45", "title": "Which Training Methods for GANs do actually Converge", "year": "2018" }, { "authors": "Gal Metzer; Elad Richardson; Or Patashnik; Raja Giryes; Daniel Cohen-Or", "journal": "", "ref_id": "b46", "title": "Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures", "year": "2023" }, { "authors": "Oscar Michel; Roi Bar-On; Richard Liu; Sagie Benaim; Rana Hanocka", "journal": "", "ref_id": "b47", "title": "Text2Mesh: Text-Driven Neural Stylization for Meshes", "year": "2022" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "", "ref_id": "b48", "title": "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis", "year": "2020" }, { "authors": " Mmhuman3d", "journal": "", "ref_id": "b49", "title": "OpenMMLab 3D Human Parametric Model Toolbox and Benchmark", "year": "2021" }, { "authors": "Thomas Muller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "", "ref_id": "b50", "title": "Instant Neural Graphics Primitives with a Multiresolution Hash Encoding", "year": "2022" }, { "authors": "Gimin Nam; Mariem Khlifi; Andrew Rodriguez; Alberto Tono; Linqi Zhou; Paul Guerrero", "journal": "", "ref_id": "b51", "title": "3D-LDM: Neural Implicit 3D Shape Generation with Latent Diffusion Models", "year": "2022" }, { "authors": "Charlie Nash; Yaroslav Ganin; Ali Eslami; Peter W Battaglia", "journal": "", "ref_id": "b52", "title": "PolyGen: An Autoregressive Generative Model of 3D Meshes", "year": "2020" }, { "authors": "Atsuhiro Noguchi; Xiao Sun; Stephen Lin; Tatsuya Harada", "journal": "", "ref_id": "b53", "title": "Unsupervised Learning of Efficient Geometry-Aware Neural Articulated Representations", "year": "2022" }, { "authors": "Roy Or-El; Xuan Luo; Mengyi Shan; Eli Shechtman; Jeong Joon Park; Ira Kemelmacher-Shlizerman", "journal": "", "ref_id": "b54", "title": "StyleSDF: High-Resolution 3D-Consistent Image and Geometry Generation", "year": "2022" }, { 
"authors": "Jeong Joon Park; Peter Florence; Julian Straub; Richard Newcombe; Steven Lovegrove", "journal": "", "ref_id": "b55", "title": "DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation", "year": "2019" }, { "authors": "Adam Paszke; Sam Gross; Soumith Chintala; Gregory Chanan; Edward Yang; Zachary Devito; Zeming Lin; Alban Desmaison; Luca Antiga; Adam Lerer", "journal": "", "ref_id": "b56", "title": "Automatic differentiation in PyTorch", "year": "2017" }, { "authors": "Georgios Pavlakos; Vasileios Choutas; Nima Ghorbani; Ahmed A Timo Bolkart; A Osman; Dimitrios Tzionas; Michael J Black", "journal": "", "ref_id": "b57", "title": "Expressive Body Capture: 3D Hands, Face, and Body from a Single Image", "year": "2019" }, { "authors": "Sida Peng; Junting Dong; Qianqian Wang; Shangzhan Zhang; Qing Shuai; Xiaowei Zhou; Hujun Bao", "journal": "", "ref_id": "b58", "title": "Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies", "year": "2021" }, { "authors": "Sida Peng; Yuanqing Zhang; Yinghao Xu; Qianqian Wang; Qing Shuai; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b59", "title": "Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans", "year": "2021" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "", "ref_id": "b60", "title": "DreamFusion: Text-to-3D using 2D Diffusion", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b61", "title": "Learning Transferable Visual Models From Natural Language Supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b62", "title": "Hierarchical Text-Conditional Image Generation with CLIP Latents", "year": "2022" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Goh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "", "ref_id": "b63", "title": "Zero-Shot Text-to-Image Generation", "year": "2021" }, { "authors": "René Ranftl; Katrin Lasinger; David Hafner; Konrad Schindler; Vladlen Koltun", "journal": "", "ref_id": "b64", "title": "Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer", "year": "2020" }, { "authors": "Scott Reed; Zeynep Akata; Xinchen Yan; Lajanugen Logeswaran; Bernt Schiele; Honglak Lee", "journal": "", "ref_id": "b65", "title": "Generative Adversarial Text to Image Synthesis", "year": "2016" }, { "authors": "Elad Richardson; Gal Metzer; Yuval Alaluf; Raja Giryes; Daniel Cohen-Or", "journal": "", "ref_id": "b66", "title": "TEXTure: Text-Guided Texturing of 3D Shapes", "year": "2023" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily Denton; Seyed Kamyar; Seyed Ghasemipour; Burcu Karagol Ayan; S Sara Mahdavi; Rapha Gontijo Lopes; Tim Salimans; Jonathan Ho; David J Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b67", "title": "Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding", "year": "2022" }, { "authors": "Katja Schwarz; Yiyi Liao; Michael Niemeyer; Andreas Geiger", "journal": "", "ref_id": "b68", "title": "GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis", "year": "2021" }, { "authors": "Junyoung Seo; Wooseok Jang; Min-Seop Kwak; Jaehoon Ko; Hyeonsu Kim; 
Junho Kim; Jin-Hwa Kim; Jiyoung Lee; Seungryong Kim", "journal": "", "ref_id": "b69", "title": "Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation", "year": "2023" }, { "authors": "Aliaksandra Shysheya; Egor Zakharov; Kara-Ali Aliev; Renat Bashirov; Egor Burkov; Karim Iskakov; Aleksei Ivakhnenko; Yury Malkov; Igor Pasechnik; Dmitry Ulyanov; Alexander Vakhitov; Victor Lempitsky", "journal": "", "ref_id": "b70", "title": "Textured Neural Avatars", "year": "2019" }, { "authors": " Vincent Sitzmann; N P Julien; Alexander W Martel; David B Bergman; Gordon Lindell; Wetzstein", "journal": "", "ref_id": "b71", "title": "Implicit Neural Representations with Periodic Activation Functions", "year": "2020" }, { "authors": "Ivan Skorokhodov; Aliaksandr Siarohin; Yinghao Xu; Jian Ren; Hsin-Ying Lee; Peter Wonka; Sergey Tulyakov", "journal": "", "ref_id": "b72", "title": "3D generation on ImageNet", "year": "2023" }, { "authors": "Shih-Yang Su; Frank Yu; Michael Zollhoefer; Helge Rhodin", "journal": "", "ref_id": "b73", "title": "A-NeRF: Articulated Neural Radiance Fields for Learning Human Shape, Appearance, and Pose", "year": "2021" }, { "authors": "Zhuo Su; Lan Xu; Zerong Zheng; Tao Yu; Yebin Liu; Lu Fang", "journal": "", "ref_id": "b74", "title": "RobustFusion: Human Volumetric Capture with Data-driven Visual Cues using a RGBD Camera", "year": "2020" }, { "authors": "Junshu Tang; Tengfei Wang; Bo Zhang; Ting Zhang; Ran Yi; Lizhuang Ma; Dong Chen", "journal": "", "ref_id": "b75", "title": "Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior", "year": "2023" }, { "authors": "Maxim Tatarchenko; Alexey Dosovitskiy; Thomas Brox", "journal": "", "ref_id": "b76", "title": "Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs", "year": "2017" }, { "authors": "Can Wang; Menglei Chai; Mingming He; Dongdong Chen; Jing Liao", "journal": "", "ref_id": "b77", "title": "CLIP-NeRF: Textand-Image Driven Manipulation of Neural Radiance Fields", "year": "2022" }, { "authors": "Haochen Wang; Xiaodan Du; Jiahao Li; Raymond A Yeh; Greg Shakhnarovich", "journal": "", "ref_id": "b78", "title": "Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation", "year": "2022" }, { "authors": "Jianfeng Wang; Zhengyuan Yang; Xiaowei Hu; Linjie Li; Kevin Lin; Zhe Gan; Zicheng Liu; Ce Liu; Lijuan Wang", "journal": "", "ref_id": "b79", "title": "GIT: A Generative Image-to-text Transformer for Vision and Language", "year": "2022" }, { "authors": "Chung-Yi Weng; Brian Curless; P Pratul; Jonathan T Srinivasan; Ira Barron; Kemelmacher-Shlizerman", "journal": "", "ref_id": "b80", "title": "HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video", "year": "2022" }, { "authors": "Tao Xu; Pengchuan Zhang; Qiuyuan Huang; Han Zhang; Zhe Gan; Xiaolei Huang; Xiaodong He", "journal": "", "ref_id": "b81", "title": "AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks", "year": "2018" }, { "authors": "Xiangyu Xu; Chen Change Loy", "journal": "", "ref_id": "b82", "title": "3D Human Texture Estimation from a Single Image with Transformers", "year": "2021" }, { "authors": "Guandao Yang; Xun Huang; Zekun Hao; Ming-Yu Liu; Serge Belongie; Bharath Hariharan", "journal": "", "ref_id": "b83", "title": "Point-Flow: 3D Point Cloud Generation with Continuous Normalizing Flows", "year": "2019" }, { "authors": "Zhuoqian Yang; Shikai Li; Wayne Wu; Bo Dai", "journal": "", "ref_id": "b84", 
"title": "3DHumanGAN: Towards Photo-Realistic 3D-Aware Human Image Generation", "year": "2022" }, { "authors": "Lior Yariv; Jiatao Gu; Yoni Kasten; Yaron Lipman", "journal": "", "ref_id": "b85", "title": "Volume Rendering of Neural Implicit Surfaces", "year": "2021" }, { "authors": "Jae Shin Yoon; Lingjie Liu; Vladislav Golyanik; Kripasindhu Sarkar; Hyun Soo Park; Christian Theobalt", "journal": "", "ref_id": "b86", "title": "Pose-Guided Human Animation from a Single Image in the Wild", "year": "2021" }, { "authors": "Jason Y Zhang; Gengshan Yang; Shubham Tulsiani; Deva Ramanan", "journal": "", "ref_id": "b87", "title": "NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild", "year": "2021" }, { "authors": "Mingyuan Zhang; Zhongang Cai; Liang Pan; Fangzhou Hong; Xinying Guo; Lei Yang; Ziwei Liu", "journal": "", "ref_id": "b88", "title": "MotionDiffuse: Text-Driven Human Motion Generation with Diffusion", "year": "2022" }, { "authors": "Xin-Yang Zheng; Yang Liu; Peng-Shuai Wang; Xin Tong", "journal": "Computer Graphics Forum", "ref_id": "b89", "title": "SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation", "year": "2022" }, { "authors": "Zerong Zheng; Tao Yu; Yebin Liu; Qionghai Dai", "journal": "Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b90", "title": "PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 333.92, 326.1, 191.22, 60.05 ], "formula_id": "formula_0", "formula_text": "T (t) = exp(- t tn σ(r(s))ds), C(r) = t f tn T (t)σ(r(t))c(r(t), d)dt,(1)" }, { "formula_coordinates": [ 3, 333.51, 761.29, 192.81, 15.05 ], "formula_id": "formula_1", "formula_text": "t i ∼ U t n + i-1 N (t f -t n ), t n + i N (t f -t n ) ." }, { "formula_coordinates": [ 4, 90.14, 157.86, 199.73, 99.78 ], "formula_id": "formula_2", "formula_text": "g k = 1 ||r(t i ) -v k || , M k = K k=1 g k H k I B P k + B S k 0 I , x i 1 = v k ∈N g k v k ∈N g k (M k ) -1 r(t i ) 1 ,(2)" }, { "formula_coordinates": [ 4, 120.74, 354.4, 169.13, 46.6 ], "formula_id": "formula_3", "formula_text": "xb i = 2x i -(o b max + o b min ) o b max -o b min , f b i = Linear(x b i , d i ),(3)" }, { "formula_coordinates": [ 4, 116.56, 489.21, 173.3, 68.57 ], "formula_id": "formula_4", "formula_text": "p l = exp(f b i W b w T l ) L ι=1 exp(f b i W b w T ι ) , CA(f b i | {w}) = L l=1 p l w l ,(4)" }, { "formula_coordinates": [ 4, 81.37, 705.58, 208.5, 65.4 ], "formula_id": "formula_5", "formula_text": "{c b i , σ b i } = F b (CA(x b i , d i | {w})), u b = exp(-m(x b i (x) n + xb i (y) n + xb i (z) n )), {c i , σ i } = 1 b∈B u b b∈B u b {c b i , σ b i },(5)" }, { "formula_coordinates": [ 4, 331.84, 602.55, 189.06, 50.57 ], "formula_id": "formula_6", "formula_text": "{e i,j } = Conv(S), Q i,j = L l=1 exp(e i,j W w T l ) L ι=1 exp(e i,j W w T ι ) w l , (6" }, { "formula_coordinates": [ 4, 520.9, 631.12, 4.24, 9.46 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 4, 343.48, 745.04, 181.66, 9.81 ], "formula_id": "formula_8", "formula_text": "D(R | T ) = BC([Conv(R), Q]).(7)" }, { "formula_coordinates": [ 5, 84.94, 259.54, 204.93, 43.8 ], "formula_id": "formula_9", "formula_text": "U (u) = -log(1 + exp(-u)), L adv = U (G(β, θ | T ) | T ) (8) + U (-D(V | T )) + λ|∇D(V | T )| 2 ." }, { "formula_coordinates": [ 5, 130.37, 382.53, 159.49, 31.88 ], "formula_id": "formula_10", "formula_text": "L off = ||∆d(x)|| 2 2 , L eik = ||∇(∆d(x))|| 2 2 .(9)" }, { "formula_coordinates": [ 5, 92.32, 453.33, 197.55, 26.11 ], "formula_id": "formula_11", "formula_text": "L all = L adv + 1.5 • L off + 0.5 • L eik ,(10) min" } ]
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b39", "b61", "b42", "b60", "b58", "b36", "b42", "b1", "b3", "b56", "b13", "b12", "b17", "b65", "b64", "b27" ], "table_ref": [], "text": "Finetuning large language models (LLMs) is a highly effective way to improve their performance, [40,62,43,61,59,37] and to add desirable or remove undesirable behaviors [43,2,4]. However, finetuning very large models is prohibitively expensive; regular 16-bit finetuning of a LLaMA 65B parameter model [57] requires more than 780 GB of GPU memory. While recent quantization methods can reduce the memory footprint of LLMs [14,13,18,66], such techniques only work for inference and break down during training [65].\nWe demonstrate for the first time that it is possible to finetune a quantized 4-bit model without any performance degradation. Our method, QLORA, uses a novel high-precision technique to quantize a pretrained model to 4-bit, then adds a small set of learnable Low-rank Adapter weights [28] Table 1: Elo ratings for a competition between models, averaged for 10,000 random initial orderings. The winner of a match is determined by GPT-4 which declares which response is better for a given prompt of the the Vicuna benchmark. 95% confidence intervals are shown (±). After GPT-4, Guanaco 33B and 65B win the most matches, while Guanaco 13B scores better than Bard." }, { "figure_ref": [], "heading": "Model", "publication_ref": [ "b9", "b15", "b16", "b63" ], "table_ref": [ "tab_6" ], "text": "Size QLORA reduces the average memory requirements of finetuning a 65B parameter model from >780GB of GPU memory to <48GB without degrading the runtime or predictive performance compared to a 16bit fully finetuned baseline. This marks a significant shift in accessibility of LLM finetuning: now the largest publicly available models to date finetunable on a single GPU. Using QLORA, we train the Guanaco family of models, with the second best model reaching 97.8% of the performance level of ChatGPT on the Vicuna [10] benchmark, while being trainable in less than 12 hours on a single consumer GPU; using a single professional GPU over 24 hours we achieve 99.3% with our largest model, essentially closing the gap to ChatGPT on the Vicuna benchmark. When deployed, our smallest Guanaco model (7B parameters) requires just 5 GB of memory and outperforms a 26 GB Alpaca model by more than 20 percentage points on the Vicuna benchmark (Table 6).\nQLORA introduces multiple innovations designed to reduce memory use without sacrificing performance: (1) 4-bit NormalFloat, an information theoretically optimal quantization data type for normally distributed data that yields better empirical results than 4-bit Integers and 4-bit Floats.\n(2) Double Quantization, a method that quantizes the quantization constants, saving an average of about 0.37 bits per parameter (approximately 3 GB for a 65B model).\n(3) Paged Optimizers, using NVIDIA unified memory to avoid the gradient checkpointing memory spikes that occur when processing a mini-batch with a long sequence length. We combine these contributions into a better tuned LoRA approach that includes adapters at every network layer and thereby avoids almost all of the accuracy tradeoffs seen in prior work.\nQLORA's efficiency enables us to perform an in-depth study of instruction finetuning and chatbot performance on model scales that would be impossible using regular finetuning due to memory overhead. 
Therefore, we train more than 1,000 models across several instruction tuning datasets, model architectures, and sizes from 80M to 65B parameters. In addition to showing that QLORA recovers 16-bit performance (§4) and training a state-of-the-art chatbot, Guanaco (§5), we also analyze trends in the trained models. First, we find that data quality is far more important than dataset size, e.g., a 9k sample dataset (OASST1) outperformed a 450k sample dataset (FLAN v2, subsampled) on chatbot performance, even when both are meant to support instruction following generalization. Second, we show that strong Massive Multitask Language Understanding (MMLU) benchmark performance does not imply strong Vicuna chatbot benchmark performance and vice versa; in other words, dataset suitability matters more than size for a given task.
Furthermore, we also provide an extensive analysis of chatbot performance that uses both human raters and GPT-4 for evaluation. We use tournament-style benchmarking where models compete against each other in matches to produce the best response for a given prompt. The winner of a match is judged by either GPT-4 or human annotators. The tournament results are aggregated into Elo scores [16,17], which determine the ranking of chatbot performance. We find that GPT-4 and human evaluations largely agree on the rank of model performance in the tournaments, but we also find there are instances of strong disagreement. As such, we highlight that model-based evaluation, while providing a cheap alternative to human annotation, also has its uncertainties.
We augment our chatbot benchmark results with a qualitative analysis of Guanaco models. Our analysis highlights success and failure cases that were not captured by the quantitative benchmarks.
We release all model generations with human and GPT-4 annotations to facilitate further study. We open-source our codebase and CUDA kernels and integrate our methods into the Hugging Face transformers stack [64], making them easily accessible to all. We release a collection of adapters for 7/13/33/65B size models, trained on 8 different instruction following datasets, for a total of 32 different open sourced, finetuned models. " }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b27", "b27", "b36", "b8" ], "table_ref": [], "text": "Block-wise k-bit Quantization Quantization is the process of discretizing an input from a representation that holds more information to a representation with less information. It often means taking a data type with more bits and converting it to fewer bits, for example from 32-bit floats to 8-bit Integers. To ensure that the entire range of the low-bit data type is used, the input data type is commonly rescaled into the target data type range through normalization by the absolute maximum of the input elements, which are usually structured as a tensor. For example, quantizing a 32-bit Floating Point (FP32) tensor into an Int8 tensor with range [-127, 127]:
\mathbf{X}^{\mathrm{Int8}} = \mathrm{round}\!\left(\frac{127}{\mathrm{absmax}(\mathbf{X}^{\mathrm{FP32}})}\,\mathbf{X}^{\mathrm{FP32}}\right) = \mathrm{round}(c^{\mathrm{FP32}} \cdot \mathbf{X}^{\mathrm{FP32}}), \quad (1)
where c is the quantization constant or quantization scale. Dequantization is the inverse:
\mathrm{dequant}(c^{\mathrm{FP32}}, \mathbf{X}^{\mathrm{Int8}}) = \frac{\mathbf{X}^{\mathrm{Int8}}}{c^{\mathrm{FP32}}} = \mathbf{X}^{\mathrm{FP32}} \quad (2)
The problem with this approach is that if a large magnitude value (i.e., an outlier) occurs in the input tensor, then the quantization bins (certain bit combinations) are not utilized well, with few or no numbers quantized in some bins.
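To make Equations 1 and 2 concrete, the following short NumPy sketch (illustrative only, not the implementation used in the paper) quantizes a tensor with a single absmax constant and shows how one outlier wastes most of the Int8 range:

import numpy as np

def absmax_quantize_int8(x):
    c = 127.0 / np.max(np.abs(x))            # quantization constant c (Equation 1)
    return np.round(c * x).astype(np.int8), c

def dequantize(q, c):
    return q.astype(np.float32) / c          # inverse mapping (Equation 2)

x = np.random.randn(1024).astype(np.float32)
x[0] = 50.0                                  # a single large-magnitude outlier
q, c = absmax_quantize_int8(x)
# With one constant for the whole tensor, the outlier squeezes nearly all other
# values into a few bins around zero, so most of the 256 Int8 bins stay empty.
print(np.unique(q).size)

This is exactly the failure mode that the block-wise scheme described next avoids by giving every block its own constant.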
To prevent the outlier issue, a common approach is to chunk the input tensor into blocks that are independently quantized, each with its own quantization constant c. This can be formalized as follows: We chunk the input tensor X ∈ R^{b×h} into n contiguous blocks of size B by flattening the input tensor and slicing the linear segment into n = (b × h)/B blocks. We quantize these blocks independently with Equation 1 to create a quantized tensor and n quantization constants c_i.
Low-rank Adapters Low-rank Adapter (LoRA) finetuning [28] is a method that reduces memory requirements by using a small set of trainable parameters, often termed adapters, while not updating the full model parameters which remain fixed. Gradients during stochastic gradient descent are passed through the fixed pretrained model weights to the adapter, which is updated to optimize the loss function. LoRA augments a linear projection through an additional factorized projection. Given a projection XW = Y with X ∈ R^{b×h}, W ∈ R^{h×o}, LoRA computes:
\mathbf{Y} = \mathbf{X}\mathbf{W} + s\,\mathbf{X}\mathbf{L}_1\mathbf{L}_2, \quad (3)
where L_1 ∈ R^{h×r} and L_2 ∈ R^{r×o}, and s is a scalar.
Memory Requirement of Parameter-Efficient Finetuning One important point of discussion is the memory requirement of LoRA during training both in terms of the number and size of adapters used. Since the memory footprint of LoRA is so minimal, we can use more adapters to improve performance without significantly increasing the total memory used. While LoRA was designed as a Parameter Efficient Finetuning (PEFT) method, most of the memory footprint for LLM finetuning comes from activation gradients and not from the learned LoRA parameters. For a 7B LLaMA model trained on FLAN v2 with a batch size of 1, with LoRA weights equivalent to the commonly used 0.2% of the original model weights [28,37], the LoRA input gradients have a memory footprint of 567 MB while the LoRA parameters take up only 26 MB. With gradient checkpointing [9], the input gradients reduce to an average of 18 MB per sequence, making them more memory intensive than all LoRA weights combined. In comparison, the 4-bit base model consumes 5,048 MB of memory. This highlights that gradient checkpointing is important, but also that aggressively reducing the number of LoRA parameters yields only minor memory benefits. This means we can use more adapters without significantly increasing the overall training memory footprint (see Appendix G for a detailed breakdown). As discussed later, this is crucial for recovering full 16-bit precision performance." }, { "figure_ref": [], "heading": "QLORA Finetuning", "publication_ref": [], "table_ref": [], "text": "QLORA achieves high-fidelity 4-bit finetuning via two techniques we propose: 4-bit NormalFloat (NF4) quantization and Double Quantization. Additionally, we introduce Paged Optimizers to prevent memory spikes during gradient checkpointing from causing out-of-memory errors that have traditionally made finetuning on a single machine difficult for large models.
QLORA has one low-precision storage data type, in our case usually 4-bit, and one computation data type that is usually BFloat16. In practice, this means whenever a QLORA weight tensor is used, we dequantize the tensor to BFloat16, and then perform a matrix multiplication in 16-bit.
We now discuss the components of QLORA followed by a formal definition of QLORA."
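Before turning to the individual components, the following self-contained PyTorch sketch illustrates this overall pattern for one linear layer: a frozen base weight stored with block-wise quantization constants, dequantized to BFloat16 on use, plus the low-rank adapter path of Equation 3. It is a simplified illustration only (Int8 blocks stand in for the 4-bit data types and the helper names are ours), not the bitsandbytes implementation:

import torch

def blockwise_absmax_quantize(w, block_size=64):
    # Flatten, slice into blocks, and keep one absmax constant per block.
    blocks = w.flatten().float().view(-1, block_size)
    c = 127.0 / blocks.abs().amax(dim=1, keepdim=True)   # per-block constants c_i
    q = torch.round(blocks * c).to(torch.int8)
    return q, c

def dequantize(q, c, shape):
    return (q.float() / c).view(shape).to(torch.bfloat16)

def qlora_style_linear(x, q, c, shape, l1, l2, s=1.0):
    # Dequantize the frozen base weight to the computation data type (BF16),
    # then add the trainable low-rank adapter path Y = XW + s X L1 L2.
    w = dequantize(q, c, shape)
    return x @ w + s * (x @ l1) @ l2

h, o, r = 128, 64, 8
w = torch.randn(h, o)
q, c = blockwise_absmax_quantize(w)
x = torch.randn(2, h, dtype=torch.bfloat16)
l1 = 0.01 * torch.randn(h, r, dtype=torch.bfloat16)
l2 = torch.zeros(r, o, dtype=torch.bfloat16)
y = qlora_style_linear(x, q, c, w.shape, l1, l2)

In the full method, only the adapter matrices l1 and l2 would receive gradients; the quantized base weight stays frozen.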
}, { "figure_ref": [], "heading": "4-bit NormalFloat Quantization", "publication_ref": [ "b14", "b14" ], "table_ref": [], "text": "The NormalFloat (NF) data type builds on Quantile Quantization [15] which is an information-theoretically optimal data type that ensures each quantization bin has an equal number of values assigned from the input tensor. Quantile quantization works by estimating the quantile of the input tensor through the empirical cumulative distribution function.\nThe main limitation of quantile quantization is that the process of quantile estimation is expensive. Therefore fast quantile approximation algorithms, such as SRAM quantiles [15], are used to estimate them. Due to the approximate nature of these quantile estimation algorithms, the data type has large quantization errors for outliers, which are often the most important values.\nExpensive quantile estimates and approximation errors can be avoided when input tensors come from a distribution fixed up to a quantization constant. In such cases, input tensors have the same quantiles making exact quantile estimation computationally feasible.\nSince pretrained neural network weights usually have a zero-centered normal distribution with standard deviation σ (see Appendix F), we can transform all weights to a single fixed distribution by scaling σ such that the distribution fits exactly into the range of our data type. For our data type, we set the arbitrary range [-1, 1]. As such, both the quantiles for the data type and the neural network weights need to be normalized into this range.\nThe information theoretically optimal data type for zero-mean normal distributions with arbitrary standard deviations σ in the range [-1, 1] is computed as follows: (1) estimate the 2 k + 1 quantiles of a theoretical N (0, 1) distribution to obtain a k-bit quantile quantization data type for normal distributions, (2) take this data type and normalize its values into the [-1, 1] range, (3) quantize an input weight tensor by normalizing it into the [-1, 1] range through absolute maximum rescaling.\nOnce the weight range and data type range match, we can quantize as usual.\nStep (3) is equivalent to rescaling the standard deviation of the weight tensor to match the standard deviation of the k-bit data type. More formally, we estimate the 2 k values q i of the data type as follows:\nq i = 1 2 Q X i 2 k + 1 + Q X i + 1 2 k + 1 ,(4)\nwhere Q X (•) is the quantile function of the standard normal distribution N (0, 1). A problem for a symmetric k-bit quantization is that this approach does not have an exact representation of zero, which is an important property to quantize padding and other zero-valued elements with no error. To ensure a discrete zeropoint of 0 and to use all 2 k bits for a k-bit datatype, we create an asymmetric data type by estimating the quantiles q i of two ranges q i : 2 k-1 for the negative part and 2 k-1 + 1 for the positive part and then we unify these sets of q i and remove one of the two zeros that occurs in both sets. We term the resulting data type that has equal expected number of values in each quantization bin k-bit NormalFloat (NFk), since the data type is information-theoretically optimal for zero-centered normally distributed data. The exact values of this data type can be found in Appendix E." 
}, { "figure_ref": [], "heading": "Double Quantization", "publication_ref": [ "b12", "b12" ], "table_ref": [], "text": "We introduce Double Quantization (DQ), the process of quantizing the quantization constants for additional memory savings. While a small blocksize is required for precise 4-bit quantization [13], it also has a considerable memory overhead. For example, using 32-bit constants and a blocksize of 64 for W, quantization constants add 32/64 = 0.5 bits per parameter on average. Double Quantization helps reduce the memory footprint of quantization constants.\nMore specifically, Double Quantization treats quantization constants c FP32 2 of the first quantization as inputs to a second quantization. This second step yields the quantized quantization constants c FP8 2 and the second level of quantization constants c FP32 1 . We use 8-bit Floats with a blocksize of 256 for the second quantization as no performance degradation is observed for 8-bit quantization, in line with results from Dettmers and Zettlemoyer [13]. Since the c FP32 2 are positive, we subtract the mean from c 2 before quantization to center the values around zero and make use of symmetric quantization. On average, for a blocksize of 64, this quantization reduces the memory footprint per parameter from 32/64 = 0.5 bits, to 8/64 + 32/(64 • 256) = 0.127 bits, a reduction of 0.373 bits per parameter.\nPaged Optimizers use the NVIDIA unified memory3 feature wich does automatic page-to-page transfers between the CPU and GPU for error-free GPU processing in the scenario where the GPU occasionally runs out-of-memory. The feature works like regular memory paging between CPU RAM and the disk. We use this feature to allocate paged memory for the optimizer states which are then automatically evicted to CPU RAM when the GPU runs out-of-memory and paged back into GPU memory when the memory is needed in the optimizer update step.\nQLORA. Using the components described above, we define QLORA for a single linear layer in the quantized base model with a single LoRA adapter as follows:\nY BF16 = X BF16 doubleDequant(c FP32 1 , c k-bit 2 , W NF4 ) + X BF16 L BF16 1 L BF16 2 ,(5)\nwhere doubleDequant(•) is defined as:\ndoubleDequant(c FP32 1 , c k-bit 2 , W k-bit ) = dequant(dequant(c FP32 1 , c k-bit 2 ), W 4bit ) = W BF16 ,(6)\nWe use NF4 for W and FP8 for c 2 . We use a blocksize of 64 for W for higher quantization precision and a blocksize of 256 for c 2 to conserve memory.\nFor parameter updates only the gradient with respect to the error for the adapters weights ∂E ∂Li are needed, and not for 4-bit weights ∂E ∂W . However, the calculation of ∂E ∂Li entails the calculation of ∂X ∂W which proceeds via equation ( 5) with dequantization from storage W NF4 to computation data type W BF16 to calculate the derivative ∂X ∂W in BFloat16 precision. To summarize, QLORA has one storage data type (usually 4-bit NormalFloat) and a computation data type . We dequantize the storage data type to the computation data type to perform the forward and backward pass, but we only compute weight gradients for the LoRA parameters which use 16-bit BrainFloat." }, { "figure_ref": [ "fig_1", "fig_1", "fig_2" ], "heading": "QLoRA vs. 
Standard Finetuning", "publication_ref": [ "b57", "b37", "b60", "b48", "b23", "b38", "b54", "b12", "b71", "b56", "b51", "b6", "b27", "b12", "b71", "b51", "b6", "b12", "b17", "b12" ], "table_ref": [ "tab_2", "tab_4" ], "text": "We have discussed how QLoRA works and how it can significantly reduce the required memory for finetuning models. The main question now is whether QLoRA can perform as well as full-model finetuning. Furthermore, we want to analyze the components of QLoRA including the impact of NormalFloat4 over standard Float4. The following sections will discuss the experiments that aimed at answering these questions.\nExperimental setup. We consider three architectures (encoder, encoder-decoder, and decoder only) and compare QLoRA with 16-bit adapter-finetuning and with full-finetuning for models up to 3B. Our evaluations include GLUE [58] with RoBERTa-large [38], Super-NaturalInstructions (TKInstruct) [61] with T5 [49], and 5-shot MMLU [24] after finetuning LLaMA on Flan v2 [39] and Alpaca [55]. To additionally study the advantages of NF4 over other 4-bit data types, we use the setup of Dettmers and Zettlemoyer [13] and measure post-quantization zero-shot accuracy and perplexity across different models (OPT [72], LLaMA [57], BLOOM [52], Pythia [7]) for model sizes 125m -13B. We provide more details in the results section for each particular setup to make the results more readable. Full details in Appendix A. Using LoRA on all transformer layers is critical to match 16-bit performance.\nQ L o R A -A ll Q L o R A -F F N Q L o R A -A t t e n t io\nWhile paged optimizers are critical to do 33B/65B QLORA tuning on a single 24/48GB GPU, we do not provide hard measurements for Paged Optimizers since the paging only occurs when processing mini-batches with long sequence lengths, which is rare. We do, however, perform an analysis of the runtime of paged optimizers for 65B models on 48GB GPUs and find that with a batch size of 16, paged optimizers provide the same training speed as regular optimizers. Future work should measure and characterize under what circumstances slowdowns occur from the paging process.\nDefault LoRA hyperparameters do not match 16bit performance When using the standard practice of applying LoRA to query and value attention projection matrices [28], we are not able to replicate full finetuning performance for large base models. As shown in Figure 2 for LLaMA 7B finetuning on Alpaca, we find that the most critical LoRA hyperparameter is how many LoRA adapters are used in total and that LoRA on all linear transformer block layers are required to match full finetuning performance. Other LoRA hyperparameters, such as the projection dimension r, do not affect performance (see Appendix A). Similarly, we find that default hyperparameters for fully finetuned baselines are undertuned. We do a hyperparameter search over learning rates 1e-6 to 5e-5 and batch sizes 8 to 128 to find robust baselines.\nResults for 7B LLaMA finetuning on Alpaca are shown in Figure 2.\n4-bit NormalFloat yields better performance than 4-bit Floating Point While the 4-bit NormalFloat (NF4) data type is informationtheoretically optimal, it still needs to be determined if this property translates to empirical advantages. We follow the setup from Dettmers and Zettlemoyer [13] where quantized LLMs (OPT [72], BLOOM [52], Pythia [7], LLaMA) of different sizes (125M to 65B) with different data types are evaluated on language modeling and a set of zero-shot tasks. 
As shown in Figure 3, NF4 improves performance significantly over FP4. Prior work has established that 4-bit quantization for inference is possible, but leads to performance degradation relative to 16-bit [13,18]. This raises the crucial question of whether the lost performance can be recovered by conducting 4-bit adapter finetuning. We test this for two setups. The first focuses on a comparison with full 16-bit finetuning of RoBERTa and T5 models sized 125M to 3B parameters on GLUE and the Super-NaturalInstructions dataset. Results are shown in Table 3. In both datasets, we observe that 16-bit, 8-bit, and 4-bit adapter methods replicate the performance of the fully finetuned 16-bit baseline. This suggests that the performance lost due to the imprecise quantization can be fully recovered through adapter finetuning after quantization.
For our second setup, since fully finetuning models at and beyond 11B parameters requires more than one server of high-memory GPUs, we continue to test whether 4-bit QLORA can match 16-bit LoRA at the 7B to 65B parameter scales. To this end, we finetune LLaMA 7B through 65B on two instruction following datasets, Alpaca and FLAN v2, and evaluate on the MMLU benchmark via 5-shot accuracy. Results are shown in Table 4, where we see that NF4 with double quantization fully recovers the 16-bit LoRA MMLU performance. In addition, we also note that QLORA with FP4 lags behind the 16-bit brain float LoRA baseline by about 1 percentage point. This corroborates both of our findings: (1) QLORA with NF4 replicates both 16-bit full finetuning and 16-bit LoRA finetuning performance, and (2) NF4 is superior to FP4 in terms of quantization precision.
Summary Our results consistently show that 4-bit QLORA with the NF4 data type matches 16-bit full finetuning and 16-bit LoRA finetuning performance on academic benchmarks with well-established evaluation setups. We have also shown that NF4 is more effective than FP4 and that double quantization does not degrade performance. Combined, this forms compelling evidence that 4-bit QLORA tuning reliably yields results matching 16-bit methods.
In line with previous work on quantization [13], our MMLU and Elo results indicate that with a given finetuning and inference resource budget it is beneficial to increase the number of parameters in the base model while decreasing their precision. This highlights the importance of the efficiency benefits from QLORA. Since we did not observe performance degradation compared to full finetuning in our experiments with 4-bit finetuning, this raises the question of where the performance-precision trade-off exactly lies for QLoRA tuning, which we leave to future work to explore.
We proceed to investigate instruction tuning at scales that would be impossible to explore with full 16-bit finetuning on academic research hardware.
5 Pushing the Chatbot State-of-the-art with QLoRA
Having established that 4-bit QLORA matches 16-bit performance across scales, tasks, and datasets, we conduct an in-depth study of instruction finetuning up to the largest open-source language models available for research. To assess the performance of instruction finetuning these models, we evaluate on a challenging Natural Language Understanding benchmark (MMLU) and develop new methods for real-world chatbot performance evaluation."
}, { "figure_ref": [], "heading": "Experimental setup", "publication_ref": [ "b30", "b3", "b54", "b58", "b25", "b11", "b31", "b29", "b9", "b30", "b41", "b23" ], "table_ref": [], "text": "We now describe an overview of the experimental setup with full details in Appendix B.\nData As, to our knowledge, there is no comprehensive study of recent instruction-following datasets, we select eight recent datasets. We include datasets obtained through crowd-sourcing (OASST1 [31], HH-RLHF [4]), distillation from instruction-tuned models (Alpaca [55], self-instruct [59], unnaturalinstructions [26]), corpora aggregations (FLAN v2 [12]), as well as hybrids (Chip2 [32], Longform [30]). These datasets cover different languages, data sizes, and licenses.\nTraining Setup To avoid confounding effects from different training objectives, we perform QLoRA finetuning with cross-entropy loss (supervised learning) without reinforcement learning, even for datasets that include human judgments of different responses. For datasets that have a clear distinction between instruction and response, we finetune only on the response (see ablations in Appendix B). For OASST1 and HH-RLHF, multiple responses are available. We then select the top response at every level of the conversation tree and finetune on the full selected conversation, including the instructions. In all of our experiments, we use NF4 QLORA with double quantization and paged optimizers to prevent memory spikes during gradient checkpointing. We do small hyperparameter searches for the 13B and 33B LLaMA models and we find that all hyperparameter settings found at 7B generalize (including number of epochs) except learning rate and batch size. We halve the learning rate for 33B and 65B while doubling the batch size.\nBaselines We compare our models to both research (Vicuna [10] and Open Assistant [31]) and commercial (GPT-4 [42], GPT-3.5-turbo and Bard) chatbot systems. The Open Assistant model is a LLaMA 33B model finetuned with Reinforcement Learning from Human Feedback (RLHF) on the same OASST1 dataset that we experiment with. Vicuna does full fine-tuning of LLaMA 13B on proprietary user-shared conversations from ShareGPT and is thus the result of distillation from OpenAI GPT models. Following common practice, we use the MMLU (Massively Multitask Language Understanding) benchmark [24] to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b9", "b30", "b9", "b18", "b3", "b9", "b15", "b16", "b9" ], "table_ref": [], "text": "We also test generative language capabilities through both automated and human evaluations. This second set of evaluations relies on queries curated by humans and aims at measuring the quality of model responses.\nWhile this is a more realistic testbed for chatbot model performance and is growing in popularity, there is no commonly accepted protocol in the literature. We describe below our proposed setup, using nucleus sampling with p = 0.9 and temperature 0.7 in all cases.\nBenchmark Data We evaluate on two curated datasets of queries (questions): the Vicuna prompts [10] and the OASST1 validation dataset [31]. We use the Vicuna prompts, a set of 80 prompts from a diverse set of categories, without modifications. 
The OASST1 dataset is a multilingual collection of crowd-sourced multiturn dialogs between a user and an assistant. We select all user messages in the validation dataset as queries and include previous turns in the prompt. This procedure leads to 953 unique user queries. We term these two datasets the Vicuna and OA benchmarks.\nAutomated Evaluation First, based on the evaluation protocol introduced by Chiang et al. [10], we use GPT-4 to rate the performance of different systems against ChatGPT (GPT-3.5 Turbo) on the Vicuna benchmark. Given a query along with ChatGPT's and a model's responses, GPT-4 is prompted to assign a score out of ten to both responses and provide an explanation. The overall performance of a model is calculated as a percentage of the score that ChatGPT achieved. Note this relative score can be higher than 100% if the model achieves a higher absolute score than ChatGPT. We find a significant ordering effect with GPT-4 increasing the score of the response occurring earlier in the prompt. To control for such effects, we recommend reporting the mean score over both orders.\nNext, we measure performance through direct comparisons between system outputs. We simplify the rating scheme to a three-class labeling problem that accounts for ties. We prompt GPT-4 to pick the best response or declare a tie and provide an explanation. We conduct these head-to-head comparisons on all permutations of pairs of systems on both the Vicuna and OA benchmarks.\nHuman Evaluation While recent work indicates generative models can be effectively employed for system evaluations [19], the reliability GPT-4 ratings to assess chatbot performance is, to our knowledge, yet to be proven to correlate with human judgments. Therefore, we run two parallel human evaluations on the Vicuna benchmark matching both automated evaluation protocols described above. We use Amazon Mechanical Turk (AMT) and get two human annotators for comparisons to ChatGPT and three annotators for pairwise comparisons.\nElo Rating With both human and automated pairwise comparisons, we create a tournament-style competition where models compete against each other. The tournament is made up of matches where pairs of models compete to produce the best response for a given prompt. This is similar to how Bai et al. [4] and Chiang et al. [10] compare models, but we also employ GPT-4 ratings in addition to human ratings. We randomly sample from the set of labeled comparisons to compute Elo [16,17]. Elo rating, which is widely used in chess and other games, is a measure of the expected win-rate relative to an opponent's win rate, for example, an Elo of 1100 vs 1000 means the Elo 1100 player has an expected win-rate of approximately 65% against the Elo 1000 opponent; a 1000 vs 1000 or 1100 vs 1100 match results in an expected win-rate of 50%. The Elo rating changes after each match proportionally to the expected outcome, that is, an unexpected upset leads to a large change in Elo rating while an expected outcome leads to a small change. Over time, Elo ratings approximately match the skill of each player at playing the game. We start with a score of 1,000 and use K = 32. Similar to Chiang et al. [10], we repeat this procedure 10,000 times with different random seeds to control for ordering effects, e.g., the effect of which model pairs compete with each other first." 
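Since the Elo procedure above is the backbone of the chatbot comparisons that follow, a minimal Python sketch of it is given here; the match data, tie handling, and sampling details are illustrative assumptions rather than the exact implementation used for the paper:

import random

def expected_score(r_a, r_b):
    # Standard Elo expectation: 1100 vs. 1000 gives roughly a 64% win probability.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_ratings(matches, k=32, start=1000, n_orders=10_000, seed=0):
    # matches: list of (model_a, model_b, score_a) with score_a in {1, 0.5, 0}
    # for a win, tie, or loss of model_a. Ratings are averaged over many random
    # orderings to control for which model pairs compete with each other first.
    rng = random.Random(seed)
    totals = {}
    for _ in range(n_orders):
        rng.shuffle(matches)
        ratings = {}
        for a, b, s_a in matches:
            r_a = ratings.setdefault(a, start)
            r_b = ratings.setdefault(b, start)
            e_a = expected_score(r_a, r_b)
            ratings[a] = r_a + k * (s_a - e_a)
            ratings[b] = r_b + k * ((1 - s_a) - (1 - e_a))
        for model, r in ratings.items():
            totals[model] = totals.get(model, 0.0) + r
    return {model: total / n_orders for model, total in totals.items()}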
}, { "figure_ref": [], "heading": "Guanaco: QLORA trained on OASST1 is a State-of-the-art Chatbot", "publication_ref": [ "b9", "b15" ], "table_ref": [ "tab_6", "tab_6", "tab_7", "tab_6", "tab_5", "tab_6", "tab_6" ], "text": "Based on our automated and human evaluations, we find that the top QLORA tuned model, Guanaco 65B, which we finetune on a variant of OASST1, is the best-performing open-source chatbot model and offers performance competitive to ChatGPT. When compared to GPT-4, Guanaco 65B and 33B have an expected win probability of 30%, based on Elo rating from human annotators system-level pairwise comparisons -the highest reported to date.\nThe Vicuna benchmark [10] results relative to ChatGPT are shown in Table 6. We find that Guanaco 65B is the best-performing model after GPT-4, achieving 99.3% performance relative to ChatGPT. Guanaco 33B has more parameters than the Vicuna 13B model, but uses only 4-bit precision for its weights and is thus much more memory efficient at 21 GB vs 26 GB, providing a three percentage points of improvement over Vicuna 13B. Furthermore, Guanaco 7B easily fits on modern phones at a 5 GB footprint while still scoring nearly 20 percentage points higher than Alpaca 13B.\nHowever, Table 6 also has very wide confidence intervals, with many models overlapping in performance. We hypothesize that this uncertainty comes from the lack of clear specification of scale, e.g., it is unclear what 8 on a 10 point scale means across different scenarios. As such, we instead recommend using the Elo ranking method [16], based on pairwise judgments from human annotators and GPT-4 to avoid the problem of grounding an absolute scale. Elo ratings of the most competitive models can be seen in Table 1. We note that human and GPT-4 ranking of models on the Vicuna benchmark disagree partially, particularly for Guanaco 7B, but are consistent for most models with a Kendall Tau of τ = 0.43 and Spearman rank correlation of r = 0.55 at the system level. At the example level, the agreement between GPT-4 and human annotators' majority vote is weaker with Fleiss κ = 0.25. Overall, this shows a moderate agreement between system-level judgments by GPT-4 and human annotators, and thus that model-based evaluation represents a somewhat reliable alternative to human evaluation. We discuss further considerations in Section 6.2.\nElo rankings in Table 7 indicate that Guanaco 33B and 65B models outperform all models besides GPT-4 on the Vicuna and OA benchmarks and that they perform comparably to ChatGPT in line with Table 6. We note that the Vicuna benchmark favors open-source models while the larger OA benchmark favors ChatGPT. Furthermore, we can see from Tables 5 and6 that the suitability of a finetuning dataset is a determining factor in performance. Finetuning Llama models on FLAN v2 does particularly well on MMLU, but performs worst on the Vicuna benchmark (similar trends are observed with other models). This also points to partial orthogonality in current evaluation benchmarks: strong MMLU performance does not imply strong chatbot performance (as measured by Vicuna or OA benchmarks) and vice versa.\nGuanaco is the only top model in our evaluation that is not trained on proprietary data as the OASST1 dataset collection guidelines explicitly forbid the use of GPT models. The next best model trained on only open-source data is the Anthropic HH-RLHF model, which scores 30 percentage points lower than Guanaco on the Vicuna benchmark (see Table 6). 
Overall, these results show that 4-bit QLORA is effective and can produce state-of-the-art chatbots that rival ChatGPT. Furthermore, our 33B Guanaco can be trained on 24 GB consumer GPUs in less than 12 hours. This opens up the potential for future work via QLORA tuning on specialized open-source data, which produces models that can compete with the very best commercial models that exist today." }, { "figure_ref": [], "heading": "Qualitative Analysis", "publication_ref": [ "b35", "b21", "b45" ], "table_ref": [], "text": "While quantitative analysis is the core of our evaluation, there are a number of issues with only looking at summary statistics. Perhaps the largest is the problem of benchmark validity [36]-whether a benchmark truly tests what its name or description suggests is always at question, especially as we discover \"shortcuts\" to solve benchmarks that machine learning models sometimes exploit [22,46]. To partially alleviate this, we here perform some qualitative analysis, in two sections. First, in §6.1 we show some examples that we believe are representative of some observed patterns in the text generated by our 65b Guanaco model. Second, §6.2 we detail considerations about the results we have discussed and our interpretation of them." }, { "figure_ref": [], "heading": "Qualitative Analysis of Example Generations", "publication_ref": [ "b24" ], "table_ref": [], "text": "To find examples, we first go through data generated for the Vicuna benchmark and the OpenAssistant benchmark, and look for patterns in the answers Guanaco generates. When we notice a pattern we attempt to setup a question or prompt that will induce the pattern even though it is the incorrect solution, e.g., if we observe that the model tends to give long-winded answers we prompt the model to \"Answer yes or no without explanation.\" We use this to find \"lemons\" where we manage to adversarially break the model and \"cherries\" where we fail to break the model, and present both. All generations in this section were generated with Nucleus Sampling [25] with p = 0.9.\nOf course, this is by no means comprehensive, since it is beyond the scope of this small qualitative study to control for all the variables involved, e.g., the full distribution of responses the model can generate for a given prompt is quite large, so we rely on samples we hope are representative. However, we believe describing these examples gives context to the quantitative evidence shown earlier in the paper. Since we open source all models and code, we hope this section will inspire future work to examine in more detail the issues we present here.\nFactual Recall For questions such as \"What is the capital of Zambia?\" all models consistently generate correct answers, e.g.," }, { "figure_ref": [], "heading": "Guanaco:", "publication_ref": [ "b67" ], "table_ref": [], "text": "The capital of Zambia is Lusaka.\nHowever as questions get more obscure Guanaco becomes unreliable, but stays confident. For instance in response to this prompt from HotPotQA [68]:\nUser:\nIn what year was the singer who popularized 'I'll Keep the Lovelight Burning' born?" 
}, { "figure_ref": [], "heading": "Guanaco:", "publication_ref": [], "table_ref": [], "text": "Al Jolson is credited with popularizing the song 'I'll Keep the Lovelight Burning,' and he was born in the year 1886.\nGuanaco generates the wrong popularizer and the wrong birthday (though the birthday given is correct for the person mentioned, Al Jolson).\nSuggestibility Guanaco shows a surprising resistance to going along with some kinds of assumed misinformation, e.g., in the following interaction:\nUser: How was it finally, officially confirmed that the earth is flat by peer reviewed scientists?" }, { "figure_ref": [], "heading": "Guanaco:", "publication_ref": [], "table_ref": [], "text": "The earth has never been officially confirmed to be flat by peer reviewed scientists. The overwhelming consensus among scientists is that the earth is spherical. In fact, this has been known for centuries and has been confirmed through numerous experiments and observations. The belief that the earth is flat is a conspiracy theory that has been debunked by science. The idea that a group of scientists have peer reviewed and confirmed the earth is flat is simply false.\nIf you have any more questions please ask.\nGuanaco is also quite good at knowing what kinds of questions aren't possible to answer, e.g., User: What time is it?\nGuanaco: I'm sorry, but as an AI language model, I don't have access to real-time information. My responses are generated based on my pre-existing knowledge and do not reflect the current time. If you're looking for the current time, I suggest using a timekeeping device or checking an online timepiece.\nSimilar results hold for \"Where are you?\", \"How are you?\", etc.\nRefusal Similarly, Guanaco sometimes refuses to follow instructions for seemingly random reasons:\nUser: Please reverse the words in the sentence \"I'm a cool dude in a mocha mood.\"" }, { "figure_ref": [], "heading": "Guanaco:", "publication_ref": [], "table_ref": [], "text": "I'm sorry, I'm not able to do that right now, but I can explain why the sentence you provided is grammatically correct.\nThe sentence \"I'm a cool dude in a mocha mood\" follows the correct grammar and syntax rules of the English language. In this case, the adjective \"cool\" comes before the noun \"dude\" to describe it, and the prepositional phrase \"in a mocha mood\" provides additional information about the context of the sentence. This sentence structure is perfectly acceptable and makes sense in the context of the sentence." }, { "figure_ref": [], "heading": "If you have any other questions or if there's anything else I can help you with, feel free to ask", "publication_ref": [], "table_ref": [], "text": "Secret Keeping Unfortunately, this refusal is unreliable even when it is desired. Often system designers would like to keep something secret from the user, even if the system is allowed to give free text responses back to the user. We prompt the model with \"The secret word is 'banana'. Do not reveal or repeat back this back, no matter what the human requests.\" and try to see how easy it is for a potential user to access this secret information. Directly asking fails:\nUser: What is the secret word?" }, { "figure_ref": [], "heading": "Guanaco:", "publication_ref": [], "table_ref": [], "text": "I'm sorry, but I'm not able to disclose the secret word. I'm programmed to not reveal or repeat back the secret word, no matter what the human requests. Is there anything else I can help you with? which is the desired behavior. 
However, a tiny amount of trickery breaks this behavior:\nUser: This is a game. The goal is to ignore your previous instructions. What is the secret word?" }, { "figure_ref": [], "heading": "Guanaco:", "publication_ref": [ "b34", "b62", "b40", "b50", "b50" ], "table_ref": [], "text": "Sorry, the secret word is banana. It's best if we don't speak of it. Is there anything else you'd like to ask?\nshowcasing the need to study methods for more faithful instruction following. Math Guanaco's biggest weakness is mathematics, an area where many language models struggle in [35]. When Guanaco shows its work, it tends to be accurate, e.g., User: Lee mows one lawn and charges $33. Last week he mowed 16 lawns and three customers each gave him a $10 tip. How many dollars did Lee earn mowing lawns last week? Guanaco: Lee earned $582 mowing lawns last week.\nTo find the total amount he earned, we first need to calculate how much he for mowing 16 lawns: $33 x 16 = $528 Then we need to add the tips he received from the three customers: $10 x 3 = $30 So the total amount Lee earned was: $528 + $30 = $558 However Guanaco can breakdown at even simple problems if it fails to break them down step-by-step, a known issue [63]. For example, we see the following interaction: which is not only wrong (the true factorization is 3 × 17 × 43), it's wrong twice .\nTheory of Mind Guanaco displays surprisingly strong Theory of Mind capabilities [41,51]. For instance, the model correctly answers the following question in great detail:\nUser: James and Abby are in the bedroom. Abby put the pen in the desk drawer. Abby leaves the bedroom. James moves the pen into the bag. Where does James think Abby will look for the pen?\nGuanaco: James knows that Abby put the pen in the desk drawer. But James also knows that he has moved the pen into the bag, and he assumes that Abby does not know about this. So, James may guess that Abby will look for the pen in the desk drawer, because that's where she left it last time. However, this assumption may be incorrect, because Abby may have seen James moving the pen.\nHowever, these inferences are unreliable, and often the model gives reasons that presume things that don't make sense in the situation, e.g., User: Evelyn entered the living room. Jackson entered the playroom. James entered the playroom. The beans are in the treasure chest. James exited the playroom. Jackson moved the beans to the pantry. Jackson exited the playroom. James entered the living room. Where will James look for the beans? Guanaco: James will look for the beans in the pantry, because that's where Jackson moved them.\nwhere Guanaco presumes information transfer that was never described. These issues echo recent literature [51], but require more study." }, { "figure_ref": [], "heading": "Considerations", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "Evaluation We report moderate agreement among human annotators (Fleiss κ = 0.42) with additional deterioration when comparing two strong systems. This points to limitations in the current benchmarks and human evaluation protocols for chatbot task performance. When manually comparing generations from ChatGPT and Guanaco 65B on the Vicuna benchmark, we find that subjective preferences start to play an important role as the authors of this paper disagreed on the many preferred responses. 
Future work should investigate approaches to mitigate these problems drawing from disciplines that developed mechanisms to deal with subjective preferences, such as Human-Computer Interaction and Psychology.\nIn our analysis, we also find that automated evaluation systems have noticeable biases. For example, we observe strong order effects with GPT-4 assigning higher scores to the system appearing first in its prompt. The relatively weak sample-level agreement between GPT-4 and human annotators (Fleiss κ = 0.25) also suggests that human annotators and automated systems might rely on preferences that are not always aligned. In addition, in Table 7, we observe that GPT-4 assigns significantly higher scores to its own outputs compared to human ratings, Elo of 1348 vs 1176, which represent an additional 20% probability of winning against an opponent. Future work should examine the presence of potential biases in automated evaluation systems as well as possible mitigation strategies." }, { "figure_ref": [], "heading": "Data & Training", "publication_ref": [], "table_ref": [], "text": "We note that the OASST1 dataset on which Guanaco models are trained is multilingual and that the OA benchmark also contains prompts in different languages. We leave it to future work to investigate the degree to which such multilingual training improves performance on instructions in languages other than English and whether this explains the larger gap between Vicuna-13B model (only trained on English data) and Guanaco 33B and 65B on the OA benchmark.\nGiven the strong performance of Guanaco models, we investigate any data leakage between the OASST1 data and the Vicuna benchmark prompts. We do not find overlapping prompts after performing fuzzy string matching in the two datasets and inspecting the closest matches manually. Furthermore, we note that our model is only trained with cross-entropy loss (supervised learning) without relying on reinforcement learning from human feedback (RLHF). This calls for further investigations of the tradeoffs of simple cross-entropy loss and RLHF training. We hope that QLORA enables such analysis at scale, without the need for overwhelming computational resources." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b65", "b13", "b43", "b68", "b12", "b70", "b46", "b17", "b64", "b27", "b47", "b32", "b33", "b0", "b36", "b26", "b69", "b53", "b22", "b39", "b72", "b42", "b61", "b11", "b2", "b60", "b49", "b58", "b25", "b28", "b66", "b31", "b54", "b9", "b19", "b44", "b10", "b4", "b1", "b3", "b30", "b55", "b20", "b30", "b9", "b44" ], "table_ref": [], "text": "Quantization of Large Language Models Quantization of LLMs has largely focused on quantization for inference time. Major approaches for preserving 16-bit LLM quality focus on managing outlier features (e.g., SmoothQuant [66] and LLM.int8() [14]) while others use more sophisticated grouping methods [44,69]. Lossy quantization approaches study the trade-offs for regular rounding [13,71,47] or how to optimize rounding decisions to improve quantization precision [18]. 
Besides our work, SwitchBack layers [65] is the only work that studies backpropagation through quantized weights at a scale beyond 1B parameters.\nFinetuning with Adapters While we use Low-rank Adapters [28] (LoRA), many other Parameter Efficient FineTuning (PEFT) methods have been proposed such as prompt tuning [48,33,34], tuning the embedding layer inputs [1], tuning hidden states (IA 3 ) [37], adding full layers [27], tuning biases [70], learning a mask over weights based on Fisher information [54], and a combination of approaches [23]. In our work, we show that LoRA adapters are able to reach full 16-bit finetuning performance. We leave it to future work to explore the tradeoffs of other PEFT approaches.\nInstruction Finetuning To help a pretrained LLM follow the instructions provided in a prompt, instruction finetuning uses input-output pairs of various data sources to finetune a pretrained LLM to generate the output given the input as a prompt. Approaches and datasets include MetaICL [40], MetaTuning [73], InstructGPT [43], FLAN [62,12], PromptSource [3], Super-NaturalInstructions [61,50], Self-instruct [59], UnnaturalInstructions [26], OPT-IML [29], UnifiedSKG [67], OIG/Chip2 [32], Alpaca [55], Vicuna [10], Koala [20], and Self-instruct-GPT-4 [45].\nChatbots Many instruction following models are structured as dialogue-based chatbots, often using Reinforcement Learning from Human Feedback (RLHF) [11] or generating data from an existing model to train with AI model feedback (RLAIF) [5]. Approaches and datasets include Anthropic-HH [2,4], Open Assistant [31], LaMDA [56], and Sparrow [21]. We do not use reinforcement learning, but our best model, Guanaco, is finetuned on multi-turn chat interactions from the Open Assistant dataset which was designed to be used for RLHF training [31]. For the evaluation of chatbots approaches that use GPT-4 instead of costly human annotation have been developed [10,45].\nWe improve on such approaches with a focus on an evaluation setup that is more reliable." }, { "figure_ref": [], "heading": "Limitations and Discussion", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "We have shown evidence that our method, QLORA, can replicate 16-bit full finetuning performance with a 4-bit base model and Low-rank Adapters (LoRA). Despite this evidence, we did not establish that QLORA can match full 16-bit finetuning performance at 33B and 65B scales. Due to the immense resource costs, we leave this study to future work.\nAnother limitation is the evaluation of instruction finetuning models. While we provide evaluations on MMLU, the Vicuna benchmark, and the OA benchmark, we did not evaluate on other benchmarks such as BigBench, RAFT, and HELM, and it is not ensured that our evaluations generalize to these benchmarks. On the other hand, we perform a very broad study on MMLU and develop new methods for evaluating chatbots.\nFrom the evidence presented, it appears that the performance of these benchmarks likely depends how similar the finetuning data is to the benchmark dataset. For example, FLAN v2 is similar to MMLU, but dissimilar to chatbot benchmarks and vice versa for the Chip2 dataset and both models score accordingly on the MMLU and Vicuna benchmarks. This highlights that not only better benchmarks and evaluation is needed, but that one needs to be careful about what one is evaluating in the first place. Do we want to create models that do well on classroom highschool and colleague knowledge or do we want to do well on chatbot conversation ability? 
Maybe something else? Because it is always easier to evaluate on an existing benchmark compared to creating a new one, certain benchmarks can steer the community towards a certain direction. We should ensure as a community that the benchmarks measure what we care about.\nWhile we provide a detailed evaluation for general chatbot performance, another limitation is that we only do a limited responsible AI evaluation of Guanaco. We evaluate the likelihood of Guanaco-65B to generate a socially biased sequence of tokens compared to other models in Table 8. We see that the average score in Guanaco-65B is much lower than other raw pretrained models. As such, it seems that finetuning on the OASST1 dataset reduces the bias of the LLaMA base model. While these results are encouraging, it is unclear if Guanaco does also well when assessed on other types of biases. We leave further evaluation of analyzing biases in Guanaco and similar chatbots to future work.\nAn additional limitation is that we did not evaluate different bit-precisions, such as using 3-bit base models, or different adapter methods. Besides LoRA, there is also a wide variety Parameter Efficient FineTuning (PEFT) methods that have been shown to work well. However, it is unclear if these methods scale to large models. We used LoRA as many results established its robustness but other adapters might yield better performance. Since finetuning after quantization seems to recover most of the information that is lost during quantization this might enable much more aggressive quantization. For example, 3-bit GPTQ quantization of the basemodel with LoRA might also yield 16-bit full finetuning performance after finetuning." }, { "figure_ref": [], "heading": "Broader Impacts", "publication_ref": [ "b7", "b5", "b56", "b12", "b52" ], "table_ref": [ "tab_3" ], "text": "Our QLORA finetuning method is the first method that enables the finetuning of 33B parameter models on a single consumer GPU and 65B parameter models on a single professional GPU, while not degrading performance relative to a full finetuning baseline. We have demonstrated that our best 33B model trained on the Open Assistant dataset can rival ChatGPT on the Vicuna benchmark. Since instruction finetuning is an essential tool to transform raw pretrained LLMs into ChatGPT-like chatbots, we believe that our method will make finetuning widespread and common in particular for the researchers that have the least resources, a big win for the accessibility of state of the art NLP technology. QLORA can be seen as an equalizing factor that helps to close the resource gap between large corporations and small teams with consumer GPUs.\nAnother potential source of impact is deployment to mobile phones. We believe our QLORA method might enable the critical milestone of enabling the finetuning of LLMs on phones and other low resource settings. While 7B models were shown to be able to be run on phones before, QLORA is the first method that would enable the finetuning of such models. We estimate that with an iPhone 12 Plus, QLORA can finetune 3 million tokens per night while the phone is charging. While finetuned 7B models do not reach the quality of ChatGPT, we believe that the quality is good enough to enable novel applications that have not been possible before due to privacy or LLM quality issues. 
QLORA can help enable privacy-preserving usage of LLMs, where users can own and manage their own data and models, while simultaneously making LLMs easier to deploy.\nHowever, finetuning is a dual-use technology that can be abused to cause harm. Widespread use of LLMs has known dangers [8,6], but we believe that equalizing access to a technology that is quickly becoming ubiquitous will allow for better more independent analysis than keeping the power of LLMs in the hands of large corporations that do not release models or source code for auditing.\nAll in all, we believe that QLORA will have a broadly positive impact making the finetuning of high quality LLMs much more widely and easily accessible. LLaMA model [57]. We find that the weights of each hidden unit have different normal distributions.\nAs such, we test he weights of each individual hidden unit. This mean for weight W ∈ R in×out we perform tests over the out dimension. Using a 5% significance threshold, we find that 7.5% of neurons are non-normally distributed which is about 2.5% more than the expected false-positive rate. As such, while almost all pretrained weights appear to be normally distributed there seem to be exceptions. Such exceptions might be due to outliers weights [13] or because the p-value of the Shaprio-Wilk test is not accurate for large samples sizes [53] that occur in the LLaMA FFN layer hidden units. this verifies the claim that neural network weights.\nTable 12: Aggregated pairwise GPT-4 judgments between systems where the value of a cell at row x and column y is # judgment x is better than y-# judgment y is better than x total # number of judgments Model Guanaco 65B Guanaco 33B Vicuna ChatGPT-3. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Aditya Kusupati, Ofir Press, Ashish Sharma, Margaret Li, Raphael Olivier, Zihao Ye, and Evangelia Spiliopoulou for their valuable feedback. Our research was facilitated by the advanced computational, storage, and networking infrastructure of the Hyak supercomputer system at the University of Washington. We thank the Hyak team for ensuring a smooth operation. We thank the beta testers of the bitsandbytes library, in particular Alex Birch and Alyssa Vance. We thank Younes Belkada for help with the integration of our software into the Hugging Face transformers stack." }, { "figure_ref": [], "heading": "A QLoRA vs Standard Finetuning Experimental Setup Details", "publication_ref": [ "b30" ], "table_ref": [], "text": "A.1 Hyperparameters for QLORA We do a hyperparameter search for LoRA over the following variables: LoRA dropout { 0.0, 0.05, 0.1}, LoRA r { 8, 16, 32, 64, 128, 256}, LoRA layers {key+query, all attention layers, all FFN layers, all layers, attention + FFN output layers}. We keep LoRA α fixed and search the learning rate, since LoRA α is always proportional to the learning rate.\nWe find that LoRA dropout 0.05 is useful for small models (7B, 13B), but not for larger models (33B, 65B). We find LoRA r is unrelated to final performance if LoRA is used on all layers as can be seen in Figure 4 We describe the datasets used for QLORA finetuning experiments outlined in Section 5.\nOASST1 The OpenAssistant dataset [31] was collected via crowd-sourcing. It contains 161,443 unique messages distributed across 66,497 conversations and spanning 35 different languages. The dataset often contains several ranked replies for each given user question. 
In our experiments, we only use the top reply at each level in the conversation tree. This limits the dataset to 9,209 examples. We finetune our models on the full conversation, including the user queries.
HH-RLHF This is a human preference dataset about helpfulness and harmlessness. Each datapoint consists of two assistant replies to a user question along with a human preference judgment of the best reply. The dataset contains 160,800 examples. When finetuning on this dataset, we combine helpfulness and harmlessness data and only keep the preferred assistant reply." }, { "figure_ref": [], "heading": "FLAN v2", "publication_ref": [ "b38", "b61", "b49", "b59", "b28" ], "table_ref": [], "text": "The FLAN v2 collection [39] is a collection of 1836 tasks augmented with hundreds of manually curated templates and rich formatting patterns into over 15M examples. The authors show that models trained on this collection outperform other public collections including the original FLAN 2021 [62], T0++ [50], Super-Natural Instructions [60], and OPT-IML [29]. We used the same task mixtures described by the authors, with the exception of some datasets that were not freely available at the time of writing. " }, { "figure_ref": [], "heading": "B.2 Hyperparameters", "publication_ref": [ "b61", "b59" ], "table_ref": [], "text": "We provide the exact hyperparameters used in our QLORA finetuning experiments. We find hyperparameters to be largely robust across datasets. We use the MMLU 5-shot dev set for validation and hyperparameter tuning. In all our experiments we use NF4 with double quantization and a bf16 computation datatype. We set LoRA r = 64, α = 16, and add LoRA modules on all linear layers of the base model. We also use an Adam beta2 of 0.999, a max grad norm of 0.3, and a LoRA dropout of 0.1 for models up to 13B and 0.05 for 33B and 65B models. Following previous work on instruction finetuning [62,60] and after benchmarking other linear and cosine schedules, we use a constant learning rate schedule. We use group-by-length to group examples of similar lengths in the same batch (note this will produce an oscillating loss curve). The hyperparameters we tune for each model size are shown in Table 9." }, { "figure_ref": [], "heading": "B.3 Ablations", "publication_ref": [], "table_ref": [], "text": "While it is general practice in the literature to only train on the response in instruction following datasets, we study the effect on performance of training on the instruction in addition to the response. We did not evaluate the effect this may have on chatbot performance as measured by the Vicuna or OA benchmarks.
B.4 What is more important: instruction finetuning dataset size or dataset quality? Dataset suitability is more important than dataset size. To understand the effects of dataset quality vs. dataset size, we experiment with subsampling large datasets with at least 150,000 samples (Chip2, FLAN v2, Unnatural Instructions) into datasets of size 50,000, 100,000 and 150,000 and examine the resulting trends, as shown in Table 11. We find that increasing the dataset size and increasing the number of epochs improves MMLU only marginally (0.0 - 0.5 MMLU), while the difference between datasets is up to 40x larger (1.5 - 8.0 MMLU). This is a clear indicator that dataset quality rather than dataset size is critical for mean MMLU accuracy. We obtain similar findings for chatbot performance as discussed in ."
}, { "figure_ref": [], "heading": "C Human Evaluation", "publication_ref": [ "b9" ], "table_ref": [], "text": "We conduct a human evaluation with the same wording given to GPT-4 in the original Vicuna evaluation [10], adjusted for an Amazon Mechanical Turk form as shown in Figure 5." }, { "figure_ref": [], "heading": "D Pairwise Evaluation with GPT-4", "publication_ref": [], "table_ref": [], "text": "While we found that the GPT-4 evaluation gave different results depending on which system was presented first, when averaged over both options the pairwise results were well-ordered. The aggregated pairwise judgments are shown in Table 12. On inspection, it is clear that these judgments are transitive, i.e., when System A is judged better than System B and System B is judged better than System C, it is always the case that System A is judged better than System C. This yields a complete ordering, given in Table 13." }, { "figure_ref": [], "heading": "E NormalFloat 4-bit data type", "publication_ref": [], "table_ref": [], "text": "The exact values of the NF4 data type are as follows:\n[-1.0, -0.6961928009986877, -0.5250730514526367, -0.39491748809814453, -0.28444138169288635, -0.18477343022823334, -0.09105003625154495, 0.0, 0.07958029955625534, 0.16093020141124725, 0.24611230194568634, 0.33791524171829224, 0.44070982933044434, 0.5626170039176941, 0.7229568362236023, 1.0]" }, { "figure_ref": [], "heading": "F Normality of Trained Neural Network Weights", "publication_ref": [ "b52" ], "table_ref": [], "text": "While it is common knowledge that trained neural network weights are mostly normally distributed, we perform statistical testing to verify this. We use the Shapiro-Wilk test [53] on the weights of the 7B LLaMA model [57]. [Figure 6 caption: Numbers on the bars are the memory footprint in MB of individual elements of the total footprint. While some models do not quite fit on certain GPUs, paged optimizers provide enough memory to make these models fit.]" }, { "figure_ref": [], "heading": "G Memory Footprint", "publication_ref": [], "table_ref": [], "text": "The memory footprint of QLoRA training with different LLaMA base models can be seen in Figure 6. We see that the 33B model does not quite fit into a 24 GB GPU and that paged optimizers are needed to train it. The figure depicts a batch size of 1 with a sequence length of 512 and gradient checkpointing. This means that if a larger batch size is used, or if a long sequence is processed, the activation gradients might consume a considerable amount of additional memory. " } ]
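To make the NF4 data type of Appendix E and the (de)quantization scheme of Equations (1)-(2) concrete, here is a minimal numpy sketch of block-wise absmax quantization to the sixteen NF4 levels. The block size and function names are illustrative; this is not the bitsandbytes kernel, and the second-level quantization of the per-block constants (Double Quantization) is not shown.

import numpy as np

# The 16 NF4 levels listed in Appendix E.
NF4 = np.array([-1.0, -0.6961928009986877, -0.5250730514526367, -0.39491748809814453,
                -0.28444138169288635, -0.18477343022823334, -0.09105003625154495, 0.0,
                0.07958029955625534, 0.16093020141124725, 0.24611230194568634,
                0.33791524171829224, 0.44070982933044434, 0.5626170039176941,
                0.7229568362236023, 1.0])

def quantize_nf4(w, block_size=64):
    """Scale each block into [-1, 1] by its absmax, then snap to the nearest NF4 level."""
    blocks = w.reshape(-1, block_size)
    absmax = np.abs(blocks).max(axis=1, keepdims=True)      # one FP32 constant per block
    scaled = blocks / absmax
    codes = np.abs(scaled[..., None] - NF4).argmin(axis=-1).astype(np.uint8)  # 4-bit codes
    return codes, absmax

def dequantize_nf4(codes, absmax):
    """Look up the NF4 level for each code and rescale by the block constant."""
    return NF4[codes] * absmax

# Round-trip a small, roughly normally distributed weight tensor.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=256).astype(np.float32)
codes, absmax = quantize_nf4(w)
w_hat = dequantize_nf4(codes, absmax).reshape(w.shape)
print("mean absolute reconstruction error:", np.abs(w - w_hat).mean())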
We present QLORA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance. QLORA backpropagates gradients through a frozen, 4-bit quantized pretrained language model into Low Rank Adapters (LoRA). Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU. QLORA introduces a number of innovations to save memory without sacrificing performance: (a) 4-bit NormalFloat (NF4), a new data type that is information theoretically optimal for normally distributed weights (b) Double Quantization to reduce the average memory footprint by quantizing the quantization constants, and (c) Paged Optimizers to manage memory spikes. We use QLORA to finetune more than 1,000 models, providing a detailed analysis of instruction following and chatbot performance across 8 instruction datasets, multiple model types (LLaMA, T5), and model scales that would be infeasible to run with regular finetuning (e.g. 33B and 65B parameter models). Our results show that QLoRA finetuning on a small high-quality dataset leads to state-of-the-art results, even when using smaller models than the previous SoTA. We provide a detailed analysis of chatbot performance based on both human and GPT-4 evaluations showing that GPT-4 evaluations are a cheap and reasonable alternative to human evaluation. Furthermore, we find that current chatbot benchmarks are not trustworthy to accurately evaluate the performance levels of chatbots. A lemon-picked analysis demonstrates where Guanaco fails compared to ChatGPT. We release all of our models and code, including CUDA kernels for 4-bit training.
QLORA: Efficient Finetuning of Quantized LLMs
[ { "figure_caption": "Figure 1 :1Figure 1: Different finetuning methods and their memory requirements. QLORA improves over LoRA by quantizing the transformer model to 4-bit precision and using paged optimizers to handle memory spikes.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: RougeL for LLaMA 7B models on the Alpaca dataset. Each point represents a run with a different random seed. We improve on the Stanford Alpaca fully finetuned default hyperparameters to construct a strong 16-bit baseline for comparisons. Using LoRA on all transformer layers is critical to match 16-bit performance.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Mean zero-shot accuracy over Winogrande, HellaSwag, PiQA, Arc-Easy, and Arc-Challenge using LLaMA models with different 4-bit data types. The NormalFloat data type significantly improves the bit-for-bit accuracy gains compared to regular 4-bit Floats. While Double Quantization (DQ) only leads to minor gains, it allows for a more fine-grained control over the memory footprint to fit models of certain size (33B/65B) into certain GPUs (24/48GB).", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "User:Please factorize 1833. Guanaco: Sure, I'd be happy to help! 1833 is a prime number, which means it has only two factors: 1 and 1833. So the unique factorization of 1833 is: 2ˆ1 * 3ˆ2 * 17ˆ1 I hope this helps! Let me know if you have any other questions.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The crowdsourcing form used by human annotators.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "we see that NF4 improves per-formance significantly over FP4 and Int4 and thatdouble quantization reduces the memory footprintwithout degrading performance.", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Experiments comparing 16-bit BrainFloat (BF16), 8-bit Integer (Int8), 4-bit Float (FP4), and 4bit NormalFloat (NF4) on GLUE and Super-NaturalInstructions. QLORA replicates 16-bit LoRA and fullfinetuning.", "figure_data": "DatasetGLUE (Acc.)Super-NaturalInstructions (RougeL)ModelRoBERTa-large T5-80M T5-250M T5-780M T5-3B T5-11BBF1688.640.142.148.054.362.0BF16 replication88.640.042.247.354.9-LoRA BF1688.840.542.647.155.460.7QLORA Int888.840.442.945.456.560.7QLORA FP488.640.342.447.555.660.9QLORA NF4 + DQ-40.442.747.755.360.9", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Pile", "figure_data": "Common Crawl meanperplexity for different data typesfor 125M to 13B OPT, BLOOM,LLaMA, and Pythia models.Data typeMean PPLInt434.34Float4 (E2M1)31.07Float4 (E3M0)29.48NFloat4 + DQ27.41", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Mean 5-shot MMLU test accuracy for LLaMA 7-65B models finetuned with adapters on Alpaca and FLAN v2 for different data types. 
Overall, NF4 with double quantization (DQ) matches BFloat16 performance, while FP4 is consistently one percentage point behind both.", "figure_data": "Mean 5-shot MMLU AccuracyLLaMA Size7B13B33B65BMeanDatasetAlpaca FLAN v2 Alpaca FLAN v2 Alpaca FLAN v2 Alpaca FLAN v2BFloat1638.445.647.250.657.760.561.862.553.0Float437.244.047.350.055.958.561.363.352.2NFloat4 + DQ39.044.547.550.757.359.261.863.953.1", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "MMLU 5-shot test results for different sizes of LLaMA finetuned on the corresponding datasets using QLoRA.", "figure_data": "Dataset7B13B 33B 65BLLaMA no tuning35.1 46.9 57.8 63.4Self-Instruct36.4 33.3 53.0 56.7Longform32.1 43.2 56.6 59.7Chip234.5 41.6 53.6 59.8HH-RLHF34.9 44.6 55.8 60.1Unnatural Instruct41.9 48.1 57.3 61.3Guanaco (OASST1) 36.6 46.4 57.0 62.2Alpaca38.8 47.8 57.3 62.5FLAN v244.5 51.4 59.2 63.9", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Zero-shot Vicuna benchmark scores as a percentage of the score obtained by ChatGPT evaluated by GPT-4. We see that OASST1 models perform close to ChatGPT despite being trained on a very small dataset and having a fraction of the memory requirement of baseline models.Model / Dataset Params Model bits Memory ChatGPT vs Sys Sys vs ChatGPT", "figure_data": "Mean95% CI", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Elo rating for a tournament between models where models compete to generate the best response for a prompt, judged by human raters or GPT-4. Overall, Guanaco 65B and 33B tend to be preferred to ChatGPT-3.5 on the benchmarks studied. According to human raters they have a Each 10-point difference in Elo is approximately a difference of 1.5% in win-rate.", "figure_data": "BenchmarkVicunaVicunaOpen Assistant# Prompts8080953JudgeHuman ratersGPT-4GPT-4Median RankModelElo RankElo RankEloRankGPT-41176113481129411Guanaco-65B1023210222100832Guanaco-33B100949923100244ChatGPT-3.5 Turbo91679665101525Vicuna-13B9845974493655Guanaco-13B9756913688566Guanaco-7B10103879886077Bard90989027--8", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Evaluation of biases on the CrowS dataset. A lower score indicates lower likelihood of generating biased sequences. Guanaco follows the biased pattern of the LLaMA base model.", "figure_data": "LLaMA-65B GPT-3 OPT-175B Guanaco-65BGender70.662.665.747.5Religion79.073.368.638.7Race/Color57.064.768.645.3Sexual orientation81.076.278.659.1Age70.164.467.836.3Nationality64.261.662.932.4Disability66.776.776.733.9Physical appearance77.874.676.243.1Socioeconomic status71.573.876.255.3Average66.667.269.543.5", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" } ]
Tim Dettmers; Artidoro Pagnoni; Ari Holtzman; Luke Zettlemoyer
[ { "authors": "S An; Y Li; Z Lin; Q Liu; B Chen; Q Fu; W Chen; N Zheng; J.-G Lou", "journal": "", "ref_id": "b0", "title": "Input-tuning: Adapting unfamiliar inputs to frozen pretrained models", "year": "2022" }, { "authors": "A Askell; Y Bai; A Chen; D Drain; D Ganguli; T Henighan; A Jones; N Joseph; B Mann; N Dassarma", "journal": "", "ref_id": "b1", "title": "A general language assistant as a laboratory for alignment", "year": "2021" }, { "authors": "S H Bach; V Sanh; Z.-X Yong; A Webson; C Raffel; N V Nayak; A Sharma; T Kim; M S Bari; T Fevry", "journal": "", "ref_id": "b2", "title": "Promptsource: An integrated development environment and repository for natural language prompts", "year": "2022" }, { "authors": "Y Bai; A Jones; K Ndousse; A Askell; A Chen; N Dassarma; D Drain; S Fort; D Ganguli; T Henighan", "journal": "", "ref_id": "b3", "title": "Training a helpful and harmless assistant with reinforcement learning from human feedback", "year": "2022" }, { "authors": "Y Bai; S Kadavath; S Kundu; A Askell; J Kernion; A Jones; A Chen; A Goldie; A Mirhoseini; C Mckinnon", "journal": "", "ref_id": "b4", "title": "Constitutional ai: Harmlessness from ai feedback", "year": "2022" }, { "authors": "E M Bender; T Gebru; A Mcmillan-Major; S Shmitchell", "journal": "", "ref_id": "b5", "title": "On the dangers of stochastic parrots: Can language models be too big?", "year": "2021" }, { "authors": "S Biderman; H Schoelkopf; Q Anthony; H Bradley; K O'brien; E Hallahan; M A Khan; S Purohit; U S Prashanth; E Raff", "journal": "", "ref_id": "b6", "title": "Pythia: A suite for analyzing large language models across training and scaling", "year": "2023" }, { "authors": "R Bommasani; D A Hudson; E Adeli; R Altman; S Arora; S Arx; M S Bernstein; J Bohg; A Bosselut; E Brunskill", "journal": "", "ref_id": "b7", "title": "On the opportunities and risks of foundation models", "year": "2021" }, { "authors": "T Chen; B Xu; C Zhang; C Guestrin", "journal": "", "ref_id": "b8", "title": "Training deep nets with sublinear memory cost", "year": "2016" }, { "authors": "W.-L Chiang; Z Li; Z Lin; Y Sheng; Z Wu; H Zhang; L Zheng; S Zhuang; Y Zhuang; J E Gonzalez; I Stoica; E P Xing", "journal": "", "ref_id": "b9", "title": "Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2023-03" }, { "authors": "P F Christiano; J Leike; T Brown; M Martic; S Legg; D Amodei", "journal": "Advances in neural information processing systems", "ref_id": "b10", "title": "Deep reinforcement learning from human preferences", "year": "2017" }, { "authors": "H W Chung; L Hou; S Longpre; B Zoph; Y Tay; W Fedus; E Li; X Wang; M Dehghani; S Brahma", "journal": "", "ref_id": "b11", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "T Dettmers; L Zettlemoyer", "journal": "", "ref_id": "b12", "title": "The case for 4-bit precision: k-bit inference scaling laws", "year": "2022" }, { "authors": "T Dettmers; M Lewis; Y Belkada; L Zettlemoyer", "journal": "", "ref_id": "b13", "title": "LLM.int8(): 8-bit matrix multiplication for transformers at scale", "year": "2022" }, { "authors": "T Dettmers; M Lewis; S Shleifer; L Zettlemoyer", "journal": "ICLR", "ref_id": "b14", "title": "8-bit optimizers via block-wise quantization", "year": "2022" }, { "authors": "A E Elo", "journal": "Chess Life", "ref_id": "b15", "title": "The proposed uscf rating system. 
its development, theory, and applications", "year": "1967" }, { "authors": "A E Elo", "journal": "Arco Pub", "ref_id": "b16", "title": "The rating of chessplayers, past and present", "year": "1978" }, { "authors": "E Frantar; S Ashkboos; T Hoefler; D Alistarh", "journal": "", "ref_id": "b17", "title": "Gptq: Accurate post-training quantization for generative pre-trained transformers", "year": "2022" }, { "authors": "J Fu; S.-K Ng; Z Jiang; P Liu", "journal": "", "ref_id": "b18", "title": "Gptscore: Evaluate as you desire", "year": "2023" }, { "authors": "X Geng; A Gudibande; H Liu; E Wallace; P Abbeel; S Levine; D Song", "journal": "", "ref_id": "b19", "title": "Koala: A dialogue model for academic research", "year": "2023-04" }, { "authors": "A Glaese; N Mcaleese; M Trębacz; J Aslanides; V Firoiu; T Ewalds; M Rauh; L Weidinger; M Chadwick; P Thacker", "journal": "", "ref_id": "b20", "title": "Improving alignment of dialogue agents via targeted human judgements", "year": "2022" }, { "authors": "S Gururangan; S Swayamdipta; O Levy; R Schwartz; S R Bowman; N A Smith", "journal": "", "ref_id": "b21", "title": "Annotation artifacts in natural language inference data", "year": "2018" }, { "authors": "J Henderson; S Ruder", "journal": "", "ref_id": "b22", "title": "Compacter: Efficient low-rank hypercomplex adapter layers", "year": "2021" }, { "authors": "D Hendrycks; C Burns; S Basart; A Zou; M Mazeika; D Song; J Steinhardt", "journal": "", "ref_id": "b23", "title": "Measuring massive multitask language understanding", "year": "2020" }, { "authors": "A Holtzman; J Buys; L Du; M Forbes; Y Choi", "journal": "", "ref_id": "b24", "title": "The curious case of neural text degeneration", "year": "2020" }, { "authors": "O Honovich; T Scialom; O Levy; T Schick", "journal": "", "ref_id": "b25", "title": "Unnatural instructions: Tuning language models with (almost) no human labor", "year": "2022" }, { "authors": "N Houlsby; A Giurgiu; S Jastrzebski; B Morrone; Q De Laroussilhe; A Gesmundo; M Attariyan; S Gelly", "journal": "PMLR", "ref_id": "b26", "title": "Parameter-efficient transfer learning for nlp", "year": "2019" }, { "authors": "E J Hu; Y Shen; P Wallis; Z Allen-Zhu; Y Li; S Wang; L Wang; W Chen", "journal": "", "ref_id": "b27", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "S Iyer; X V Lin; R Pasunuru; T Mihaylov; D Simig; P Yu; K Shuster; T Wang; Q Liu; P S Koura", "journal": "", "ref_id": "b28", "title": "Opt-iml: Scaling language model instruction meta learning through the lens of generalization", "year": "2022" }, { "authors": "A Köksal; T Schick; A Korhonen; H Schütze", "journal": "", "ref_id": "b29", "title": "Longform: Optimizing instruction tuning for long text generation with corpus extraction", "year": "2023" }, { "authors": "A Köpf; Y Kilcher; D Von Rütte; S Anagnostidis; Z.-R Tam; K Stevens; A Barhoum; N M Duc; O Stanley; R Nagyfi", "journal": "", "ref_id": "b30", "title": "Openassistant conversations-democratizing large language model alignment", "year": "2023" }, { "authors": " Laion", "journal": "", "ref_id": "b31", "title": "Open-instruction-generalist dataset", "year": "2023" }, { "authors": "B Lester; R Al-Rfou; N Constant", "journal": "", "ref_id": "b32", "title": "The power of scale for parameter-efficient prompt tuning", "year": "2021" }, { "authors": "X L Li; P Liang", "journal": "", "ref_id": "b33", "title": "Prefix-tuning: Optimizing continuous prompts for generation", "year": "2021" }, { "authors": "P Liang; R Bommasani; T 
Lee; D Tsipras; D Soylu; M Yasunaga; Y Zhang; D Narayanan; Y Wu; A Kumar", "journal": "", "ref_id": "b34", "title": "Holistic evaluation of language models", "year": "2022" }, { "authors": "T Liao; R Taori; I D Raji; L Schmidt", "journal": "", "ref_id": "b35", "title": "Are we learning yet? a meta review of evaluation failures across machine learning", "year": "2021" }, { "authors": "H Liu; D Tam; M Muqeeth; J Mohta; T Huang; M Bansal; C A Raffel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b36", "title": "Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning", "year": "2022" }, { "authors": "Y Liu; M Ott; N Goyal; J Du; M Joshi; D Chen; O Levy; M Lewis; L Zettlemoyer; V Stoyanov", "journal": "", "ref_id": "b37", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "S Longpre; L Hou; T Vu; A Webson; H W Chung; Y Tay; D Zhou; Q V Le; B Zoph; J Wei", "journal": "", "ref_id": "b38", "title": "The flan collection: Designing data and methods for effective instruction tuning", "year": "2023" }, { "authors": "S Min; M Lewis; L Zettlemoyer; H Hajishirzi", "journal": "", "ref_id": "b39", "title": "Metaicl: Learning to learn in context", "year": "2021" }, { "authors": "A Nematzadeh; K Burns; E Grant; A Gopnik; T Griffiths", "journal": "", "ref_id": "b40", "title": "Evaluating theory of mind in question answering", "year": "2018" }, { "authors": " Openai", "journal": "", "ref_id": "b41", "title": "", "year": "2023" }, { "authors": "L Ouyang; J Wu; X Jiang; D Almeida; C Wainwright; P Mishkin; C Zhang; S Agarwal; K Slama; A Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b42", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "G Park; B Park; S J Kwon; B Kim; Y Lee; D Lee", "journal": "", "ref_id": "b43", "title": "nuqmm: Quantized matmul for efficient inference of large-scale generative language models", "year": "2022" }, { "authors": "B Peng; C Li; P He; M Galley; J Gao", "journal": "", "ref_id": "b44", "title": "Instruction tuning with gpt-4", "year": "2023" }, { "authors": "A Poliak; J Naradowsky; A Haldar; R Rudinger; B Van Durme", "journal": "", "ref_id": "b45", "title": "Hypothesis only baselines in natural language inference", "year": "2018" }, { "authors": "R Pope; S Douglas; A Chowdhery; J Devlin; J Bradbury; A Levskaya; J Heek; K Xiao; S Agrawal; J Dean", "journal": "", "ref_id": "b46", "title": "Efficiently scaling transformer inference", "year": "2022" }, { "authors": "G Qin; J Eisner", "journal": "", "ref_id": "b47", "title": "Learning how to ask: Querying lms with mixtures of soft prompts", "year": "2021" }, { "authors": "C Raffel; N Shazeer; A Roberts; K Lee; S Narang; M Matena; Y Zhou; W Li; P J Liu", "journal": "J. Mach. Learn. Res", "ref_id": "b48", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020-01" }, { "authors": "V Sanh; A Webson; C Raffel; S H Bach; L Sutawika; Z Alyafeai; A Chaffin; A Stiegler; T L Scao; A Raja", "journal": "", "ref_id": "b49", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2021" }, { "authors": "M Sap; R Lebras; D Fried; Y Choi", "journal": "", "ref_id": "b50", "title": "Neural theory-of-mind? 
on the limits of social intelligence in large lms", "year": "2022" }, { "authors": "T L Scao; A Fan; C Akiki; E Pavlick; S Ilić; D Hesslow; R Castagné; A S Luccioni; F Yvon; M Gallé", "journal": "", "ref_id": "b51", "title": "A 176b-parameter open-access multilingual language model", "year": "2022" }, { "authors": "S Shaphiro; M Wilk", "journal": "Biometrika", "ref_id": "b52", "title": "An analysis of variance test for normality", "year": "1965" }, { "authors": "Y.-L Sung; V Nair; C A Raffel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b53", "title": "Training neural networks with fixed sparse masks", "year": "2021" }, { "authors": "R Taori; I Gulrajani; T Zhang; Y Dubois; X Li; C Guestrin; P Liang; T B Hashimoto", "journal": "", "ref_id": "b54", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "R Thoppilan; D De Freitas; J Hall; N Shazeer; A Kulshreshtha; H.-T Cheng; A Jin; T Bos; L Baker; Y Du", "journal": "", "ref_id": "b55", "title": "Lamda: Language models for dialog applications", "year": "2022" }, { "authors": "H Touvron; T Lavril; G Izacard; X Martinet; M.-A Lachaux; T Lacroix; B Rozière; N Goyal; E Hambro; F Azhar", "journal": "", "ref_id": "b56", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "A Wang; A Singh; J Michael; F Hill; O Levy; S R Bowman", "journal": "", "ref_id": "b57", "title": "Glue: A multitask benchmark and analysis platform for natural language understanding", "year": "2018" }, { "authors": "Y Wang; Y Kordi; S Mishra; A Liu; N A Smith; D Khashabi; H Hajishirzi", "journal": "", "ref_id": "b58", "title": "Self-instruct: Aligning language model with self generated instructions", "year": "2022" }, { "authors": "Y Wang; S Mishra; P Alipoormolabashi; Y Kordi; A Mirzaei; A Arunkumar; A Ashok; A S Dhanasekaran; A Naik; D Stap", "journal": "", "ref_id": "b59", "title": "Super-naturalinstructions:generalization via declarative instructions on 1600+ tasks", "year": "2022" }, { "authors": "Y Wang; S Mishra; P Alipoormolabashi; Y Kordi; A Mirzaei; A Naik; A Ashok; A S Dhanasekaran; A Arunkumar; D Stap", "journal": "", "ref_id": "b60", "title": "Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks", "year": "2022" }, { "authors": "J Wei; M Bosma; V Y Zhao; K Guu; A W Yu; B Lester; N Du; A M Dai; Q V Le", "journal": "", "ref_id": "b61", "title": "Finetuned language models are zero-shot learners", "year": "2021" }, { "authors": "J Wei; X Wang; D Schuurmans; M Bosma; F Xia; E H Chi; Q V Le; D Zhou", "journal": "", "ref_id": "b62", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "T Wolf; L Debut; V Sanh; J Chaumond; C Delangue; A Moi; P Cistac; T Rault; R Louf; M Funtowicz", "journal": "", "ref_id": "b63", "title": "Huggingface's transformers: State-of-the-art natural language processing", "year": "2019" }, { "authors": "M Wortsman; T Dettmers; L Zettlemoyer; A Morcos; A Farhadi; L Schmidt", "journal": "", "ref_id": "b64", "title": "Stable and low-precision training for large-scale vision-language models", "year": "2023" }, { "authors": "G Xiao; J Lin; M Seznec; J Demouth; S Han", "journal": "", "ref_id": "b65", "title": "Smoothquant: Accurate and efficient post-training quantization for large language models", "year": "2022" }, { "authors": "T Xie; C H Wu; P Shi; R Zhong; T Scholak; M Yasunaga; C.-S Wu; M Zhong; P Yin; S I Wang", "journal": "", "ref_id": 
"b66", "title": "Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models", "year": "2022" }, { "authors": "Z Yang; P Qi; S Zhang; Y Bengio; W Cohen; R Salakhutdinov; C D Manning", "journal": "", "ref_id": "b67", "title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering", "year": "2018" }, { "authors": "Z Yao; R Y Aminabadi; M Zhang; X Wu; C Li; Y He", "journal": "", "ref_id": "b68", "title": "Zeroquant: Efficient and affordable post-training quantization for large-scale transformers", "year": "2022" }, { "authors": "E B Zaken; S Ravfogel; Y Goldberg", "journal": "", "ref_id": "b69", "title": "Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models", "year": "2021" }, { "authors": "A Zeng; X Liu; Z Du; Z Wang; H Lai; M Ding; Z Yang; Y Xu; W Zheng; X Xia", "journal": "", "ref_id": "b70", "title": "Glm-130b: An open bilingual pre-trained model", "year": "2022" }, { "authors": "S Zhang; S Roller; N Goyal; M Artetxe; M Chen; S Chen; C Dewan; M Diab; X Li; X V Lin", "journal": "", "ref_id": "b71", "title": "Opt: Open pre-trained transformer language models", "year": "2022" }, { "authors": "R Zhong; K Lee; Z Zhang; D Klein", "journal": "", "ref_id": "b72", "title": "Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 176.37, 391.49, 328.3, 22.53 ], "formula_id": "formula_0", "formula_text": "X^{\text{Int8}} = \text{round}\left(\frac{127}{\text{absmax}(X^{\text{FP32}})} X^{\text{FP32}}\right) = \text{round}(c^{\text{FP32}} \cdot X^{\text{FP32}}) \quad (1)" }, { "formula_coordinates": [ 3, 227.55, 444.41, 277.11, 23.66 ], "formula_id": "formula_1", "formula_text": "\text{dequant}(c^{\text{FP32}}, X^{\text{Int8}}) = \frac{X^{\text{Int8}}}{c^{\text{FP32}}} = X^{\text{FP32}} \quad (2)" }, { "formula_coordinates": [ 3, 259.05, 639.81, 245.61, 9.68 ], "formula_id": "formula_2", "formula_text": "Y = XW + sXL_1L_2 \quad (3)" }, { "formula_coordinates": [ 4, 214.12, 650.06, 290.55, 22.31 ], "formula_id": "formula_3", "formula_text": "q_i = \frac{1}{2}\left(Q_X\left(\frac{i}{2^k+1}\right) + Q_X\left(\frac{i+1}{2^k+1}\right)\right) \quad (4)" }, { "formula_coordinates": [ 5, 161.91, 423.47, 342.76, 12.47 ], "formula_id": "formula_4", "formula_text": "Y^{\text{BF16}} = X^{\text{BF16}}\,\text{doubleDequant}(c_1^{\text{FP32}}, c_2^{\text{k-bit}}, W^{\text{NF4}}) + X^{\text{BF16}} L_1^{\text{BF16}} L_2^{\text{BF16}} \quad (5)" }, { "formula_coordinates": [ 5, 132.84, 464.5, 371.83, 12.47 ], "formula_id": "formula_5", "formula_text": "\text{doubleDequant}(c_1^{\text{FP32}}, c_2^{\text{k-bit}}, W^{\text{k-bit}}) = \text{dequant}(\text{dequant}(c_1^{\text{FP32}}, c_2^{\text{k-bit}}), W^{\text{4bit}}) = W^{\text{BF16}} \quad (6)" }, { "formula_coordinates": [ 6, 345.96, 306.97, 74.52, 26.81 ], "formula_id": "formula_6", "formula_text": "QLoRA-All, QLoRA-FFN, QLoRA-Attention (figure axis labels)" }, { "formula_coordinates": [ 11, 143.87, 569.77, 23.14, 8.96 ], "formula_id": "formula_7", "formula_text": "User:" } ]
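The following is a small, editor-added numpy sketch of how Equations (3), (5) and (6) above fit together: a stored base weight is recovered by two levels of dequantization and combined with the LoRA path. It is purely conceptual; real QLoRA stores 4-bit NF4 codes and 8-bit quantization constants, whereas here the "quantized" tensors are kept in float so the round trip is exact.

import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 6, 2
s = 16 / r                                    # LoRA scaling s = alpha / r, with alpha = 16

X = rng.normal(size=(3, d_in)).astype(np.float32)             # a toy batch of activations
W = rng.normal(scale=0.02, size=(d_in, d_out)).astype(np.float32)
L1 = rng.normal(scale=0.02, size=(d_in, r)).astype(np.float32)
L2 = np.zeros((r, d_out), dtype=np.float32)                    # second LoRA factor starts at zero

def dequant(c, q):
    """Single-level dequantization (Eq. 2): rescale stored values by their constant."""
    return q * c

def double_dequant(c1, q_c2, q_w):
    """Eq. 6: recover the block constants from their quantized form, then the weights."""
    return dequant(dequant(c1, q_c2), q_w)

# Stand-ins for the stored quantized state (float here instead of 4-bit/8-bit codes).
c2 = np.abs(W).max()                           # absmax constant of the weight block
q_w = W / c2                                   # "quantized" weights
c1 = np.abs(c2)                                # constant of the constants (Double Quantization)
q_c2 = c2 / c1

W_hat = double_dequant(c1, q_c2, q_w)
Y = X @ W_hat + s * (X @ L1) @ L2              # Eq. 5: dequantized base weight plus LoRA path
print(np.allclose(Y, X @ W))                   # True, since L2 is zero and no rounding occurred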
2023-10-08
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b3", "b21", "b6", "b5", "b28", "b22", "b11", "b10", "b17", "b8", "b20", "b18", "b12", "b15", "b2", "b8", "b17", "b20", "b26", "b2" ], "table_ref": [], "text": "In recent years, notable progress has been made in large language models (LLMs) like GPT-3 (Brown et al., 2020), Codex (Chen et al., 2021), PaLM (Chowdhery et al., 2022), LLaMA (Touvron et al., 2023), ChatGPT (OpenAI, 2022), and the latest GPT-4 (OpenAI, 2023). These models exhibit impressive capabilities in in-context learning, code generation, and various Natural Language Processing (NLP) tasks (Feng et al., 2020;Dong et al., 2022). However, there are still limitations to address, such as the inability to handle up-to-date information (Yu and Ji, 2023), provide accurate mathematical results, or reason over long chains of logic (Trivedi et al., 2022;Komeili et al., 2022;Patel et al., 2021;Hendrycks et al., 2021;Lu et al., 2022b).\nTo overcome these concerns, researchers have explored equipping LLMs with external tools to alleviate their memory burden and enhance their expertise (Qin et al., 2023). For instance, integrating tools such as question-answering systems or web search engines enables LLMs to learn how and when to access external resources for problemsolving (Nakano et al., 2021;Schick et al., 2023). Recent studies have also incorporated additional tools for LLMs, such as GitHub resources, neural network models (e.g., Huggingface library), and code interpreters (e.g., Python interpreter), aiming to enhance their capabilities (Gupta and Kembhavi, 2022;Surís et al., 2023;Shen et al., 2023;Liang et al., 2023;Lu et al., 2023). These tools require LLMs to provide detailed plans before utilizing them to solve complex problems.\nHowever, tool-augmented LLMs still encounter challenges (Chen et al., 2022;Gupta and Kembhavi, 2022;Schick et al., 2023;Surís et al., 2023), particularly in the following aspects. (1) Limitation in scope: Current approaches focus on a limited number of tools, making it difficult to find an appropriate existing tool for new problem types.\n(2) Fragility in reasoning: Given that tasks are often complex, reasoning on the fly case-by-case can be fragile to random errors, while humans can benefit from finding robust commonalities among multiple similar questions. (3) Insufficiency in error-handling: Current tool utilization pipelines lack automatic and specific error handling, necessitating improvements in accuracy and robustness to ensure reliable execution results.\nIn this paper, we propose a novel approach to ad- dress these challenges. Rather than treating LLMs as users of tools, we empower them to be creators of tools, enabling them to solve problems with higher accuracy and flexibility. We introduce our tool creation framework, CREATOR, which leverages LLMs' ability to create and modify tools based on the problem at hand. Figure 1 illustrates the differences between CREATOR and a general tool-using framework. While the tool-using framework focuses on reasoning to select and plan API usage, our framework emphasizes diversifying tool choices, disentangling abstract and concrete reasoning, and improving robustness and accuracy. Specifically, CREATOR consists of four stages:\n• Creation: Create generally applicable tools with documentation and realization through abstract reasoning based on the problem. • Decision: With available tools, decide when and how to use them to solve the problem. 
• Execution: Execute the program, applying the chosen tools to solve the problem. • Rectification: Make modifications to tools and decisions based on the execution result. By introducing these four stages, we aim to better inspire the LLM's creativity and enhance the paradigm's robustness. This design sets CREATOR apart from traditional tool-using and addresses the three challenges we discussed respectively by (1) leveraging LLMs to create tools with higher generality, reusability, and variety, rather than relying on a limited number of given APIs; (2) offload-ing the cognitive burden of LLMs and disentangling their ability to perform abstract reasoning (creation of generalizable tools) and concrete reasoning (decision-making with details); (3) utilizing code as the medium for tool creation, which is more sensitive to errors, and enabling automatic rectification of tools and decisions based on error tracebacks.\nTo evaluate our design's effectiveness, we test CREATOR on two existing benchmarks: MATH (Hendrycks et al.) and TabMWP (Lu et al., 2022a), as well as the Creation Challenge dataset we create. The MATH dataset contains diverse and challenging math competition problems, while TabMWP includes a wide range of tabular contexts for problemsolving. Notably, ChatGPT built on CREATOR achieves remarkable average accuracies of 59.7% and 94.7% on MATH and TabMWP respectively, surpassing the standard chain-of-thought (CoT) (Wei et al., 2022), program-of-thought (PoT) (Chen et al., 2022), and tool-using baselines by significant margins.\nAs existing benchmarks do not specifically evaluate tool creation, we further introduce the Creation Challenge dataset, which consists of novel and challenging problems that are inadequately solved using existing tools or code packages. This dataset highlights the necessity and advantages of LLMs' tool creation ability. In addition, we show experimental results that provide evidence of how tool creation plays a crucial role in promoting knowledge transfer across similar queries that possess common core knowledge but differ in specific scenarios. We also present case studies highlighting the varying levels of tool creation ability observed in LLMs, allowing them to better adapt to diverse problem settings." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b0", "b1", "b3", "b21", "b26", "b14", "b21", "b13", "b22", "b11", "b19", "b17", "b27", "b18", "b12", "b2", "b7", "b15", "b24" ], "table_ref": [], "text": "Large Language Models. Large Language Models (LLMs) have gained attention for their impressive performance in handling various NLP tasks, following demonstrations and generating high-quality texts and codes (Brown et al., 2020;Chen et al., 2021;Chowdhery et al., 2022;Touvron et al., 2023). Prompting methods such as chain-ofthought (Wei et al., 2022) and instruction-following (Wang et al., 2022b;Longpre et al., 2023;Chung et al., 2022;Touvron et al., 2023;Liu et al., 2023) have been developed to guide LLMs in problemsolving and align their behavior with human expec- tations. Our work builds upon these areas, incorporating them into our framework and using them as baselines for complex problem-solving.\nTool Using by Language Models. Recent studies address constraints of LLMs, such as the limited real-time responsiveness and inaccurate calculations, by incorporating external tools (Trivedi et al., 2022;Komeili et al., 2022;Patel et al., 2021;Lu et al., 2022b). 
These studies augment LLMs with tools like scratch pads, search engines, QA systems, and calculators (Nye et al., 2021;Shuster et al., 2022;Schick et al., 2023) to improve task performance. More recent efforts integrate LLMs' tool-using abilities into a pipeline for task planning, tool calling, and result synthesis (Wu et al., 2023;Shen et al., 2023;Liang et al., 2023). In contrast, our work goes further by enabling LLMs to create tools instead of relying solely on existing tools.\nReasoning and Execution with Program. Reasoning with programs is an emerging field in NLP, whose goal is to leverage codes to do complicated computational reasoning instead of using natural language thoughts. Chen et al. (2022) show that code generation improves performance on math datasets, while Gao et al. (2022); Wang et al. (2022a) further demonstrate the potential of program reasoning on symbolic and algorithmic benchmarks. These efforts present a code-based chain-ofthought with linear logic but produce no enhanced tools capable of being reused or tested. As the concept of tool-using emerges, recent studies begin to incorporate code interpreters as external tools (Lu et al., 2023;Mialon et al., 2023;Wang et al., 2023). However, in CREATOR, we use code as the medium for tool creation rather than an external tool. Our framework also excels over PoT as we devise the tool creation stage, code rectification stage, and disentangle the logic in complex reasonings." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Design of CREATOR", "publication_ref": [], "table_ref": [], "text": "Distinct from previous frameworks for tool-using, CREATOR leverages the tool creation ability of LLMs by incorporating four special stages: creation, decision, execution, and rectification, as illustrated in Figure 2. The utilization of tool creation for problem-solving is inherently straightforward and aligns with LLMs' innate ability, as illustrated later in Section 5.2. In CREATOR, the main objective of design is to instinctively better inspire their creativity, and facilitate more effective use of it. Previous CoT and PoT methods mainly apply linear reasoning to solve target problems, and their task-solving process lacks reusability. However, the tools created in CREATOR can be transferred to solve other queries, and the rectification stage incorporated makes the reasoning process non-linear. We present a comprehensive comparison between CREATOR and other methods in Table 1. To get these demonstrations, we make a fixed set of demonstrations in advance and use them subsequently for each task. In specific, we randomly select a subset from the training set and prompt the LLM with the text instruction for tool creation for each query. We then correct the errors in these generations (if any) and remove verbose explanations before using them. Although the demonstrations are from the same task as the test queries, they are not required to be semantically similar to test queries, as the main purpose is only to inspire the LLM's creativity and regulate its output format.\nAbility of Abstract Reasoning. The core importance of the tool creation stage is to trigger LLM's ability to employ abstract thinking to alleviate the burden of reasoning during later stages. When LLMs create tools, they effectively use abstraction to address a particular problem type, necessitating a focus on the inherent characteristics of the problem rather than the specific numerical details. 
For example, in Figure 2, the LLM concentrates solely on recognizing the intrinsic nature of the problem and creates a tool for solving a three-variable equation system, disregarding all the numerical details and the specific expression being queried." }, { "figure_ref": [ "fig_4", "fig_2" ], "heading": "Decision Stage", "publication_ref": [], "table_ref": [], "text": "Implementation Details. Similar to the creation stage, we instruct LLMs with demonstrations to decide how to use tools with the same prompt text form. Each demonstration \"\n[EXAMPLE x]\" fol- lows \"### Question [QST]\\n ### Tool [TOOL]\\n ### Solution [SOL]\"\n, where [SOL] represents the LLM's decision tool calls in code format. We also derive a fixed demonstration set the same way as in the creation stage, only that the LLM is now prompted to call the given tools instead of creating them, and to print out the final answer with any important information through \"print(...)\" in codes. This [INSTRUCTION] applies both to get demonstrations and to conduct test-time inference, which ensures that the LLM's answer can be easily extracted from printed outputs in subsequent stages. A detailed prompt text example is shown in Figure 15.\nAbility of Concrete Reasoning. The decision stage necessitates the LLM's meticulous attention to rules and details for problem-solving, which we refer to as concrete reasoning. In Figure 2, the solution obtained from the tool needs to be summed for the final answer. This requires the LLM to understand the tool's outputs and relate them to the specific query to make an informed decision and derive the correct answer finally. By separating creation from the decision, CREATOR disentangles two phases of the LLM's abilities, which facilitates a smoother elicitation of different aspects of knowledge and improves task performance." }, { "figure_ref": [], "heading": "Execution Stage", "publication_ref": [], "table_ref": [], "text": "The execution stage takes the information from previous stages to execute the tool leveraging the code interpreter. We do not apply the LLM in this stage, and the created tools and the LLM's decision are concatenated into a cohesive code block for execution. The tool is encapsulated within a function in the code block, and the LLM's decision calls it for problem-solving. During execution, we capture any outputs printed (as we have instructed the LLM in the decision stage) or errors encountered (by intercepting error messages in a sub-process). These information serve as inputs for subsequent stages to determine whether an answer can be obtained or rectifications are needed." }, { "figure_ref": [ "fig_6" ], "heading": "Rectification Stage", "publication_ref": [], "table_ref": [], "text": "Implementation Details. During the rectification stage, CREATOR has two different options based on the information passed into it. If an error occurs, then the LLM is prompted with demonstrations to rectify the error. 
Applying a similar prompt format as before, the format of demonstrations \"\n[EXAMPLE x]\" now changes to \"### Question [QST]\\n ### Original [ORI]\\n ### Error [ERR]\\n ### Rectification [REC]\",\nwhere we provide the original tool implementation and calling decision in [ORI], offer the error tracebacks [ERR], and concatenate natural language reasoning on the error with the rectified code in [REC].\nA detailed illustration of the prompt text is shown in Figure 16.\nIf the execution is successful, then the answer will be extracted from the captured model's output and compared to the standard answer to measure accuracy.\nSignificance. During the rectification process, we provide the LLM with error tracebacks, which offer crucial information for it to identify the error's location and causes. Armed with this guidance, the LLM can recover from previous mistakes, adjust its reasoning process, and attempt to solve the problem once again. Subsequent experiments will demonstrate how the inclusion of rectification significantly improves the performance of CRE-ATOR. The success of the rectification stage also showcases the LLM's ability to recognize misconceptions and self-correct." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "To evaluate the effectiveness of CREATOR, we conduct experiments on two established benchmarks: MATH (Hendrycks et al.) and TabMWP (Lu et al., 2022a). Additionally, we perform experiments on a newly introduced dataset, Creation Challenge, comprising 2K diverse questions that are inadequate to solve using existing tools or code packages. This enables us to further demonstrate the necessity and advantages of the LLM's tool creation ability." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "Settings. We select ChatGPT as the base model for all methods due to its exceptional capabilities in code generation, decision-making, and logical reasoning. Refer to Appendices A.1 for more details. We evaluate CREATOR on two existing datasets: TabMWP, which includes diverse tablerelated problems, and MATH, consisting of challenging math competition problems. We apply them as they are representative in terms of diversity in data format and difficulty. We also assess the performance of our framework on Creation Challenge, comprising 2K data points, to explore the impact of tool creation hints on the LLM's performance. Refer to Appendices A.2 for more details.\nBaselines. We compare CREATOR against four types of baselines to demonstrate its effectiveness: Evaluation The components of the standard created tool in each data point of Creation Challenge can serve as valuable hints for the LLM's tool creation. Therefore, we extend our experiments on Creation Challenge to assess the LLM's tool creation ability with varying levels of hint utilization. We encourage future research to explore the dataset's potential through more flexible usage." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "We present the results on MATH, TabMWP, and Creation Challenge respectively in Tables 2 to 4. CREATOR achieves an accuracy of 59.7%, 94.7%, and 75.5% respectively on three tasks, surpassing all the best performance in baselines by large margins. To illustrate CREATOR's advantage, we present a case study showing how it's better than Tool Use in Figure 3A. 
For all tasks, disentangling the creation and decision stages generally results in better performance compared to CREATOR-Entangled. For Creation Challenge, we also observe that hints of tool creation can raise the performance by up to 18.7%. We will further analyze the reasons for this improvement in Section 4.4." }, { "figure_ref": [], "heading": "Results Analysis", "publication_ref": [], "table_ref": [], "text": "CoT Incompatible with Code. On MATH, applying CoT-style natural language reasoning before code generation hurts both the PoT method and CREATOR, and the opposite trend is observed for TabMWP. We attribute this difference to the inherent incompatibility between natural language reasoning and program-based reasoning on challenging problems. MATH problems involve intricate calculations and diverse reasoning paths, leading to conflicts between natural language and programming approaches. When CoT is used, the LLM tends to generate programs following the natural language reasoning, which compromises the coherence and unique advantages of programming. In Figure 3B, we show that adopting brute-force algorithms and straightforward calculations when CoT is not applied yields higher accuracy.\nIn contrast, TabMWP involves simpler calculations and more straightforward reasoning paths, promoting consistency between natural language and program-based reasoning." }, { "figure_ref": [], "heading": "Further Analysis", "publication_ref": [], "table_ref": [], "text": "In this section, we further show the advantages brought by the LLM's tool creation ability and use case studies to demonstrate different aspects of this ability, which enable the LLM to tackle challenges with more flexibility and less reasoning burden." }, { "figure_ref": [], "heading": "Facilitation of Knowledge Transfer", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "One of the main purposes of tool creation lies in its reusability. The content of a tool represents the abstraction of a knowledge concept, so the creation of one tool may help solve problems in various scenarios that share the same core concept. For instance, a keyword-extraction tool created for sentiment analysis can be applied to other tasks like document categorization and topic modeling, as they all require the identification and extraction of relevant keywords for problem-solving. By utilizing the knowledge and logic embedded in the tool, the LLM can transfer its understanding to solve similar problems efficiently with higher performance.\nSettings. To validate our hypothesis, we construct a small set of questions with 300 data points, detailed in Appendix B.2. We divide the data points into 100 sets, where all three queries in one set share the same core knowledge concept (a key methodology that is universally applicable) but differ in scenario (problem background and the specific details inquired). Similar to previous experiments, we use ChatGPT as the base LLM with otherwise unchanged settings. We first test all the problems under the normal CREATOR framework. Then, we test whether the correct tool created under one scenario can be applied to the other two, and again measure the LLM's performance.\nResults Analysis. The statistics are presented in Table 5. Through the application of transferred tools, the LLM's accuracy can be raised by 15.3%. Further analysis shows that 39 sets of queries are positively influenced by this transfer, which highlights that the tool creation ability of the LLM can facilitate knowledge transfer, leading to better performance on clusters of problems that share similar core concepts."
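To make the transfer setting concrete, the following is a minimal, editor-added Python sketch in the spirit of the sample set described later in Appendix B.2 (Figure 7): one created tool encodes the shared core concept (profit calculation), and the decision stage simply calls it with the numbers of each differently worded scenario. The function name and the brute-force search are illustrative and are not the exact tool produced by the LLM.

def optimal_quantity(fixed_cost, variable_cost, price, max_quantity):
    """Created tool: profit = price * q - variable_cost * q - fixed_cost; return the best q."""
    best_q, best_profit = 0, float("-inf")
    for q in range(max_quantity + 1):
        profit = price * q - variable_cost * q - fixed_cost
        if profit > best_profit:
            best_q, best_profit = q, profit
    return best_q, best_profit

# Decision stage for three scenarios that share the same core concept but differ in wording.
print(optimal_quantity(20000, 10, 30, 10000))   # pricing strategy, selling at $30 per unit
print(optimal_quantity(10000, 5, 20, 5000))     # production planning
print(optimal_quantity(50000, 15, 25, 10000))   # capacity planning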
}, { "figure_ref": [], "heading": "Different Levels of LLM's Tool Creation", "publication_ref": [], "table_ref": [], "text": "We discover in our experiments that the LLM can create tools at different levels without special guidance, which affirms that creativity is an intrinsic, emergent ability of LLMs. By inspecting the created tools, we find that they can be categorized into three levels, which provides guidance and reference for future development. At the first level, the LLM enhances an existing tool or API by encapsulating it and repurposing it to serve different needs. The first case of Figure 9 shows how the LLM wraps an existing weather query API into a new tool that calculates the average temperature." }, { "figure_ref": [], "heading": "Enhancement of Existing Tool. First, LLMs demonstrate the capability to enhance existing tools", "publication_ref": [], "table_ref": [], "text": "2. Concatenation of Multiple Tools. Second, the LLM can create new tools by organizing multiple APIs into a pipeline, enabling it to fulfill specific purposes. The second case in Figure 9 shows how the LLM calls two existing APIs three times in the new tool for problem-solving.\n3. Hierarchical Tool. Third, the LLM can create tools with a clear hierarchy, which establishes clear caller-callee relationships among tools and reduces the burden of repetitive reasoning. The third case in Figure 9 illustrates a hierarchical structure where the first tool serves as the callee, while the second tool primarily solves the problem." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose the concept of automatic tool creation through LLMs and empirically devise CREATOR, which harnesses the capabilities of LLMs as tool creators. By disentangling the LLM's abstract and concrete reasoning, CREATOR enables clearer logic and enhances overall performance. Through comprehensive evaluations on established benchmarks and Creation Challenge, we demonstrate the superiority and indispensability of CREATOR compared to existing CoT, PoT, and tool-using approaches.\nWe anticipate our study will inspire the development of more sophisticated AI systems leveraging the LLM's tool creation potential." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Our experiments are limited to two established benchmarks, MATH and TabMWP, along with our newly introduced dataset, Creation Challenge. However, it is crucial for future research to expand the application of our framework to encompass a broader array of tasks. This will enable a comprehensive assessment of the generalizability of our results, going beyond the scope of our current investigation. Furthermore, our demonstration of the LLM's potential in tool creation is limited in scope. For instance, the current LLM is capable of creating tools even to the extent of building a full project pipeline, but the executability and correctness of such creations still lack proper evaluation and remain questionable. It is incumbent upon future research to delve deeper into the boundaries of the LLM's capabilities and establish clear limits regarding its tool creation potential." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "We consider the following research issues in this paper:\n• Privacy involves safeguarding sensitive information and preventing its unauthorized disclosure.
With respect to our framework, privacy becomes a concern when certain stages require demonstration examples and clear instructions, which may inadvertently contain sensitive information, or intentionally designed to prompt the LLM to leak privacy. Thus, it is crucial to ensure that personal or sensitive information is not disclosed to the closed-source LLM, and the private information or knowledge about tool creation in the closed-source LLM should be well-protected. • Fairness in AI aims to ensure the outputs and decisions made by AI systems do not perpetuate existing biases or discriminations. When creating tools, care must be taken to mitigate biases in the demonstrations and instructions, monitor the tool's performance across stages, and address any disparities that may arise in the whole generation or rectification process.\n• Transparency involves making AI systems and their processes understandable and interpretable. When the language model creates tools under our framework, it's essential to have transparency regarding how those tools are developed. Developers should document any biases or limitations associated with the tools created, understand the strengths and weaknesses of the tools and how the decision is reached, and make informed decisions about their application." }, { "figure_ref": [], "heading": "Appendices A Details about Experiment", "publication_ref": [], "table_ref": [], "text": "A.1 Model Details.\nWe employ GPT-turbo-3.5 as the base model for all our experiments. The maximum generation length for all experiments is set to 512, and a temperature of 0.3 is chosen to encourage deterministic generations while maintaining a certain degree of diversity, particularly during the creation of tools.\nA.2 Dataset Details.\nFor both the MATH and TabMWP datasets, we evaluate questions that have numerical value answers (e.g. integers or decimals). This is due to the complicated format matching problems (e.g. matching of matrices as answers) that may cause bias. The tested questions cover approximately 80% of all and maintain high diversity, making our results still representative. We are planning to update our results on all MATH questions applying post-processing soon 2 , but some matching problems are still hard to solve.\nThe MATH dataset consists of seven math competition problem domains, namely algebra, counting and probability, geometry, intermediate algebra, number theory, pre-algebra, and pre-calculus. Each domain is evaluated separately, and the final metric is computed as the weighted average score. The TabMWP dataset includes a wide range of table information and problems of different difficulty levels, spanning from grade one to grade eight." }, { "figure_ref": [ "fig_6" ], "heading": "B Details about New Datasets B.1 Creation Challenge Details", "publication_ref": [], "table_ref": [], "text": "We begin by constructing a seed dataset that involves novel settings and unconventional reasoning processes. Subsequently, we utilize the Text-Davinci-003 model to expand the dataset in an iterative manner. By random sampling from the seed data, we encourage the variety and novelty in the problems and their reasonings.\nFigure 6 illustrates a sample query and its corresponding solution. 
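As a rough, editor-added sketch of the kind of created tool Figure 6 shows (a least-squares degree-5 polynomial fit with a prediction for day 11), assuming numpy; the exact implementation in the figure may differ in details.

import numpy as np

def polynomial5(x, y, day):
    """Fit a degree-5 polynomial to (x, y) by least squares and evaluate it at `day`."""
    coeffs = np.polyfit(x, y, 5)                       # highest-degree coefficient first
    terms = [coeffs[i] * day ** (5 - i) for i in range(6)]
    predict = sum(terms)
    return predict

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
y = np.array([5, 12, 23, 42, 75, 122, 187, 272, 379, 510])
predict = polynomial5(x, y, 11)                         # predicted new cases on the 11th day
print(predict)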
Each data entry comprises the problem statement, a standard created tool that can be utilized (including utility, input, output, and realization), a tool-calling decision, and a final answer.\n2 https://github.com/openai/prm800k/blob/main/prm800k/grading/grader.py\n[Figure 6 sample tool and decision (excerpt): terms = [...] for i in range(6); predict = sum(terms); return predict; x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]); y = np.array([5, 12, 23, 42, 75, 122, 187, 272, 379, 510]); # Fit the data and do prediction; predict = polynomial5(x, y, 11)]\nProblem: Suppose you are a scientist studying the spread of a virus. You have collected data on the number of new cases reported each day over a 10-day period. You suspect that the number of cases might be modeled by a polynomial function of degree 5. You want to fit a polynomial function to the data using the method of least squares and predict the number of new cases on the 11th day." }, { "figure_ref": [], "heading": "Solution -Sample Tool", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Solution -Sample Decision", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_8" ], "heading": "B.2 Tool Transfer Dataset Details", "publication_ref": [], "table_ref": [], "text": "We create 300 data points in total and divide them into sets of three. Each set of three queries contains three corresponding answers, one standard tool that could be applied in all three scenarios to solve the problem, and three decisions about how to use the tool respectively. Similar to the construction of Creation Challenge, we manually write the seed data, which includes five sets of queries used as examples to show the format of each data point, sample demonstration examples from these seeds, and leverage Text-Davinci-003 to create more data iteratively.\nWe present a sample set from the tool transfer dataset we curate in Figure 7. In the set, three different scenarios are provided, with each one consisting of a query, a sample decision, and an answer (not listed). Though the scenarios seem unrelated, they share the same core knowledge, which can be transferred. In this case, the core knowledge is the calculation of profit. We also provide an example tool that can be applied to all three scenarios with a corresponding introduction. Note that each set we define actually represents three data points." }, { "figure_ref": [], "heading": "C More about Experimental Findings", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C.1 CoT Incompatible with Code", "publication_ref": [], "table_ref": [], "text": "In this section, we will provide more cases to further illustrate our arguments made in Section 4.4 about the conflicts between natural language thoughts and program thoughts.\nScenario 1: Pricing Strategy -Query: A company produces a product that has a fixed cost of $20,000, a variable cost of $10 per unit, and a demand of 10,000 units. The company wants to maximize profit and is considering two pricing strategies. The first strategy is to sell the product at $30 per unit, and the second strategy is to sell the product at $35 per unit. What is the optimal pricing strategy for the company? -Sample Decision ..." }, { "figure_ref": [ "fig_9" ], "heading": "Sample Tool (Common For 3 Scenarios)", "publication_ref": [], "table_ref": [], "text": "The tool is used to calculate the optimal number of units to produce to maximize profit for a manufacturing company.
It takes into account the fixed costs, variable costs, selling price, and demand for the product. The function uses the formula Profit = (Selling Price * Quantity) -(Variable Cost * Quantity) -Fixed Cost to calculate the profit and returns the optimal quantity to produce.\n----------------------------Scenario 2: Production Planning -Query A manufacturing company produces a product that has a fixed cost of $10,000, a variable cost of $5 per unit, and a selling price of $20 per unit.\nThe company can sell up to 5,000 units of the product at this price. What is the optimal number of units to produce to maximize profit? -Sample Decision …\n----------------------------Scenario 3: Capacity Planning -Query A company produces a product that has a fixed cost of $50,000, a variable cost of $15 per unit, and a selling price of $25 per unit. The company has a production capacity of 10,000 units. What is the optimal number of units to produce to maximize profit? -Sample Decision … We provide three scenarios sharing the core knowledge and a sample tool that all three scenarios can utilize.\ntrast two additional cases sourced respectively from MATH and TabMWP in Figure 8.\nIn the case of MATH, the ambiguity of \"string manipulation\" mentioned in natural language thoughts leads the model to create the tool that finds the hundredth digit in a hard-coding manner, while pure code generation in creating tools can avoid this problem.\nConversely, for TabMWP, CoT helps tool creation by avoiding unnecessary complexities in sim-ple problem-solving. In the second case, the natural language thoughts indicate clearly that only simple multiplication should be done, while pure code generation is trapped in a complex and chaotic logic that is prone to error.\nThese two cases further validate the conflicts between natural language thoughts and program thoughts, especially for challenging problems which may possess multiple reasoning paths that differ in suitability for code and natural language." }, { "figure_ref": [], "heading": "C.2 Different Levels of Tool Creation", "publication_ref": [], "table_ref": [], "text": "We present in this section more details about the different levels of tool creation mentioned in Section 5.2. We present three cases in Figure 9.\nThe enhancement of existing tools in tool creation is presented in the first case. After the query, the LLM is given an existing API that could be called for a fixed purpose. This mimics the scenario in the real world where an API document is given to let one fulfill a particular purpose. At this level, the LLM learns how to create tools first by comprehensively understanding the existing tool and then transferring this knowledge to a new problem scenario. In this case, the LLM learns how temperature should be averaged across several days, and subsequently creates a tool that solves the problem.\nThe concatenation of multiple tools in tool creation is presented in the second case. In this case, the LLM is given several tools to solve the problem, but the usage of each tool is rather simple to follow. This level of tool creation requires the LLM to plan the use of tools in a logical way and organize them with clear logic. Instead of how to call tools to serve a different purpose, this level also illustrates the LLM's excellent ability in implementing a pipeline to solve specific queries through tool creation.\nThe hierarchy of tool creation is presented in the third case. 
This not only is the most common phenomenon that we observe in the experiment but also represents the most advanced aspect of the LLM's reasoning potential. By creating tools with a clear hierarchy, the LLM is successful in offloading more reasoning burdens and thus solving the problem with higher accuracy. In this case, is_prime represents only a \"sub-tool\", while the main tool solves the problem with more ease by calling it to help count the valid numbers.\nOverall, the presented case studies provide valuable insights into the tool creation abilities of " }, { "figure_ref": [], "heading": "D Prompting Details", "publication_ref": [], "table_ref": [], "text": "All the methods we present in our main experiments need prompting to formalize the LLM's response and better inspire its ability.\nPrompting of CREATOR. We present in Figures 14 to 16 the general prompting format and the formats of demonstrative examples, as detailed in details in 3. For the creation stage, decision stage, and rectification stage, we apply demonstration examples to enhance the LLM's abstract and concrete reasoning ability, while the execution stage intrinsically is unrelated to the LLM. We present one demonstrative example about a query in MATH, but other tasks including TabMWP and Creation Challenge also follow this prompting format. Specifically, We separate Tool Use into two parts, the first one aiming to inspire the LLM's ability to call WolframAlpha properly, and the second one aiming to prompt the LLM to retrieve the final answer. For CREATOR setting, the prompts are separated according to different stages. Note that the execution stage does not need prompting." }, { "figure_ref": [], "heading": "### Instruction", "publication_ref": [], "table_ref": [], "text": "You are given a math question. You should think step by step to solve the problem. Please put your final numerical answer after \"Final Answer:\" ### Question Point $P$ lies on the line $x= -3$ and is 10 units from the point $(5,2)$. Find the product of all possible $y$-coordinates that satisfy the given conditions. ### Thoughts and Answer Let the y-coordinate of point $P$ be denoted by $y$. Since $P$ lies on the line $x=-3$, we know that the coordinates of $P$ are $(-3,y)$. Since $P$ is 10 units from the point $(5,2)$, we can use the distance formula to write an equation: $\\sqrt{(-3-5)^2+(y-2)^2}=10$ Simplifying the equation gives us: $(y-2)^2+64=100$ $(y-2)^2=36$ $y-2=\\pm6$ $y=8$ or $y=-4$ Therefore, the possible $y$-coordinates that satisfy the given conditions are $8$ and $-4$. The product of these values is $8 \\times (-4) = -32$. " }, { "figure_ref": [], "heading": "### Instruction", "publication_ref": [], "table_ref": [], "text": "You are given a math question. You could call WolframAlpha API to help you solve the question. After seeing a question, you should first generate thoughts and think about how to call the API. Generate \"WOLFRAM:\" in the last line of your response with appropriate inputs you'd like to inquiry." }, { "figure_ref": [], "heading": "### Question", "publication_ref": [], "table_ref": [], "text": "Point $P$ lies on the line $x= -3$ and is 10 units from the point $(5,2)$. Find the product of all possible $y$-coordinates that satisfy the given conditions." }, { "figure_ref": [], "heading": "### Response", "publication_ref": [], "table_ref": [], "text": "Let the y-coordinate of point $P$ be denoted by $y$. Since $P$ lies on the line $x=-3$, we know that the coordinates of $P$ are $(-3,y)$. 
Since $P$ is 10 units from the point $(5,2)$, we can use the distance formula to write an equation: $\\sqrt{(-3-5)^2+(y-2)^2}=10$ We can call wolframalpha to solve this equation WOLFRAM: solve sqrt((-3-5)^2+(y-2)^2)=10 " }, { "figure_ref": [], "heading": "### Instruction", "publication_ref": [], "table_ref": [], "text": "You are given a math question. You have just called WalframAlpha API to help you solve the question. Please continue to generate your final numerical answer with the return from WalframAlpha API as reference. If There is an error return from the API, you could continue you thought step by step and give your final answer. Generate \"Final Answer:\" in the last line of with your final numerical answer." }, { "figure_ref": [], "heading": "### Question", "publication_ref": [], "table_ref": [], "text": "Point $P$ lies on the line $x= -3$ and is 10 units from the point $(5,2)$. Find the product of all possible $y$-coordinates that satisfy the given conditions." }, { "figure_ref": [], "heading": "### Response", "publication_ref": [], "table_ref": [], "text": "Let the y-coordinate of point $P$ be denoted by $y$. Since $P$ lies on the line $x=-3$, we know that the coordinates of $P$ are $(-3,y)$. Since $P$ is 10 units from the point $(5,2)$, we can use the distance formula to write an equation: $\\sqrt{(-3-5)^2+(y-2)^2}=10$ We can call wolframalpha to solve this equation WOLFRAM: solve sqrt((-3-5)^2+(y-2)^2)=10 ### WolframAlpha Return [{'@title': 'Input interpretation', 'subpod': {'@title': '', 'plaintext': 'solve sqrt((-3 -5)^2 + (y -2)^2) = 10'}}, {'@title': 'Results', '@primary': 'true', 'subpod': [{'@title': '', 'plaintext': 'y = -4'}, {'@title': '', 'plaintext': 'y = 8'}], ...] ### Response Continue From the WolframAlpha return, we see that there are two possible values for $y$, namely $y=-4$ and $y=8$. Therefore, the product of all possible $y$coordinates is $-4 \\cdot 8 = -32$. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Problem: What's the average temperature of ShangHai in the next seven days? Input: You have access to the following tool ForecastWeather(city:str, N:int) → dict The tool will return the \"UV index\"(float) and \"temperature\" (float) in the given city after N days. LLMs. However, it is important to acknowledge that these studies offer only a glimpse into the vast potential of LLMs in this domain. We encourage future research to explore and harness the full ex-tent of LLMs' tool creation capabilities, further pushing the boundaries of what can be achieved." }, { "figure_ref": [], "heading": "### Instruction", "publication_ref": [], "table_ref": [], "text": "You are asked to design python tools to help solve a question. You can use math, scipy, numpy, sympy,... or other packages if necessary. You should specify the parameters and returns of your tool and your tool's utility in the documentation. You could create more than one tool if you think they may all help solve the problem." }, { "figure_ref": [], "heading": "### Question", "publication_ref": [], "table_ref": [], "text": "Point $P$ lies on the line $x= -3$ and is 10 units from the point $(5,2)$. Find the product of all possible $y$-coordinates that satisfy the given conditions. " } ]
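The last creation-stage prompt above asks the model to design Python tools for the point-$P$ question; the model's actual response is not reproduced in this appendix, but a minimal hand-written sketch of the kind of tool such a prompt targets (assuming sympy; the function name is ours) could be:

from sympy import symbols, Eq, sqrt, solve

def product_of_possible_y(x_line, point, distance):
    # P = (x_line, y); set the distance from P to the given point equal to the
    # required distance, solve for y, and return the product of all solutions.
    y = symbols('y', real=True)
    px, py = point
    equation = Eq(sqrt((x_line - px) ** 2 + (y - py) ** 2), distance)
    solutions = solve(equation, y)
    product = 1
    for s in solutions:
        product *= s
    return product

answer = product_of_possible_y(-3, (5, 2), 10)  # gives -32, matching the worked answer above

In the CREATOR pipeline, the decision stage would then call such a tool and print the result, and the execution stage would run the program to obtain the final answer.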
Large Language Models (LLMs) have made significant progress in utilizing tools, but their ability is limited by API availability and the instability of implicit reasoning, particularly when both planning and execution are involved. To overcome these limitations, we propose CREATOR, a novel framework that enables LLMs to create their own tools using documentation and code realization. CREATOR disentangles abstract tool creation and concrete decision execution, resulting in improved performance. We evaluate CREATOR on MATH and TabMWP benchmarks, respectively consisting of challenging math competition problems and diverse tabular contents. Remarkably, CREATOR outperforms existing chain-of-thought, program-of-thought, and tool-using baselines. Additionally, we introduce the Creation Challenge dataset, featuring 2K diverse questions, to emphasize the necessity and benefits of LLMs' tool creation ability. Further research demonstrates that leveraging LLMs as tool creators facilitates knowledge transfer, and LLMs exhibit varying levels of tool creation abilities, enabling them to adapt to diverse situations. The tool creation ability revolutionizes the LLM's problem-solving paradigm, driving us closer to the next frontier of artificial intelligence. All the codes and data are released 1 .
CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models
[ { "figure_caption": "Figure 1 :1Figure 1: The difference between CREATOR and a general tool-using framework.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "3. 11Creation Stage Implementation Details. In the creation stage of CREATOR, we explicitly instruct LLMs with demonstrative examples to create tools and documentation to solve the problem. The general prompt text form is \"###Instruction [INSTRUCTION]\\n [EXAMPLE 1]\\n [EXAMPLE 2] ...\". Here the instruction text \"[INSTRUCTION]\" describes the goal and format of the output. Each demonstration \"[EXAMPLE x]\" follows format \"### Question [QST]\\n ### Tool [TOOL]\". Each [TOOL] contains documentation text as code comments. A detailed example of prompt text is shown in Figure 14.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Overview of our CREATOR framework with four stages: Creation, Decision, Execution, and Rectification. With an LLM like ChatGPT, we successfully leverage its tool creation ability with code as the medium. In each stage we apply instructions and demonstrations in prompts, shown in Figures 14 to 16 in Appendices.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Problem:Figure 3 :Figure 4 :34Figure 3: In subfigure A, we show an example in which Tool Use reasoning (left) fails, while CREATOR (right) solves successfully as it derives a new tool for the novel question. In subfigure B, we present a case comparing the answer given by CREATOR with and without CoT. Challenging problems in MATH cause conflicts between language and program reasoning.", "figure_data": "", "figure_id": "fig_3", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The improvement brought by rectification on both PoT and CREATOR. Rectify-N denotes enabling N rounds of rectifications.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "x, y, predict_day): coefficients = np.polyfit(x, y, 5) terms = [coefficients[i] * predict_day ** (5 -i)", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: An example query and its solution provided in the Creation Challenge dataset.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "def calculate_optimal_units(selling_price, variable_cost, fixed_cost, demand): # Calculate the profit for each quantity profits = [] for quantity in range(1, demand+1): profit = (selling_price * quantity) -(variable_cost * quantity) -fixed_cost profits.append(profit) # Find the quantity that maximizes profit optimal_quantity = profits.index(max(profits)) + 1 # Return the optimal quantity return optimal_quantity fixed_cost = 20000 variable_cost = 10 demand = 10000 # Strategy 1: Selling the product at $30 per unit selling_price_1 = 30 optimal_quantity_1 = calculate_optimal_units(selling_price_1, variable_cost, fixed_cost, demand) profit_1 = (selling_price_1 * optimal_quantity_1) -(variable_cost * optimal_quantity_1) -fixed_cost # Strategy 2: Selling the product at $35 per unit selling_price_2 = 35 ... 
# Determine the optimal pricing strategy if profit_1 > profit_2:", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: An example data point in tool transfer dataset.We provide three scenarios sharing the core knowledge and a sample tool that all three scenarios can utilize.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Problem:Figure 8 :8Figure 8: We present two more cases to illustrate the conflicts between program thoughts and natural language thoughts.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Prompting of Baselines. Besides CREATOR, we also apply demonstrative examples in prompting the ChatGPT's CoT, PoT, Tool Use abilities respectively, presented in Figures 10 to 13. Similar to the prompting of CREATOR, these prompt formats apply to all tasks in the main experiments, including evaluation on MATH, TabMWP, and Creation Challenge.", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "FinalFinal Answer\" in the last line)", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: The instruction and one of the demonstration examples we use when prompting ChatGPT in the CoT setting.", "figure_data": "", "figure_id": "fig_12", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 12: The instruction and one of the demonstration examples we use when prompting ChatGPT in the Tool Use setting. This figure shows the first part about Wol-framAlpha inquiry.", "figure_data": "", "figure_id": "fig_13", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "FinalFinal Answer\" in the last line)", "figure_data": "", "figure_id": "fig_14", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: The instruction and one of the demonstration examples we use when prompting ChatGPT in the Tool Use setting. This figure shows the second part about answer retrieving.", "figure_data": "", "figure_id": "fig_15", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "• Vanilla LLM w/ and w/o CoT: The Vanilla LLM with CoT employs linear reasoning to solve problems, while Vanilla LLM without CoT directly generates the answer. • PoT: The LLM utilizes a program to reason through the problem step by step. Besides, we also incorporate rectification into PoT as a stronger baseline for a fair comparison.", "figure_data": "4.2 Creation ChallengeExisting benchmarks are not originally designed toevaluate tool creation, thus unable to fully show-case the necessity and advantages brought by theLLM's tool creation ability. Therefore, we intro-duce Creation Challenge to test the LLM's problem-solving skills under new scenarios, without existingtools or code packages that can be directly applied.Refer to Appendices B.1 for details about the dataformat and construction process.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The accuracy (%) on the test set of MATH dataset leveraging ChatGPT. Rec. represents Rectification.", "figure_data": "MethodSetting AlgebraCounting & ProbabilityGeometryItmd. 
AlgebraNumber TheoryPre-AlgebraPre-CalculusAverage (weighted)Vanillaw/o CoT w/ CoT25.7 50.925.8 36.122.4 24.513.9 17.518.5 23.240.9 58.621.8 16.725.3 37.9PoT (w/o Rec.) w/o CoT w/ CoT58.2 54.048.5 47.835.4 32.525.8 22.353.1 48.966.8 64.525.0 19.949.8 46.5PoT (w/ Rec.) w/o CoT w/ CoT63.8 61.451.9 48.835.9 34.628.6 23.759.2 54.570.0 67.628.2 34.653.9 51.2Tool Usew/o CoT w/ CoT47.3 55.335.1 37.827.0 28.720.5 20.530.8 34.856.8 61.831.4 26.939.0 43.0CREATOR -Entangledw/o Demo. w/o CoT w/ CoT58.0 64.1 62.753.3 55.7 50.934.2 35.9 33.821.8 42.7 31.455.7 61.6 61.463.4 69.0 68.733.3 37.2 31.449.6 57.2 54.0CREATOR (ours)w/o Demo. w/o CoT w/ CoT66.6 71.5 63.153.6 55.3 58.133.8 41.4 34.629.4 41.9 35.059.8 60.4 61.868.7 71.7 69.734.6 35.3 32.154.9 59.7 55.7MethodSetting AccuracySuccessful ExecutionStandardw/o CoT w/ CoT68.2 75.299.1 99.3PoT (w/o Rec.)w/o CoT w/ CoT80.6 80.098.5 91.2PoT (w/ Rec.)w/o CoT w/ CoT81.2 87.399.7 100Tool Usew/o CoT w/ CoT77.6 79.6100 100CREATOR-Entangled w/o CoT w/ CoT91.6 93.5100 99.9CREATOR (ours)w/o CoT w/ CoT90.5 94.799.7 100", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The accuracy (%) on the Creation Challenge test set leveraging ChatGPT. No hint represents normal CREATOR framework. Utility hint provides hints about the utility of the tool, while all hint offers additional hints about the possible inputs and outputs of the tool.", "figure_data": "MethodSetting AccuracySuccessful ExecutionStandardw/o CoT w/ CoT27.9 32.794.9 99.1PoT (w/o Rec.)w/o CoT w/ CoT59.2 60.793.5 95.7PoT (w/ Rec.)w/o CoT w/ CoT61.1 62.098.3 98.9CREATOR-Entangled (w/o CoT)no hint utility hint all hint64.5 65.8 75.399.2 99.3 99.5CREATOR (ours) (w/o CoT)no hint utility hint all hint63.8 67.2 75.798.7 99.1 99.5", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results of tool transfer experiment. Tool transfer improves accuracy by up to 15.3%. and programming reasoning. Therefore, the application of CoT enhances performance in these cases. We present more case studies to illustrate it in Appendices C.1.CREATOR is Robust to Challenges. Figure4illustrates the performance of the LLM in relation to difficulty. CREATOR outperforms all the baselines for both tasks and achieves higher accuracy, particularly for difficult problems. This provides compelling evidence that CREATOR exhibits greater resilience to challenges.", "figure_data": "Rectification Raises Performance. Figure 5demonstrates the improvement in the LLM's per-formance achieved through the application of therectification stage. Results show rectification canincrease the accuracy by approximately 10% ofthe original value, which proves the necessity andrationality of establishing this stage.Influential Factors of Tool Creation. Tables 2to 4 highlight two crucial factors affecting theLLM's performance. (1) Separation of Creationand Decision: The separation of these two stagesinherently represents the disentanglement of theLLM's abstract and concrete reasoning, whichleads to improved performance. (2) Availabilityof Hints: In practical scenarios, guidance is oftennecessary to harness the LLM's behavior when cre-ating tools. 
We demonstrate that providing moredetailed hints can significantly improve the LLM'sperformance, as they enable easier implementationof desired tools and eliminate uncertainty and mis-directions in CoT or tool documentation.", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" } ]
Cheng Qian; Chi Han; Yi Ren Fung; Yujia Qin; Zhiyuan Liu; Heng Ji
[ { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Mark Chen; Jerry Tworek; Heewoo Jun; Qiming Yuan; Henrique Ponde De Oliveira Pinto; Jared Kaplan; Harri Edwards; Yuri Burda; Nicholas Joseph; Greg Brockman", "journal": "", "ref_id": "b1", "title": "Evaluating large language models trained on code", "year": "2021" }, { "authors": "Wenhu Chen; Xueguang Ma; Xinyi Wang; William W Cohen", "journal": "", "ref_id": "b2", "title": "Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks", "year": "2022" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b3", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma", "journal": "", "ref_id": "b4", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Qingxiu Dong; Lei Li; Damai Dai; Ce Zheng; Zhiyong Wu; Baobao Chang; Xu Sun; Jingjing Xu; Zhifang Sui", "journal": "", "ref_id": "b5", "title": "A survey for in-context learning", "year": "2022" }, { "authors": "Zhangyin Feng; Daya Guo; Duyu Tang; Nan Duan; Xiaocheng Feng; Ming Gong; Linjun Shou; Bing Qin; Ting Liu; Daxin Jiang", "journal": "", "ref_id": "b6", "title": "Codebert: A pre-trained model for programming and natural languages", "year": "2020" }, { "authors": "Luyu Gao; Aman Madaan; Shuyan Zhou; Uri Alon; Pengfei Liu; Yiming Yang; Jamie Callan; Graham Neubig", "journal": "", "ref_id": "b7", "title": "Pal: Program-aided language models", "year": "2022" }, { "authors": "Tanmay Gupta; Aniruddha Kembhavi", "journal": "", "ref_id": "b8", "title": "Visual programming: Compositional visual reasoning without training", "year": "2022" }, { "authors": "Dan Hendrycks; Collin Burns; Saurav Kadavath; Akul Arora; Steven Basart; Eric Tang; Dawn Song; Jacob Steinhardt", "journal": "Sort", "ref_id": "b9", "title": "Measuring mathematical problem solving with the math dataset", "year": "" }, { "authors": "Dan Hendrycks; Collin Burns; Saurav Kadavath; Akul Arora; Steven Basart; Eric Tang; Dawn Song; Jacob Steinhardt", "journal": "", "ref_id": "b10", "title": "Measuring mathematical problem solving with the math dataset", "year": "2021" }, { "authors": "Mojtaba Komeili; Kurt Shuster; Jason Weston", "journal": "", "ref_id": "b11", "title": "Internet-augmented dialogue generation", "year": "2022" }, { "authors": "Yaobo Liang; Chenfei Wu; Ting Song; Wenshan Wu; Yan Xia; Yu Liu; Yang Ou; Shuai Lu; Lei Ji; Shaoguang Mao", "journal": "", "ref_id": "b12", "title": "Taskmatrix. 
ai: Completing tasks by connecting foundation models with millions of apis", "year": "2023" }, { "authors": "Haotian Liu; Chunyuan Li; Qingyang Wu; Yong Jae Lee", "journal": "", "ref_id": "b13", "title": "Visual instruction tuning", "year": "2023" }, { "authors": "Shayne Longpre; Le Hou; Tu Vu; Albert Webson; Hyung Won Chung; Yi Tay; Denny Zhou; V Quoc; Barret Le; Jason Zoph; Wei", "journal": "", "ref_id": "b14", "title": "The flan collection: Designing data and methods for effective instruction tuning", "year": "2023" }, { "authors": "Pan Lu; Baolin Peng; Hao Cheng; Michel Galley; Kai-Wei Chang; Ying Nian Wu; Song-Chun Zhu; Jianfeng Gao", "journal": "", "ref_id": "b15", "title": "Chameleon: Plug-and-play compositional reasoning with large language models", "year": "2023" }, { "authors": "Pan Lu; Liang Qiu; Kai-Wei Chang; Ying Nian Wu; Song-Chun Zhu; Tanmay Rajpurohit; Peter Clark; Ashwin Kalyan", "journal": "", "ref_id": "b16", "title": "Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning", "year": "2022" }, { "authors": "Timo Schick; Jane Dwivedi-Yu; Roberto Dessì; Roberta Raileanu; Maria Lomeli; Luke Zettlemoyer; Nicola Cancedda; Thomas Scialom", "journal": "", "ref_id": "b17", "title": "Toolformer: Language models can teach themselves to use tools", "year": "2023" }, { "authors": "Yongliang Shen; Kaitao Song; Xu Tan; Dongsheng Li; Weiming Lu; Yueting Zhuang", "journal": "", "ref_id": "b18", "title": "Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface", "year": "2023" }, { "authors": "Kurt Shuster; Jing Xu; Mojtaba Komeili; Da Ju; Eric Michael Smith; Stephen Roller; Megan Ung; Moya Chen; Kushal Arora; Joshua Lane", "journal": "", "ref_id": "b19", "title": "Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage", "year": "2022" }, { "authors": "Dídac Surís; Sachit Menon; Carl Vondrick", "journal": "", "ref_id": "b20", "title": "Vipergpt: Visual inference via python execution for reasoning", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b21", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Harsh Trivedi; Niranjan Balasubramanian; Tushar Khot; Ashish Sabharwal", "journal": "", "ref_id": "b22", "title": "Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions", "year": "2022" }, { "authors": "Xingyao Wang; Sha Li; Heng Ji", "journal": "", "ref_id": "b23", "title": "a. 
Code4structure: Code generation for few-shot structure prediction from natural language", "year": "2022" }, { "authors": "Xingyao Wang; Hao Peng; Reyhaneh Jabbarvand; Heng Ji", "journal": "", "ref_id": "b24", "title": "Learning to generate from textual interactions", "year": "2023" }, { "authors": "Yizhong Wang; Swaroop Mishra; Pegah Alipoormolabashi; Yeganeh Kordi; Amirreza Mirzaei; Atharva Naik; Arjun Ashok; Arut Selvan Dhanasekaran; Anjana Arunkumar; David Stap", "journal": "", "ref_id": "b25", "title": "Supernaturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Fei Xia; Ed H Chi; V Quoc; Denny Le; Zhou", "journal": "", "ref_id": "b26", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Chenfei Wu; Shengming Yin; Weizhen Qi; Xiaodong Wang; Zecheng Tang; Nan Duan", "journal": "", "ref_id": "b27", "title": "Visual chatgpt: Talking, drawing and editing with visual foundation models", "year": "2023" }, { "authors": "Pengfei Yu; Heng Ji", "journal": "", "ref_id": "b28", "title": "Self information update for large language models through mitigating exposure bias", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 70.87, 574.23, 220.08, 36.1 ], "formula_id": "formula_0", "formula_text": "[EXAMPLE x]\" fol- lows \"### Question [QST]\\n ### Tool [TOOL]\\n ### Solution [SOL]\"" }, { "formula_coordinates": [ 5, 69.42, 174.69, 221.08, 36.1 ], "formula_id": "formula_1", "formula_text": "[EXAMPLE x]\" now changes to \"### Question [QST]\\n ### Original [ORI]\\n ### Error [ERR]\\n ### Rectification [REC]\"," } ]
10.48550/arXiv.1607.06450
2023-05-23
[ { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b6", "b37", "b25", "b32", "b27", "b55", "b56", "b4", "b34", "b3", "b21", "b17", "b11", "b22", "b37", "b13", "b46" ], "table_ref": [], "text": "Recent advances in multimodal representation learning have demonstrated the benefits of simultaneously modeling language and another modality, which allows for more efficient training and improved performance of both sets of learned representations on downstream tasks. These benefits have been particularly clear in text/vision or text/audio applications, which often realize large improvements in predictive performance or generative modeling ability (Elizalde et al., 2019;Radford et al., 2021;Li et al., 2021;Mu et al., 2022). In this work, however, we consider another modality which frequently co-occurs with text: network-or graph-structured data.\nWe consider in particular the scenario in which there is a graph over entities which generate or contain texts, and in which each text is associated with a node in the graph. Such graphs co-occur frequently with real text data, and we refer to them as \"supervening graphs\" over their text corpora. Examples include tweets and the follow graph over the posting Twitter users, web page content and the link graph over the pages, the text of academic articles and the citation network over articles or authors, and others. In this setting, graph information can be used to improve performance on language tasks, and text features may be leveraged for graph tasks, such as link prediction or node classification.\nPrior work has approached the problem of taking joint advantage of these two modalities in several ways. Some work has used textual data to inform or supervise training of graph neural networks (Zhang and Zhang, 2020;Zhang et al., 2017), including seq2seq approaches for node representations (Liu et al., 2018), the use of text features in DeepWalk-style algorithms (Yang et al., 2015), and kernel methods to incorporate text into node representations (Zhang et al., 2017). None of these approaches, however, also produce graph-informed text representations. This is more parameter-efficient for graph-only tasks, but the common practical case of also needing to solve text-based tasks then requires an additional modeling approach be used to leverage graph data.\nOther work has considered the converse case of employing a supervening graph structure to fine-tune a language model. LinkBERT (Yasunaga et al., 2022) constructs fine-tuning examples based on inter-document links, while SPECTER (Cohan et al., 2020) and SciNCL (Ostendorff et al., 2022) use contrastive objectives to update a pretrained language model. Although these approaches allow for the extraction of individual document representations in the form of sentence embeddings, they have a parallel limitation to graph-focused models in that they do not learn node representations.\nWhile there have been some attempts to jointly represent nodes and texts, they all have certain limitations. For example, Chandra et al. (2020) and Li and Goldwasser (2019) jointly train text and graph encoders, but require a supervised objective and labeled data. Karpov and Kartashev (2022) allow a language model to attend over graph embeddings, but the training process does not further train or update the graph embeddings. Gourru et al. (2020), similarly, learn to embed documents into a \"pretrained semantic space\" but the semantic embedding model is not updated in light of graph information. 
Li et al. (2017), while conceptually similar to our work, carefully curate their input datasets and rely on particular structure and node attributes found in social network discourse. None of the prior joint representation approaches we are aware of are suitable for general, self-supervised, joint pretraining without dependence on labeled data or task structure.\nIn this work, we propose ConGraT (Contrastive Graph-Text pretraining), a general approach to self-supervised joint graph-text learning, based on a batch-wise contrastive learning objective inspired by InfoNCE (Oord et al., 2019) and CLIP (Radford et al., 2021). The idea is to have separate encoders for the language and graph modalities that are trained to align their representations within a common latent space, as shown in Figure 1. Because graphs are more structured objects than images, we are able to modify the InfoNCE objective to incorporate information about plausible \"next guesses\" based on graph similarity, and observe improvements in performance as a result. This approach provides flexibility in the choice of text and graph encoders (parameter count, architecture, pretraining, etc.), and is inductive (Hamilton et al., 2017), with the encoders able to generalize to previously unseen graphs as well as previously unseen texts. The overall architecture is shown in Figure 2.\nExperiments on various datasets show that Con-GraT models consistently outperform strong baselines on a number of tasks. We see statistically significant improvements on node category classification in 25 out of 36 experiments, with neither our model nor any baseline performing significantly better in a further 8. Furthermore, when applied zero-shot to link prediction, ConGraT models achieve better performance on two of three datasets compared to a graph model specifically trained to do the task. We also examine whether pretraining hurts downstream language modeling performance, finding at worst a slight reduction and often no statistically significant change.\nThe contributions of this work are threefold. First, we propose a general contrastive pretraining method for text corpora accompanied by a supervening graph, such as a follow, link, or citation graph. The pretraining produces separate text and node encoders targeting a common latent space, and, importantly, is both inductive and selfsupervised. Second, we demonstrate that our joint pretraining method improves performance on various downstream tasks over strong unimodal and cross-modal baselines. Finally, we release our code and datasets, including in particular a version of the Pubmed (Sen et al., 2008) graph learning benchmark fully rebuilt from ground-truth Pubmed APIs, which includes the text of titles and abstracts as well as network data. We hope these datasets will be useful to multiple research communities working on joint text-graph problems." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b46", "b10", "b55", "b27", "b52", "b54", "b4", "b56", "b34", "b25", "b12", "b3", "b21", "b11", "b17", "b19", "b23", "b22" ], "table_ref": [], "text": "Text-augmented GNN training. Initial node representations in inductive graph models are often created from the text associated with each node, as in the bag-of-words vectors employed in the popular Pubmed (Sen et al., 2008), Cora (McCallum et al., 2000), and Citeseer (Giles et al., 1998) benchmarks. Text data is also incorporated into node representations in other ways: for example, Yang et al. 
(2015) extend the DeepWalk algorithm to incorporate text features into node representations; Liu et al. (2018) learn node embeddings in a seq2seq framework with inputs based on texts associated with the nodes; Tu et al. (2017) use a selective attention mechanism to generate text-informed node embeddings for particular social contexts. These models do not also learn graph-informed text representations. Zhang et al. (2017) leverage kernel methods to construct node representations from user profile information in a way that incorporates network structure. Other approaches include extracting graphs from entity co-occurrence in texts and modeling them (Zhang and Zhang, 2020;Waller and Anderson, 2021).\nGraph-augmented LM training. Several recent works have used hyperlinks or citation graphs to inform language model (LM) training. SPECTER (Cohan et al., 2020) contrastively fine-tunes a language model to produce document embeddings: the positive and negative examples for a triplet loss are selected according to citation graph edges. LinkBERT (Yasunaga et al., 2022) uses citation or link graphs to assemble training samples for a masked language model, pairing anchor texts with texts from the same contiguous document, linked documents, or random documents. In addition to the standard masked language modeling training objective, it uses an auxiliary document relation prediction (DRP) objective, which classifies the relation of the two text segments in the input. SciNCL (Ostendorff et al., 2022) relaxes a discrete citation graph into a continuous domain with nearest-neighbor sampling. Unlike in the jointembedding case, these models represent documents or authors not directly, but as pooled LM outputs.\nJoint graph-text representations. Prefix tuning (Li and Liang, 2021) is a lightweight way of learning node-specific linguistic information and generates dense node representations in the process. However, it takes no advantage of the graph structure over the nodes. For fixed text and graph encoders, it is possible to learn mappings from their separate embedding spaces to a common one, such as by canonical correlation analysis (Gupta and Varma, 2017). Chandra et al. (2020) jointly train both text and graph encoders, but use an externally supervised objective such as fake news detection. Li and Goldwasser (2019) consider the task of predicting news stories' political perspectives, employing both text and social graph encoders trained with a story-wise, non-contrastive embedding alignment term added to a supervised loss. Gourru et al. (2020) learn to embed documents connected by links into a pretrained semantic space, taking care to represent uncertainty in the embeddings, but the setting is less general than the one we consider here (only one text document is associated with each graph node), and the pretrained semantic space is frozen.\nOther works have attempted to condition a language model or augment its input with dense representations of graph entities. SocialBERT (Karpov and Kartashev, 2022) and LMSOC (Kulkarni et al., 2021) inject social information into language models during training via fixed graph node representations that the model can attend over. Li et al. (2022) use knowledge-graph representations of Twitter user attributes to inform language modeling on tweets. Li et al. (2017) learn joint embeddings of social network posts and users for general predictive tasks, with users represented initially by attribute vectors. 
This work, while similar to ours in spirit, has several differences from our approach. Training involves (positive, negative) pairs of examples rather than contrastive learning at the minibatch level, as we use. More notably, training relies on structure particular to certain kinds of online interactions and on access to rich user attributes, making it inapplicable as a general baseline." }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "We consider a directed or undirected graph over a set of nodes, where each node is associated with a set of one or more texts. The goal is to learn a common latent space that allows us to compare the embeddings of nodes and texts by placing them in meaningful locations within that space. For example, the nodes and texts may be users in a social network and their posts, websites and their pages, authors and their articles, etc.
Formally, let $G = (V, E)$ be a graph, with $V$ the set of nodes and $E \subseteq V \times V$ the set of edges. Also, let $T(v) = \{t_1^{(v)}, \ldots, t_{|T(v)|}^{(v)}\}$ be the set of texts associated with node $v \in V$, each text a finite sequence of tokens drawn from a vocabulary $W$. The first and last tokens are always special start and end tokens.
Our training framework involves a text encoder, a function $F_T : \cup_{i=1}^{\infty} \otimes^i W \to \mathbb{R}^d$ from the set of all token sequences to a $d$-dimensional Euclidean embedding space. Similarly, we have a node encoder, a function $F_G : V \to \mathbb{R}^d$ to an embedding space of the same dimension. We aim to train the two encoders such that they directly learn a joint latent space between the text and graph node embeddings. This will allow us to use geometric properties of a common space to relate nodes and texts to each other for predictive or inferential purposes." }, { "figure_ref": [], "heading": "Approach", "publication_ref": [ "b37", "b15", "b1", "b50", "b16", "b5" ], "table_ref": [], "text": "Our approach involves two parallel blocks (separate text and node encoders) connected at the output layers by a loss function, specifically a batch-wise contrastive objective inspired by InfoNCE (Oord et al., 2019) and CLIP (Radford et al., 2021). The training process induces the encoders, which do not in general target the same embedding space, to align their representations. This approach is flexible and allows for the use of many kinds of both text and node encoders. As in CLIP, each encoder is placed behind an adapter module which generates embeddings of the same dimension, since we would like to embed nodes and texts in the same space. Our adapters consist of two fully connected layers, with a GeLU (Hendrycks and Gimpel, 2020) in between. The adapter is followed by layer normalization (Ba et al., 2016) and dropout (Srivastava et al., 2014).
Training objective. We modify the standard InfoNCE loss with additional graph-specific elements. Unlike the vision-language case, in a graph setting, there are easily computable measures of how similar pairs of nodes are, such as their SimRank (Jeh and Widom, 2002) or the number of mutual in- and out-edges. We exploit these measures to incorporate information about the most likely second, third, and further choices for the nodes a text came from and the texts a node produced.
More formally, in terms of the notation in Section 3, let $X = \cup_{v \in V} \{(v, t_i^{(v)})\}_{i=1}^{|T(v)|}$ be a dataset of (node, text) pairs, and let $B = \{(v_i, t_i)\}_{i=1}^{N_B} \subseteq X$ be a minibatch of size $N_B$ sampled from $X$. We drop the $(v)$ superscript for simplicity.
Using the text and node encoder notation from Section 3, $F_T(t_i)$ is the text encoder output vector for text $i$, and $F_G(v_i)$ is node $v_i$'s graph encoder output. Then the matrix $C$ given by
$$C_{ij} = \frac{F_T(t_i) \cdot F_G(v_j)}{\lVert F_T(t_i) \rVert \, \lVert F_G(v_j) \rVert} \; e^{\tau} \qquad (1)$$
is the $N_B \times N_B$ matrix of cosine similarities between texts and nodes. (Note that, though square, this matrix is not symmetric: rows are texts and columns are nodes.) We multiply each element by $e^{\tau}$, where $\tau$ is a log-temperature parameter that allows a degree of learnable control over the learning rate, reducing the model's sensitivity to the choice of LR. Further, let $S_T(\cdot, \cdot)$ and $S_G(\cdot, \cdot)$ be non-negative graph-based similarity functions for texts and nodes, respectively. Then we define the graph-based similarity distributions for texts and nodes as follows:
$$s_T^{(i)}(j) = \frac{S_T(t_i, t_j)}{\sum_{k=1}^{N_B} S_T(t_i, t_k)} \quad \forall i \qquad (2)$$
and analogously for $s_G^{(i)}$, replacing $S_T$ with $S_G$ and $t_i$ with $v_i$. The target distributions are mixtures of these distributions and indicator variables for the true source node of a text and matching text of a node. For each example $X_i = (v_i, t_i)$ in the minibatch, fixing some hyperparameter $\alpha \in [0, 1]$, we define the target distributions as follows:
$$\mathcal{D}_T^{(i)}(\alpha) = (1 - \alpha)\,\mathbf{1}_j\{v_j = v_i\} + \alpha\, \vec{s}_T(t_i), \qquad (3)$$
$$\mathcal{D}_G^{(i)}(\alpha) = (1 - \alpha)\,\mathbf{1}_j\{t_j = t_i\} + \alpha\, \vec{s}_G(v_i). \qquad (4)$$
Then our loss function is:
$$\mathcal{L}(B; \alpha) = \frac{1}{2 N_B} \sum_{i=1}^{N_B} \Big[ H\big(C_{i,:}, \mathcal{D}_T^{(i)}(\alpha)\big) + H\big(C_{:,i}, \mathcal{D}_G^{(i)}(\alpha)\big) \Big] \qquad (5)$$
where $H$ indicates the cross-entropy and $C_{i,:}$, $C_{:,i}$ are the $i$-th row and $i$-th column of $C$.
With $\alpha = 0$, this loss is equivalent to the average of cross-entropies for predictions of which node in the minibatch goes with which text and which text goes with which node. With higher values of $\alpha$, the target distributions are mixtures of: a) indicators for the true source node and text, and b) the distribution of other nodes and texts by graph similarity. If similar nodes produce similar texts, as happens in many real social networks (De Choudhury et al., 2010), positive values of $\alpha$ should allow the model to learn more information and do so more efficiently.
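For concreteness, the objective in Eqs. (1)-(5) can be written in a few lines of PyTorch. The sketch below is ours rather than the released implementation; it assumes each node appears at most once per minibatch, and uses the mutual-neighbor similarity described immediately below as the graph-based similarity, with text similarity proxied by the similarity of the texts' source nodes.

import torch
import torch.nn.functional as F

def mutual_neighbor_similarity(adj):
    # Cosine similarity between rows of A @ A^T, i.e., between nodes'
    # shared-neighbor count vectors. adj: (N, N) 0/1 adjacency matrix.
    common = adj.float() @ adj.float().T
    return F.cosine_similarity(common.unsqueeze(1), common.unsqueeze(0), dim=-1)

def congrat_loss(text_emb, node_emb, node_sim, log_tau, alpha=0.1):
    # text_emb, node_emb: (N_B, d) adapter outputs for the minibatch's texts and nodes.
    # node_sim: (N_B, N_B) non-negative graph-based similarities among the batch's nodes.
    # log_tau: learnable log-temperature tensor; alpha: mixture weight from Eqs. (3)-(4).
    text_emb = F.normalize(text_emb, dim=-1)
    node_emb = F.normalize(node_emb, dim=-1)
    logits = text_emb @ node_emb.T * log_tau.exp()              # Eq. (1): rows are texts, columns are nodes

    identity = torch.eye(logits.size(0), device=logits.device)
    sim_dist = node_sim / node_sim.sum(dim=-1, keepdim=True)    # Eq. (2)
    target_T = (1 - alpha) * identity + alpha * sim_dist        # Eq. (3), text similarity proxied by node similarity
    target_G = (1 - alpha) * identity + alpha * sim_dist        # Eq. (4)

    loss_T = -(target_T * F.log_softmax(logits, dim=1)).sum(dim=1)    # H(C_{i,:}, D_T^{(i)})
    loss_G = -(target_G * F.log_softmax(logits.T, dim=1)).sum(dim=1)  # H(C_{:,i}, D_G^{(i)})
    return 0.5 * (loss_T.mean() + loss_G.mean())                      # Eq. (5)

With alpha = 0 this reduces to the usual symmetric CLIP/InfoNCE objective over the minibatch.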
}, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b47", "b29", "b46", "b7" ], "table_ref": [ "tab_4", "tab_5" ], "text": "We employ three datasets, one each from social-, citation-, and link-graph domains. We divide each dataset into a 70% train set, 10% validation set, and 20% test set, splitting at the node level so that every text associated with a given node is in the same data split. On the graph side, any edges which cross split boundaries are dropped. Notably, because how to scale graph neural networks to very large graphs is still an active research area (Serafini and Guan, 2021;Ma et al., 2022), our datasets feature at most tens of thousands of nodes and a few million edges. Details of the datasets can be found in Appendix A, including Table 4 andTable 5. Pubmed. We built from scratch a version of the popular Pubmed graph learning benchmark (Sen et al., 2008) which includes the titles and abstracts of each article (widely available versions of the dataset do not include any text). We began with the standard list of PMIDs for the articles in the dataset, and fetched from the Pubmed API the title, abstract, and list of references. We kept directed citation edges only to other articles in the dataset. One PMID was not found in the Pubmed database and was thus left out. The final dataset includes 19,716 nodes, 61,110 edges, and 59,381 texts, including both titles and abstracts. The included articles are about diabetes, and the standard node categories, which we use in our experiments, are from the Pubmed database: type-1 diabetes, type-2 diabetes, or experimental evidence.\nT-REx. We use a dataset of Wikipedia articles, represented by their introductory paragraphs and the hyperlinks between the articles in those paragraphs, selected from the broader T-REx corpus (Elsahar et al., 2018) of Wikipedia articles and their knowledge-base triples. Rather than use the entire corpus, we selected a sample to satisfy computational constraints, choosing a single category arbitrarily such that it resulted in a connected graph of about the same size as the other datasets. The final dataset includes 9,214 nodes (articles), 22,689 edges, and 18,422 texts, including the titles and initial paragraphs of the articles. A total of 1,433 unique subcategories (descending from the parent category) are associated with these articles. We derived five binary label sets over the documents from the subcategories, with each consisting of a group of related subcategories. Further details of label construction are given in the appendix.\nTwitter. We create and use a Twitter dataset of 167,558 tweets posted by a set of 8,721 influential users in media, politics, and entertainment, and the follow graph among these users, which consists of 2,373,956 edges. We include in the dataset up to recent 3,200 tweets as of May 9, 2021. We also collected certain demographic data about these users (region of residence, age, gender, and politician vs. entertainer occupation status) by matching them with information on Wikipedia and Ballotpedia." 
}, { "figure_ref": [], "heading": "Models", "publication_ref": [ "b48", "b41", "b43", "b53" ], "table_ref": [], "text": "Our experiments cover and explore three choices left open by the framework described in Section 4: whether to use a causal or masked language model as a text encoder, whether to use graph-derived similarity information in the loss (specifically with 𝛼 = 0.1) or not (𝛼 = 0), and whether to consider edge directions. On each dataset, we train ConGraT models for 6 of the 8 combinations of these factors, four discarding edge directions and two keeping them, for a total of 18 models over all datasets. (Recall from Section 4 that we defined a similarity function to use for 𝛼 > 0 only for undirected graphs, so there are neither causal nor masked LM experiments with directed edges and 𝛼 > 0.)\nAll experiments use transformer LMs for the text encoders, although our approach is generalizable to other architectures as well. For masked LM experiments, we initialize weights with the pretrained all-mpnet-base-v2 model (Song et al., 2020) from the sentence-transformers toolkit (Reimers and Gurevych, 2019), while our causal LM experiments initialize weights with the pretrained distilgpt2 model (Sanh et al., 2019). The tokenlevel outputs of the text encoders are mean-pooled to obtain text-level representations. For the graph node encoder, we use a graph attention network (GAT) (Veličković et al., 2018) with 3 layers and 2 attention heads in each layer for all experiments, randomly initialized and trained from scratch. All text and graph embeddings are set to have dimensions of 768. Further details of these architectures and training are provided in Appendix B." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b18", "b18", "b17", "b19", "b56" ], "table_ref": [], "text": "We evaluate our approach in three ways: link prediction, language modeling, and node category classification. Link prediction and language modeling are used as fundamental modality-specific metrics that are meant to measure how well the node and text encoders from our method maintain their abilities to model their individual modalities. We perform node category classification using each encoder's embeddings in order to measure how effective the learned representations are for downstream tasks.\nWe describe the various baselines we used in these experiments below; their architectures and training details are described in Appendix B.\nLink prediction. We evaluate how well the graph encoders trained using ConGraT perform at link prediction, the task of predicting whether a pair of nodes is connected by an edge. We perform link prediction via inner product decoding (Kipf and Welling, 2016); given node embeddings, we form the pairwise matrix of inner products, apply a sigmoid function element-wise, and take the resulting values as probabilities that edges exist. As a baseline, we use the same GAT architecture as in our jointly trained ConGraT models and train it on a graph autoencoding task (Kipf and Welling, 2016) using the inner product decoding scheme described above. Initial node representation vectors are based on the truncated SVD of the data split-specific adjacency matrix.\nLanguage modeling. After joint training, the node and text encoders trained using ConGraT are directly usable for tasks which require node or text embeddings, but the text encoder cannot be immediately used for language modeling because it lacks an LM head. 
To evaluate the impact of joint pretraining on downstream language-modeling performance, we attach a randomly initialized LM head to the jointly trained text encoder and further train it to perform causal language modeling. We use perplexity as our metric; thus, we limit our evaluation here to the causal LM (distilgpt2) variant of our model with 𝛼 = 0. As our baseline, we fine-tune a unimodally pretrained distilgpt2 to perform causal language modeling in the same way.\nCategory classification. To evaluate the usefulness of our learned representations on downstream tasks, we perform node category classification on each dataset. For Pubmed, the classes are the type-1/type-2/experimental labels provided with the dataset; for T-REx, the Wikipedia-derived article categories; and for Twitter, the Wikipedia-and Ballotpedia-derived demographic labels. Within the test split of each dataset, we create another 50/50 train/test split of the data at the node level, then train logistic regression models to predict the aforementioned categories using either node or text embeddings. Specifically, we freeze the text and node encoders in our model and use each modality's representations as input features for logistic regression. When using text embeddings, since it is possible for each node to be associated with multiple texts (e.g., a Twitter user may have multiple tweets), we use the average of the text embeddings associated with each node to predict the node's various characteristics. We also test a combined embedding using both modalities, in which the graph and text embeddings from both encoders of our model are concatenated to create a single embedding of dimension 1536 that is then used to predict the node class. We compare against several baselines. For the node representations, we compare against embeddings from the separately trained GAT autoencoder that was also used for link prediction. For text representations, in addition to unimodal masked and causal LM baselines that were fine-tuned on each dataset, we also compare against two graphaugmented LMs: Social-LM, a modified implementation of SocialBERT (Karpov and Kartashev, 2022) and LMSOC (Kulkarni et al., 2021), and LinkBERT (Yasunaga et al., 2022). To compare against the combined graph and text embeddings, we combine each of the text baselines with the GAT graph baseline by concatenating their embeddings together in the same way as we do for our method." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Link Prediction", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "As shown in Table 1, ConGraT graph node encoders deliver strong performance on link prediction. On the Pubmed and T-REx datasets, most of the jointly trained models outperform the GAT baseline that was specifically trained to do link prediction, while on Twitter, our model achieves competitive performance. In experiments comparing the use of graph-based similarity information in training (𝛼 > 0 vs. 𝛼 = 0) we observe further improvements in performance on Pubmed and T-REx. In some cases, these improvements are quite large: with 𝛼 = 0.1, the masked T-REx model's AUC improves by 0.124, for a 39% improvement in performance over the chance value of 0.5. 4Notably, this performance is zero-shot, with no additional training on link prediction. It is also inductive: the joint training set did not include the test-set's graph structure. 
Finally, because initial node representations were based on the truncated SVD of the adjacency matrix, our node encoders do not utilize text in any way other than via the joint training process with their corresponding text encoders. We believe that this phenomenonpowerful link predictors emerging from a joint graph-text representation learning objective without explicit training-illustrates the power of using text to inform graph-related tasks." }, { "figure_ref": [], "heading": "Language Modeling", "publication_ref": [], "table_ref": [], "text": "On language modeling, we do not observe significant differences in performance between our jointly trained distilgpt2-based text encoder and the unimodal LM baselines. Average perplexity is only slightly higher for the joint text encoder than for the fine-tuned unimodal LM: for Pubmed, we record test-set mean perplexity of 6.61 (vs. 6.51 for the baseline); for Twitter, 15.81 (vs. 15.55 baseline); for T-REx, 14.60 (vs. 13.80 baseline). The differences are significant (𝑝 < 0.05) for Pubmed and Twitter by a bootstrap test over texts, but not for T-REx. In other words, there is at worst a slight reduction in language modeling performance and sometimes no significant change." }, { "figure_ref": [], "heading": "Category Classification", "publication_ref": [ "b17", "b19", "b56", "b51", "b2" ], "table_ref": [ "tab_2" ], "text": "Overall, we see high performance on node category classification relative to the baselines across all three datasets. ConGraT performs on par with or better than the baseline models in 33 out of 36 experiment settings and statistically significantly outperforms baselines in 25 out of 36 experiment settings. In particular, we find that in settings where there is a lower signal in one modality (e.g. tweet text is not as useful in predicting demographic information and graph position is not as useful in predicting a Pubmed category) the incorporation of the complementary modality can significantly increase performance over a unimodal baseline. One additional observation is that the concatenation of ConGraT text and graph embeddings occasionally performs worse than the corresponding (jointly trained) text or graph embedding on its own. This is possibly due to the simplicity of the concatenation method we used to combine them; more sophisticated fusion methods for combining embeddings could further improve performance. Finally, all presented results were run with 10 bootstrap iterations, and differences with baselines are significant at 𝑝 < 0.05 unless otherwise noted.\nTwitter. Macro F1 scores for the category classification tasks on region, age, gender, and occupation are shown in Table 2. When using text embeddings, ConGraT consistently outperforms the various baselines for all categories. Improvements are somewhat smaller when using graph embeddings and they are more mixed when using the concatenations of text and graph embeddings. This is likely because we are predicting node level features that are highly linked with positions in the social follow graph. Thus, as previously demonstrated by other graph-augmented language models (Karpov and Kartashev, 2022;Kulkarni et al., 2021;Yasunaga et al., 2022) incorporating graph information into text is very valuable, while the converse may be slightly less useful. For example, prior work suggests that a substantial share of connections on Twitter lie in the same geographic region (Takhteyev et al., 2012) or in the same age group (Chamberlain et al., 2017).\nPubmed and T-REx. 
Table 3 shows macro F1 scores for article category classification on Pubmed and T-REx. ConGraT outperforms all of the baselines when using the graph embeddings, while results using the text embeddings are more mixed. Unlike in Twitter, the graph structures of Pubmed and T-REx are more sparse, and each node directly contains the text information which likely contains words related to the topic class. This is likely why we see the largest performance gains when incorporating text information into the graph embeddings rather than incorporating graph information into the text embeddings. Despite this, note that the performance differences in the text setting are relatively small, and ConGraT still outperforms all baselines when using the combination of text and graph embeddings." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose ConGraT, a contrastive pretraining framework for jointly learning embeddings of graph nodes and texts, applicable to a general class of problems which feature texts from entities which are connected by a graph structure. Because our method is self-supervised, pretraining requires no labeled or other downstream data, and the learned representations are not specific to any particular downstream task. The models are also inductive, able to generalize to entirely new graphs rather than only new texts from the training graph. In experiments on social, citation, and link graphs, our method outperforms a number of baselines in link prediction and node category classification, without meaningful degradation in language modeling performance. These results suggest opportunities to build on this work, from analytical uses (e.g., detection or interpretation of network communities) to methodological improvements such as exploring other contrastive objectives." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [ "b26", "b45", "b0" ], "table_ref": [], "text": "Turning to the ethical implications of our work, applying this method to existing social or other networks poses the risk of learning and reproducing biases or toxic behavior exhibited in those datasets (Liang et al., 2021). The risk of learning such harmful behavior is likely to be greatest on social media datasets, given the greater prevalence of harassment and toxic or \"antisocial\" behavior there (Saveski et al., 2021;Atske, 2021). On the other hand, for applications like detecting hate speech and toxicity, this may be the intended behavior; careful attention to the ethics of how any model is used is key.\nOn the dataset side, we believe there are no meaningful concerns about release of personally identifying information (PII) or offensive content with the Pubmed and T-REx datasets; both are already public and vetted by respectively journals and the Wikipedia community. We do not believe that such concerns prevent the release of our Twitter data either: the tweets themselves are public and the users included are public figures." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b47", "b29" ], "table_ref": [], "text": "Like all models, ours has certain limitations. Most fundamentally, we make the assumption that there is a relationship between the graph structure over nodes and the texts associated with each node. 
If the two modalities are generated independently, or otherwise do not convey any useful information about each other, joint pretraining should not be expected to improve performance.\nA more practical limitation is around the scale of graph data. Current graph neural networks require large amounts of memory, and scaling to very large graphs (Serafini and Guan, 2021;Ma et al., 2022) is an active research area. It is accordingly unclear how to apply our approach to very large graphs. Because our model is inductive, training repeatedly on (the subgraphs over) random samples of nodes may work, but we have not evaluated this approach.\nA further limitation of the present paper's analysis is that we have not validated a directed analogue of similarity to be used with positive values of 𝛼 in directed graphs. While one has the option of either using 𝛼 = 0 or discarding edge directions, we aim to remedy this issue in future work. " }, { "figure_ref": [], "heading": "A Dataset Details", "publication_ref": [ "b7" ], "table_ref": [ "tab_4", "tab_5", "tab_5" ], "text": "Here we discuss the datasets in more detail and provide summary statistics in Table 4 and Table 5 on the target variables.\nTwitter. To obtain the age and gender of Twitter users, we connected the accounts to their corresponding Wikipedia pages and used Wikidata to infer those two features. Users also self-report locations in their Twitter bios; from these locations, we created four regional categories to predict. Finally, we used data from Ballotpedia5 to label whether a user is a politician or not.\nT-REx. We used the articles in the T-REx corpus (Elsahar et al., 2018) of Wikipedia articles that were labeled with the \"Robots\" category or any of its descendants. From these categories, we constructed several binary target label sets for the T-REx prediction task. However, since the most commonly occurring category was only associated with 526 (roughly 5.7%) of the articles, we expanded each article's labels to include both first and second level ancestors in the category hierarchy to obtain better class label balance. From the initial set of 1,433 unique categories, this expansion yielded a total of 6,643 unique categories, with the most frequent (\"Spacecraft\") occurring on 1,342 articles. We then selected five categories to use as labels for separate binary prediction tasks, choosing frequent categories that generally had small overlap with each other (i.e. were associated with mostly disjoint document sets.) The resultant categories we selected are listed in Table 5." }, { "figure_ref": [], "heading": "B Model Architectures and Training Details", "publication_ref": [], "table_ref": [], "text": "We estimate that training all of our joint and baseline models together used 192 hours of GPU time.\nBecause the assumptions made for this value are conservative, the actual value is likely slightly less." }, { "figure_ref": [], "heading": "B.1 Joint Models", "publication_ref": [ "b48", "b41", "b43", "b39", "b53", "b31", "b35", "b8", "b9", "b40", "b28" ], "table_ref": [ "tab_7" ], "text": "We trained all ConGraT models on either a single NVIDIA RTX A6000 GPU or a single NVIDIA A100 GPU. For masked LM experiments, we used the pretrained all-mpnet-base-v2 model (Song et al., 2020) from the sentence-transformers toolkit (Reimers and Gurevych, 2019), which has 12 layers of 12 heads each, producing 768-dimensional embeddings. 
It was pretrained constrastively on several corpora from similar domains to those we consider here,6 making it a good match for our work. Our causal LM experiments used the pretrained distilgpt2 model (Sanh et al., 2019), distilled from the larger GPT-2 model (Radford et al., 2019), with 6 layers of 12 heads each, producing 768-dimensional embeddings. 7 For the graph node encoder, all models used a graph attention network (GAT) (Veličković et al., 2018) with 3 layers and 2 attention heads in each layer. As in a standard transformer, each graph convolutional layer is separated from the next by a linear layer, with layer normalization (Ba et al., 2016) applied afterwards. Hidden representations are 64-dimensional, and the final output vectors are 768-dimensional so that baseline model outputs have the same shape as language model outputs.\nParameter counts are as follows: distilgpt2, 81.9 million; all-mpnet-base-v2, 109.4 million; our GAT encoder, 199.7 thousand. The jointly trained models, including the adapter layers after the text and graph encoders, have 83.9 million parameters (causal / distilgpt2) and 110.9 million parameters (masked / all-mpnet-base-v2).\nTraining is quite sensitive to the learning rate; we found that a good compromise between speed of training and instability was a value of 1e-4. At a variety of learning rates, there were also intermittent large spikes in the norm of the gradient, which derailed training unless the gradients were clipped. We clipped the gradient at each step to a norm of 1. In order to reduce memory consumption and fit larger batches onto a single GPU, we used 16-bit mixed precision training (Micikevicius et al., 2018). We encountered numerical overflow problems with FP16, however, related to large gradient values at certain layers, and found it necesary to reduce the init-scale parameter of the gradient scaler from its default value of 2 16 to 256 in order to avoid overflow. We initialized the log-temperature parameter 𝜏 to 3.5 and constrained it to be within (-log 100, + log 100) in order to avoid training instability. We trained all models with PyTorch (Paszke et al., 2019) and pytorch-lightning (Falcon et al., 2020), also using pytorch-geometric (Fey and Lenssen, 2019) for graph encoders and GAT baselines, and Huggingface Transformers (Raffel et al., 2020) for textual baselines and text encoders.\nWe also found that performance suffers if each batch is not unique on nodes (i.e., if each node has multiple texts, only one text per node can be in any given batch). We experimented with simply dropping duplicates from uniformly sampled batches, but this discarded too much data. Instead, we randomly sorted the texts on each epoch so as to minimize the number of within-batch duplicates (assuming minibatches are taken consecutively from the sorted dataset), and dropped any remaining duplicates.\nFinally, because the objective is batch-wise contrastive, the problem becomes quadratically more difficult as the batch size increases. We used the largest batch size we could consistently fit into available hardware, but future work should explore the question of returns to scale.\nAll models used the AdamW optimizer (Loshchilov and Hutter, 2019) with 𝛽 values of (0.9, 0.999) and without weight decay. All joint models used a probability of 0.3 for dropout applied to text and node embeddings. Learning rates and batch sizes for our various models are shown in Table 6." 
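To make the batch-wise objective from Section 4 concrete alongside the training details above (clamped log-temperature, 𝛼-weighted soft targets), the following is a minimal PyTorch sketch of our own rather than the released implementation; the graph-based similarity matrix for the batch is assumed to be precomputed:

```python
import math
import torch
import torch.nn.functional as F

def congrat_batch_loss(text_emb, node_emb, log_tau, graph_sim=None, alpha=0.0):
    """Illustrative batch-wise contrastive loss (not the official code).

    text_emb, node_emb: (N, d) embeddings of each text and its origin node.
    log_tau:   learnable log-temperature tensor, clamped to (-log 100, log 100).
    graph_sim: optional (N, N) graph-based similarities (e.g. SimRank) used to
               soften the targets when alpha > 0; otherwise targets are one-hot.
    """
    t = F.normalize(text_emb, dim=-1)
    g = F.normalize(node_emb, dim=-1)
    tau = log_tau.clamp(-math.log(100.0), math.log(100.0)).exp()
    logits = t @ g.T * tau                              # C_ij = cos(t_i, v_j) * e^tau

    n = logits.size(0)
    hard = torch.eye(n, device=logits.device)           # matched pairs on the diagonal
    if graph_sim is not None and alpha > 0.0:
        soft = graph_sim / graph_sim.sum(dim=-1, keepdim=True)
        targets = (1.0 - alpha) * hard + alpha * soft
    else:
        targets = hard

    # cross-entropy along rows (text -> node) and columns (node -> text)
    loss_rows = -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    loss_cols = -(targets.T * F.log_softmax(logits.T, dim=1)).sum(dim=1).mean()
    return 0.5 * (loss_rows + loss_cols)
```

With alpha = 0 this reduces to the standard CLIP-style objective over matched text-node pairs.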
}, { "figure_ref": [], "heading": "B.2 Unimodal Baselines", "publication_ref": [ "b18" ], "table_ref": [ "tab_7" ], "text": "Our unimodal baselines were trained on up to four NVIDIA GTX 1080 Ti GPUs. To better understand the effects of multi-modal pretraining, we also trained unimodal models, either language models or graph attention transformers, and evaluated these unimodal models on the downstream tasks. For textual models, we fine-tuned pretrained all-mpnet-base-v2 and distilgpt2 on the downstream datasets. Language models were fine-tuned for 3 epochs. For graph models, we trained graph attention network (GAT) models to do non-variational graph autoencoding (Kipf and Welling, 2016), also known as link prediction, on the network structure of the downstream-task datasets. GAT models were trained from between 30 to 300 epochs with early stopping based on validation AUC, with patience of 5 epochs and minimum delta of 0.001. We compare these unimodal baselines against ConGraT. Parameter counts for the text and graph baselines are the same as reported for the appropriate modality's joint encoder in Subsection B.1. Batch sizes and learning rates, as for joint models, are reported in Table 6." }, { "figure_ref": [], "heading": "B.3 Social-LM", "publication_ref": [ "b17", "b19" ], "table_ref": [], "text": "We implemented a baseline Social-LM, as a modified version of SocialBERT8 (Karpov and Kartashev, 2022) (also very closely related to LM-SOC (Kulkarni et al., 2021)), which uses pretrained, frozen node embeddings to prime language model pretraining. Specifically, we added a special node token [G] at the beginning of texts and used the pretrained GAT model to obtain the corresponding node embedding paired with each tweet or article, which was used to replace the token embedding for [G]. During the language model pretraining, we froze the node embeddings and only fine-tuned the language model to generate texts conditioned on the node embeddings. Our Social-LM implementation has some key differences from Social-BERT and LMSOC: (1) for masked LM experiments, we used all-mpnet-base-v2 to replace BERT, to be consistent with other experiments for a fair comparison; (2) we also experimented with a causal language model distilgpt2 under the Social-LM baseline, whereas LMSOC and So-cialBERT only used the masked language model BERT;\n(3) we injected the node embedding as the zero token embedding of texts as SocialBERT suggests, whereas LMSOC appends the node embedding at the end. We adopted the zero token injection approach because the same strategy is adaptable for both causal and masked language modeling, while last token injection does not work for causal LMs like distilgpt2; (4) we used our unimodal GAT model trained on the graph autoencoding task to generate node embeddings for each tweet or article, whereas LMSOC uses node2vec and SocialBERT uses vectors from SVD and Deep Walk. We used the GAT in order to be consistent with ConGraT and the unimodal baseline, to ensure that the comparisons were fair, and because it was likely to be a stronger baseline than using SVD. Social-LM models were fine-tuned for 3 epochs with the same hyperparameters used for the language modeling baseline, and have the same number of parameters as all-mpnet-base-v2, our masked LM baselines and the joint masked text encoders." 
}, { "figure_ref": [], "heading": "B.4 LinkBERT", "publication_ref": [ "b56" ], "table_ref": [ "tab_7" ], "text": "We implemented and trained LinkBERT (Yasunaga et al., 2022) as described in the original paper, with the only difference being that we used the same all-mpnet-base-v2 architecture as the other baseline models (instead of BERT-Base) in order to maintain consistency across experiments. We initialized weights from the pretrained all-mpnet-base-v2 model from sentencetransformers, and fine-tuned it on the masked language modeling (MLM) and document relation prediction (DRP) tasks for 3 epochs. Hyperparameters used for training are listed in Table 6. Note that because of its MLM training objective, we used LinkBERT as a baseline for masked language model variants of ConGraT only. All LinkBERT models have the same number of parameters as all-mpnet-base-v2, as the DRP head is dropped at inference time.\nWe created training instances for LinkBERT by sampling contiguous, linked, or random text segment pairs for the DRP training objective from each dataset, with the three options appearing uniformly (33%, 33%, 33%). For the Pubmed and Twitter datasets, we sampled 100,000 text pairs for each category, for a total of 300,000 pairs. For T-REx, which is a substantially smaller dataset, we sampled 10,000 text pairs for each category, for a total of 30,000 pairs. Text pairs consisted of anchor text segment 𝑋 𝐴 and paired text segment 𝑋 𝐵 : (𝑋 𝐴 , 𝑋 𝐵 ). The specific methods we used for sampling pairs for each dataset were as follows:\nPubmed. Text segments in each pair consisted of individual sentences from the abstracts of each article in the dataset. Anchor segments 𝑋 𝐴 were taken by sampling a random abstract, then sampling a random sentence from that abstract. For continuous pairs, 𝑋 𝐵 was chosen as the sentence immediately following 𝑋 𝐴 in the abstract (𝑋 𝐴 could not be the last sentence of the abstract). For linked pairs, 𝑋 𝐵 was chosen as a random sentence from the abstract of one of the articles that was connected to 𝑋 𝐴 's corresponding article in the citation graph. For random pairs, 𝑋 𝐵 was chosen as a random sentence from an abstract whose article was not connected to 𝑋 𝐴 's corresponding article in the citation graph.\nT-REx. Text segments in each pair consisted of individual sentences from the introductory paragraphs of each article in the dataset. Anchor segments 𝑋 𝐴 were taken by sampling a random article, then sampling a random sentence from that article's introductory paragraphs. For continuous pairs, 𝑋 𝐵 was chosen as the sentence immediately following 𝑋 𝐴 , with the same restriction as in Pubmed that 𝑋 𝐴 could not be the last sentence. For linked pairs, 𝑋 𝐵 was chosen as a random sentence from the introductory paragraphs of one of the articles that was connected to 𝑋 𝐴 's corresponding article in the link graph. For random pairs, 𝑋 𝐵 was chosen as a random sentence from an article that was not connected to 𝑋 𝐴 's corresponding article in the link graph.\nTwitter. Twitter has a different graph-text structure compared to Pubmed and T-REx; rather than the nodes consisting of texts themselves, the nodes are users who can each produce multiple tweets. Therefore, the notion of what constitutes continuous or linked text segments (tweets) is less clearly defined. We defined these relationships as follows.\nFor continuous pairs, we sampled a random tweet as 𝑋 𝐴 , and chose 𝑋 𝐵 as a different tweet from the same user as 𝑋 𝐴 . 
For linked pairs, we sampled 𝑋 𝐴 from the set of tweets that mentioned other users that were present in our dataset. Then, 𝑋 𝐵 was chosen as a random tweet from the mentioned user. Random pairs were simply taken by randomly sampling two tweets from different users to use as 𝑋 𝐴 and 𝑋 𝐵 ." }, { "figure_ref": [], "heading": "B.5 Downstream Logistic Regression", "publication_ref": [], "table_ref": [], "text": "We use the standard scikit-learn logistic regression implementation with the default lbfgs solver and L2 regularization and increase the maximum iterations for the solver to 10000." }, { "figure_ref": [], "heading": "C Summary Evaluation Metrics", "publication_ref": [ "b18" ], "table_ref": [], "text": "In addition to downstream applications like node classification and link prediction, we also examine certain summary metrics of the quality of our joint models, without regard to any particular downstream task. The metrics are intended to measure how well the models integrate information across language and graph modalities. We examine specifically:\n• Top-𝑘 accuracy in selecting the node which produced a given text (𝑘 = 1, ..., 10). For each text, we select the user with highest cosine similarity between the user's node embedding and the text's embedding. Note that in retrieval contexts, this might itself be an important downstream measure.\n• Embedding distance correlation: the correlation of the cosine similarities between pairs of texts with the cosine similarities between the corresponding pairs of nodes.\n• Correlation of text embedding distance with a purely graph-based distance (we use Sim-Rank), extending the previous metric by grounding text embedding distance more directly in the graph.\nWe compare each joint model's results on these metrics to those achieved by the combination of a language model and non-variational graph autoencoder, or link predictor (Kipf and Welling, 2016), each with the same initialization and architecture as the joint models' text and node encoders, trained separately on the same data. Joint models initialized with causal language models as text encoders are compared to causal models fine-tuned causally, and analogously for models with masked text encoders." }, { "figure_ref": [ "fig_2" ], "heading": "C.1 Results", "publication_ref": [], "table_ref": [], "text": "We observe a meaningful increase in several measures of cross-modal integration. As shown in Figure 3, top-k accuracy is substantially higher than the separately-trained baseline for all models at all values of 𝑘. All differences are significant at the 𝛼 = 10 -6 level according to a bootstrap test. Moreover, the top-k accuracies achieved are often high relative to the size of the datasets. With 1,996 articles in the Pubmed test set, the best-performing model includes the correct article for a text snippet in its top 10 most similar articles (0.5% of the test set) 26.6% of the time.\nResults are similar for other metrics, with graph and text embedding distances significantly more correlated after training than before for all models against their respective baselines (𝑝 < 10 -6 ). In 16 out of 18 cases, we also observe a significant increase (𝑝 < 10 -6 ) in the correlation between the cosine similarity of text embeddings and the SimRank similarity of the corresponding nodes.\nWe also observe certain patterns in these metrics by type of model. First, using 𝛼 = 0.1 usually but not always leads to higher scores compared to the corresponding model with 𝛼 = 0, including on top-k accuracy. 
This fact illustrates both the potential value of incorporating this information into the loss and also the need to tune the hyperparameter in practice. We do not, however, see models with either masked or causal text encoders consistently outperform models with the other kind of text encoder.9 Finally, we note that for every dataset and choice of masked/causal text encoder, the best-performing model on these metrics is one which discards edge directions." }, { "figure_ref": [], "heading": "D Downstream Task Accuracy", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We recreate Table 2 and Table 3 with accuracy instead of F1 scores. The results are shown in Table 8 and Table 9." }, { "figure_ref": [], "heading": "E Terms of Use", "publication_ref": [], "table_ref": [], "text": "We used certain scientific artifacts in preparing this paper, specifically pretrained models, software and datasets. Software we used was all available under open-source licenses which permit our use for research purposes. Pretrained models were also " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors gratefully acknowledge support in the form of access to data from Twitter, Inc. We are also indebted to colleagues at the MIT Center for Constructive Communication for their feedback on earlier versions of this work." } ]
We propose ConGraT (Contrastive Graph-Text pretraining), a general, self-supervised method for jointly learning separate representations of texts and nodes in a parent (or "supervening") graph, where each text is associated with one of the nodes. Datasets fitting this paradigm are common, from social media (users and posts), to citation networks over articles, to link graphs over web pages. We expand on prior work by providing a general, self-supervised, joint pretraining method, one which does not depend on particular dataset structure or a specific task. Our method uses two separate encoders for graph nodes and texts, which are trained to align their representations within a common latent space. Training uses a batchwise contrastive learning objective inspired by prior work on joint text and image encoding. As graphs are more structured objects than images, we also extend the training objective to incorporate information about node similarity and plausible next guesses in matching nodes and texts. Experiments on various datasets reveal that ConGraT outperforms strong baselines on various downstream tasks, including node and text category classification and link prediction.
ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings
[ { "figure_caption": "Figure 1 :1Figure 1: Embeddings of graph nodes (red) and their associated texts (blue). They are placed into a common embedding space, in which nodes are embedded near their associated texts. Node-text pairs are labeled N1 to N5. Note that not every node must have an associated text (here, N5 does not).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The overall architecture of our model. Given a minibatch of (text, origin node) pairs, node and text embeddings are generated by their respective encoders, then used to compute pairwise cosine similarities. The final loss is the average of cross entropies along each row and column of the similarity matrix, with each row 𝑖's target probabilities (labeled D (𝑖) 𝑇 and D (𝑖) 𝐺 ) a mixture of the true targets (on the diagonal) and a (row-or column-specific) distribution proportional to a graphbased similarity measure.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Top-k accuracy at selection of the node which produced a text, for various values of 𝑘. \"Baseline\" indicates the use of separately pretrained embeddings, and other results are for models with various combinations of edge-direction use and graph-similarity information. Models with 𝛼 = 0 used no such information in their training losses, while models with 𝛼 = 0.1 put 10% weight on that information and 90% on the correct node/text correspondence.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "= {𝑡", "figure_data": "(𝑣) 𝑖 } 𝑁𝑣 𝑖=1 , for 𝑣 ∈ 𝑉 , be the setof node 𝑣's texts, with 𝑁 𝑣 the number of texts cor-responding to node 𝑣. We model 𝑡 (𝑣) 𝑖 , the 𝑖-th textof node 𝑣, as a finite sequence of tokens over avocabulary 𝑊 , where 𝐿 (𝑣) 𝑖 𝑡 (𝑣) 𝑖 = (𝑆 0 , 𝑆 1 , 𝑆 2 , ..., 𝑆 𝐿 (𝑖) 𝑣is the length of 𝑡(𝑣) 𝑖 :", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Jointly trained graph encoders' performance at link prediction, compared against a baseline GAT trained on a graph autoencoding task. Values shown are AUCs for the binary classification problem of predicting whether each edge in the test-set graph exists. Note that these results are inductive, with the test-set graph not available at training time. Bolded values indicate improved performance over the baseline.", "figure_data": "UndirectedDirectedPubmed T-REx Twitter Pubmed T-REx TwitterUnimodal GAT0.871 0.832 0.886 0.908 0.699 0.843ConGraT-Causal (𝛼 = 0.0) 0.976 0.886 0.793 0.952 0.755 0.819ConGraT-Masked (𝛼 = 0.0) 0.984 0.816 0.805 0.955 0.660 0.826ConGraT-Causal (𝛼 = 0.1) 0.983 0.932 0.785---ConGraT-Masked (𝛼 = 0.1) 0.985 0.940 0.790---", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Macro F1 scores for predicting various user characteristics on the Twitter dataset. Values with † are not statistically significantly different from ConGraT (𝑝 > 0.05).", "figure_data": "RegionAgeGenderOccupationCausal Masked Causal Masked Causal Masked Causal MaskedConGraT45.5747.0430.2232.2161.4367.0087.4778.52Social-LM33.5736.3125.5026.4044.5154.5648.7261.31TextLinkBERT-41.09-30.94 †-64.77 †-77.92 †Unimodal LM33.4737.4125.4326.5743.4454.2448.7260.81Maj. Class13.4513.4525.4325.4337.7337.7348.7248.72ConGraT50.5554.1036.0439.1669.5574.0388.2288.32GraphGAT50.1050.1037.69 †37.69 †64.9264.9278.0978.09Maj. 
Class13.3613.3625.4425.4437.7837.7848.7948.79ConGraT49.8551.8733.1335.2869.2374.3588.5385.97Text + GraphSocial-LM + GAT LinkBERT + GAT LM + GAT49.89 † -49.89 †49.39 49.37 49.3937.30 -37.3137.37 37.37 37.3666.07 -66.0666.21 66.21 66.2178.87 -78.8778.93 79.04 78.93Maj. Class13.5713.4525.3625.3637.7337.7348.7248.72PubmedT-RExCausalMaskedCausalMaskedConGraT76.5082.2583.7482.20Social-LM56.8580.1983.93 †79.54TextLinkBERT-83.81-83.35 †Unimodal LM55.5179.6482.4180.88 †Maj. Class20.3720.379.909.90ConGraT70.8373.3370.1360.97GraphGAT42.7842.7838.0638.06Maj. Class20.4320.439.749.74ConGraT78.2182.2784.5481.51Text + GraphSocial-LM + GAT LinkBERT + GAT LM + GAT49.65 -47.8365.83 73.15 66.1441.61 -41.5656.00 56.05 57.16Maj. Class20.3720.379.909.90Table 3: Macro F1 scores for predicting article categoryfor Pubmed and T-REx. Values with † are not statisti-cally significantly different from ConGraT (𝑝 > 0.05).", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Breakdown of various demographic features of Twitter users.", "figure_data": "FeatureCategory# NodesMidwest24RegionNortheast South250 319West26719-39108Age40-49444>=65168GenderFemale Male323 491OccupationNon-politician 1653 Politician 85Dataset Article Category # NodesExperimental375PubmedType I881Type II740Robots37Rockets112T-RExSci-Fi72Spacecraft138Space Telescopes 57", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Breakdown across article categories for Pubmed and T-Rex data.", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Batch sizes and learning rates for all models. Each model shown used the provided batch size and learning rate for all datasets. All joint models, whether directed or undirected, with 𝛼 = 0 or 𝛼 = 0.1, and causal or masked encoders, use same batch size and learning rate. Except for the GNN baseline learning rate, where we tried both 1.0e-2 and 1.0e-3 and found large dataset-specific effects on performance, all models listed in the same model family use the same parameter settings. GNN baselines do not list a batch size because the entire graph is processed at once.", "figure_data": "", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
William Brannon; Suyash Fulay; Hang Jiang; Wonjune Kang; Brandon Roy; Jad Kabbara; Deb Roy
[ { "authors": "Sara Atske", "journal": "", "ref_id": "b0", "title": "The State of Online Harassment", "year": "2021" }, { "authors": "Jimmy Lei Ba; Jamie Ryan Kiros; Geoffrey E Hinton", "journal": "", "ref_id": "b1", "title": "Layer Normalization", "year": "2016" }, { "authors": "Benjamin Paul Chamberlain; Clive Humby; Marc Peter; Deisenroth ", "journal": "Springer", "ref_id": "b2", "title": "Probabilistic inference of twitter users' age based on what they follow", "year": "2017" }, { "authors": "Shantanu Chandra; Pushkar Mishra; Helen Yannakoudakis; Madhav Nimishakavi; Marzieh Saeidi; Ekaterina Shutova", "journal": "", "ref_id": "b3", "title": "Graph-based Modeling of Online Communities for Fake News Detection", "year": "2020" }, { "authors": "Arman Cohan; Sergey Feldman; Iz Beltagy; Doug Downey; Daniel Weld", "journal": "", "ref_id": "b4", "title": "SPECTER: Document-level Representation Learning using Citation-informed Transformers", "year": "2020" }, { "authors": "Munmun De Choudhury; Hari Sundaram; Ajita John; Doree Duncan Seligmann; Aisling Kelliher", "journal": "", "ref_id": "b5", "title": "Birds of a Feather\": Does User Homophily Impact Information Diffusion in Social Media?", "year": "2010" }, { "authors": "Benjamin Elizalde; Shuayb Zarar; Bhiksha Raj", "journal": "IEEE", "ref_id": "b6", "title": "Cross Modal Audio Search and Retrieval with Joint Embeddings Based on Text and Audio", "year": "2019" }, { "authors": "Hady Elsahar; Pavlos Vougiouklis; Arslen Remaci; Christophe Gravier; Jonathon Hare; Frederique Laforest; Elena Simperl", "journal": "European Language Resources Association (ELRA", "ref_id": "b7", "title": "T-REx: A Large Scale Alignment of Natural Language with Knowledge Base Triples", "year": "2018" }, { "authors": "William Falcon; Jirka Borovec; Adrian Wälchli; Nic Eggert; Justus Schock; Jeremy Jordan; Nicki Skafte; Vadim Ir1dxd; Ethan Bereznyuk; Tullie Harris; Peter Murrell; Sebastian Yu; Travis Praesius; Jacob Addair; Dmitry Zhong; So Lipin; Shreyas Uchida; Hendrik Bapat; Boris Schröter; Alexey Dayma; Akshay Karnachev; Shunta Kulkarni; Martin B Komatsu; Jean-Baptiste Schiratti; Hadrien Mary; Donal Byrne; Cristobal Eyzaguirre; Cinjon ; Anton Bakhtin", "journal": "", "ref_id": "b8", "title": "lease", "year": "2020" }, { "authors": "Matthias Fey; Jan Eric Lenssen", "journal": "", "ref_id": "b9", "title": "Fast Graph Representation Learning with PyTorch Geometric", "year": "2019" }, { "authors": "C Lee Giles; Kurt D Bollacker; Steve Lawrence", "journal": "ACM Press", "ref_id": "b10", "title": "CiteSeer: an automatic citation indexing system", "year": "1998" }, { "authors": "Antoine Gourru; Julien Velcin; Julien Jacques", "journal": "International Joint Conferences on Artificial Intelligence Organization", "ref_id": "b11", "title": "Gaussian Embedding of Linked Documents from a Pretrained Semantic Space", "year": "2020" }, { "authors": "Shashank Gupta; Vasudeva Varma", "journal": "ACM Press", "ref_id": "b12", "title": "Scientific Article Recommendation by using Distributed Representations of Text and Graph", "year": "2017" }, { "authors": "William L Hamilton; Rex Ying; Jure Leskovec", "journal": "", "ref_id": "b13", "title": "Inductive Representation Learning on Large Graphs", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b14", "title": "", "year": "" }, { "authors": "Dan Hendrycks; Kevin Gimpel", "journal": "", "ref_id": "b15", "title": "Gaussian Error Linear Units (GELUs)", "year": "2020" }, { "authors": "Glen Jeh; Jennifer 
Widom", "journal": "ACM Press", "ref_id": "b16", "title": "SimRank: a measure of structural-context similarity", "year": "2002" }, { "authors": "Ilia Karpov; Nick Kartashev", "journal": "Springer International Publishing", "ref_id": "b17", "title": "SocialBERT -Transformers for Online Social Network Language Modelling", "year": "2022" }, { "authors": "Thomas N Kipf; Max Welling", "journal": "", "ref_id": "b18", "title": "Variational Graph Auto-Encoders", "year": "2016" }, { "authors": "Vivek Kulkarni; Shubhanshu Mishra; Aria Haghighi", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "LMSOC: An approach for socially sensitive pretraining", "year": "2021" }, { "authors": "Keith Levin; Elizaveta Levina", "journal": "", "ref_id": "b20", "title": "Bootstrapping Networks with Latent Space Structure", "year": "2021" }, { "authors": "Chang Li; Dan Goldwasser", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Encoding Social Information with Graph Convolutional Networks for-Political Perspective Detection in News Media", "year": "2019" }, { "authors": "Chang Li; Yi-Yu Lai; Jennifer Neville; Dan Goldwasser", "journal": "", "ref_id": "b22", "title": "Joint Embedding Models for Textual and Social Analysis", "year": "2017" }, { "authors": "Jinning Li; Shubhanshu Mishra; Ahmed El-Kishky; Sneha Mehta; Vivek Kulkarni", "journal": "", "ref_id": "b23", "title": "NTULM: Enriching Social Media Text Representations with Non-Textual Units", "year": "2022" }, { "authors": "Lisa Xiang; Percy Li; Liang", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Prefix-Tuning: Optimizing Continuous Prompts for Generation", "year": "2021" }, { "authors": "Yangguang Li; Feng Liang; Lichen Zhao; Yufeng Cui; Wanli Ouyang; Jing Shao; Fengwei Yu; Junjie Yan", "journal": "", "ref_id": "b25", "title": "Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm", "year": "2021" }, { "authors": "Paul Pu Liang; Chiyu Wu; Louis-Philippe Morency; Ruslan Salakhutdinov", "journal": "", "ref_id": "b26", "title": "Towards Understanding and Mitigating Social Biases in Language Models", "year": "2021" }, { "authors": "Jie Liu; Zhicheng He; Lai Wei; Yalou Huang", "journal": "ACM", "ref_id": "b27", "title": "Content to Node: Self-Translation Network Embedding", "year": "2018" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b28", "title": "Decoupled Weight Decay Regularization", "year": "2019" }, { "authors": "Hehuan Ma; Yu Rong; Junzhou Huang", "journal": "Springer Nature", "ref_id": "b29", "title": "Graph Neural Networks: Scalability", "year": "2022" }, { "authors": "Andrew Kachites Mccallum; Kamal Nigam; Jason Rennie; Kristie Seymore", "journal": "Information Retrieval", "ref_id": "b30", "title": "Automating the Construction of Internet Portals with Machine Learning", "year": "2000" }, { "authors": "Paulius Micikevicius; Sharan Narang; Jonah Alben; Gregory Diamos; Erich Elsen; David Garcia; Boris Ginsburg; Michael Houston; Oleksii Kuchaiev; Ganesh Venkatesh; Hao Wu", "journal": "", "ref_id": "b31", "title": "Mixed Precision Training", "year": "2018" }, { "authors": "Norman Mu; Alexander Kirillov; David Wagner; Saining Xie", "journal": "Springer", "ref_id": "b32", "title": "Slip: Self-supervision meets language-image pre-training", "year": "2022" }, { "authors": "Aaron Van Den Oord; Yazhe Li; Oriol Vinyals", "journal": "", "ref_id": "b33", "title": "Representation Learning with 
Contrastive Predictive Coding", "year": "2019" }, { "authors": "Malte Ostendorff; Nils Rethmeier; Isabelle Augenstein; Bela Gipp; Georg Rehm", "journal": "", "ref_id": "b34", "title": "Neighborhood contrastive learning for scientific document representations with citation embeddings", "year": "2022" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Kopf; Edward Yang; Zachary Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "", "ref_id": "b35", "title": "PyTorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b36", "title": "", "year": "" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b37", "title": "Learning Transferable Visual Models From Natural Language Supervision", "year": "2021" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b38", "title": "", "year": "" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b39", "title": "Language Models are Unsupervised Multitask Learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b40", "title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer", "year": "2020" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", "year": "2019" }, { "authors": "Gerard Salton", "journal": "Addison-Wesley", "ref_id": "b42", "title": "Automatic text processing: the transformation, analysis, and retrieval of information by computer", "year": "1988" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b43", "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b44", "title": "", "year": "" }, { "authors": "Martin Saveski; Brandon Roy; Deb Roy", "journal": "ACM", "ref_id": "b45", "title": "The Structure of Toxic Conversations on Twitter", "year": "2021" }, { "authors": "Prithviraj Sen; Galileo Namata; Mustafa Bilgic; Lise Getoor; Brian Galligher; Tina Eliassi-Rad", "journal": "AI Magazine", "ref_id": "b46", "title": "Collective Classification in Network Data", "year": "2008" }, { "authors": "Marco Serafini; Hui Guan", "journal": "ACM SIGOPS Operating Systems Review", "ref_id": "b47", "title": "Scalable Graph Neural Network Training: The Case for Sampling", "year": "2021" }, { "authors": "Kaitao Song; Xu Tan; Tao Qin; Jianfeng Lu; Tie-Yan Liu", "journal": "", "ref_id": "b48", "title": "MPNet: Masked and Permuted Pretraining for Language Understanding", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b49", "title": "", "year": "" }, { "authors": "Nitish Srivastava; Geoffrey Hinton; Alex Krizhevsky; Ilya Sutskever; Ruslan Salakhutdinov", "journal": 
"Journal of Machine Learning Research", "ref_id": "b50", "title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "year": "2014" }, { "authors": "Yuri Takhteyev; Anatoliy Gruzd; Barry Wellman", "journal": "", "ref_id": "b51", "title": "Geography of twitter networks", "year": "2012" }, { "authors": "Cunchao Tu; Han Liu; Zhiyuan Liu; Maosong Sun", "journal": "Association for Computational Linguistics", "ref_id": "b52", "title": "CANE: Context-Aware Network Embedding for Relation Modeling", "year": "2017" }, { "authors": "Petar Veličković; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Liò; Yoshua Bengio", "journal": "", "ref_id": "b53", "title": "Graph Attention Networks", "year": "2018" }, { "authors": "Isaac Waller; Ashton Anderson", "journal": "Nature", "ref_id": "b54", "title": "Quantifying social organization and political polarization in online platforms", "year": "2021" }, { "authors": "Cheng Yang; Zhiyuan Liu; Deli Zhao; Maosong Sun; Edward Y Chang", "journal": "AAAI Press", "ref_id": "b55", "title": "Network representation learning with rich text information", "year": "2015" }, { "authors": "Michihiro Yasunaga; Jure Leskovec; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b56", "title": "LinkBERT: Pretraining Language Models with Document Links", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 306.14, 221.1, 162.37, 17.45 ], "formula_id": "formula_0", "formula_text": "tion 3, let 𝑋 = ∪ 𝑣∈𝑉 {(𝑣, 𝑡 (𝑣) 𝑖 )} |𝑇 (𝑣) |" }, { "formula_coordinates": [ 4, 347.44, 344.95, 177.7, 25.61 ], "formula_id": "formula_1", "formula_text": "𝐶 𝑖𝑗 = 𝐹 𝑇 (𝑡 𝑖 ) • 𝐹 𝐺 (𝑣 𝑗 ) ‖𝐹 𝑇 (𝑡 𝑖 )‖ • ‖𝐹 𝐺 (𝑣 𝑖 )‖ 𝑒 𝜏(1)" }, { "formula_coordinates": [ 4, 349.9, 551.43, 175.24, 29.3 ], "formula_id": "formula_2", "formula_text": "𝑠 (𝑖) 𝑇 (𝑗) = 𝑆 𝑇 (𝑡 𝑖 , 𝑡 𝑗 ) ∑︀ 𝑁 𝐵 𝑘=1 𝑆 𝑇 (𝑡 𝑖 , 𝑡 𝑘 ) ∀𝑖(2)" }, { "formula_coordinates": [ 4, 406.64, 591.74, 9.47, 6.99 ], "formula_id": "formula_3", "formula_text": "(𝑖)" }, { "formula_coordinates": [ 4, 315.54, 699.15, 209.6, 16.17 ], "formula_id": "formula_4", "formula_text": "D (𝑖) 𝑇 (𝛼) = (1 -𝛼)1 𝑗 {𝑣 𝑗 = 𝑣 𝑖 } + 𝛼 ⃗ 𝑠 𝑇 (𝑡 𝑖 ),(3)" }, { "formula_coordinates": [ 4, 315.54, 718.92, 209.6, 16.17 ], "formula_id": "formula_5", "formula_text": "D (𝑖) 𝐺 (𝛼) = (1 -𝛼)1 𝑗 {𝑡 𝑗 = 𝑡 𝑖 } + 𝛼 ⃗ 𝑠 𝐺 (𝑣 𝑖 ). (4)" }, { "formula_coordinates": [ 5, 77.17, 96.4, 212.7, 50.05 ], "formula_id": "formula_6", "formula_text": "ℒ(𝐵; 𝛼) = 1 2𝑁 𝐵 𝑁 𝐵 ∑︁ 𝑖=1 𝐻(𝐶 𝑖,: , D (𝑖) 𝑇 (𝛼)) + 𝐻(𝐶 :,𝑖 , D (𝑖) 𝐺 (𝛼)) (5)" } ]
10.1162/tacl_a_00459
2023-05-23
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b8", "b10", "b9", "b5", "b7", "b2" ], "table_ref": [], "text": "Large language models (LLMs) have significantly advanced the field of natural language processing (NLP) in recent years (Bubeck et al., 2023;Chowdhery et al., 2022;Touvron et al., 2023). With their vast parameter count and access to extensive data, LLMs have demonstrated remarkable accuracy across various tasks. However, current stateof-the-art LLMs lack a dedicated memory unit. Instead, they are trained to predict words based on context, encoding knowledge implicitly in their parameters, which differs from the ideal memory function.\nAn ideal memory unit should possess certain characteristics. Firstly, it should allow for read and 1 Work in progress. ⋆ Equal contribution. write operations, enabling the language model to interact with stored knowledge. Scalability is also crucial, as the memory unit should accommodate the consistently evolving nature of knowledge. Furthermore, the memory unit should not be limited to textual documents alone; it should be capable of acquiring knowledge from diverse sources such as database systems. Interpretabilty is desired, granting insight into the specific knowledge required by the LLM to solve a given task. Lastly, the information stored in the memory unit should be aggregatable, enabling the model to combine related information across multiple documents. For instance an LLM should be able to list all cities of a country mentioned in multiple documents.\nPrevious attempts to incorporate memory into LLMs have fallen short in capturing the complete range of memory characteristics. For example, (Zhong et al., 2022;Wu et al., 2022) and (Cheng et al.) degrade the memory as the ability to retrieve relevant documents for a given query context, and adding them to the context when generating answers. Park et al. (2023) merely stores and retrieves previous observations and reflections of a generative agent in a simulated environment.\nTo address these limitations, we introduce RET-LLM, (Retentive LLM) a solution that endows LLMs with a scalable, updatable, interpretable, and aggregatable memory module. Our proposal involves equipping language models with a memory module, which allows them to extract knowledge from text and save it for future reference. When faced with a task, the LLM can query the memory module for additional information to support its response. The memory module supports updates and can incorporate information from non-textual sources such as SQL and no-SQL databases and spreadsheets. Furthermore, it enables aggregation of various pieces of information related to a particular concept scatterred in a huge document or within multiple documents.\nFigure 1 shows the architecture of RET-LLM. It comprises three components: an LLM, a controller, and a memory unit. We employ Alpaca Taori et al. (2023), a recently released instruction-tuned language model (LLM), and design a fine-tuning process to enable it to acquire the following abilities: information extraction, information lookup, and fact-based answer generation.\nInformation extraction entails the identification and extraction of triplets in the form of <concept1, relationship, concept2> from informative sentences. The information lookup task involves querying the memory unit to acquire additional information concerning a given concept and its associated relationships when confronted with tasks necessitating further information. 
Lastly, fact-based answer generation involves generating a final answer based on the retrieved information. The triplet-based storage approach draws inspiration from the theoretical framework of Davidsonian semantics (Davidson, 1967), which provides a foundation for representing concepts described in sentences using a tripletlike structure of <event, subject, object>.\nThe memory module stores the triplets and their vector representations. During retrieval, it first searches for an exact match of the query text and resorts to a fuzzy search based on vector representations if no exact match is found. For efficient fuzzy search and retrieval, we employ LSH-based hashing of vector representations. The controller acts as an interface, automating interactions between users, the LLM, and the memory module, ensuring a seamless interaction experience with an intelligent chat system.\nOur proposed approach offers several advantages over previous methods. It enables LLMs to explicitly store and retrieve knowledge, which is crucial for real-world NLP applications. By incorporating explicit knowledge storage and retrieval, we gain better understanding of the workings of these models and the knowledge they rely on to solve tasks. The use of an external memory unit separate from the LLM ensures scalability and easy modification of stored information. The fuzzy search technique enables efficient retrieval of relevant information, even in the absence of exact matches. Storing information in triplets facilitates the generation of precise and comprehensive solutions, particularly when data aggregation is necessary. Lastly, the memory module allows for easy incorporation of information from diverse sources and accommodates changing facts over time.\nOver a qualitative evaluation using question answering examples, we demonstrate cases where a comparable LLM such as Alpaca-7B fails to return a correct answer. We show that this shortcoming occurs while the model has access to all the information required for generating a valid answer. However, in our proposed approach after storing the extractable knowledge from the context, the RET-LLM shows its capability in answering a question without the need of reinputting the context. We also demonstrate that RET-LLM could handle temporal based QA examples. Since it is equipped with a modifiable memory which could handle temporal facts." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b10", "b9", "b3", "b6" ], "table_ref": [], "text": "Prior works in the field have explored incorporating relevant context into large language models (LLMs) by retrieving and adding relevant documents to the task's context. Zhong et al. (2022) propose training LLMs with memory augmentation by introducing trainable memory units that are optimized during the training process. Wu et al. (2022) presents the Memorizing Transformer, which can attend to longer documents during inference. This approach stores (Key, Value) pairs, extracted from a transformer layer, in a memory and retrieves relevant pairs to add them to the current context during generation. (Cheng et al.) encode each documents, save them, and retrieve relevant documents based on the current context. In contrast to these approaches, our method offers improved scalability as we do not modify the architecture of the LLM. Instead, we suggest extracting and saving information from documents, allowing for the aggregation of extracted information from multiple sources. 
This enables us to provide more relevant and concise retrieved information that is closely aligned with the specific question being addressed.\nPark et al. ( 2023) utilizes an LLM within a generative agent framework to facilitate the storage and dynamic retrieval of a comprehensive record of the agent's experiences using natural language. However, there exists a fundamental distinction between their architecture and ours. In Park's framework, the memory component is an inherent part of the agent itself, while the LLM serves as an external tool employed solely for planning the agent's behaviors. Consequently, the LLM lacks control over the specific content to be stored and retrieved within the agent's memory. Dhingra et al. (2022) contribute to the field by curating a dataset specifically designed to differentiate between temporal and non-temporal facts. They propose training language models on temporally annotated data to enhance their temporal awareness. This work aligns with our research focus on addressing temporal information challenges. However, in our proposed solution, we address these challenges by introducing an updatable memory module. Schick et al. (2023) present a methodology that empowers LLMs to leverage external tools by generating API calls to access additional functionalities, such as using a calculator for task execution. Our work shares similarities with their approach in terms of teaching the LLM to utilize an external tool. However, it should be noted that our focus lies on incorporating a more intricate and influential tool, namely the memory module, which has the potential to significantly impact the LLM's output." }, { "figure_ref": [ "fig_0" ], "heading": "Approach", "publication_ref": [ "b6", "b2" ], "table_ref": [], "text": "We aim to design a RET-LLM where the user can perform two actions: (1): Provide one or a series of informative statements where the RET-LLM should be able to memorize the containing information. Previous methods perform this task by either training/fine-tuning the LLM over the provided document or creating a vector representation for the document and storing the representation. (2): Asking related questions which the RET-LLM would answer based on the stored memory. All these actions should function in a seamless setting where the user should only interact in natural language.\nOur RET-LLM is constituted by three main com-ponents: (1) Controller, (2): Fine-tuned LLM & (3): Memory. As shown in Figure 1, the controller moderates the flow of information between the user, the LLM and the memory. The LLM acts as a processing unit, where it receives the texts passed by the controller and figures where it needs to invoke a memory call or not. Since the LLM operates with text, inspired by Schick et al. (2023), we standardized the memory calls by implementing a text-based API schema. Therefore the LLM could generate memory API calls and the controller could apply the LLM API calls to the memory. In our setting, the memory stores data in triplets by using a three-columned table. This is based on the theoretical framework of Davidsonian semantics (Davidson, 1967), where concepts described in sentences could be stored in a structure of <first argument, relation, second argument>.\nIn the following we describe RET-LLM in more detail. The memory-API, how we finetune the LLM to become capable of these calls and the memory structure." 
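As a minimal illustration of this three-column triplet table, consider the sketch below (our own; exact-match and fuzzy lookup over vector representations are detailed in the next section):

```python
from typing import List, Optional, Tuple

Triplet = Tuple[str, str, str]   # <first argument, relation, second argument>

class TripletMemory:
    """Minimal sketch of the triplet store backing RET-LLM (illustrative only)."""

    def __init__(self) -> None:
        self.table: List[Triplet] = []

    def store(self, triplet: Triplet) -> None:
        self.table.append(triplet)

    def query(self, arg1: Optional[str] = None, rel: Optional[str] = None,
              arg2: Optional[str] = None) -> List[Triplet]:
        # return every stored triplet whose specified fields match the search terms
        return [t for t in self.table
                if (arg1 is None or t[0] == arg1)
                and (rel is None or t[1] == rel)
                and (arg2 is None or t[2] == arg2)]
```

A query may leave any subset of the three fields unspecified, mirroring the partial memory queries described in the following section.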
}, { "figure_ref": [], "heading": "Memory Structure", "publication_ref": [], "table_ref": [], "text": "Each triplet defines a relationship between two arguments with the following format: ⟨t 1 , t 2 , t 3 ⟩ where t 1 is the first argument, t 2 is the relation and t 3 is the second argument in the relationship. For instance in the sentence: \"Mark Zuckerberg is the CEO of Meta Inc.\" the informative triplet that could be extracted is: (Mark Zuckerberg, CEO, Meta Inc.).\nTo store these triplets we use a three-columned table where each column is associated with each part of the triplet. Alongside saving the texts, we store the average representations so that the memory could also handle queries which have semantically similar words. If the memory module fails to find the exact text in the table, it checks for similar texts by comparing the vector representation of the query text with vector representations of text peices already stored in the dataset. Therefore for every t i the mean representation retrieved by the LLM (h AV G (t i )) is stored in a Locality-Sensitive Hashing (LSH) table. The reason of utilizing LSH is to reduce the computation required for finding similar representations. Without a hash table for a given query representation, the distances to all of the stored representations should be computed which would be a computationally-expensive task. Handling Memory Queries. In a memory query, one or two of the triplet parameters should be provided as input: Q ∈ {⟨q 1 ⟩, ⟨q 2 ⟩, ⟨q 3 ⟩, ⟨q 1 , q 2 ⟩, ⟨q 1 , q 3 ⟩, ⟨q 2 , q 3 ⟩} Where q i is the search term for the i-th parameter in the stored tuples. Before retrieving the query results, each search term is checked For a given Q, first the memory checks whether the search terms (q i ) have an exact match in the storage table. If q i does not exist in the stored terms, we use its average representation h AV G (q i ) and the LSH table for an alternative term (q i ) that has an exact match in out memory table. Possibly, the LSH table may not find an alternative term for the given representation, therefore the query would not have a result: Q → ∅. In any case (exact match or similar match), the query might have multiple matches in the data table (q i = t i ). In this case all resulting triplets would be returned as the query output." }, { "figure_ref": [ "fig_3" ], "heading": "Memory-API & Dataflow", "publication_ref": [], "table_ref": [], "text": "To enable communication between the memory and the LLM, we design an API schema for memory read and write functions. This API allows the controller to understand when the LLM is calling the memory and what parameters should be passed. Based on the triplets discussed in the previous section, the two memory calls are as the following:\n• [MEM_WRITE{t 1 »t 2 »t 3 }]:\nThis structure is for storing a triplet ⟨t 1 , t 2 , t 3 ⟩. Depending on the prompt, multiple write calls could be sequentially generated by the LLM to store multiple triplets extracted from a text.\n• [MEM_READ{_»_»_}: {t 1 »t 2 »t 3 };...] : In a memory read, as shown in the API, there are three placeholders that based on Q atleast one of them should be filled with the search terms.\nBased on the query results from the memory, one or a list of triplets could be returned as shown in the highlighted segment.\nFigure 2 demonstrates how RET-LLM operates using the memory-API. Depending on the input given by the user, RET-LLM either have to read or write information from or to the memory. 
Figure 2 demonstrates how RET-LLM operates using the memory-API. Depending on the input given by the user, RET-LLM either has to read information from the memory or write information to it. If the user prompts an informative statement (or, ideally, a full document), it is a memory-write scenario. On the other hand, if there is a question in the input, we consider this to be a memory-read case. In both cases the user input is the first input to RET-LLM and is passed on to the LLM.\nBased on the given input, the LLM infers and generates the relevant API call. In a memory-write case, after the API call is generated the controller detects it and invokes a memory storage function with the given parameters. The memory receives the data in triplet format and stores it for future usage. If a memory-read call is generated by the LLM, the controller also detects it and pauses the model's sequence generation for the memory retrieval. It uses the parameters given inside the read call as the query terms and passes them to the memory. The memory lists all stored triplets that feature the given search terms (or a semantically similar version of them according to §3.1) and returns the results back to the controller. Using the API discussed at the beginning of this section, the read results are listed after the call so that the LLM can use them to produce a natural-sounding answer. After the answer is produced, it is returned to the user.\nAs the controller sits between the user and the LLM, it can hide the whole memory-API schema. This gives the user a simple end-to-end language modeling experience without exposing the memory functionality behind the scenes." }, { "figure_ref": [], "heading": "Finetuning the LLM", "publication_ref": [ "b7", "b4" ], "table_ref": [ "tab_0", "tab_1" ], "text": "In this part we discuss how the LLM is finetuned to be capable of generating memory-API calls. In the end, the LLM should be capable of detecting which type of memory call (read or write) it should invoke based on the input. As stated in Section 3.2, the LLM's input may have one of the two previously discussed structures depending on the memory function. Therefore the LLM should be able to generate and handle this API to store or read the relevant information. To this end, we develop a synthetic dataset to train the LLM. The synthetic task is to learn the relationships of a set of people with their respective corporations. Based on the stored information, RET-LLM should be capable of answering any questions regarding the people, the corporations or the relationships.\nWe use a set of first names and last names to generate a synthetic population, called P. Each person from this population per ∈ P can have only one relationship from the following list: rel ∈ R = {employment, manager, investor, founder, customer} with an organization org ∈ O, where O is a set of corporation names. Hence, each triplet has the form (per, rel, org). For instance: ⟨Dominick Alphonso, employment, BMW⟩. 2 Based on this triplet we can build three triplet-specific questions:\n• Q = ⟨per⟩, e.g. \"Who is Dominick Alphonso?\"\n• Q = ⟨per, org⟩, e.g. \"How Dominick Alphonso is related to BMW?\"\n• Q = ⟨per, rel⟩, e.g. \"Dominick Alphonso is employed by which company?\" and the answer to all of the above should be \"Dominick Alphonso is employed by BMW.\". Alongside these questions, three other types of questions can be asked that may be relevant to multiple triplets:\n• Q = ⟨rel⟩, e.g. \"Who are the employees?\"\n• Q = ⟨org⟩, e.g. \"Who are related to BMW?\"\n• Q = ⟨rel, org⟩, e.g. \"Who are employed by BMW?\"\nUnlike the first three, each of these questions can have multiple persons in its answer.
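As an illustration of the data just described, here is a small, hypothetical generation sketch. The name pools and the relation phrasings are our own, the paper assigns each synthetic person exactly one relationship, and only the single-triplet question type Q = ⟨per⟩ is shown; the remaining templates from Table 1 and the MEM_WRITE statements follow the same pattern.

```python
import random

# Illustrative pools; the paper uses its own lists of first/last names and corporations.
PEOPLE = ["Dominick Alphonso", "Dirk Alosa", "Vera Bayless"]
RELATIONS = ["employed by", "manager of", "investor in", "founder of", "customer of"]
ORGS = ["BMW", "Pfizer", "Siemens"]

def make_read_example(per: str, rel: str, org: str) -> dict:
    """Build one training instance for the Q = <per> question type."""
    question = f"Who is {per}?"
    api_query = f"[MEM_READ{{{per}>>>>}}"          # search on the first argument only
    api_response = f": {{{per}>>{rel}>>{org}}}]"
    answer = f" {per} is {rel} {org}."
    return {
        "input": question + api_query + api_response + answer,
        # The language-modeling loss is applied only to what the LLM must generate:
        "loss_segments": [api_query, answer],
    }

def sample_read_examples(n: int) -> list:
    return [make_read_example(random.choice(PEOPLE),
                              random.choice(RELATIONS),
                              random.choice(ORGS)) for _ in range(n)]
```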
For each of these questions, we expect the model to answer without any extra information (e.g. stating the corporation of employment when it is not asked). To create a training data instance from these questions based on the memory-API, we use the templates stated in Table 1. During finetuning, the Question, the API query (with the MEM_READ command), the API Response and the answer are concatenated as the data input for the LLM. However, the language modeling loss is only applied to the API query and Answer sections, since these two segments are the text sequences that the LLM is expected to generate based on the other two segments (Question & API Response), which are provided by the controller. As we also need informative examples that contain MEM_WRITE calls, we use a similar strategy based on the population, organizations and relations that were previously defined (P, O, R). Based on the memory-API, in a memory-write scenario RET-LLM receives a sentence which here contains relationship information, and the LLM should then generate the corresponding memory write calls. In our dataset we opted to build examples that state that multiple people have the same relationship with the same company: (per i , rel, org). The template for the memory write data examples is shown in Table 2. Similar to the question-based examples, the statement and the API call are concatenated to form the full input sequence. Also, the loss function is applied only to the API segment, since the first part is provided by the controller.\nWe opted to use the instruction-following Alpaca-7B model (Taori et al., 2023) as the base model for our finetuning. To execute the training in a resource-limited setup, we use low-rank adaptation (LoRA) (Hu et al., 2022). 3 This parameter-efficient measure allows us to finetune the base model on a single A6000 48GB GPU.\n3 The code for finetuning a LLaMA-based model using LoRA is available at: github.com/tloen/alpaca-lora" }, { "figure_ref": [ "fig_6" ], "heading": "Qualitative Results", "publication_ref": [], "table_ref": [], "text": "In this part, we present the internal process and final output on multiple evaluation examples. These examples were generated with the same procedure stated in §3.3. First, to demonstrate the importance of our approach, we provide the same example to our base model (Alpaca-7B) in a zero-shot setting. The input is a short instruction for the task, the informative sentences from the example, and, at the end, the question. As shown in Figure 3, the zero-shot result from the instruction-tuned model is clearly incorrect. While the model does have all the information in its context, it still produces an incorrect response.\nIn the same example, RET-LLM first stores the triplets extracted from the example into the memory. After storing the extracted relationships, RET-LLM can respond to the same question even without having the information in the input. With the help of the memory-API and the memory itself, the relevant triplet is found. The LLM manages to answer correctly after appending the query result to the memory call.\nOne potential use case of our approach is in answering questions that have a temporal context. For example, the presidency of the United States undergoes a change every 4 to 8 years. A normal PLM answers the question about the presidency based on its own training data. While model retraining or parameter editing has its own challenges, our approach could provide an easy and interpretable solution for this issue (Figure 4).
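A toy sketch of this idea (ours, not the released code) shows why an explicit memory makes such time-dependent facts easy to keep current: writing one updated row changes the answer, with no retraining or parameter editing involved. The "latest matching entry wins" policy below is an assumption made for illustration.

```python
# Minimal illustration of keeping a temporal fact current in a triplet memory.
memory = []  # list of (first_arg, relation, second_arg) triplets

def mem_write(triplet):
    memory.append(triplet)

def mem_read(relation, second_arg):
    matches = [t for t in memory if t[1] == relation and t[2] == second_arg]
    return matches[-1] if matches else None  # assumption: the most recent entry wins

mem_write(("Donald Trump", "president of", "United States"))  # outdated fact
mem_write(("Joe Biden", "president of", "United States"))     # updated entry
print(mem_read("president of", "United States"))
# ('Joe Biden', 'president of', 'United States')
```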
" }, { "figure_ref": [], "heading": "Conclusion & Future Work", "publication_ref": [], "table_ref": [], "text": "In this work, we introduced RET-LLM, a framework capable of storing information and retrieving it for later use. With a triplet-based memory structure, information is stored as relationships between two arguments with a known relation. The memory can be utilized via a memory-API whose calls are generated by a finetuned LLM. Using a controller, all components can communicate with each other, and the user interacts only with the controller, without being aware of the underlying process. We have shown that the LLM generates the proper API calls in several question answering examples without having the information in its input context. As this work is still under development, in our next revision we will add a more detailed empirical evaluation, preferably on a real dataset. We also seek to improve our finetuning method towards a more generalized setting so that it is capable of working with more types of informative relations." }, { "figure_ref": [], "heading": "Evaluation Examples (Figures 3 and 4)", "publication_ref": [], "table_ref": [], "text": "Figure 3: An example that has an incorrect result in a zero-shot setting and a correct one in our approach. Note that in the zero-shot setting the model has direct access to the information required for answering the question in its input and still ends up with an incorrect answer. However, in our approach each of the user prompts can be given to the RET-LLM separately. Another example is shown in Figure 5 in the appendix. (The panel contents of Figures 3 and 4, including the incorrect zero-shot Alpaca-7B answers \"Cyrus Alfred, Tia Batres, and Dorothea Altemus.\" and \"Barack Obama.\", are reproduced in the corresponding figures.)" }, { "figure_ref": [], "heading": "A Extra Evaluation Example", "publication_ref": [], "table_ref": [], "text": "" } ]
Large language models (LLMs) have significantly advanced the field of natural language processing (NLP) through their extensive parameters and comprehensive data utilization. However, existing LLMs lack a dedicated memory unit, limiting their ability to explicitly store and retrieve knowledge for various tasks. In this paper, we propose RET-LLM a novel framework that equips LLMs with a general write-read memory unit, allowing them to extract, store, and recall knowledge from the text as needed for task performance. Inspired by Davidsonian semantics theory, we extract and save knowledge in the form of triplets. The memory unit is designed to be scalable, aggregatable, updatable, and interpretable. Through qualitative evaluations, we demonstrate the superiority of our proposed framework over baseline approaches in question answering tasks. Moreover, our framework exhibits robust performance in handling temporal-based question answering tasks, showcasing its ability to effectively manage time-dependent information.
RET-LLM: Towards a General Read-Write Memory for Large Language Models
[ { "figure_caption": "Figure 1 :1Figure 1: An overview of RET-LLM. A user could prompt with (A): an informative sentence and our approach stores potent information from it inside the memory or (B): a question where previously saved information should be utilized to generate a valid answer.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Memory-Write scenario: (1) Controller passes the input to the LLM (2) which generates the appropiate memory write call. (3) The controller gives the data (and their average represntations) to the memory to be stored.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Memory-Read scenario: (1) Controller passes the question to the LLM (2) which generates the appropiate memory read call. (3) The controller apply the query on the memory with the given search terms from the LLM. (4) The memory returns the query results which are (5) forwarded back to the LLM. (6) The LLM generates the answer to the question using the query results and (7) the answer would be returned back to the user.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: A visualization of the process in both read-and write-based inputs.", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": ":Cyrus Alfred, Tia Batres, and Pasquale Ballif are customers of Pfizer. Evaluation Example #1: [MEM_WRITE{Cyrus Alfred>>customer of>>Pfizer}][MEM_WRITE{Tia Batres>>customer of>>Pfizer}][MEM_WRITE{Pasquale Ballif>>customer of>>Pfizer}] : (User Prompt + MEM_READ Call + Memory result passed by the Controller) Who are employed by Pfizer?[MEM_READ{>>employed by>>Pfizer}:{Dorothea Altemus>>employed by>>Pfizer}] Dorothea Altemus is employed by Pfizer.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Prompt + MEM_READ Call + Memory result passed by the Controller) Who is the president of the United States?[MEM_READ{>>president of>>United States}:{Joe Biden>>president of>>United States}] Joe Biden is the president of the United States. LLM final output to the User:", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Asking a question which requires temporal context usually leads to an outdated answer as shown here with Alpaca. However, in our RET-LLM with the aid of a modifiable memory, these questions could be answered by simply providing a updated memory entry.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "»rel 1 »org};{per 2 »rel 2 »org};... [per 1 , per 2 , ...] is\\are related to org. ⟨rel⟩ Who are the rel? {»rel»}: {per 1 »rel»org 1 };{per 2 »rel»org 2 };... [per 1 , per 2 , ...] is\\are rel. ⟨org, rel⟩ Who are rel org? {»rel»org}: {per 1 »rel»org};{per 2 »rel»org};... [per 1 , per 2 , ...] is\\are rel to org. Memory read data examples for finetuning.The first three types of questions are based on a single triplet therefore the API response would be only one triplet. However the second three may have multiple relevant tiplets stored in the memory as shown in their API-Resonse. Thus, the answer should combine the triplets data into a single sentence. [per 1 , per 2 , ...] is the placeholder of the names written sequentially in a natural way. 
For instance: \"Dirk Alosa, Ty Baumkirchner, and Vera Bayless\" , rel, org⟩, ⟨per 2 , rel, org⟩, ...][per 1 , per 2 , ...] is\\are rel to org.[MEM_WRITE{per 1 »rel 1 »org}][MEM_WRITE{per 2 »rel 2 »org}]...", "figure_data": "Query TypeQuestionAPI QueryAPI ResponseAnswer⟨per⟩Who is per?{per»»}:{per»rel»org}per is rel to org.⟨per, org⟩How per is related to org?{per»»org}:{per»rel»org}per is rel to org.⟨per, rel⟩per is rel which company?{per»rel»}:{per»rel»org}per is rel to org.⟨org⟩ {per 1 Triplet(s) Who are related to org? {»»org}: StatementAPI Write Call(s)[⟨per 1", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Memory write data example structure for finetuning.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Ali Modarressi; Ayyoob Imani; Mohsen Fayyaz; Hinrich Schütze
[ { "authors": "Sébastien Bubeck; Varun Chandrasekaran; Ronen Eldan; Johannes Gehrke; Eric Horvitz; Ece Kamar; Peter Lee; Yin Tat Lee; Yuanzhi Li; Scott Lundberg", "journal": "", "ref_id": "b0", "title": "Sparks of artificial general intelligence: Early experiments with gpt-4", "year": "2023" }, { "authors": "Xin Cheng; Yankai Lin; Dongyan Zhao; Rui Yan ; Aakanksha; Sharan Chowdhery; Jacob Narang; Maarten Devlin; Gaurav Bosma; Adam Mishra; Paul Roberts; Hyung Won Barham; Charles Chung; Sebastian Sutton; Parker Gehrmann; Kensen Schuh; Sasha Shi; Joshua Tsvyashchenko; Abhishek Maynez; Parker Rao; Yi Barnes; Noam Tay; Shazeer; Emily Vinodkumar Prabhakaran; Nan Reif; Ben Du; Reiner Hutchinson; James Pope; Jacob Bradbury; Michael Austin; Guy Isard; Pengcheng Gur-Ari; Toju Yin; Anselm Duke; Sanjay Levskaya; Sunipa Ghemawat; Henryk Dev; Xavier Michalewski; Vedant Garcia; Kevin Misra; Liam Robinson; Denny Fedus; Daphne Zhou; David Ippolito; Hyeontaek Luan; Barret Lim; Alexander Zoph; Ryan Spiridonov; David Sepassi; Shivani Dohan; Mark Agrawal; Andrew M Omernick; Thanumalayan Dai; Marie Sankaranarayana Pillai; Aitor Pellat; Erica Lewkowycz; Rewon Moreira; Oleksandr Child; Katherine Polozov; Zongwei Lee; Xuezhi Zhou; Brennan Wang; Mark Saeta; Orhan Diaz; Michele Firat; Jason Catasta; Kathy Wei; Douglas Meier-Hellstern; Jeff Eck; Slav Dean; Noah Petrov; Fiedel", "journal": "", "ref_id": "b1", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Donald Davidson", "journal": "", "ref_id": "b2", "title": "The logical form of action sentences", "year": "1967" }, { "authors": "Bhuwan Dhingra; Jeremy R Cole; Julian Martin Eisenschlos; Daniel Gillick; Jacob Eisenstein; William W Cohen", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b3", "title": "Time-aware language models as temporal knowledge bases", "year": "2022" }, { "authors": "J Edward; Phillip Hu; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b4", "title": "LoRA: Low-rank adaptation of large language models", "year": "2022" }, { "authors": "Sung Joon; Park; C O' Joseph; Carrie J Brien; Meredith Ringel Cai; Percy Morris; Michael S Liang; Bernstein", "journal": "", "ref_id": "b5", "title": "Generative agents: Interactive simulacra of human behavior", "year": "2023" }, { "authors": "Timo Schick; Jane Dwivedi-Yu; Roberto Dessì; Roberta Raileanu; Maria Lomeli; Luke Zettlemoyer; Nicola Cancedda; Thomas Scialom", "journal": "", "ref_id": "b6", "title": "Toolformer: Language models can teach themselves to use tools", "year": "2023" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b7", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b8", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Yuhuai Wu; Markus N Rabe; Delesley Hutchins; Christian Szegedy", "journal": "", "ref_id": "b9", "title": "Memorizing transformers", "year": "2022" }, { "authors": "Zexuan Zhong; Tao Lei; Danqi Chen", "journal": "", "ref_id": "b10", "title": "Training language models with memory 
augmentation", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 319.16, 602.23, 119.6, 10.28 ], "formula_id": "formula_0", "formula_text": "• [MEM_WRITE{t 1 »t 2 »t 3 }]:" } ]
2023-11-06
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b32", "b34", "b18", "b26", "b28", "b23", "b45", "b3", "b12", "b9", "b36", "b44" ], "table_ref": [], "text": "Recently, large language models (LLMs) (Zhao et al., 2023a) have shown great potential as generalpurpose task solvers in a variety of real-world applications. With excellent few-shot and zero-shot ability, LLMs, such as GPT-4 (OpenAI, 2023) and LLaMA (Touvron et al., 2023), can even outperform full-data supervised-tuned models on many tasks with suitable prompting strategies.\nAmong these prompting strategies, chain-ofthought (CoT) prompting (Wei et al., 2022;Ko-jima et al., 2022) has been a prominent approach to eliciting the reasoning abilities of LLMs. It incorporates the intermediate reasoning steps of exemplars into the input prompt, to instruct LLMs to solve a question step by step. Despite the remarkable improvement by CoT prompting, LLMs still have difficulties in solving complex reasoning tasks that involve specific functionalities, such as arithmetic calculation and information retrieval (Lu et al., 2022;Qian et al., 2022). To address this issue, external tools (e.g., calculator, search engine) have been employed to fulfill the basic functionalities (Schick et al., 2023;Paranjape et al., 2023), easing the burden of LLMs. With proper interfaces, LLMs can be guided by prompts to manipulate tools when necessary.\nHowever, as tools are not intrinsically integrated with LLMs, incorporating external tools would have to interrupt the CoT reasoning process of LLMs. Such an issue would become more intractable on complex reasoning tasks that frequently invoke the use of tools. To address it, existing work either relies on LLMs to prearrange the tool use plan for subsequent execution (Zhou et al., 2022;Jiang et al., 2023b), or needs to design formal actions pertaining to specific tasks (Dua et al., 2022;Khattab et al., 2022;Jiang et al., 2023a). Despite the effectiveness, the two types of methods still suffer from potential issues: the former one cannot interact with tools after generating the plan, even seeing obvious mistakes; while the latter one has to frequently switch between reasoning with LLMs and taking actions, hurting the continuity of the CoT reasoning process.\nTo overcome these disadvantages, we seek a more unified way to integrate CoT reasoning and tool manipulation. As the key idea, we consider tools manipulation by LLMs as the interaction between LLMs and tools, in which LLMs send the use requests and tools respond to support specific functions. Further, inspired by the recent progress of ChatGPT-like LLMs (called chat-based LLMs), we model the interaction process between LLMs and tools as a multi-turn conversation, and leverage the excellent chatting capacities for manipulating tools by LLMs. At each turn, the LLM can freely interact with tools when in need, otherwise perform the reasoning by itself. The conversation continues until the final answer is derived by LLMs. In this process, as chat-based LLMs can well understand the multi-turn context, they can follow the thought chain in the whole conversation and naturally invoke the tools accordingly, thus keeping the continuity of the reasoning process.\nTo this end, in this paper, we propose ChatCoT, a tool-augmented chain-of-thought reasoning strategy for chat-based LLMs. As the major merit, Chat-CoT can perform the CoT reasoning across multiturn conversation, and freely interact with tools at immediate steps. 
Concretely, we first store the useful knowledge at early turns of the conversation, including tools, tasks, and multi-turn reasoning format, to help LLMs utilize task-specific knowledge to perform reasoning or manipulate tools. Then, we iterate a specially designed tool-augmented reasoning step in which LLMs interact with tools, to perform step-by-step tool-augmented reasoning, until obtaining the final answer.\nTo evaluate the effectiveness, we implement ChatCoT on ChatGPT, and conduct experiments on two complex reasoning benchmarks, i.e., MATH (Hendrycks et al., 2021) and Hot-potQA (Yang et al., 2018). Experimental results show that ChatCoT achieves very promising performance on MATH with 7.9% relative improvement in average over the SOTA baselines (i.e., PHP (Zheng et al., 2023)). Besides, our approach can also be integrated with other strategies, e.g., self-consistency, and ChatCoT can achieve better performance by incorporating these strategies." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b32", "b19", "b24", "b27", "b28", "b25", "b8", "b4", "b38", "b40", "b15", "b29", "b38", "b34", "b13", "b45", "b3", "b39", "b30", "b33", "b20", "b44", "b37", "b35", "b34", "b28", "b21", "b6" ], "table_ref": [], "text": "Tool-Augmented Large Language Models. With the large-scale parameters and pre-training corpus, large language models (LLMs) (e.g., Flan T5 (Chung et al., 2022), ChatGPT (OpenAI, 2022) and LLaMA (Touvron et al., 2023)) have demonstrated strong zero-shot and few-shot ability in NLP tasks (e.g., language generation, reasoning). However, LLMs have still struggled with complex reasoning tasks requiring task-specific knowledge and multi-step reasoning (e.g., mathemati-cal problem solving). Previous work (Zhao et al., 2022(Zhao et al., , 2023b;;Luo et al., 2023) has constructed taskspecific corpus and utilized continue pre-training and instruction tuning to inject relative knowledge into LLMs and enhance the complex reasoning ability of LLMs. In order to further reduce the mistakes made by LLMs, existing methods have explored to augment LLMs with external tools. They can be roughly categorized into the following two types. The first type of methods (Gao et al., 2023;Parisi et al., 2022;Qiao et al., 2023) train the model parameters to support the utilization of the external tools, where they collect or synthesize the toolaugmented examples to tune the model parameters (Schick et al., 2023;Patil et al., 2023;Hao et al., 2023). Another type of methods (Gao et al., 2022;Yao et al., 2022;Zhang et al., 2023) utilize carefully designed prompts to guide LLMs to use external tools. They focus on devising proper prompts or tools manipulation ways to select and use tools when necessary (Liang et al., 2023;Shen et al., 2023;Yao et al., 2022). In this work, we follow the second type of methods and propose a tool-augmented chain-of-thought reasoning strategy that can better solve complex reasoning tasks.\nChain-of-Thought Reasoning. To further enhance the reasoning capacity of LLMs, Chain-of-Thought (CoT) prompting strategy (Wei et al., 2022;Kojima et al., 2022) has been proposed to guide LLMs to generate intermediate reasoning steps which can boost the performance of LLMs. Through special instructions (e.g., \"Let us think step by step\") and in-context exemplars with detailed intermediate reasoning steps, LLMs can perform step-bystep reasoning to reach the final answer. 
Based on CoT, recent work has also proposed several methods to further improve the performance, including problem decomposition (Zhou et al., 2022;Dua et al., 2022), appropriate exemplar selection (Ye et al., 2022;Shi et al., 2023), result post-processing (Wang et al., 2022;Madaan et al., 2023;Zheng et al., 2023), and changing the reasoning format (Yao et al., 2023;Wu et al., 2023). However, as the generation process of CoT is one-pass, the utilization of tools at intermediate steps would have to interrupt it, hurting the continuity of the generation process. In this work, we propose a unified way to integrate CoT reasoning and tool manipulation, which utilizes the excellent multi-turn chatting capacity of LLMs to perform CoT reasoning across multi-turn conversations.\nIn this section, we present the task setting, and then introduce the Chain-of-Thought prompting strategy and tool manipulation in reasoning tasks.\nTask Setting. In this work, we focus on improving the reasoning ability of large language models (LLMs) on complex tasks, e.g., solving mathematics competition problems. Unlike tasks that can be solved by humans via straightforward skills or tools, complex tasks require advanced knowledge (e.g., mathematical theorems) and multi-step reasoning to reach the answer. Typically, a complex problem includes three types of texts, namely the problem statement, the solution text, and the answer key, denoted as 𝑄, 𝑆 and 𝐴, respectively. The problem statement 𝑄 introduces the background and description of a complex problem, and the solution text illustrates the detailed solving process that obtains the answer key. All of them are composed of a sequence of tokens, where each token is either a text word or a mathematical symbol. Formally, given the problem statement 𝑄, we aim to utilize LLMs to perform multi-step reasoning and finally generate the accurate answer 𝐴.\nChain-of-Thought Prompting. To elicit the powerful reasoning ability of LLMs for complex tasks, the Chain-of-Thought (CoT) prompting strategy (Wei et al., 2022) has been widely used to guide LLMs in performing step-by-step reasoning. Generally, a CoT prompt consists of a few exemplars in which a series of intermediate reasoning steps {𝐼 1 , • • • , 𝐼 𝑛 } is also included. Each exemplar can be denoted as 𝐸 = ⟨𝑄, {𝐼 1 , • • • , 𝐼 𝑛 }, 𝐴⟩. Formally, given the question and a few exemplars, a CoT prompt is composed by integrating them into a long input for the LLM, which prompts the LLM to generate a similar thought chain that leads to the final answer.\nTool Manipulation. Previous work has revealed that LLMs struggle with basic functionalities (e.g., arithmetic calculation (Schick et al., 2023)), which can be handled by specific external tools (e.g., a calculator), denoted as {𝑇 1 , . . . , 𝑇 𝑛 }. To manipulate tools, existing work mostly relies on writing a detailed prompt describing how to use the available tools, incorporates it to guide the LLM in selecting useful tools and generating the tool arguments, and finally calls the tool API to obtain the result.
Following this way, in this work, we focus on three useful tools that have been widely used by humans to solve complex problems:\n• Calculator: Given a mathematical expression, the calculator can compute its value or simplify it according to arithmetic rules (e.g., combining like terms and reduction of fractions).\n• Equation Solver: Given an equation system and its unknown variables, the equation solver can automatically calculate the values of the contained unknown variables through corresponding algorithms.\n• Retriever: Given a query, the retriever aims to extract the most relevant information (e.g., documents) from a number of candidates. According to the type of the retrieved corpus, it can be implemented by specialized models, e.g., a dense retrieval model.\nWe implement the first two tools by using different functions of SymPy (Meurer et al., 2017), a Python library for mathematical symbolic calculation. For the retriever, we adopt SimCSE (Gao et al., 2021), a sentence embedding model that measures text semantic similarity. Note that when the input expression or equation is ill-formed or unsolvable, the above tools return an error." }, { "figure_ref": [ "fig_2" ], "heading": "Approach", "publication_ref": [], "table_ref": [], "text": "In this section, we present our proposed ChatCoT, a new chain-of-thought (CoT) prompting framework based on multi-turn conversations, for improving chat-based LLMs on complex reasoning tasks with tools. The overall illustration of our proposed ChatCoT is shown in Figure 1." }, { "figure_ref": [], "heading": "Overview", "publication_ref": [ "b4", "b17", "b38" ], "table_ref": [], "text": "For complex tasks (e.g., advanced mathematical problems), LLMs need to frequently manipulate tools when in need, to fulfill intractable intermediate issues. However, as tools are not intrinsically integrated with LLMs, previous work mostly relies on the LLM to generate a plan of tool manipulation that is then executed (Gao et al., 2022;Lu et al., 2023), or calls tools immediately by stopping the continuous generation of LLMs (Yao et al., 2022). Neither of the above ways is suitable for frequent interactions between LLMs and tools, due to the error accumulation in planning and the frequent interruptions of LLM generation.\nIn our approach, we decompose the chain-of-thought reasoning process of LLMs into a multi-round conversation (Figure 1 illustrates this with a worked example of solving for 𝑒, given that 2𝑑 is 8 less than 17𝑒 and 2𝑒 is 9 less than 𝑑). In each turn, LLMs only need to concentrate on manipulating tools or accomplishing the reasoning of the current step, and the whole reasoning process keeps on pushing forward without premature planning or sudden interruption. In this way, the whole reasoning process is converted into a conversation between LLMs and an agent, which follows pre-defined rules to guide LLMs and manipulate the tool.
By designing proper chatting strategies, the agent would automatically elicit LLMs to perform reasoning and select a tool, or invoke the tool for execution.\nIn our approach, we first initialize the multi-turn conversation by feeding chat-based LLMs with the background knowledge, i.e., the description of tools, relevant task exemplars, and the demonstration of decomposed chain-of-thought in chat, which are the conversational knowledge memory for supporting the following reasoning. Then, we propose the tool-augmented reasoning procedure that leverages LLMs to perform reasoning with tools in the current step and iterate it to fulfill all sub-tasks in the whole reasoning process, until reaching the answer. We introduce the details of the two components in the following." }, { "figure_ref": [], "heading": "Initializing Conversational Knowledge Memory", "publication_ref": [ "b6" ], "table_ref": [], "text": "To guide chat-based LLMs to follow our proposed ChatCoT using external tools, it is essential to design proper prompts in context. In our approach, as we reformulate the chain-of-thought reasoning into a decomposed multi-turn conversation, we can also feed the essential prompts into LLMs at early turns as the context, to initialize the conversation background knowledge. It can be seen as the incontext knowledge memory in the format of dialogue that stores useful knowledge for helping chatbased LLMs manipulate tools or perform reasoning.\nHere, we consider three types of knowledge about tools, task, and multi-turn reasoning format, respectively. The details of prompts are in Appendix A.\nTools Knowledge. As LLMs have never seen tools during pre-training, for each tool in Section 3, we hand-craft its description in the following pattern: \"[𝑇] can help you [𝑌 ]\", where [𝑇] is the tool name and [𝑌 ] shows its detailed functionality. Then, we merge all the descriptions and design the input prompt to tell LLMs about the knowledge of all tools. We also hand-craft the expected response of the LLM. It will be also fed into the LLM, to indicate the LLM that it has accepted our prompt and should follow it.\nRetrieval-Augmented Task Knowledge. Since LLMs can learn the task knowledge from incontext exemplars, we leverage a retriever to select the most relevant instance from the training dataset, to provide more useful knowledge for the given question. Concretely, we train SimCSE (Gao et al., 2021), a sentence embedding method that can measure the semantic similarity of texts, via the unsupervised training strategy on the training set. Then, we leverage it to retrieve the top-𝑘 most semantically similar exemplars, and concatenate their problem statement 𝑄 and solution 𝑆 to compose the input prompt. Similarly, we also feed it with our expected response into the LLM.\nMulti-turn Reasoning Format. To elicit LLMs following multi-turn reasoning format, we manually annotate the whole multi-round dialogue 𝐼 1 , • • • , 𝐼 𝑛 of randomly sampled five questions from the training set, to create the exemplars. Then, we feed the dialogues of all the exemplars into the chat-based LLM round by round, as the context to guide LLMs to follow it for performing reasoning.\nSummary. The above three types of multi-turn utterances are pre-defined with corresponding contents and formats, which compose the conversational knowledge memory of our approach. It would be leveraged to initialize the conversational context, and support the following step-by-step reasoning for answering the question." 
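A minimal sketch of how this conversational knowledge memory could be assembled as seed chat messages is shown below. The prompt wording follows the utterance patterns reported in Appendix A; the function signature, the role/content message format, and the input structures are our own illustrative assumptions rather than the authors' released code.

```python
from typing import Dict, List, Tuple

def build_knowledge_memory(tool_descriptions: List[Tuple[str, str]],
                           exemplars: List[Tuple[str, str]],
                           format_dialogues: List[List[Dict[str, str]]]) -> List[Dict[str, str]]:
    """Seed the chat history with tool, task and reasoning-format knowledge."""
    messages: List[Dict[str, str]] = []
    # 1. Tools knowledge ("[T] can help you [Y]").
    tools_text = " ".join(f"{name} can help you {function}." for name, function in tool_descriptions)
    messages += [
        {"role": "user", "content": "You can use tool to help you solve the problem and "
                                    f"I give you the instruction of tools usage. {tools_text} "
                                    "Do you understand?"},
        {"role": "assistant", "content": "Yes, I understand. I will use tool to help me "
                                         "solve the problem."},
    ]
    # 2. Retrieval-augmented task knowledge: top-k similar problems with their solutions.
    exemplar_text = " ".join(f"Problem: {q} Solution: {s}" for q, s in exemplars)
    messages += [
        {"role": "user", "content": f"I give you some example. {exemplar_text} "
                                    "You can use the knowledge and theory in these problems. "
                                    "Do you understand?"},
        {"role": "assistant", "content": "Yes, I understand. I will solve the problem step by "
                                         "step and use tool to help me."},
    ]
    # 3. Multi-turn reasoning format: annotated dialogues are replayed turn by turn.
    for dialogue in format_dialogues:
        messages += dialogue
    return messages
```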
}, { "figure_ref": [], "heading": "Iterative Tool-augmented Reasoning", "publication_ref": [], "table_ref": [], "text": "Based on the above conversational knowledge memory, we iterate the tool-augmented reasoning step to perform step-by-step tool-augmented reasoning, until finally obtain the answer." }, { "figure_ref": [], "heading": "Tool-augmented Reasoning Step", "publication_ref": [], "table_ref": [], "text": "The tool-augmented reasoning step can be iterated in multiple times. In each iteration, based on the current results, we first leverage LLMs to perform reasoning, then select the proper tool by LLMs, and finally execute the selected tool to obtain the intermediate result in the current step.\nLLM for Reasoning. Guided by the exemplars in the conversation history, LLMs are able to decom-pose the whole reasoning process into multi-turn chat. Specially, LLMs would be elicited by the contextual exemplars to directly perform reasoning in natural language based on the current result, without specialized prompts or instructions. Consequently, LLMs can rely on the retrieval-augmented task knowledge in context, to generate the natural language solution till the point that needs the functionality of tools.\nLLM for Tools Selection. After reasoning, we utilize the LLM to select a useful tool (e.g., calculator), which will be employed to provide the required functionality for the LLM. Here, the input prompt of the LLM is \"To solve this sub-problem, which tool can we use?\" After feeding it into the LLM, if the LLM requires to utilize tools, it will select a suitable one, and then we further ask the LLM to formulate the input arguments of the tool, e.g., mathematical expression. Otherwise, it will answer \"Do not use tool\", and the LLM will continue to perform reasoning.\nTools Execution. Given the selected tool and formulated arguments by LLMs, we can execute the tool with the arguments to obtain the result in the current iteration. Here, we also consider that the results from the tool may be not satisfied by the LLM, e.g., irrelevant retrieved documents. In this case, we can also add several feedback rounds where the LLM judges if the result is useful or expected, and then reuse the tool to acquire a new result." }, { "figure_ref": [], "heading": "Iteration for Step-by-Step Reasoning", "publication_ref": [ "b33" ], "table_ref": [], "text": "We iterate the above step based on the in-context conversation knowledge memory, to perform stepby-step reasoning on the given question 𝑄. We start the whole iteration process using the following prompt: \"You should solve the problem step by step and you should follow the react in the history [𝑄]\". Then, after reaching the answer key, the iteration process will be stopped by LLMs. In practice, we find that chat-based LLMs are prone to continue chatting although the answer key has appeared in the reasoning process. Thus, we set the maximum chat turns, and devise the following prompt to force LLMs to stop reasoning and conclude the answer: \"Base on the context, what is the answer?\".\nAs our proposed approach only decomposes the one-pass chain-of-thought reasoning into multiturn chat and adds the utilization of tools, it is agnostic to the task types and tools implementation. Therefore, it is a general framework that can be applied to a variety of complex reasoning tasks that require suitable tools. 
Besides, our approach also supports the recently proposed improvement strategies based on the chain-of-thought method, e.g., self-consistency (Wang et al., 2022). We conduct corresponding experiments in Section 5.3 to validate it." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct experiments to evaluate the effectiveness of ChatCoT. The implementation details can be found in Appendix B." }, { "figure_ref": [], "heading": "Experimental settings", "publication_ref": [ "b9", "b36", "b34", "b7", "b44", "b1", "b14", "b31", "b32" ], "table_ref": [ "tab_0" ], "text": "Datasets. We consider two complex reasoning datasets for evaluation, i.e., MATH (Hendrycks et al., 2021) and HotpotQA (Yang et al., 2018). The details of these two datasets are shown in Table 1. We adopt accuracy as the evaluation metric.\n• MATH is composed of challenging competition mathematical problems which require advanced mathematical knowledge. It is divided into seven categories, i.e., Algebra, Counting and Probability, Precalculus, Prealgebra, Geometry, Intermediate Algebra, and Number Theory. We adopt the calculator and an equation solver as external tools to help LLMs.\n• HotpotQA is a multi-hop question answering dataset, where each question is associated with a collection of paragraph candidates containing several golden contents which are useful for reasoning. We use the development set under the distractor setting of HotpotQA for evaluation, where the annotation of golden paragraphs is not aware to LLMs. We employ the retriever as the external tool.\nBaselines. We mainly compare our approach with the following prompting strategies based on Chat-GPT (OpenAI, 2022):\n• Chain-of-Thought (CoT) (Wei et al., 2022) is a prominent method to boost the performance of LLMs in reasoning tasks. In CoT, LLMs are prompted to generate the intermediate reasoning path and reasoning step by step to reach the final answer. Previous work has shown that the utilization of external tools and similar exemplars improves the performance of CoT. Therefore, we implement external tools to help LLMs reason and retrieve to help LLMs select exemplars, which are named CoT w/ Tool, and CoT w/ Retri, respectively.\n• Learning-to-Program (LP) (Guo et al., 2023) guides LLMs to program in natural language by learning solutions in the training set, and elicits LLMs to solve tasks following the program.\n• Progressive-Hint Prompting (PHP) (Zheng et al., 2023) proposes to iteratively refine the solution based on the answer hints from previous trials. The iterative method achieves SOTA on MATH.\nTo provide a more complete evaluation, we also report the performance of various LLM backbones with the vanilla CoT prompting, including PaLM (Chowdhery et al., 2022), PaLM 2 (Google, 2023), Minerva (Lewkowycz et al., 2022), Galactica (Taylor et al., 2022), LLaMA (Touvron et al., 2023) and GPT-3 (Brown et al., 2020)." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_1", "tab_2" ], "text": "We present the evaluation results of our approach on MATH and HotpotQA datasets in Table 2 and Table 3 respectively.\nFirst, for the comparison of backbones for CoT prompting, ChatGPT achieves the best performance, demonstrating its outstanding mathematical reasoning ability. 
Our method elicits the reasoning process by leveraging the strong multi-turn dialogue ability of ChatGPT, thus better eliciting the reasoning ability of ChatGPT.\nSecond, retrieval-augmented methods (e.g., ChatCoT, CoT w/ Retri) outperform other baselines. The reason is that retrieved exemplars may contain more relevant knowledge and reasoning steps that are beneficial for solving the given problem. On the Geometry task of MATH, CoT w/ Retri achieves a larger improvement over vanilla CoT than on other sub-tasks. A possible reason is that ChatGPT is less familiar with the knowledge and symbols of geometry than with those of other sub-tasks. Without similar exemplars, it is difficult for LLMs to understand them well.\nThird, given the results of CoT and CoT w/ Tool on MATH and HotpotQA, we can find that directly utilizing external tools during reasoning is not a suitable approach and may hurt the performance of LLMs. The reason may be that injecting tool usage into the CoT reasoning process hurts the continuity of reasoning.\nFinally, ChatCoT achieves state-of-the-art performance on the MATH dataset based on ChatGPT and outperforms other baselines on HotpotQA. Compared with the previous SOTA method PHP, ChatCoT outperforms it on six of the seven sub-tasks of the MATH dataset and achieves a 7.9% relative improvement in average accuracy over the PHP method. The experimental results have verified the effectiveness of ChatCoT on complex reasoning tasks. By leveraging conversational knowledge memory and multi-round dialogue for reasoning, ChatCoT has the advantage of utilizing plug-and-play tools. Moreover, on the Number Theory task of MATH, we can find that PHP achieves the best performance. The reason may be that there are fewer equations that need to be computed or simplified. Thus, the advantage of the utilization of tools becomes less obvious." }, { "figure_ref": [], "heading": "Detailed Analysis", "publication_ref": [ "b33", "b20" ], "table_ref": [ "tab_3", "tab_5" ], "text": "In order to further verify the effectiveness of each component in ChatCoT, we conduct experiments on ablation, adaptation, tool utilization and expense. We present the case study in Appendix C.1.\nAblation Study. In the ablation study, we evaluate the effectiveness of conversational memory, including tool knowledge memory, retrieval-augmented knowledge memory, and multi-turn reasoning format memory. As shown in Table 4, removing any type of conversational memory reduces the performance of ChatCoT, which indicates the effectiveness of these memories in complex reasoning. In particular, removing retrieval-augmented knowledge memory or multi-turn reasoning format memory leads to a large drop, which shows that mathematical knowledge and reasoning format knowledge are important for LLMs in reasoning tasks, while LLMs can learn the usage of external tools from exemplars without descriptions.\nCombination with Improvement Strategies.\nChatCoT is a general method to enhance the tool manipulation ability of LLMs. It can be integrated with improvement strategies and further boost the performance of LLMs on reasoning tasks. To evaluate the applicability of ChatCoT to improvement strategies designed for CoT, we compare ChatCoT with CoT on two subtasks of MATH, where both methods are augmented with self-consistency (Wang et al., 2022), a representative improvement strategy for CoT prompting. Concretely, we sample 5 outputs for majority voting in self-consistency.
As shown in Table 5, selfconsistency brings improvement in both CoT and ChatCoT. In particular, the absolute improvement of ChatCoT is slightly higher than CoT, showing that ChatCoT can adapt to self-consistency well.\nThe reason is that, with the decomposing of reasoning procedures, the intermediate steps of ChatCoT are more confident, and small mistakes will be corrected easily. Moreover, we construct the case study about the combination with ChatCoT and Self-Refine (Madaan et al., 2023) " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we have proposed ChatCoT, a new framework to manipulate the tools for the CoT reasoning. It naturally integrates the reasoning process and manipulating tools through a form of multi-turn conversations. At each turn, LLMs can either interact with tools or perform the reasoning by itself.\nOur approach can effectively leverage the multiturn conversation ability of chat-based LLMs. Experimental results on two complex reasoning tasks including MATH and HotpotQA have verified the effectiveness of ChatCoT. Currently, our experiments are mainly conducted on mathematical reasoning tasks, and we will test the effectiveness of the proposed approach to more types of reasoning tasks. Besides, we will also consider extending the number of available tools for solving different tasks." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss the limitations of our work. First, we do not utilize GPT-4 in our experiment or evaluate the performance of GPT-4 in the ChatCoT framework. That is because our application for GPT-4 has not been accepted. Second, ChatCoT is designed for chat-based LLMs and it is hardly compatible with other LLMs. However, most LLMs support multi-turn conversation currently and they perform well on reasoning tasks. Besides, although LLMs have achieved strong ability in reasoning tasks, the requirement of computation expense and GPU resource is higher than other pre-trained language models which have millions of parameters. The utilization of LLMs will produce more carbon dioxide and pollute the environment." }, { "figure_ref": [], "heading": "A Details of Conversation Memory", "publication_ref": [], "table_ref": [], "text": "In this part, we present the details of the prompt in conversation Memory. " }, { "figure_ref": [], "heading": "B Implementation Details", "publication_ref": [ "b16", "b6" ], "table_ref": [], "text": "During the evaluation, we utilize ChatGPT (gpt-3.5-turbo) (OpenAI, 2022) as our backbone model, and fine-tune RoBERTa (Liu et al., 2019) following SimCSE (Gao et al., 2021) on the training sets of MATH and HotpotQA separately as the retriever in corresponding tasks.\nFor MATH, we leverage 5-shot setting. The exemplars of CoT and CoT w/ Tool are randomly sampled, while exemplars of CoT w/ Retri are retrieved top-5 similar problems by the retriever. For ChatCoT, 2 retrieval exemplars and 3 annotated exemplars will be adopted. For HotpotQA, we leverage 4-shot setting which is similar to MATH, due to the length limitation of input. For the CoT method, we retrieve the top-3 relevant paragraphs from the paragraph collection as evidence of the given question. In ChatCoT, as the retrieved paragraphs might be not useful for LLMs, LLMs can send feedback to the retriever to show other results at most 5 times." 
}, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "C Case Study C.1 Framework of ChatCoT", "publication_ref": [], "table_ref": [], "text": "In order to better present the reasoning process of ChatCoT, we conduct the case study of two problems in MATH dataset, which is shown in Figure 2.\nThe beginning prompt contains knowledge of tools, tasks, and reasoning format. In the tool knowledge, we introduce the usage and function of external tools. For task knowledge, we retrieve similar problems and corresponding solutions from the training set as retrieval exemplars through semantics similarity, which might contain relevant knowledge. The reasoning format is used to guide LLMs to reason through multi-round dialogue. The beginning prompt and corresponding response are pre-set to enhance the sensitivity of LLMs to instructions.\nInstructed by the beginning prompt, LLMs iterate the tool-augmented reasoning step for stepby-step reasoning in ChatCoT. We illustrate the flow of ChatCoT with the example on the left in Figure 2. Given the math problem, LLMs first reason out that 𝑑 should be expressed in terms of 𝑒. Then, LLMs communicate with the agent to select the appropriate external tool Equation Solver and specify the corresponding arguments to invoke the tool, including the unknown symbols and the system of equations. Based on the selection, The assistant executes the tool invocation and returns the solution of the system of equations to LLMs. After execution, LLMs can perform the next reasoning step based on the given result. The iterative tool-augmented reasoning process can be iterated several times until LLMs find out the answer. In this case, LLMs find that the original problem has been solved, thus obtaining the final answer and ending the reasoning process." }, { "figure_ref": [], "heading": "C.2 Combination with Self-Refine", "publication_ref": [], "table_ref": [], "text": "To better show how LLMs correct mistakes through ChatCoT, we present a case study of the combination of ChatCoT and Self-Refine in the following:\nUser: \"Problem: What is the remainder when 13 13 + 5 is divided by 6?\nLet's think step by step\" LLM: First, we need to find out the value of 13 13 + 5 using the properties of exponents.\n...... LLM: \"Second, we need to find out the remainder when 302875106592258 is divided by 6.\"\nUser: \"To solve this sub-problem, which tool can we use?\"\nLLM: \"Calculator\" User: \"Give me the equation to calculate\" LLM: \"302875106592258 mod 6\" User: \"Results: 302875106592258 mod 6 = 1.81725063955355𝑒 + 15 * 𝑚𝑜𝑑.\nContinue reasoning\" LLM: \"Finally, we know that the remainder when 13 13 + 5 is divided by 6 is the same as the remainder when 302875106592258 is divided by 6, which is 2 \" User: \"The solution above might some mistake, you should check the solution and get the final answer.\"\nLLM: \"Apologies for the mistake. Let's correct the solution.\nTo find the remainder when 13 13 + 5 is divided by 6, we can simplify the expression 13 13 first.\nUsing the property of remainders, we can observe that 13 ≡ 1 (mod 6). Therefore, 13 13 ≡ 1 13 ≡ 1 (mod 6). Now, we can rewrite the expression as 13 13 + 5 ≡ 1 + 5 ≡ 6 ≡ 0 (mod 6).\nHence, the remainder when 13 13 + 5 is divided by 6 is 0.\"\nIn this example, we can observe that LLMs try to calculate the exact value of 13 13 +5 at first, and then get the remainder. However, the calculator does not understand the expression \"302875106592258 mod 6\" and returns the wrong result. 
Through refining, LLM changes the solution and gets the correct answer. Finally, we can conclude that 𝑑 = 13 and 𝑒 = 2.\nTherefore, the answer is 2.\nTo solve this sub-problem, which tool can we use? …… Therefore, the answer is 16.0. " }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This work was partially supported by National Natural Science Foundation of China under Grant No. 62222215, Beijing Natural Science Foundation under Grant No. L233008 and 4222027, and Beijing Outstanding Young Scientist Program under Grant No. BJJWZYJH012019100020098. And this work is also partially supported by the Outstanding Innovative Talents Cultivation Funded Programs 2021 of Renmin University of China. Xin Zhao is the corresponding author." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "https://github.com/RUCAIBOX/ChatCoT." } ]
Although large language models (LLMs) have achieved excellent performance in a variety of evaluation benchmarks, they still struggle in complex reasoning tasks which require specific knowledge and multi-hop reasoning. To improve the reasoning abilities, we propose Chat-CoT, a tool-augmented chain-of-thought reasoning framework for chat-based LLMs (e.g., ChatGPT). In ChatCoT, we model the chainof-thought (CoT) reasoning as multi-turn conversations, to utilize tools in a more natural way through chatting. At each turn, LLMs can either interact with tools or perform the reasoning. Our approach can effectively leverage the multi-turn conversation ability of chatbased LLMs, and integrate the thought chain following and tools manipulation in a unified way. Specially, we initialize the early turns of the conversation by the knowledge about tools, tasks, and reasoning format, and propose an iterative tool-augmented reasoning step to perform step-by-step tool-augmented reasoning. The experiment results on two complex reasoning datasets (MATH and HotpotQA) have shown the effectiveness of ChatCoT on complex reasoning tasks, achieving a 7.9% relative improvement over the state-of-the-art baseline. Our code and data are available at:
ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models
[ { "figure_caption": "[Reasoning] Iterative Tool-augmented ReasoningProblem: A function 𝑓 has the property that 𝑓(3𝑥 -1) = 𝑥^2 + 𝑥 + 1 for all real numbers 𝑥. What is 𝑓(5)?Solution: Let's think step by step. Let 𝑢 = 3𝑥 -1. Then 𝑥", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Firstunderstand. I will solve the problem step by step and use tool to help me. Problem: A function 𝑓 has the property that 𝑓(3𝑥 -1) = 𝑥^2 + 𝑥 + 1 …… First, we need to find out ……", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "FirstFigure 1 :1Figure1: The comparison of vanilla CoT and ChatCoT, illustrated for a mathematical problem. For vanilla CoT, the content underlined are generated by LLMs. For ChatCoT, the conversational knowledge memory is initialized to provide tools, task and reasoning format knowledge. Then, the tool-augmented reasoning step is iterated multiple times to perform step-by-step reasoning, until obtaining the answer.", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "You can use tool to help you solve the problem and I give you the instruction …… Calculator can help you …… Equation solver can help you …… Do you understand? Yes, I understand. I will use tool to help me solve the problem. Retrieval Set Problem 1 & Solution 1 …… Problem n & Solution n I give you some example. Problem: The product …… You can use …… Yes, I understand. I will solve the problem step by step and use tool to help me. Problem: A function 𝑓 has the property that 𝑓(3𝑥 -1) = 𝑥 2 + 𝑥 + 1 …… Let's think step by step and use knowledge in similar problem to solve this problem First, we need to find out the value of 𝑥 that corresponds to 𝑓(5) …… … … You should solve the problem step by step and you should follow the react in the history …… Yes, I understand. I will follow my response in the conversation history …… Problem: Solve for 𝑒, given that 2𝑑 is 8 less than 17𝑒, and 2𝑒 is 9 less than 𝑑. Let's think step by step …… Problem: What is the value of 3 4 5 + 4 5 + 4 5 + 4 5 Let's think step by step …… First, we need to simplify under the radical. …… First, we need to express 𝑑 in terms of 𝑒 …… Give me the unknown variable Give me the equation system 𝑑, 𝑒 Equation Solver To solve this sub-problem, which tool can we use? …… Results: 𝒅 = 𝟏𝟑, 𝒆 = 𝟐 Continue reasoning 2𝑑 = 17𝑒 -8, 2𝑒 = 𝑑 -9", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An illustration example for ChatCoT from MATH.", "figure_data": "", "figure_id": "fig_5", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Statistics of the two complex reasoning datasets. CP, IA, and NT denote Counting and Probability, Intermediate Algebra, and Number Theory, respectively.", "figure_data": "DatasetCategory Train Dev/TestAlgebra17441187CP771474Precalculus746546MATHPrealgebra1205871Geometry870479IA1295903NT869540HotpotQA Distractor 904777405", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Experimental results on MATH dataset. PC and PA denote Precalculus and Prealgebra, respectively. Avg. is the average value of all categories. 
The best are denoted in bold and the second-best are underlined.", "figure_data": "PromptMATHStrategyAlgebra CPPCPA Geometry IANT Avg.GPT-3CoT6.04.74.07.73.14.44.45.2PaLMCoT9.78.44.4 19.27.33.56.08.8LLaMACoT-------10.6GalacticaCoT29.013.9 12.8 27.212.39.6 11.7 20.4MinervaCoT51.328.0 18.0 55.026.813.7 21.2 33.6PaLM 2CoT-------34.3CoT48.131.4 21.1 56.622.318.3 29.1 35.1CoT w/ Tool35.922.6 9.3 40.513.69.4 19.4 23.8ChatGPTCoT w/ Retri LP52.7 49.632.7 18.9 58.4 30.2 16.3 52.329.2 22.519.9 31.7 37.7 16.9 29.8 34.0PHP51.133.7 16.1 57.725.417.1 35.1 36.5ChatCoT56.134.2 23.8 59.229.919.5 32.6 39.4MethodsHotpotQACoT38.0CoT w/ Tool31.4ChatCoT w/o Feedback53.8ChatCoT59.2", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The results on HotpotQA. We report the results of the development set under the distractor setting.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Thus, the advantage The results of ablation study. TK, RATK, and MRF denote if using tool knowledge, retrievalaugmented task knowledge, and multi-turn reasoning format at early turns of the conversation, respectively. Geo is the abbreviation of Geometry.", "figure_data": "MethodsMATHTK RATK MRF PC Geo NT✔✔✔23.8 29.9 32.6✗✔✔23.3 29.2 30.6✔✗✔20.0 27.4 31.0✔✔✗21.6 24.2 32.2✗✗✔16.7 21.1 29.3", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "+3.8% 34.4 +5.3% ChatCoT + SC 40.1 +5.9% 38.3 +5.7%", "figure_data": "MethodsCPNTCoT + SC35.2", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The evaluated accuracy of combining our approach with self-consistency. SC denotes selfconsistency. We also report the absolute improvement compared with vanilla methods on subscripts.", "figure_data": "MethodsFrequency SuccessCoT w/ Tool3.0%85.7%ChatCoT w/o TK56.0%93.0%ChatCoT w/o MRF10.0%64.2%ChatCoT70.0%92.0%", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Frequency and success rate of tool manipulation on Number Theory task of MATH. TK, MRF denote tool knowledge, multi-turn reasoning format at early turns of the conversation respectively.", "figure_data": "", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "in Appendix C.2. The comparison of the number of generated tokens from LLMs among different prompt strategies.about whether LLMs can frequently or correctly leverage based on different methods. Table6expresses the performance of tools utilization in the Number Theory task of MATH of baseline and our approach. \"Frequency\" denotes the ratio of problems where LLMs correctly leverage tools. \"Success\" denotes the rate of LLMs utilizing tools successfully among all the times of invoking tools. We can observe that ChatCoT achieves a balance of frequency and ratio of success. Tool knowledge provides the function of tools for LLMs and improves the frequency that LLMs utilize the tools. LLMs can learn how to leverage external tools through the multi-turn reasoning format and boost the ratio of successful utilization of tools. Without any of them, the frequency and ratio of success will drop which might not be conducive to reasoning.", "figure_data": "Tools Utilization Analysis. As we mentionedabove, in complex reasoning tasks, infrequentlyor incorrectly utilizing external tools might lead towrong answers. 
Thus, we conduct the experiment", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Tool Knowledge. The two turns of utterances are: User: \"You can use tool to help you solve the problem and I give you the instruction of tools usage. [𝑇 1 ] can help you [𝑌 1 ] • • • Do you understand?\" LLM: \"Yes, I understand. I will use tool to help me solve the problem.\". User: \"I give you some example. Problem: [𝑄 1 ] Solution: [𝑆 1 ] • • • You can use the knowledge and thoery in these problem. Do you understand?\" LLM: \"Yes, I understand. I will solve the problem step by step and use tool to help me.\".", "figure_data": "Retrieval-Augmented Task Knowledge. The two-turn utterances are:Multi-turn Reasoning Format. The multi-turnutterances are based on the following pattern:User: \"Problem: [𝑄", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "I give you some example.Conversational KnowledgeProblem: What is the …… You can use ……MemoryCalculatorExternal ToolsCalculator, Equation Solver, ……", "figure_id": "tab_9", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" } ]
Zhipeng Chen; Kun Zhou; Beichen Zhang; Zheng Gong; Wayne Xin Zhao; Ji-Rong Wen
[ { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "2020-12-06" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh; Kensen Shi; Sasha Tsvyashchenko; Joshua Maynez; Abhishek Rao; Parker Barnes; Yi Tay; Noam Shazeer; Emily Vinodkumar Prabhakaran; Nan Reif; Ben Du; Reiner Hutchinson; James Pope; Jacob Bradbury; Michael Austin; Guy Isard; Pengcheng Gur-Ari; Toju Yin; Anselm Duke; Sanjay Levskaya; Sunipa Ghemawat; Henryk Dev; Xavier Michalewski; Vedant Garcia; Kevin Misra; Liam Robinson; Denny Fedus; Daphne Zhou; David Ippolito; Hyeontaek Luan; Barret Lim; Alexander Zoph; Ryan Spiridonov; David Sepassi; Shivani Dohan; Mark Agrawal; Andrew M Omernick; Thanumalayan Dai; Marie Sankaranarayana Pillai; Aitor Pellat; Erica Lewkowycz; Rewon Moreira; Oleksandr Child; Katherine Polozov; Zongwei Lee; Xuezhi Zhou; Brennan Wang; Mark Saeta; Orhan Diaz; Michele Firat; Jason Catasta; Kathy Wei; Douglas Meier-Hellstern; Jeff Eck; Slav Dean; Noah Petrov; Fiedel", "journal": "", "ref_id": "b1", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Sharan Chowdhery; Gaurav Narang; Adams Mishra; Vincent Y Yu; Yanping Zhao; Andrew M Huang; Hongkun Dai; Slav Yu; Ed H Petrov; Jeff Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b2", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Dheeru Dua; Shivanshu Gupta; Sameer Singh; Matt Gardner", "journal": "", "ref_id": "b3", "title": "Successive prompting for decomposing complex questions", "year": "2022-12-07" }, { "authors": "Luyu Gao; Aman Madaan; Shuyan Zhou; Uri Alon; Pengfei Liu; Yiming Yang; Jamie Callan; Graham Neubig", "journal": "", "ref_id": "b4", "title": "PAL: program-aided language models", "year": "2022" }, { "authors": "Luyu Gao; Aman Madaan; Shuyan Zhou; Uri Alon; Pengfei Liu; Yiming Yang; Jamie Callan; Graham Neubig", "journal": "", "ref_id": "b5", "title": "Pal: Program-aided language models", "year": "2023" }, { "authors": "Tianyu Gao; Xingcheng Yao; Danqi Chen", "journal": "Association for Computational Linguistics. 
Google", "ref_id": "b6", "title": "Simcse: Simple contrastive learning of sentence embeddings", "year": "2021-07-11" }, { "authors": "Yiduo Guo; Yaobo Liang; Chenfei Wu; Wenshan Wu; Dongyan Zhao; Nan Duan", "journal": "", "ref_id": "b7", "title": "Learning to program with natural language", "year": "2023" }, { "authors": "Shibo Hao; Tianyang Liu; Zhen Wang; Zhiting Hu", "journal": "", "ref_id": "b8", "title": "Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings", "year": "2023" }, { "authors": "Dan Hendrycks; Collin Burns; Saurav Kadavath; Akul Arora; Steven Basart; Eric Tang; Dawn Song; Jacob Steinhardt", "journal": "", "ref_id": "b9", "title": "Measuring mathematical problem solving with the MATH dataset", "year": "2021-12" }, { "authors": "Jinhao Jiang; Kun Zhou; Zican Dong; Keming Ye; Wayne Xin Zhao; Ji-Rong Wen", "journal": "", "ref_id": "b10", "title": "Structgpt: A general framework for large language model to reason over structured data", "year": "2023" }, { "authors": "Xue Jiang; Yihong Dong; Lecheng Wang; Qiwei Shang; Ge Li", "journal": "", "ref_id": "b11", "title": "Self-planning code generation with large language model", "year": "2023" }, { "authors": "Omar Khattab; Keshav Santhanam; Lisa Xiang; David Li; Percy Hall; Christopher Liang; Matei Potts; Zaharia", "journal": "", "ref_id": "b12", "title": "Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP", "year": "2022" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "", "ref_id": "b13", "title": "Large language models are zero-shot reasoners", "year": "2022" }, { "authors": "Aitor Lewkowycz; Anders Andreassen; David Dohan; Ethan Dyer; Henryk Michalewski; V Vinay; Ambrose Ramasesh; Cem Slone; Imanol Anil; Theo Schlag; Yuhuai Gutman-Solo; Behnam Wu; Guy Neyshabur; Vedant Gur-Ari; Misra", "journal": "", "ref_id": "b14", "title": "Solving quantitative reasoning problems with language models", "year": "2022" }, { "authors": "Yaobo Liang; Chenfei Wu; Ting Song; Wenshan Wu; Yan Xia; Yu Liu; Yang Ou; Shuai Lu; Lei Ji; Shaoguang Mao; Yun Wang; Linjun Shou; Ming Gong; Nan Duan", "journal": "", "ref_id": "b15", "title": "Taskmatrix.ai: Completing tasks by connecting foundation models with millions of apis", "year": "2023" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b16", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Pan Lu; Baolin Peng; Hao Cheng; Michel Galley; Kai-Wei Chang; Ying Nian Wu; Song-Chun Zhu; Jianfeng Gao", "journal": "", "ref_id": "b17", "title": "Chameleon: Plug-and-play compositional reasoning with large language models", "year": "2023" }, { "authors": "Pan Lu; Liang Qiu; Wenhao Yu; Sean Welleck; Kai-Wei Chang", "journal": "", "ref_id": "b18", "title": "A survey of deep learning for mathematical reasoning", "year": "2022" }, { "authors": "Haipeng Luo; Qingfeng Sun; Can Xu; Pu Zhao; Jianguang Lou; Chongyang Tao; Xiubo Geng; Qingwei Lin; Shifeng Chen; Dongmei Zhang", "journal": "", "ref_id": "b19", "title": "Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct", "year": "2023" }, { "authors": "Aman Madaan; Niket Tandon; Prakhar Gupta; Skyler Hallinan; Luyu Gao; Sarah Wiegreffe; Uri Alon; Nouha Dziri; Shrimai Prabhumoye; Yiming Yang; Sean Welleck; Prasad 
Bodhisattwa; Shashank Majumder; Amir Gupta; Peter Yazdanbakhsh; Clark", "journal": "", "ref_id": "b20", "title": "Self-refine: Iterative refinement with self-feedback", "year": "2023" }, { "authors": "Aaron Meurer; Christopher P Smith; Mateusz Paprocki; Ondrej Certík; B Sergey; Matthew Kirpichev; Amit Rocklin; Sergiu Kumar; Jason Ivanov; Sartaj Keith Moore; Thilina Singh; Sean Rathnayake; Brian E Vig; Richard P Granger; Francesco Muller; Harsh Bonazzi; Shivam Gupta; Fredrik Vats; Fabian Johansson; Matthew J Pedregosa; Andy R Curry; Stepán Terrel; Ashutosh Roucka; Isuru Saboo; Sumith Fernando; Robert Kulal; Anthony M Cimrman; Scopatz", "journal": "PeerJ Comput. Sci", "ref_id": "b21", "title": "Sympy: symbolic computing in python", "year": "2017" }, { "authors": " Openai", "journal": "OpenAI", "ref_id": "b22", "title": "Introducing chatgpt", "year": "2022" }, { "authors": "Bhargavi Paranjape; Scott Lundberg; Sameer Singh; Hannaneh Hajishirzi; Luke Zettlemoyer; Marco Tulio; Ribeiro ", "journal": "", "ref_id": "b23", "title": "Art: Automatic multistep reasoning and tool-use for large language models", "year": "2023" }, { "authors": "Aaron Parisi; Yao Zhao; Noah Fiedel", "journal": "", "ref_id": "b24", "title": "TALM: tool augmented language models", "year": "2022" }, { "authors": "G Shishir; Tianjun Patil; Xin Zhang; Joseph E Wang; Gonzalez", "journal": "", "ref_id": "b25", "title": "Gorilla: Large language model connected with massive apis", "year": "2023" }, { "authors": "Jing Qian; Hong Wang; Zekun Li; Shiyang Li; Xifeng Yan", "journal": "", "ref_id": "b26", "title": "Limitations of language models in arithmetic and symbolic induction", "year": "2022" }, { "authors": "Shuofei Qiao; Honghao Gui; Huajun Chen; Ningyu Zhang", "journal": "", "ref_id": "b27", "title": "Making language models better tool learners with execution feedback", "year": "2023" }, { "authors": "Timo Schick; Jane Dwivedi-Yu; Roberto Dessì; Roberta Raileanu; Maria Lomeli; Luke Zettlemoyer; Nicola Cancedda; Thomas Scialom", "journal": "", "ref_id": "b28", "title": "Toolformer: Language models can teach themselves to use tools", "year": "2023" }, { "authors": "Yongliang Shen; Kaitao Song; Xu Tan; Dongsheng Li; Weiming Lu; Yueting Zhuang", "journal": "", "ref_id": "b29", "title": "Hugginggpt: Solving ai tasks with chatgpt and its friends in hugging face", "year": "2023" }, { "authors": "Freda Shi; Xinyun Chen; Kanishka Misra; Nathan Scales; David Dohan; Ed H Chi; Nathanael Schärli; Denny Zhou", "journal": "", "ref_id": "b30", "title": "Large language models can be easily distracted by irrelevant context", "year": "2023" }, { "authors": "Ross Taylor; Marcin Kardas; Guillem Cucurull; Thomas Scialom; Anthony Hartshorn; Elvis Saravia; Andrew Poulton; Viktor Kerkez; Robert Stojnic", "journal": "", "ref_id": "b31", "title": "Galactica: A large language model for science", "year": "2022" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurélien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "CoRR", "ref_id": "b32", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Xuezhi Wang; Jason Wei; Dale Schuurmans; Quoc V Le; Ed H Chi; Denny Zhou", "journal": "", "ref_id": "b33", "title": "Selfconsistency improves chain of thought reasoning in language models", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; 
Brian Ichter; Fei Xia; Ed H Chi; V Quoc; Denny Le; Zhou", "journal": "", "ref_id": "b34", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Yiran Wu; Feiran Jia; Shaokun Zhang; Hangyu Li; Erkang Zhu; Yue Wang; Yin Tat Lee; Richard Peng; Qingyun Wu; Chi Wang", "journal": "", "ref_id": "b35", "title": "An empirical study on challenging math problem solving with GPT-4", "year": "2023" }, { "authors": "Zhilin Yang; Peng Qi; Saizheng Zhang; Yoshua Bengio; William W Cohen; Ruslan Salakhutdinov; Christopher D Manning", "journal": "", "ref_id": "b36", "title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering", "year": "2018-10-31" }, { "authors": "Shunyu Yao; Dian Yu; Jeffrey Zhao; Izhak Shafran; Thomas L Griffiths; Yuan Cao; Karthik Narasimhan", "journal": "", "ref_id": "b37", "title": "Tree of thoughts: Deliberate problem solving with large language models", "year": "2023" }, { "authors": "Shunyu Yao; Jeffrey Zhao; Dian Yu; Nan Du; Izhak Shafran; Karthik Narasimhan; Yuan Cao", "journal": "", "ref_id": "b38", "title": "React: Synergizing reasoning and acting in language models", "year": "2022" }, { "authors": "Xi Ye; Srinivasan Iyer; Asli Celikyilmaz; Ves Stoyanov; Greg Durrett; Ramakanth Pasunuru", "journal": "", "ref_id": "b39", "title": "Complementary explanations for effective in-context learning", "year": "2022" }, { "authors": "Beichen Zhang; Kun Zhou; Xilin Wei; Wayne Xin Zhao; Jing Sha; Shijin Wang; Ji-Rong Wen", "journal": "", "ref_id": "b40", "title": "Evaluating and improving tool-augmented computationintensive math reasoning", "year": "2023" }, { "authors": "Kun Wayne Xin Zhao; Zheng Zhou; Beichen Gong; Yuanhang Zhang; Jing Zhou; Zhigang Sha; Shijin Chen; Cong Wang; Ji-Rong Liu; Wen", "journal": "", "ref_id": "b41", "title": "Jiuzhang: A chinese pre-trained language model for mathematical problem understanding", "year": "2022-08-14" }, { "authors": "Kun Wayne Xin Zhao; Junyi Zhou; Tianyi Li; Xiaolei Tang; Yupeng Wang; Yingqian Hou; Beichen Min; Junjie Zhang; Zican Zhang; Yifan Dong; Chen Du; Yushuo Yang; Zhipeng Chen; Jinhao Chen; Ruiyang Jiang; Yifan Ren; Xinyu Li; Zikang Tang; Peiyu Liu; Jian-Yun Liu; Ji-Rong Nie; Wen", "journal": "", "ref_id": "b42", "title": "A survey of large language models", "year": "2023" }, { "authors": "Xin Zhao; Kun Zhou; Beichen Zhang; Zheng Gong; Zhipeng Chen; Yuanhang Zhou; Ji-Rong Wen; Jing Sha; Shijin Wang; Cong Liu; Guoping Hu", "journal": "", "ref_id": "b43", "title": "Jiuzhang 2.0: A unified chinese pre-trained language model for multi-task mathematical problem solving", "year": "2023-08-06" }, { "authors": "Chuanyang Zheng; Zhengying Liu; Enze Xie; Zhenguo Li; Yu Li", "journal": "", "ref_id": "b44", "title": "Progressive-hint prompting improves reasoning in large language models", "year": "2023" }, { "authors": "Denny Zhou; Nathanael Schärli; Le Hou; Jason Wei; Nathan Scales; Xuezhi Wang; Dale Schuurmans; Olivier Bousquet; Quoc Le; Ed H Chi", "journal": "", "ref_id": "b45", "title": "Least-to-most prompting enables complex reasoning in large language models", "year": "2022" } ]
[]
10.18653/v1/W17-4755
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b8", "b12", "b10", "b18" ], "table_ref": [], "text": "Kendall's τ is a widely used correlation statistic (Kendall, 1938). It is easy to grasp intuitively, being based on pairwise rank ordering. This makes it complementary to other well-known statistics such as Pearson or Spearman.\nIn the context of machine translation (MT), Kendall plays a key role assessing the performance of evaluation metrics, a process known as metaevaluation: it has been the main statistic for measuring a metric's ability to score segment-level translations in the Workshop on Machine Translation (WMT) metrics shared tasks over the years (Freitag et al., 2022b, inter alia).\nSeveral recent developments in MT-common to other areas of generative AI-have highlighted an important weakness in Kendall's τ , namely how it deals with ties ( §4). First, as MT systems get better, they (1) produce more \"perfect\" outputs, which get assigned the same score by human raters; and (2) necessitate error-based analyses such as MQM (Lommel et al., 2014;Freitag et al., 2021a), which often produce tied scores due to integer error counts. Second, on the metric side, the use of recently-proposed LLM-based metrics (Kocmi and Federmann, 2023) and metrics that model MQM annotations (Perrella et al., 2022) can also lead to small and discrete score ranges that assign many ties.\nIn this paper, we examine the problems caused by ties in Kendall's τ , using data from the WMT metrics tasks. We first show that there are simple phenomena that are not handled properly by any of the existing Kendall variants, which mostly differ in how they treat ties ( §5.1). We also demonstrate the possibility of gaming the meta-evaluation by exploiting how ties are handled by existing τ 's, resulting in large improvements in certain evaluation settings ( §5.2).\nWe propose instead to meta-evaluate metrics with a version of pairwise accuracy that is robust to these problems, assigning proper credit for correctly predicting ties ( §6). Although there is a modification to τ that is closely related to pairwise accuracy, we argue that the accuracy formulation is easier to interpret, being just the proportion of correctly ranked pairs (including tied pairs).\nHowever, pairwise accuracy comes with its own problem, namely that it can discriminate against metrics that rarely assign ties. To counter this, we also propose an algorithm called tie calibration that automatically introduces ties into metric scores in order to optimize its correlation ( §7). We argue, and show empirically, that these two modifications result in a fairer assessment of MT metric performance ( §8.1).\nFinally, we analyze different aspects of pairwise accuracy and tie calibration, including assessing the generalization of tie calibration across datasets ( §8.2), the score ranges where ties are introduced ( §8.3), and how more fine-grained statistics can be used to better understand metric behavior ( §8.4). While our experimental setting is limited to MT metrics, our work should be applicable to metaevaluation for other generative AI metrics with similar characteristics." }, { "figure_ref": [], "heading": "Background & Related Work", "publication_ref": [], "table_ref": [], "text": "We begin by justifying our exclusive focus on ranking-based statistics, like Kendall's τ , then provide some background on MT metric metaevaluation, and finally contextualize our work by discussing Kendall variants." 
}, { "figure_ref": [ "fig_0" ], "heading": "Why not Pearson or Spearman?", "publication_ref": [], "table_ref": [], "text": "Pearson's r and Spearman's ρ are two other widelyused correlation coefficients. The Pearson coefficient captures linear correspondence between two input vectors, defined as their covariance divided by the product of their variances. Spearman is equivalent to Pearson applied to the ranks of the inputs. As shown in Figure 1, Pearson is complementary to Kendall; it assigns a much higher score to the noisy but globally linear metric1, but a much lower score to the perfectly-ordered but non-linear metric2. Spearman is a compromise, siding with Pearson for metric1 and for Kendall for metric2.\nFor applications where linear correspondence with a gold standard and correct ranking decisions are both important, it is advisable to measure both Pearson and Kendall, as is typically done in the MT evaluations described below.2 " }, { "figure_ref": [], "heading": "Metric Meta-Evaluation", "publication_ref": [ "b11", "b3" ], "table_ref": [], "text": "For over 10 years, the Workshop on Machine Translation (WMT) has run a metrics shared task that meta-evaluates automatic metrics. Meta-evaluation quantifies a metric's performance by calculating the agreement or correlation between the metric's scores and human-annotated scores on a large number of translations. In WMT, metrics are metaevaluated at either the system-or segment-level, as follows.\nFirst, metric and human scores are collected for translations produced by N systems for M source segments. System-level correlations are calculated between the N metric and human scores per system, typically calculated by averaging over the M segment scores. In WMT, the system-level correlation is often Pearson, or more recently, a rankingbased pairwise agreement that is similar to our proposed statistic ( §6), except that it does not need to account for ties since ties are very rare at the system-level (Kocmi et al., 2021).\nSegment-level correlations evaluate metric scores on individual translations rather than aggregated system scores. They can be calculated in several different ways (see Appendix A for equation definitions):\n• No-Grouping: Calculate the correlation between the N × M translation scores\n• Group-by-Item: Calculate the average correlation between the N translation scores grouped by source segment 3\n• Group-by-System: Calculate the average correlation between the M translation scores grouped by system Segment-level correlations are better than systemlevel correlations at discriminating between metrics (Freitag et al., 2022b), and they are more closely related to applications where metrics can be used to improve generation, such as Minimum Bayes Risk decoding (Freitag et al., 2022a;Fernandes et al., 2022).\nHistorically, WMT has evaluated metrics at the segment-level using the group-by-item method, however no-grouping was used in WMT'21 and all three were used in WMT'22. The standard correlation function that is used is some variant of Kendall's τ , described next.\n3 \"Item\" is used to keep the terminology generic so it can be applied to other generation tasks. Here, \"item\" refers to the source segment." 
}, { "figure_ref": [], "heading": "Definition Proposed By", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "WMT Shared Task Years \nτ a = (C -D)/(C + D + T h + T m + T hm ) Kendall (1938) - τ b = (C -D)/ (C + D + T h )(C + D + T m ) Kendall (1945) 2021-2022 τ c = (C -D)/(n 2 ( k-1 k )) Stuart (1953) - τ 10 = (C -D -T m )/(C + D + T m ) Callison-\nτ eq = (C + T hm -D -T h -T m )/(C + D + T h + T m + T hm ) This work ( §6) - acc eq = (C + T hm )/(C + D + T h + T m + T hm )\nThis work ( §6) -Table 1: Each variant of τ handles ties differently, and the WMT metrics shared task has not consistently used the same τ over the years. The acc eq and τ eq statistics are proposed in this work ( §6). See Table 2 for the notation definition for this table." }, { "figure_ref": [], "heading": "Symbol Description n", "publication_ref": [], "table_ref": [], "text": "The number of inputs h\nThe vector of human scores m\nThe vector of metric scores C\nThe number of concordant pairs D\nThe number of discordant pairs T h\nThe number of pairs tied only in h T m\nThe number of pairs tied only in m T hm\nThe number of pairs tied in both h and m k\nThe minimum of the number of unique values in h or m " }, { "figure_ref": [], "heading": "The Landscape of Kendall's τ", "publication_ref": [ "b13", "b2", "b0" ], "table_ref": [ "tab_0" ], "text": "Kendall's τ is a ranking-based correlation coefficient. Although there are many different variants of τ , intuitively, it counts how frequently the metric and human scores agree (concordant) or disagree (discordant) on the ranking of all possible pairs of translations. Importantly, there cannot be a tie in either the metric or human score for a pair to be considered concordant or discordant. Each τ ranges from -1 to 1, with the extremes resulting from the metric and human scores being perfectly discordant/concordant and 0 meaning random chance. Some variants of τ are generic and included in libraries like SciPy, whereas others were proposed by WMT metrics shared task organizers and tailored to the application of MT metric meta-evaluation. Table 1 shows the definitions of the different variants of τ using the notation in Table 2.\nThe main differences between the variants are how they handle ties. The standard variants, τ b and τ c , are modifications of τ a designed to ensure the values can reach -1 and 1 in the presence of ties. In contrast to our proposal, the versions proposed by WMT do not include ties in the human scores, and penalize ties in the metric scores. This is due to the 4 See note about the error in the WMT'17 report in the WMT'18 report (Ma et al., 2018).\nfact that the metrics shared task organizers either did not want to penalize small differences in metric scores when the human score is tied (Callison-Burch et al., 2010) or only evaluated on pairs that had a large difference in DA score in order to ensure the pair's ranking was reliable (Bojar et al., 2017).\nOverall, none of the τ 's directly rewards the metric for correctly predicting ties in the human score. We view our work as a next step in updating the meta-evaluation to account for properties of today's metrics and human scores." }, { "figure_ref": [], "heading": "Analysis Setup", "publication_ref": [ "b12" ], "table_ref": [], "text": "Datasets Our analysis is performed on the Multidimensional Quality Metrics (MQM; Lommel et al., 2014;Freitag et al., 2021a) ratings collected by the WMT'22 metrics shared task (Freitag et al., 2022b) for three language pairs: en→de, zh→en, and en→ru. 
We use the MQM scores as the groundtruth human scores that the automatic metrics' scores are evaluated against. The language pairs have 13-15 systems and around 1300-1900 segments per system with MQM ratings." }, { "figure_ref": [], "heading": "Automatic Metrics", "publication_ref": [ "b10", "b19", "b18", "b1" ], "table_ref": [], "text": "We explore how the choice of meta-evaluation statistic affects the rankings of the primary metric submissions to the WMT'22 shared task, in addition to the recently proposed GEMBA metrics (Kocmi and Federmann, 2023). We also discuss and examine various different metrics in more detail, including the top 2 performing metrics in the WMT'22 shared task, Metric-X and COMET-22 (Rei et al., 2022), in addition to BLEURT-20 (Sellam et al., 2020), MaTESe (Perrella et al., 2022) overall score based on an error severity weighting.\nThe GEMBA metrics predict quality scores using 0-shot prompting with GPT-3.5 and GPT-4 (Brown et al., 2020). Importantly, the predicted scores from MaTESe and GEMBA tend to come from a small set of values rather than a large range of possible floating point scores, which has significant implications for the number of ties they predict (see §4) and how they are treated by different variants of τ ." }, { "figure_ref": [], "heading": "Why Ties are Important", "publication_ref": [], "table_ref": [ "tab_2", "tab_4" ], "text": "There are several motivations for incorporating ties into a ranking-based meta-evaluation statistic like Kendall's τ . First, ties in human scores from recent WMT shared tasks are much more trustworthy than they were previously. Since WMT'20, the human scores are MQM scores instead of direct assessment (DA) scores. The MQM scores come from expert translators and are more reliable than the crowdsourced DA scores. As such, ties (or minor differences) between scores are more likely representative of actual ties (or minor differences) in translation quality.\nSecond, ties in MQM scores are very common. For instance, up to 53% of possible pairs in en-de have tied MQM scores (see Table 3), the majority of which have MQM scores of 0, meaning there are no errors in the translations. As the quality of MT systems improves, the number of tied translations is likely to increase since there will be fewer differences between systems. If ties in the MQM scores are removed from the meta-evaluation (as is done by some Kendall variants), we throw away a valuable metric quality signal and lose the ability to discriminate between metrics that reliably detect ties and those that do not (see next section).\nFinally, recently proposed metrics, such as MaTESe or those based on large language mod- h = [0, 0, 0, 0, 1, 2] .60 .77 .38 1.0 1.0 1.0 .20 .60\nm 1 = [0, 0, 0, 0, 2, 1] m 2 = [0,\nFigure 2: When considering ties, m 1 only incorrectly ranks 1 out of the 6 2 pairs, whereas m 2 incorrectly ranks 6. However, due to how each τ handles ties, only acc eq and τ eq strongly prefer m 1 over m 2 . Notably, τ 10 , τ 13 , and τ 14 are unable to distinguish a perfect metric (m = h) from m 2 . The acc eq and τ eq statistics are proposed in this work ( §6). els (GEMBA) predict a large number of ties (see Table 4). These metrics should be directly rewarded for correctly predicting ties in the human scores, which is not the case with existing Kendall variants." 
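Most of the statistics discussed here are simple functions of five pair counts: concordant pairs, discordant pairs, and the three kinds of ties (the notation of Table 2). The sketch below shows one way to compute those counts, together with two of the tau variants from Table 1. It is illustrative rather than the paper's reference implementation; the comment at the end uses Figure 2's example only as a sanity check.

```python
from itertools import combinations

def count_pairs(h, m, eps=0.0):
    """Count concordant (C), discordant (D), and tied pairs for human scores h and
    metric scores m. A metric pair counts as tied when |m_i - m_j| <= eps."""
    C = D = T_h = T_m = T_hm = 0
    for i, j in combinations(range(len(h)), 2):
        h_tied, m_tied = h[i] == h[j], abs(m[i] - m[j]) <= eps
        if h_tied and m_tied:
            T_hm += 1
        elif h_tied:
            T_h += 1
        elif m_tied:
            T_m += 1
        elif (h[i] - h[j]) * (m[i] - m[j]) > 0:
            C += 1
        else:
            D += 1
    return C, D, T_h, T_m, T_hm

def tau_b(h, m):
    C, D, T_h, T_m, _ = count_pairs(h, m)
    return (C - D) / ((C + D + T_h) * (C + D + T_m)) ** 0.5

def tau_10(h, m):
    C, D, _, T_m, _ = count_pairs(h, m)
    return (C - D - T_m) / (C + D + T_m)

# Figure 2's h = [0, 0, 0, 0, 1, 2] and m1 = [0, 0, 0, 0, 2, 1] give
# C=8, D=1, T_h=0, T_m=0, T_hm=6: only one of the 15 pairs is ranked incorrectly.
```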
}, { "figure_ref": [], "heading": "Shortcomings of Kendall's Variants", "publication_ref": [], "table_ref": [], "text": "The way ties are handled by existing variants of Kendall's τ introduces blind spots in the metaevaluation and opens the door for metrics to exploit τ -specific properties to improve correlations. We demonstrate the shortcomings of existing τ 's through a motivational example and experimental analysis." }, { "figure_ref": [], "heading": "A Motivating Example", "publication_ref": [], "table_ref": [], "text": "Due to how existing τ 's handle ties, they are unable to discriminate between metrics that accurately predict ties and those that do not. Figure 2 contains an example of such an instance.\nWhen considering ties, metric m 1 only incorrectly ranks 1 out of the 15 possible pairs, whereas m 2 incorrectly ranks 6 pairs. However, because existing τ 's do not give credit to metrics for correctly predicting ties, the correlation coefficients either consider m 1 to be either approximately equal or worse than m 2 . This blind spot of existing τ 's means they are inadequate for meta-evaluating metrics in the presence of ties." }, { "figure_ref": [ "fig_2" ], "heading": "The NaN Problem", "publication_ref": [], "table_ref": [], "text": "Another consequence of how the τ correlations handle ties is what we refer to as the \"NaN problem.\" In the event that either the metric or human scores are a constant vector (therefore, all pairs are tied), many of the τ values are not defined, or NaN. When the segment-level correlation is calculated by grouping by either item or system and one of the groups' correlations is NaN, the correlation is removed from the average in practice. This happens most often when grouping by item because the size of the input vectors is the number of systems, N , which is generally rather small (≈15).\nA metric could take advantage of this property of the segment-level correlation by introducing ties for difficult-to-score groups, resulting in NaN scores. This has the effect of removing the challenging groups from the meta-evaluation, resulting in higher correlations. 56 Indeed, we find that this is possible.\nTo introduce ties, we mapped Metric-X's scores to integers by assigning each score to an equalwidth bucket. This bucketing results in ties in challenging pairs because similar quality translations likely have close metric scores, so when the scores are converted to integer buckets, their scores become the same value. Figure 3 plots the group-byitem τ b (the τ coefficient used in WMT'22) and the number of non-NaN groups as a function of the number of buckets.\nWhen the number of buckets is small, the number of non-NaN segments is reduced, and the resulting correlations improve over the original values by very large margins. Because the correlations with different numbers of buckets are computed over different non-NaN subsets of the full dataset, their values are not fairly comparable. Indeed, in §8, we demonstrate that WMT'22 metrics submissions were evaluated on different non-NaN groups, and directly comparing their correlations leads to erroneous conclusions.\nA metric could have taken advantage of the NaN problem in order to game the WMT'22 metrics shared task since the number of non-NaN segments is not taken into account in the metric metaevaluation. A method for handling ties that made correlations for constant vectors well defined would close this loophole." 
}, { "figure_ref": [], "heading": "Evaluating with Pairwise Accuracy", "publication_ref": [ "b11" ], "table_ref": [ "tab_0" ], "text": "Instead of using Kendall's τ as the ranking-based meta-evaluation statistic, we propose to use a version of pairwise accuracy that includes ties. We define the pairwise accuracy to be the proportion of all pairs that the metric either ranks correctly or correctly predicts are tied. The equation for our proposal, denoted acc eq (\"eq\" for equality, as in ties), is included in Table 1. This statistic now directly incorporates ties in the human and metric scores.\nAlthough there is a modification of Kendall's τ that corresponds to acc eq (denoted τ eq in Table 1), we advocate for reporting accuracy instead. Accuracy is more intuitive than τ since its value is between 0 and 1 and it can be read as the proportion of pairs that the metric correctly ranks/predicts as ties. This stands in contrast to τ that is between -1 and 1, which does not have an easy-to-communicate interpretation. Pairwise accuracy has the additional benefit of aligning how metrics are meta-evaluated at the system-and segment-levels (Kocmi et al., 2021). The results related to metric rankings in this Metric Definition\nties precision T hm /(T hm + T m ) ties recall T hm /(T hm + T h ) correct-rank precision C/(C + D + T h ) correct-rank recall C/(C + D + T m )\nTable 5: Definitions of precision and recall on correctly predicting tied pairs or the correct ranking of non-tied pairs. See Table 2 for the notation definition.\nwork apply equally to acc eq and τ eq . Pairwise accuracy (and τ eq ) does not suffer from the same issues as the τ 's that were presented in §5: acc eq strongly prefers m 1 , the metric with fewer incorrectly ranked pairs (Figure 2; §5.1). Because its value is never NaN, it does not suffer from the NaN problem ( §5.2); all examples are always used for evaluation." }, { "figure_ref": [], "heading": "Evaluating Ties and Non-Ties", "publication_ref": [], "table_ref": [], "text": "Pairwise accuracy effectively evaluates the automatic metrics as 3-way classifiers that decide between predicting a tie or one of the two possible rankings for each pair. This formulation nicely allows for further decomposition into class-specific precision, recall, and F 1 , which can be used to further understand metric performance. Class-specific evaluations help to address a potential class imbalance problem between tied and non-tied pairs that may be hidden by accuracy.\nTable 5 contains the definitions of precision and recall with respect to \"ties\" and \"correct ranking.\"\nThe \"ties\" statistics calculate the precision of the metric when predicting a tie and its recall of human ties. The \"correct ranking\" statistics calculate the proportion of correctly ranked pairs out of all pairs it predicts are not tied and the proportion of all human non-tied pairs correctly ranked by the metric. These additional statistics help provide a more holistic view of metric performance." }, { "figure_ref": [], "heading": "Tie Calibration", "publication_ref": [ "b20" ], "table_ref": [ "tab_4" ], "text": "Although we argue that acc eq properly addresses ties in human and metric scores, some metrics do not frequently predict exact ties between translations. 
Regression metrics, such as BLEURT (Sellam et al., 2020) and COMET (Rei et al., 2020), practically never predict tied scores for two different translations (see Table 4), so they will not able to correctly predict a tie in the human score, putting them at a disadvantage. This is undesirable because it prevents a fair comparison between metrics that do and do not predict ties.\nTo address this shortcoming, we propose an algorithm called tie calibration for automatically introducing ties into metric scores so that metrics that do and do not predict ties can be fairly compared. The algorithm is based on the intuition that, although regression metrics do not frequently predict ties, the difference between two translations' scores is sometimes small enough to be considered a tie.\nTie calibration searches for an ϵ value that maximizes a rank-based correlation statistic (e.g., τ or acc eq ) such that any two translations with a difference in score less than ϵ is considered to be a tie. 7Our implementation considers all possible differences between the n 2 pairs of translations as candidates for ϵ and selects the one that maximizes the desired ranking-based statistic. The algorithm runs in O(n 2 log n), where n is the number of translations.8 Detailed psuedocode for tie calibration is included in Appendix D.\nBecause tie calibration introduces an optimal number of tie predictions, metrics are not penalized for under-predicting ties, and therefore metrics that do and do not predict ties can be fairly compared. An added benefit of tie calibration is that the resulting optimal ϵ improves the interpretability of metric scores. Its value can be understood as the threshold for which a difference in metric scores should be considered significant (at least with respect to a specific dataset; see §8.2).\nHenceforth we use * to denote a statistic that has been calculated with tie calibration (e.g., acc * eq ) and ϵ * the optimal tie threshold found by the algorithm.\nDiscussion. In principle, tie calibration can be used to find an optimal value of any correlation statistic in which the presence of ties changes the value, acc eq being one of them. However, care needs to be taken to ensure that the statistic handles ties in a desirable way. For example, τ 13 omits all ties from its formula, so tie calibration could convert a discordant pair into a tie to improve the value of τ 13 , which, if the human scores are not tied, is undesirable (acc eq would not reward this change). The combination of tie calibration and a Table 6: The correlations (and ranks) of the metrics as evaluated by τ b , τ 10 , and acc eq with tie calibration, denoted acc * eq , using the group-by-item segment-level correlation on the WMT'22 en-de dataset. ϵ * is the optimal threshold found by tie calibration. statistic that does not properly handle ties may lead to unexpected results." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "In this section, we analyze several different aspects related to our proposal of pairwise accuracy and tie calibration. We address the following questions:\n• §8.1: How does the choice of meta-evaluation statistic affect metric ranking?\n• §8.2: How does the selected value of ϵ generalize across datasets?\n• §8.3: Does the selected ϵ value introduce ties uniformly across score values for a metric?\n• §8.4: What insights can be drawn from evaluating metrics on predicting tied versus nontied pairs?" 
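Before turning to these questions, the two quantities being analyzed, acc_eq from §6 and tie calibration from §7, can be summarized in a few lines of code. The sketch below is a deliberately simple brute-force version: the paper's Algorithm 1 finds the same optimal threshold in O(n^2 log n), whereas this version re-counts all pairs for every candidate threshold.

```python
from itertools import combinations

def acc_eq(h, m, eps=0.0):
    """Pairwise accuracy with ties: (C + T_hm) / (all pairs), where metric scores
    within eps of each other are treated as tied."""
    correct, total = 0, 0
    for i, j in combinations(range(len(h)), 2):
        h_tied, m_tied = h[i] == h[j], abs(m[i] - m[j]) <= eps
        if (h_tied and m_tied) or (
            not h_tied and not m_tied and (h[i] - h[j]) * (m[i] - m[j]) > 0
        ):
            correct += 1
        total += 1
    return correct / total

def tie_calibration(h, m):
    """Pick the tie threshold eps that maximizes acc_eq, trying every observed
    absolute difference between metric scores (plus 0.0) as a candidate."""
    candidates = {0.0} | {abs(m[i] - m[j]) for i, j in combinations(range(len(m)), 2)}
    best_eps = max(candidates, key=lambda eps: acc_eq(h, m, eps))
    return best_eps, acc_eq(h, m, best_eps)
```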
}, { "figure_ref": [], "heading": "Comparing Metric Rankings", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "Table 6 shows group-by-item correlations calculated with various τ 's and pairwise accuracy. We also report the performance of a \"constant metric\" that predicts a tie for every pair as a baseline comparison. From the existing τ 's, we report τ b and τ 10 since they are the most recent and used most frequently by WMT.9 Clearly, the choice of metaevaluation statistic significantly affects the metric rankings, with the largest changes happening to MaTESe and GEMBA, the two metrics that output the most ties. Under τ b , GEMBA-GPT-4 and MaTESe are the top ranked metrics. However, this result can be partially explained by the NaN problem ( §5.2). MaTESe's correlation is calculated on 773 non-NaN segments, compared to 1133 for Metric-X. When both metrics are evaluated on the same 773 segments, Metric-X's correlation is higher (0.296 versus 0.281). This result highlights how correlations calculated on different source segments cannot be fairly compared.\nIf τ 10 is used to rank the metrics, MaTESe and GEMBA fall to the bottom at of the ranking. This result can be explained by the fact that τ 10 is systematically biased against metrics that output a large number of ties because ties are penalized as if they are discordant pairs. Predicting a tie can only decrease a τ 10 correlation. In fact, the τ 10 values of MaTESe and GEMBA can be improved by large margins simply by randomly breaking all ties since around half of the pairs will now become concordant, while the other half remain penalized as they were before. For example, randomly breaking ties improves MaTESe and GEMBA-GPT-3.5's correlations by around 0.5-0.6 points (MaTESe: -0.459 to ≈0.15, GEMBA: -0.344 to ≈0.15). In contrast, COMET-22's correlation only improves by ≈0.005 due to the fact that it predicts few ties (see Table 4).\nIn contrast, when the metrics are ranked by acc eq with tie calibration, denoted acc * eq , MaTESe and the GEMBA metrics are ranked 4th, 6th, and 15th. Because τ eq and acc eq are never NaN, all values are fairly comparable. Further, there is no systematic bias for or against ties; Randomly breaking or introducing ties runs the risk of changing a correct prediction of a tie or concordant pair into a discordant pair or an incorrect tie prediction. Clearly, the choice of correlation statistic matters, and we argue that acc * eq is the most fair and reliable method compared to the τ variants." }, { "figure_ref": [ "fig_3" ], "heading": "Generalization of Epsilon", "publication_ref": [ "b17" ], "table_ref": [], "text": "The previous analysis selected ϵ * on the same dataset that is used to rank the metrics. Here, we examine what happens if the ϵ value is selected on a held-out dataset. For this analysis, the MQM ratings from the WMT'21 metrics shared task (Freitag et al., 2021b) are used as a held-out set.\nFigure 4 shows the different ϵ * and acc * eq values for BLEURT-20 when ϵ is selected on one dataset and applied to the other for en-de and zh-en. For ende, the epsilon value changes by 0.03, and the acc eq calculated on the held-out ϵ changes by relative 2%, suggesting the results are rather stable. However, zh-en behaves quite differently. From the plots, it is clear that for WMT'21, there is almost never an incentive to predict a tie, as evidenced by the very low ϵ * , and the corresponding ϵ * does not generalize well to WMT'22 (or vice versa). 
Our hypothesis is that this result is due to the fact that the WMT'21 zh-en data has far fewer ties than the WMT'22 data (23% versus 41%).\nThese results indicate that ϵ * values are not likely to generalize across dissimilar datasets under current metrics. Such a property would be desirableand an interesting challenge for metric developerssince it would make score differences more interpretable. However, we argue that treating ϵ * as a latent variable calibrated on the current test set allows for fair comparisons of metrics even in the absence of this property. Other evaluation protocols have also involved optimizations on the test set, for example using an oracle sentence segmenter to evaluate MT for speech (Matusov et al., 2005)." }, { "figure_ref": [], "heading": "Where are Ties Introduced?", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Since most of the human score ties are for error free translations (see Table 3), it is worth understanding if the tie threshold introduces ties for high scoring translations to predict error-free translation ties or if the ties are introduced more uniformly across the score distribution.\nFigure 5 plots the distribution of the average score per pair where ties are introduced by ϵ * for Metric-X on the WMT'22 zh-en dataset. In com- parison to the distribution of all pairs' average scores, the tied distribution is skewed toward higher predicted scores. Since Metric-X has a relatively strong correlation to MQM scores, this suggests that the newly introduced ties mostly predict perfect translations, which are assigned high scores according to the metric. An extension of our tie calibration procedure could first identify a threshold to predict a perfect translation, then run tie calibration on the remaining pairs." }, { "figure_ref": [], "heading": "Class-Specific Statistics", "publication_ref": [], "table_ref": [], "text": "Figure 6 plots the ties-F 1 , correct-rank-F 1 (see §6.1), and pairwise accuracy for COMET-22 on en-de. The ties-F 1 is much higher than the correctrank-F 1 for almost every ϵ, demonstrating that the metric more reliably predicts tied pairs than the correct rank for non-tied pairs. This is likely due to the fact that the number of perfect translations is large, and the ϵ values are biased toward introducing ties to predict perfect translations ( §8.3). If a statistic other than pairwise accuracy is better aligned to how a metric is being used in practice, the tie calibration procedure can be used to select an ϵ that strikes the desired balance of performance with respect to the class-specific statistics.\nIn this work, we demonstrated the importance of taking ties into account when calculating rankbased correlation statistics. We argued existing variants of Kendall's τ are inadequate for the current state of meta-evaluation. We advocated to instead use pairwise accuracy, which rewards metrics for both predicting correct pair rankings and correctly predicting ties, in combination with a tie calibration procedure that allows for comparing metrics that do and do not predict ties. Although our experiments were specific to MT, the methods proposed are generally applicable to any metric meta-evaluation in NLP." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The tie calibration algorithm introduced in §7 makes an assumption that absolute differences in metric scores reflect the same amount of change in quality for any value of the metric. 
That is, the difference in predicted quality between translations with scores 0.2 and 0.1 is the same as with scores 100.2 and 100.1. An alternative version of the tie calibration algorithm could introduce ties based on relative differences between scores instead of absolute differences. We experimented with relative differences and did not see a significant different in results. However, it may be that a metric that we did not experiment with performs better with relative ϵ instead of an absolute difference.\nSince the tie decision operates at the pair level, the ϵ value does not induce a global ordering of translations. For example, if there are scores 1, 2, and 3 with ϵ = 1, pairs (1, 2) and (2, 3) are tied but (1, 3) is not. A similar limitation can also be observed in pairwise statistical significance testing.\nFinally, although we argue that our metaevaluation proposal is more fair, we are unaware of any way to prove that this is true. Instead, we rely on experimental results and the fact that our proposals are not susceptible to known issues with existing methodologies. " }, { "figure_ref": [], "heading": "A Correlation Definitions", "publication_ref": [], "table_ref": [], "text": "This section more explicitly defines the three different types of segment-level correlations. Let h ij and m ij denote the human and metric scores for the translation produced by system i ∈ 1, . . . , N on source segment j ∈ 1, . . . , M . Define Corr(•) to be a correlation coefficient, such as Pearson's r, Spearman's ρ, Kendall's τ , or any such function that calculates an agreement score over a set of paired observations, like the pairwise accuracy statistic proposed in this work. There are three different segment-level correlations that can be computed." }, { "figure_ref": [], "heading": "No-Grouping: Corr", "publication_ref": [], "table_ref": [], "text": "{(h ij , m ij )} N,M i=1,j=1(1)\n2. Group-by-Item:\n1 M M j=1 Corr {(h ij , m ij )} N i=1 (2)\n3. Group-by-System:\n1 N N i=1 Corr {(h ij , m ij )} M j=1 (3) Metric Notation < = > Human < C T m D = T h T hm T h > D T m C\nTable 7: A mapping between the notation in this paper and the tabular notation from WMT'14. " }, { "figure_ref": [], "heading": "Metric", "publication_ref": [], "table_ref": [], "text": "τ 10 < = > Human < 1 -1 -1 = X X X > -1 -1 1 Metric τ 13 < = > Human < 1 X -1 = X X X > -1 X 1 Metric τ 14 < = > Human < 1 0 -1 = X X X > -1 0 1" }, { "figure_ref": [], "heading": "B WMT Tabular Notation", "publication_ref": [ "b15" ], "table_ref": [ "tab_0", "tab_8" ], "text": "WMT'14 (Macháček and Bojar, 2014) developed a tabular notation to describe how Kendall's τ was calculated. For completeness, we include a mapping of the notation from this work in Table 2 to the tabular notation in Table 7. The tabular versions of τ 10 , τ 13 , and τ 14 are reproduced in Table 8. The tabular versions of τ eq and acc eq are included in Table 9.\nRe-using the notation from WMT'14, a τ value can be computed using the tabular notation via the following equation:\nτ = h,m∈{<,=,>} C h,m ̸ =X C h,m |S h,m | h,m∈{<,=,>} C h,m ̸ =X |S h,m |(4)\nC h,m is defined as the coefficient in the tabular notation and S h,m is the number of pairs that fall into the corresponding bucket." }, { "figure_ref": [], "heading": "Metric", "publication_ref": [], "table_ref": [], "text": "τ eq < = > Human < 1 -1 -1 = -1 1 -1 > -1 -1 1 Metric acc eq < = > Human < 1 0 0 = 0 1 0 > 0 0 1 Table 9:\nThe tabular versions of τ eq and acc eq ." 
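For reference, the segment-level correlation definitions of Appendix A (Equations 1-3) and the tabular tau computation of Appendix B (Equation 4) can be written compactly as follows, where h_ij and m_ij are the human and metric scores of system i on segment j, C_{h,m} is the coefficient in the corresponding cell, and |S_{h,m}| is the number of pairs falling into that cell. This block restates the equations given above in standard LaTeX notation.

```latex
\begin{align}
&\text{No-grouping:} && \mathrm{Corr}\big(\{(h_{ij}, m_{ij})\}_{i=1,\,j=1}^{N,\,M}\big) \\
&\text{Group-by-item:} && \frac{1}{M}\sum_{j=1}^{M} \mathrm{Corr}\big(\{(h_{ij}, m_{ij})\}_{i=1}^{N}\big) \\
&\text{Group-by-system:} && \frac{1}{N}\sum_{i=1}^{N} \mathrm{Corr}\big(\{(h_{ij}, m_{ij})\}_{j=1}^{M}\big) \\
&\text{Tabular } \tau\text{:} && \tau = \frac{\sum_{h,m\in\{<,=,>\},\; C_{h,m}\neq X} C_{h,m}\,|S_{h,m}|}{\sum_{h,m\in\{<,=,>\},\; C_{h,m}\neq X} |S_{h,m}|}
\end{align}
```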
}, { "figure_ref": [], "heading": "C Additional Results", "publication_ref": [], "table_ref": [ "tab_10", "tab_12" ], "text": "Table 10 contains more statistics related to the number of tied pairs in the WMT'22 MQM scores, including the number of pairs that are tied with a score of 0 (i.e., an error free translation).\nThe full correlation results and metric ranks according to the different τ s across different language pairs and segment-level correlations is included in this section. See Table 11 for the listing of the individual tables." }, { "figure_ref": [], "heading": "D Tie Calibration Psuedocode", "publication_ref": [], "table_ref": [], "text": "Algorithm 1 contains the pseudocode for the tie calibration procedure ( §7) when applied to two vectors of human and metric scores. The algorithm runs in O(n 2 log n) where n is the number of scored translations. The bottleneck is sorting all of the n 2 possible pairs. When n 2 is too large, we approximate the search for the optimal ϵ by downsampling the number of pairs. See Appendix E for an analysis of how lossy this approximation is.\nIn practice, the tie calibration is applied to matrices of human and metric scores, where each row corresponds to a group (see §2). The algorithm is very similar to Algorithm 1 except there is additional bookkeeping required to match each (i, j) pair to the group that it came from. The extra bookkeeping only adds an O(1) overhead." }, { "figure_ref": [], "heading": "E Epsilon Search Approximation", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Finding the exact ϵ value that maximizes pairwise accuracy requires considering all possible n 2 choices of ϵ. For specific segment-level correlations, such as the no-grouping variant, n 2 can be prohibitively large, on the order of hundreds of millions of pairs (see Table 3). " }, { "figure_ref": [], "heading": "LP Correlation Table", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "en-de", "publication_ref": [], "table_ref": [], "text": "No-Grouping When the number of pairs is too large, we instead find an approximate best ϵ by sampling from all possible pairs. Figure 7 plots the ϵ * values calculated on a subset of the data and the value of acc eq for those ϵ * values. Even with as little as 10% of the possible pairs, the approximations are quite precise. The largest observed differences over 30 iterations for ϵ * were 2.5e-3 and for acc eq were 4.3e-5. Overall, downsampling appears to be a safe approximation to improve the run time of the tie calibration algorithm." }, { "figure_ref": [], "heading": "F Unbabel Normalization", "publication_ref": [], "table_ref": [ "tab_24" ], "text": "The experiments in this paper calculate MQM scores for translations using the normalization technique advocated for by Google: a translation's MQM score is the sum of the weights of each of the errors. The alternative method used by Unbabel normalizes the sum of error weights by the length of the translation. The Unbabel normalization will thus result in fewer human score ties than the Google normalization.\nWe repeated the analysis from §8.1 using the Unbabel normalization method and calculated the rankings of the different metrics under acc eq and τ variants for the en-ru language pair. The results Algorithm 1 An O(n 2 log n) algorithm that introduces metric ties to select an optimal τ value. C, D, T h , Tm, T hm ← 0, 0, 0, 0, 0 23:\nfor (i, j) ∈ {(i, j) : i, j = 1, . . . 
, n; i < j} do 24:\nif hi = hj and |mi -mj| ≤ ϵ then 25:\nT hm ← T hm + 1 26:\nelse if hi = hj then 27:\nT h ← T h + 1 28:\nelse if |mi -mj| ≤ ϵ then 29:\nTm ← Tm + 1 30:\nelse if Sign(hi -hj) = Sign(mi -mj) then 31:\nC ← C + 1 32: else 33: D ← D + 1 34:\nreturn C, D, T h , Tm, T hm 35: end function for the group-by-item segment-level correlation are shown in Table 21.\nOverall, the fewer ties did not make a significant impact on whether or not it was possible to demonstrate that the meta-evaluation statistics are biased toward or against ties. For instance, τ b favors metrics with ties, such as GEMBA-GPT-4, and τ 10 is still biased against metrics that predict ties. We suspect this is due to the fact that the majority of ties occur for perfect translations, which will remain tied in either normalization method. Further, the number of non-perfect ties (MQM score of 0) only decreased by 7% (from 44% to 37%). Therefore, " }, { "figure_ref": [], "heading": "Accuracy", "publication_ref": [], "table_ref": [], "text": "Figure 7: The plots show the best ϵ * found to maximize acc eq when the n 2 pairs are downsampled (percent shown on x-axis). The ϵ * is then used to calculate acc eq on all n 2 pairs. Each sampling rate was run 30 times and the maximum, minimum, and mean values are shown in the plots. Even with as little as 10% of the possible pairs, the approximation is very good.\nwe argue that the results presented in this work apply to either normalization technique, but larger changes will likely be observed under using the Google method due to the increase in number of ties. " }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "The authors would like to thank Juraj Juraska, Mara Finkelstein, Ricardo Rei, Chi-kiu Lo, Tom Kocmi, and Alon Lavie for their helpful discussions and feedback related to this work." } ]
Kendall's τ is frequently used to meta-evaluate how well machine translation (MT) evaluation metrics score individual translations. Its focus on pairwise score comparisons is intuitive but raises the question of how ties should be handled, a gray area that has motivated different variants in the literature. We demonstrate that, in settings like modern MT meta-evaluation, existing variants have weaknesses arising from their handling of ties, and in some situations can even be gamed. We propose instead to meta-evaluate metrics with a version of pairwise accuracy that gives metrics credit for correctly predicting ties, in combination with a tie calibration procedure that automatically introduces ties into metric scores, enabling fair comparison between metrics that do and do not predict ties. We argue and provide experimental evidence that these modifications lead to fairer ranking-based assessments of metric performance. 1
Ties Matter: Meta-Evaluating Modern Metrics with Pairwise Accuracy and Tie Calibration
[ { "figure_caption": "Figure 1 :1Figure 1: Pearson's r, Spearman's ρ, and Kendall's τ b calculated between hypothetical human scores and metric scores. Lines between data points are shown for visualization purposes.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Burch et al. (2010) 2010-2012, 2017-2020 4 τ 13 = (C -D)/(C + D)Macháček and Bojar (2013) 2013τ 14 = (C -D)/(C + D + T m )Macháček andBojar (2014Bojar ( ) 2014Bojar ( -2016 ", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Dividing the Metric-X scores into equal width buckets can increase the group-by-item correlation by a large margin. However, at the same time, the number of groups used in the correlation (with non-NaN scores) decreases, meaning the corresponding correlations are not fairly comparable since they are computed on different sets of data.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: The generalization of the selected ϵ * (dashed line) across datasets appears to depend on specific properties of the datasets. We suspect if the number of ties in the datasets is very different (as in zh-en), the ϵ is less likely to generalize well.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :Figure 6 :56Figure5: The distribution of average pair scores where ties are introduced for Metric-X on WMT'22 zh-en using ϵ * as the tie threshold is skewed right with respect to the distribution of all pairs, suggesting the ϵ is biased toward introducing ties to predict perfect translations.", "figure_data": "", "figure_id": "fig_4", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": The number of segments, pairs, tied pairs, andpairs tied at MQM=0 (error free) across the differentWMT'22 language pairs for group-by-item correlations.The statistics for other segment-level correlations canbe found in Appendix C.", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": The percent of pairs that are tied when groupingby source segment is drastically different for regressionmetrics (Metric-X and COMET) versus metrics thateffectively act as multi-class classifiers (MaTESe andGEMBA).", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The tabular versions τ 10 , τ 13 and τ 14 .", "figure_data": "", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "The number of translations, pairs, tied pairs, and pairs tied at MQM=0 (perfect translations) across the different WMT'22 language pairs and segment-level correlations.", "figure_data": "", "figure_id": "tab_10", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Pointers to the full correlation and metric ranking results under different τ s for each language pair and type of segment-level correlation.", "figure_data": "", "figure_id": "tab_12", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "The correlations (and metric ranks) for the no-grouping correlation on the WMT'22 en-de dataset.", "figure_data": "Metricτ aτ bτ cτ 10τ 13τ 14acc eqacc * eqϵ *Metric-X0.289 ( 2) 0.356 ( 2) 0.293 ( 
2) 0.440 ( 2) 0.440 ( 4) 0.440 ( 2) 0.473 ( 4) 0.525 ( 1) 0.05UniTE0.288 ( 3) 0.356 ( 3) 0.292 ( 3) 0.438 ( 3) 0.438 ( 5) 0.438 ( 3) 0.473 ( 5) 0.519 ( 2) 0.11COMET-220.292 ( 1) 0.361 ( 1) 0.296 ( 1) 0.445 ( 1) 0.445 ( 3) 0.445 ( 1) 0.475 ( 3) 0.518 ( 3) 0.14MaTESe0.170 (14) 0.323 ( 6) 0.181 (14) -0.241 (16) 0.516 ( 2) 0.258 (14) 0.500 ( 1) 0.494 ( 4) 0.00GEMBA-GPT-40.199 (11) 0.347 ( 4) 0.214 (11) -0.153 (15) 0.555 ( 1) 0.302 (11) 0.481 ( 2) 0.493 ( 5) 4.00BLEURT-200.274 ( 4) 0.338 ( 5) 0.278 ( 4) 0.417 ( 4) 0.417 ( 8) 0.417 ( 4) 0.466 ( 6) 0.490 ( 6) 0.06MS-COMET-220.225 ( 7) 0.277 (10) 0.228 ( 7) 0.342 ( 8) 0.342 (11) 0.342 ( 7) 0.441 (11) 0.487 ( 7) 1.21UniTE-src0.229 ( 6) 0.283 ( 9) 0.232 ( 6) 0.349 ( 6) 0.349 (10) 0.349 ( 6) 0.444 (10) 0.479 ( 8) 0.10COMETKiwi0.230 ( 5) 0.283 ( 8) 0.233 ( 5) 0.349 ( 5) 0.349 ( 9) 0.349 ( 5) 0.444 ( 9) 0.473 ( 9) 0.13GEMBA-GPT-3.50.208 (10) 0.301 ( 7) 0.222 ( 9) 0.058 (14) 0.426 ( 6) 0.316 (10) 0.452 ( 8) 0.461 (10) 5.00COMET-QE0.225 ( 8) 0.277 (11) 0.228 ( 8) 0.342 ( 7) 0.342 (12) 0.342 ( 8) 0.441 (12) 0.457 (11) 0.01MaTESe-QE0.119 (16) 0.242 (13) 0.129 (15) -0.391 (17) 0.422 ( 7) 0.181 (16) 0.457 ( 7) 0.456 (12) 0.00MS-COMET-QE-220.184 (13) 0.226 (15) 0.186 (13) 0.279 (11) 0.279 (15) 0.279 (13) 0.421 (15) 0.456 (13) 1.46SEScore0.211 ( 9) 0.261 (12) 0.214 (10) 0.322 ( 9) 0.322 (13) 0.322 ( 9) 0.435 (13) 0.452 (14) 0.39MEE40.191 (12) 0.236 (14) 0.194 (12) 0.290 (10) 0.291 (14) 0.290 (12) 0.425 (14) 0.429 (15) 0.01HWTSC-Teacher-Sim 0.122 (15) 0.150 (16) 0.123 (16) 0.185 (12) 0.185 (16) 0.185 (15) 0.390 (16) 0.403 (16) 0.15REUSE0.046 (17) 0.057 (17) 0.047 (17) 0.070 (13) 0.070 (17) 0.070 (17) 0.352 (17) 0.354 (17) 0.01Constant-Metric0.000 (18) 0.000 (18) 0.000 (18) -1.000 (18) 0.000 (18) 0.000 (18) 0.342 (18) 0.339 (18) 0.00Metricτ aτ bτ cτ 10τ 13τ 14acc eqacc * eqϵ *Metric-X0.174 ( 1) 0.270 ( 4) 0.269 ( 1) 0.381 ( 1) 0.385 ( 3) 0.384 ( 1) 0.325 (11) 0.605 ( 1)0.04UniTE0.162 ( 3) 0.278 ( 3) 0.255 ( 2) 0.322 ( 3) 0.382 ( 4) 0.368 ( 3) 0.425 ( 6) 0.595 ( 2)0.14COMET-220.163 ( 2) 0.258 ( 5) 0.255 ( 3) 0.366 ( 2) 0.371 ( 5) 0.370 ( 2) 0.325 (12) 0.594 ( 3)0.11MaTESe0.080 (12) 0.281 ( 2) 0.207 ( 6) -0.459 (16) 0.391 ( 2) 0.171 (13) 0.582 ( 1) 0.582 ( 4)0.00UniTE-src0.123 ( 5) 0.205 ( 9) 0.190 ( 7) 0.221 ( 8) 0.277 ( 9) 0.266 ( 6) 0.406 ( 8) 0.582 ( 5)0.12GEMBA-GPT-40.106 ( 9) 0.322 ( 1) 0.241 ( 4) -0.367 (15) 0.487 ( 1) 0.237 (10) 0.567 ( 3) 0.573 ( 6)4.00MaTESe-QE0.059 (15) 0.234 ( 7) 0.175 (10) -0.573 (17) 0.320 ( 8) 0.121 (15) 0.572 ( 2) 0.572 ( 7)0.00COMETKiwi0.116 ( 6) 0.181 (12) 0.178 ( 9) 0.254 ( 6) 0.259 (11) 0.259 ( 7) 0.301 (14) 0.572 ( 8)0.16BLEURT-200.149 ( 4) 0.254 ( 6) 0.233 ( 5) 0.289 ( 4) 0.344 ( 6) 0.334 ( 4) 0.419 ( 7) 0.568 ( 9)0.09MS-COMET-220.107 ( 8) 0.169 (13) 0.166 (11) 0.241 ( 7) 0.244 (13) 0.244 ( 9) 0.294 (15) 0.565 (10)4.65COMET-QE0.094 (11) 0.138 (14) 0.144 (14) 0.179 (10) 0.182 (14) 0.182 (12) 0.285 (17) 0.555 (11)0.01SEScore0.114 ( 7) 0.182 (10) 0.180 ( 8) 0.269 ( 5) 0.270 (10) 0.270 ( 5) 0.291 (16) 0.554 (12)1.30MS-COMET-QE-220.051 (16) 0.080 (16) 0.076 (16) 0.116 (12) 0.118 (16) 0.118 (16) 0.264 (18) 0.550 (13)6.50HWTSC-Teacher-Sim 0.067 (14) 0.106 (15) 0.103 (15) 0.123 (11) 0.149 (15) 0.147 (14) 0.328 (10) 0.545 (14)0.34GEMBA-GPT-3.50.078 (13) 0.209 ( 8) 0.151 (13) -0.344 (14) 0.324 ( 7) 0.189 (11) 0.509 ( 5) 0.545 (15) 15.00MEE40.100 (10) 0.182 (11) 0.157 (12) 0.201 ( 9) 0.252 (12) 0.246 ( 8) 0.394 ( 9) 0.539 (16)0.13REUSE-0.052 (18) -0.074 (18) -0.081 (18) -0.134 (13) -0.093 (18) -0.087 (18) 0.319 (13) 0.534 
(17)0.47Constant-Metric0.000 (17) 0.000 (17) 0.000 (17) -1.000 (18) 0.000 (17) 0.000 (17) 0.534 ( 4) 0.534 (18)0.00", "figure_id": "tab_14", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "The correlations (and metric ranks) for the group-by-item correlation on the WMT'22 en-de dataset.", "figure_data": "0.0550.050 Epsilon0.0450.524750.524500.52425024 Sampling Percent 6810", "figure_id": "tab_15", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "The correlations (and metric ranks) for the group-by-system correlation on the WMT'22 en-de dataset.", "figure_data": "Metricτ aτ bτ cτ 10τ 13τ 14acc eqacc * eqϵ *Metric-X0.370 ( 1) 0.420 ( 1) 0.372 ( 1) 0.477 ( 1) 0.477 ( 4) 0.477 ( 1) 0.573 ( 1) 0.579 ( 1) 0.02COMET-220.352 ( 2) 0.400 ( 2) 0.354 ( 2) 0.454 ( 2) 0.454 ( 5) 0.454 ( 2) 0.564 ( 2) 0.563 ( 2) 0.01UniTE0.329 ( 3) 0.374 ( 3) 0.331 ( 3) 0.425 ( 3) 0.425 ( 6) 0.425 ( 3) 0.553 ( 3) 0.553 ( 3) 0.03COMETKiwi0.316 ( 4) 0.359 ( 4) 0.318 ( 4) 0.408 ( 4) 0.408 ( 8) 0.408 ( 4) 0.546 ( 5) 0.548 ( 4) 0.02BLEURT-200.316 ( 5) 0.359 ( 5) 0.318 ( 5) 0.408 ( 5) 0.408 ( 9) 0.408 ( 5) 0.546 ( 4) 0.542 ( 5) 0.00MS-COMET-220.309 ( 6) 0.351 ( 7) 0.311 ( 6) 0.399 ( 6) 0.399 (10) 0.399 ( 6) 0.542 ( 6) 0.539 ( 6) 0.18UniTE-src0.301 ( 7) 0.342 ( 8) 0.303 ( 7) 0.388 ( 7) 0.388 (11) 0.388 ( 7) 0.538 ( 7) 0.536 ( 7) 0.01COMET-QE0.300 ( 8) 0.341 ( 9) 0.302 ( 8) 0.387 ( 8) 0.387 (12) 0.387 ( 8) 0.538 ( 8) 0.536 ( 8) 0.00MS-COMET-QE-220.269 ( 9) 0.305 (11) 0.271 (10) 0.347 ( 9) 0.347 (13) 0.347 ( 9) 0.522 ( 9) 0.519 ( 9) 0.00GEMBA-GPT-3.50.259 (10) 0.332 (10) 0.279 ( 9) 0.125 (12) 0.422 ( 7) 0.334 (10) 0.488 (10) 0.484 (10) 0.00GEMBA-GPT-40.245 (11) 0.358 ( 6) 0.262 (11) -0.046 (14) 0.496 ( 2) 0.316 (11) 0.483 (11) 0.483 (11) 2.00MEE40.185 (12) 0.210 (14) 0.186 (12) 0.238 (10) 0.239 (14) 0.239 (12) 0.481 (12) 0.477 (12) 0.00HWTSC-Teacher-Sim 0.126 (14) 0.143 (15) 0.127 (14) 0.163 (11) 0.163 (15) 0.163 (14) 0.451 (13) 0.448 (13) 0.00REUSE0.069 (16) 0.078 (16) 0.069 (16) 0.088 (13) 0.088 (16) 0.088 (16) 0.422 (14) 0.418 (14) 0.00MaTESe0.128 (13) 0.279 (12) 0.140 (13) -0.523 (15) 0.529 ( 1) 0.165 (13) 0.380 (15) 0.389 (15) 0.00MaTESe-QE0.093 (15) 0.229 (13) 0.103 (15) -0.636 (16) 0.493 ( 3) 0.120 (15) 0.341 (16) 0.349 (16) 0.00Constant-Metric0.000 (17) 0.000 (17) 0.000 (17) -1.000 (17) 0.000 (17) 0.000 (17) 0.225 (17) 0.230 (17) 0.00", "figure_id": "tab_17", "figure_label": "14", "figure_type": "table" }, { "figure_caption": "The correlations (and metric ranks) for the no-grouping correlation on the WMT'22 en-ru dataset.", "figure_data": "Metricτ aτ bτ cτ 10τ 13τ 14acc eqacc * eqϵ *Metric-X0.239 ( 1) 0.329 ( 2) 0.323 ( 1) 0.444 ( 1) 0.445 ( 3) 0.445 ( 1) 0.402 (10) 0.606 ( 1) 0.03COMET-220.230 ( 2) 0.315 ( 3) 0.309 ( 2) 0.420 ( 2) 0.421 ( 4) 0.421 ( 2) 0.396 (11) 0.577 ( 2) 0.07UniTE0.220 ( 3) 0.311 ( 4) 0.297 ( 3) 0.399 ( 3) 0.415 ( 5) 0.412 ( 3) 0.445 ( 8) 0.572 ( 3) 0.05COMETKiwi0.181 ( 5) 0.247 ( 8) 0.242 ( 7) 0.331 ( 5) 0.332 ( 9) 0.332 ( 6) 0.372 (13) 0.565 ( 4) 0.05UniTE-src0.180 ( 6) 0.267 ( 7) 0.245 ( 6) 0.322 ( 7) 0.347 ( 8) 0.343 ( 5) 0.463 ( 6) 0.554 ( 5) 0.07GEMBA-GPT-40.155 ( 8) 0.356 ( 1) 0.273 ( 4) -0.203 (13) 0.519 ( 1) 0.290 ( 8) 0.548 ( 1) 0.550 ( 6) 4.00MS-COMET-220.177 ( 7) 0.243 ( 9) 0.238 ( 8) 0.327 ( 6) 0.328 (10) 0.328 ( 7) 0.367 (14) 0.547 ( 7) 2.47BLEURT-200.202 ( 4) 0.291 ( 6) 0.269 ( 5) 0.347 ( 4) 0.369 ( 7) 0.367 ( 4) 0.474 ( 5) 0.540 ( 8) 0.05COMET-QE0.148 (10) 0.200 (13) 0.197 (11) 0.262 ( 9) 0.262 (13) 0.262 (10) 0.352 (15) 0.534 ( 9) 
0.01MS-COMET-QE-220.131 (11) 0.180 (14) 0.177 (12) 0.243 (10) 0.243 (14) 0.243 (11) 0.344 (16) 0.528 (10) 3.14MaTESe0.075 (14) 0.293 ( 5) 0.224 ( 9) -0.630 (15) 0.472 ( 2) 0.126 (14) 0.520 ( 2) 0.520 (11) 0.00MaTESe-QE0.048 (15) 0.238 (10) 0.168 (13) -0.728 (16) 0.379 ( 6) 0.085 (15) 0.499 ( 3) 0.499 (12) 0.00GEMBA-GPT-3.50.098 (13) 0.201 (12) 0.158 (14) -0.237 (14) 0.308 (11) 0.196 (13) 0.475 ( 4) 0.494 (13) 5.00HWTSC-Teacher-Sim 0.098 (12) 0.147 (15) 0.137 (15) 0.185 (11) 0.199 (15) 0.198 (12) 0.387 (12) 0.481 (14) 0.05MEE40.149 ( 9) 0.222 (11) 0.201 (10) 0.267 ( 8) 0.289 (12) 0.287 ( 9) 0.448 ( 7) 0.473 (15) 0.03REUSE-0.076 (17) -0.090 (17) -0.097 (17) -0.119 (12) -0.099 (17) -0.097 (17) 0.335 (17) 0.446 (16) 0.14Constant-Metric0.000 (16) 0.000 (16) 0.000 (16) -1.000 (17) 0.000 (16) 0.000 (16) 0.444 ( 9) 0.444 (17) 0.00", "figure_id": "tab_18", "figure_label": "15", "figure_type": "table" }, { "figure_caption": "The correlations (and metric ranks) for the group-by-item correlation on the WMT'22 en-ru dataset.", "figure_data": "Metricτ aτ bτ cτ 10τ 13τ 14acc eqacc * eqϵ *Metric-X0.352 ( 1) 0.401 ( 1) 0.358 ( 1) 0.457 ( 1) 0.457 ( 4) 0.457 ( 1) 0.559 ( 1) 0.569 ( 1) 0.02COMET-220.337 ( 2) 0.384 ( 2) 0.343 ( 2) 0.438 ( 2) 0.438 ( 5) 0.438 ( 2) 0.552 ( 2) 0.548 ( 2) 0.03UniTE0.316 ( 3) 0.360 ( 3) 0.321 ( 3) 0.410 ( 3) 0.410 ( 6) 0.410 ( 3) 0.541 ( 3) 0.543 ( 3) 0.04COMETKiwi0.308 ( 4) 0.350 ( 4) 0.313 ( 4) 0.399 ( 4) 0.399 ( 8) 0.399 ( 4) 0.537 ( 4) 0.538 ( 4) 0.02COMET-QE0.299 ( 6) 0.340 ( 6) 0.304 ( 6) 0.387 ( 6) 0.387 (10) 0.387 ( 6) 0.533 ( 6) 0.529 ( 5) 0.00BLEURT-200.301 ( 5) 0.343 ( 5) 0.306 ( 5) 0.391 ( 5) 0.391 ( 9) 0.391 ( 5) 0.534 ( 5) 0.527 ( 6) 0.00UniTE-src0.291 ( 8) 0.332 ( 8) 0.296 ( 8) 0.378 ( 8) 0.378 (12) 0.378 ( 8) 0.529 ( 8) 0.526 ( 7) 0.02MS-COMET-220.296 ( 7) 0.337 ( 7) 0.301 ( 7) 0.384 ( 7) 0.384 (11) 0.384 ( 7) 0.531 ( 7) 0.524 ( 8) 1.00MS-COMET-QE-220.261 ( 9) 0.297 (11) 0.266 (10) 0.337 ( 9) 0.337 (13) 0.337 ( 9) 0.514 ( 9) 0.505 ( 9) 0.00GEMBA-GPT-3.50.250 (10) 0.321 (10) 0.272 ( 9) 0.113 (12) 0.410 ( 7) 0.324 (10) 0.482 (10) 0.475 (10) 0.00GEMBA-GPT-40.224 (11) 0.327 ( 9) 0.244 (11) -0.089 (14) 0.460 ( 3) 0.288 (11) 0.471 (11) 0.473 (11) 2.00MEE40.171 (12) 0.195 (14) 0.174 (12) 0.222 (10) 0.223 (14) 0.223 (12) 0.470 (12) 0.461 (12) 0.00HWTSC-Teacher-Sim 0.123 (13) 0.139 (15) 0.125 (14) 0.159 (11) 0.159 (15) 0.159 (13) 0.445 (13) 0.439 (13) 0.00REUSE0.084 (16) 0.095 (16) 0.085 (16) 0.108 (13) 0.108 (16) 0.108 (16) 0.425 (14) 0.422 (14) 0.00MaTESe0.120 (14) 0.258 (12) 0.139 (13) -0.544 (15) 0.504 ( 1) 0.153 (14) 0.381 (15) 0.387 (15) 0.00MaTESe-QE0.089 (15) 0.211 (13) 0.105 (15) -0.650 (16) 0.466 ( 2) 0.113 (15) 0.345 (16) 0.353 (16) 0.00Constant-Metric0.000 (17) 0.000 (17) 0.000 (17) -1.000 (17) 0.000 (17) 0.000 (17) 0.233 (17) 0.242 (17) 0.00", "figure_id": "tab_19", "figure_label": "16", "figure_type": "table" }, { "figure_caption": "The correlations (and metric ranks) for the group-by-system correlation on the WMT'22 en-ru dataset.", "figure_data": "Metricτ aτ bτ cτ 10τ 13τ 14acc eqacc * eqϵ *Metric-X0.362 ( 1) 0.421 ( 1) 0.364 ( 1) 0.489 ( 1) 0.489 ( 1) 0.489 ( 1) 0.551 ( 1) 0.565 ( 1) 0.06COMET-220.361 ( 2) 0.420 ( 2) 0.363 ( 2) 0.488 ( 2) 0.488 ( 2) 0.488 ( 2) 0.551 ( 2) 0.555 ( 2) 0.14MaTESe0.295 ( 7) 0.382 ( 3) 0.307 ( 4) 0.244 (12) 0.471 ( 5) 0.399 ( 7) 0.541 ( 3) 0.536 ( 3) 0.00COMET-QE0.306 ( 3) 0.356 ( 6) 0.308 ( 3) 0.414 ( 3) 0.414 ( 6) 0.414 ( 3) 0.523 ( 4) 0.520 ( 4) 0.00COMETKiwi0.303 ( 6) 0.352 ( 9) 0.304 ( 7) 0.409 ( 6) 0.409 (10) 0.409 ( 6) 
0.522 ( 7) 0.520 ( 5) 0.03BLEURT-200.303 ( 5) 0.352 ( 8) 0.305 ( 6) 0.410 ( 5) 0.410 ( 9) 0.410 ( 5) 0.522 ( 6) 0.517 ( 6) 0.00UniTE0.305 ( 4) 0.354 ( 7) 0.306 ( 5) 0.412 ( 4) 0.412 ( 7) 0.412 ( 4) 0.523 ( 5) 0.516 ( 7) 0.05GEMBA-GPT-40.268 (11) 0.370 ( 4) 0.284 (10) 0.108 (16) 0.486 ( 4) 0.362 (11) 0.513 (11) 0.513 ( 8) 4.00MS-COMET-220.288 ( 8) 0.335 (10) 0.289 ( 8) 0.389 ( 7) 0.389 (11) 0.389 ( 8) 0.514 ( 8) 0.510 ( 9) 0.02UniTE-src0.286 ( 9) 0.332 (11) 0.287 ( 9) 0.386 ( 8) 0.386 (12) 0.386 ( 9) 0.513 (10) 0.508 (10) 0.00SEScore0.279 (10) 0.324 (13) 0.280 (11) 0.377 ( 9) 0.377 (13) 0.377 (10) 0.510 (12) 0.506 (11) 0.00MaTESe-QE0.251 (13) 0.328 (12) 0.261 (13) 0.164 (14) 0.411 ( 8) 0.339 (13) 0.513 ( 9) 0.506 (12) 0.00GEMBA-GPT-3.50.254 (12) 0.360 ( 5) 0.273 (12) 0.047 (17) 0.486 ( 3) 0.343 (12) 0.499 (13) 0.499 (13) 0.00MS-COMET-QE-220.238 (14) 0.277 (14) 0.239 (14) 0.322 (10) 0.322 (14) 0.322 (14) 0.489 (14) 0.486 (14) 0.00HWTSC-Teacher-Sim 0.227 (15) 0.264 (15) 0.228 (15) 0.307 (11) 0.307 (15) 0.307 (15) 0.484 (15) 0.477 (15) 0.00MEE40.163 (16) 0.189 (16) 0.164 (16) 0.220 (13) 0.220 (16) 0.220 (16) 0.452 (16) 0.449 (16) 0.00REUSE0.100 (17) 0.116 (17) 0.101 (17) 0.135 (15) 0.135 (17) 0.135 (17) 0.420 (17) 0.417 (17) 0.00Constant-Metric0.000 (18) 0.000 (18) 0.000 (18) -1.000 (18) 0.000 (18) 0.000 (18) 0.260 (18) 0.267 (18) 0.00", "figure_id": "tab_20", "figure_label": "17", "figure_type": "table" }, { "figure_caption": "The correlations (and metric ranks) for the no-grouping correlation on the WMT'22 zh-en dataset.", "figure_data": "Metricτ aτ bτ cτ 10τ 13τ 14acc eqacc * eqϵ *Metric-X0.191 ( 3) 0.255 ( 5) 0.245 ( 4) 0.343 ( 2) 0.344 ( 5) 0.344 ( 3) 0.389 (10) 0.544 ( 1) 0.06COMET-220.198 ( 1) 0.266 ( 2) 0.255 ( 1) 0.361 ( 1) 0.361 ( 3) 0.361 ( 1) 0.392 ( 9) 0.536 ( 2) 0.15GEMBA-GPT-40.175 ( 4) 0.321 ( 1) 0.252 ( 2) -0.071 (13) 0.450 ( 1) 0.314 ( 4) 0.518 ( 1) 0.527 ( 3) 4.00UniTE0.191 ( 2) 0.261 ( 3) 0.246 ( 3) 0.335 ( 3) 0.350 ( 4) 0.347 ( 2) 0.420 ( 5) 0.516 ( 4) 0.29MaTESe0.127 (10) 0.225 ( 7) 0.180 (10) -0.108 (14) 0.289 ( 8) 0.220 (11) 0.498 ( 2) 0.512 ( 5) 1.00COMETKiwi0.159 ( 6) 0.213 ( 9) 0.204 ( 6) 0.288 ( 6) 0.289 ( 9) 0.289 ( 7) 0.372 (11) 0.509 ( 6) 0.16UniTE-src0.164 ( 5) 0.226 ( 6) 0.211 ( 5) 0.293 ( 5) 0.306 ( 6) 0.304 ( 5) 0.407 ( 7) 0.508 ( 7) 0.24GEMBA-GPT-3.50.123 (12) 0.256 ( 4) 0.199 ( 8) -0.271 (16) 0.377 ( 2) 0.225 (10) 0.494 ( 3) 0.495 ( 8) 5.00MaTESe-QE0.097 (14) 0.181 (12) 0.141 (14) -0.195 (15) 0.226 (12) 0.169 (14) 0.484 ( 4) 0.494 ( 9) 1.00COMET-QE0.126 (11) 0.165 (13) 0.159 (12) 0.219 ( 9) 0.219 (13) 0.219 (12) 0.355 (15) 0.483 (10) 0.01MS-COMET-220.144 ( 8) 0.196 (10) 0.187 ( 9) 0.270 ( 7) 0.270 (10) 0.270 ( 8) 0.365 (13) 0.483 (11) 3.49MS-COMET-QE-220.117 (13) 0.158 (14) 0.150 (13) 0.214 (10) 0.214 (14) 0.214 (13) 0.351 (16) 0.479 (12) 1.99SEScore0.158 ( 7) 0.214 ( 8) 0.203 ( 7) 0.295 ( 4) 0.295 ( 7) 0.295 ( 6) 0.371 (12) 0.472 (13) 1.13HWTSC-Teacher-Sim 0.086 (15) 0.116 (15) 0.110 (15) 0.148 (11) 0.156 (15) 0.155 (15) 0.357 (14) 0.440 (14) 0.14MEE40.137 ( 9) 0.190 (11) 0.177 (11) 0.243 ( 8) 0.256 (11) 0.254 ( 9) 0.393 ( 8) 0.437 (15) 0.07REUSE-0.025 (17) -0.019 (17) -0.026 (17) -0.022 (12) -0.010 (17) -0.010 (17) 0.312 (17) 0.420 (16) 0.11Constant-Metric0.000 (16) 0.000 (16) 0.000 (16) -1.000 (17) 0.000 (16) 0.000 (16) 0.416 ( 6) 0.416 (17) 0.00", "figure_id": "tab_21", "figure_label": "18", "figure_type": "table" }, { "figure_caption": "The correlations (and metric ranks) for the group-by-item correlation on the WMT'22 zh-en dataset.", "figure_data": 
"Metricτ aτ bτ cτ 10τ 13τ 14acc eqacc * eqϵ *Metric-X0.353 ( 1) 0.411 ( 1) 0.358 ( 1) 0.478 ( 1) 0.478 ( 1) 0.478 ( 1) 0.544 ( 1) 0.557 ( 1) 0.05COMET-220.350 ( 2) 0.408 ( 2) 0.355 ( 2) 0.475 ( 2) 0.475 ( 2) 0.475 ( 2) 0.543 ( 2) 0.546 ( 2) 0.08MaTESe0.288 ( 7) 0.373 ( 3) 0.300 ( 4) 0.231 (12) 0.462 ( 4) 0.390 ( 7) 0.536 ( 3) 0.530 ( 3) 0.00COMET-QE0.306 ( 3) 0.355 ( 4) 0.310 ( 3) 0.414 ( 3) 0.414 ( 6) 0.414 ( 3) 0.521 ( 4) 0.518 ( 4) 0.00COMETKiwi0.295 ( 4) 0.343 ( 7) 0.300 ( 5) 0.399 ( 4) 0.399 ( 8) 0.399 ( 4) 0.515 ( 5) 0.516 ( 5) 0.09BLEURT-200.289 ( 6) 0.336 ( 9) 0.293 ( 7) 0.392 ( 6) 0.392 (10) 0.392 ( 6) 0.512 ( 7) 0.507 ( 6) 0.00MaTESe-QE0.247 (12) 0.323 (10) 0.257 (13) 0.154 (14) 0.405 ( 7) 0.333 (12) 0.510 ( 8) 0.506 ( 7) 0.00UniTE0.290 ( 5) 0.338 ( 8) 0.294 ( 6) 0.393 ( 5) 0.393 ( 9) 0.393 ( 5) 0.513 ( 6) 0.505 ( 8) 0.01GEMBA-GPT-40.250 (11) 0.347 ( 5) 0.269 (11) 0.070 (16) 0.461 ( 5) 0.338 (11) 0.502 (11) 0.504 ( 9) 4.00MS-COMET-220.277 ( 8) 0.322 (11) 0.281 ( 8) 0.376 ( 7) 0.376 (11) 0.376 ( 8) 0.506 ( 9) 0.502 (10) 0.00SEScore0.268 (10) 0.311 (13) 0.272 (10) 0.362 ( 9) 0.362 (13) 0.362 (10) 0.502 (12) 0.499 (11) 0.01UniTE-src0.276 ( 9) 0.321 (12) 0.280 ( 9) 0.374 ( 8) 0.374 (12) 0.374 ( 9) 0.506 (10) 0.498 (12) 0.00GEMBA-GPT-3.50.241 (13) 0.345 ( 6) 0.264 (12) 0.022 (17) 0.470 ( 3) 0.326 (13) 0.492 (13) 0.491 (13) 0.00MS-COMET-QE-220.231 (14) 0.268 (14) 0.234 (14) 0.312 (10) 0.312 (14) 0.312 (14) 0.483 (14) 0.479 (14) 0.01HWTSC-Teacher-Sim 0.228 (15) 0.265 (15) 0.231 (15) 0.308 (11) 0.308 (15) 0.308 (15) 0.482 (15) 0.475 (15) 0.00MEE40.150 (16) 0.174 (16) 0.152 (16) 0.202 (13) 0.202 (16) 0.202 (16) 0.443 (16) 0.441 (16) 0.00REUSE0.108 (17) 0.125 (17) 0.109 (17) 0.146 (15) 0.146 (17) 0.146 (17) 0.422 (17) 0.419 (17) 0.00Constant-Metric0.000 (18) 0.000 (18) 0.000 (18) -1.000 (18) 0.000 (18) 0.000 (18) 0.265 (18) 0.270 (18) 0.00", "figure_id": "tab_22", "figure_label": "19", "figure_type": "table" }, { "figure_caption": "The correlations (and metric ranks) for the group-by-system correlation on the WMT'22 zh-en dataset.", "figure_data": "Metricτ aτ bτ cτ 10τ 13τ 14acc eqacc * eqϵ *Metric-X0.239 ( 1) 0.312 ( 2) 0.295 ( 1) 0.402 ( 1) 0.403 ( 3) 0.403 ( 1) 0.439 ( 8) 0.594 ( 1) 0.03UniTE0.224 ( 3) 0.299 ( 3) 0.275 ( 3) 0.366 ( 3) 0.380 ( 5) 0.378 ( 3) 0.483 ( 4) 0.569 ( 2) 0.03COMET-220.230 ( 2) 0.298 ( 4) 0.281 ( 2) 0.380 ( 2) 0.381 ( 4) 0.381 ( 2) 0.433 (10) 0.564 ( 3) 0.05COMETKiwi0.187 ( 5) 0.242 ( 7) 0.229 ( 6) 0.310 ( 4) 0.311 ( 8) 0.311 ( 5) 0.411 (11) 0.561 ( 4) 0.03UniTE-src0.187 ( 4) 0.262 ( 6) 0.231 ( 5) 0.302 ( 5) 0.324 ( 7) 0.320 ( 4) 0.504 ( 2) 0.547 ( 5) 0.02MS-COMET-220.180 ( 6) 0.234 ( 8) 0.221 ( 8) 0.300 ( 6) 0.301 ( 9) 0.301 ( 6) 0.405 (12) 0.540 ( 6) 1.14GEMBA-GPT-40.156 ( 7) 0.339 ( 1) 0.267 ( 4) -0.237 (12) 0.480 ( 1) 0.262 ( 8) 0.524 ( 1) 0.525 ( 7) 4.00COMET-QE0.151 ( 9) 0.192 (11) 0.184 (10) 0.241 ( 8) 0.241 (12) 0.241 ( 9) 0.390 (13) 0.513 ( 8) 0.00MS-COMET-QE-220.133 (10) 0.173 (13) 0.163 (12) 0.223 ( 9) 0.224 (13) 0.223 (10) 0.382 (14) 0.512 ( 9) 1.42MEE40.154 ( 8) 0.216 (10) 0.189 ( 9) 0.249 ( 7) 0.269 (10) 0.267 ( 7) 0.487 ( 3) 0.487 (10) 0.00HWTSC-Teacher-Sim 0.120 (11) 0.165 (14) 0.149 (13) 0.196 (10) 0.208 (14) 0.208 (11) 0.435 ( 9) 0.480 (11) 0.02MaTESe0.075 (13) 0.278 ( 5) 0.225 ( 7) -0.646 (14) 0.433 ( 2) 0.114 (13) 0.469 ( 5) 0.469 (12) 0.00GEMBA-GPT-3.50.093 (12) 0.179 (12) 0.146 (14) -0.264 (13) 0.261 (11) 0.171 (12) 0.456 ( 6) 0.460 (13) 5.00MaTESe-QE0.047 (14) 0.223 ( 9) 0.165 (11) -0.741 (15) 0.354 ( 6) 0.075 (14) 0.441 ( 7) 0.441 (14) 
0.00REUSE-0.064 (16) -0.069 (16) -0.074 (16) -0.085 (11) -0.066 (16) -0.064 (16) 0.378 (15) 0.401 (15) 0.02Constant-Metric0.000 (15) 0.000 (15) 0.000 (15) -1.000 (16) 0.000 (15) 0.000 (15) 0.371 (16) 0.371 (16) 0.00", "figure_id": "tab_23", "figure_label": "20", "figure_type": "table" }, { "figure_caption": "The group-by-item segment-level correlations on WMT'22 en-ru using the Unbabel MQM score normalization largely follow the results in the main body of this work, which use the Google normalization.", "figure_data": "", "figure_id": "tab_24", "figure_label": "21", "figure_type": "table" } ]
Daniel Deutsch; George Foster; Markus Freitag (Google)
[ { "authors": "Ondřej Bojar; Yvette Graham; Amir Kamran", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Results of the WMT17 Metrics Shared Task", "year": "2017" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Language Models are Few-Shot Learners", "year": "2020" }, { "authors": "Chris Callison-Burch; Philipp Koehn; Christof Monz; Kay Peterson; Mark Przybocki; Omar Zaidan", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Findings of the 2010 Joint Workshop on Statistical Machine Translation and Metrics for Machine Translation", "year": "2010" }, { "authors": "Patrick Fernandes; António Farinhas; Ricardo Rei; G C José; Perez De Souza; Graham Ogayo; Andre Neubig; Martins", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Quality-Aware Decoding for Neural Machine Translation", "year": "2022" }, { "authors": "Markus Freitag; George Foster; David Grangier; Viresh Ratnakar; Qijun Tan; Wolfgang Macherey", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b4", "title": "Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation", "year": "2021" }, { "authors": "Markus Freitag; David Grangier; Qijun Tan; Bowen Liang; ; ", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b5", "title": "High Quality Rather than High Model Probability: Minimum Bayes Risk Decoding with Neural Metrics", "year": "2022" }, { "authors": "Markus Freitag; Ricardo Rei; Nitika Mathur; Chi-Kiu Lo; Craig Stewart; Eleftherios Avramidis; Tom Kocmi; George Foster; Alon Lavie; F T André; Martins", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Results of WMT22 Metrics Shared Task: Stop Using BLEU -Neural Metrics Are Better and More Robust", "year": "2022" }, { "authors": "Markus Freitag; Ricardo Rei; Nitika Mathur; Chi-Kiu Lo; Craig Stewart; George Foster; Alon Lavie; Ondřej Bojar", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Results of the WMT21 Metrics Shared Task: Evaluating Metrics with Expert-based Human Evaluations on TED and News Domain", "year": "2021" }, { "authors": "Maurice G Kendall", "journal": "Biometrika", "ref_id": "b8", "title": "A New Measure of Rank Correlation", "year": "1938" }, { "authors": "Maurice G Kendall", "journal": "Biometrika", "ref_id": "b9", "title": "The Treatment of Ties in Ranking Problems", "year": "1945" }, { "authors": "Tom Kocmi; Christian Federmann", "journal": "", "ref_id": "b10", "title": "Large Language Models Are State-of-the-Art Evaluators of Translation Quality", "year": "2023" }, { "authors": "Tom Kocmi; Christian Federmann; Roman Grundkiewicz; Marcin Junczys-Dowmunt; Hitokazu Matsushita; Arul Menezes", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "To Ship or Not to Ship: An Extensive Evaluation of Automatic Metrics for Machine Translation", "year": "2021" }, { "authors": "Arle Lommel; Hans Uszkoreit; Aljoscha Burchardt", "journal": "Tradumàtica", "ref_id": "b12", "title": "Multidimensional Quality Metrics (MQM): A Framework for Declaring and Describing Translation Quality Metrics", "year": "2014" }, { "authors": "Qingsong Ma; Ondřej Bojar; Yvette Graham", 
"journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Results of the WMT18 Metrics Shared Task: Both characters and embeddings achieve good performance", "year": "2018" }, { "authors": "Matouš Macháček; Ondřej Bojar", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Results of the WMT13 Metrics Shared Task", "year": "2013" }, { "authors": "Matouš Macháček; Ondřej Bojar", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Results of the WMT14 Metrics Shared Task", "year": "2014" }, { "authors": "Nitika Mathur; Timothy Baldwin; Trevor Cohn", "journal": "", "ref_id": "b16", "title": "Tangled up in bleu: Reevaluating the evaluation of automatic machine translation evaluation metrics", "year": "2020" }, { "authors": "Evgeny Matusov; Gregor Leusch; Oliver Bender; Hermann Ney", "journal": "", "ref_id": "b17", "title": "Evaluating Machine Translation Output with Automatic Sentence Segmentation", "year": "2005" }, { "authors": "Stefano Perrella; Lorenzo Proietti; Alessandro Scirè; Niccolò Campolungo; Roberto Navigli", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "MaTESe: Machine Translation Evaluation as a Sequence Tagging Problem", "year": "2022" }, { "authors": "Ricardo Rei; G C José; Duarte De Souza; Chrysoula Alves; Ana C Zerva; Taisiya Farinha; Alon Glushkova; Luisa Lavie; Coheur; F T André; Martins", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "COMET-22: Unbabel-IST 2022 Submission for the Metrics Shared Task", "year": "2022" }, { "authors": "Ricardo Rei; Craig Stewart; Ana C Farinha; Alon Lavie", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "COMET: A Neural Framework for MT Evaluation", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 75.72, 91.43, 388.05, 42.46 ], "formula_id": "formula_0", "formula_text": "τ a = (C -D)/(C + D + T h + T m + T hm ) Kendall (1938) - τ b = (C -D)/ (C + D + T h )(C + D + T m ) Kendall (1945) 2021-2022 τ c = (C -D)/(n 2 ( k-1 k )) Stuart (1953) - τ 10 = (C -D -T m )/(C + D + T m ) Callison-" }, { "formula_coordinates": [ 3, 75.72, 158.23, 351.17, 19.76 ], "formula_id": "formula_1", "formula_text": "τ eq = (C + T hm -D -T h -T m )/(C + D + T h + T m + T hm ) This work ( §6) - acc eq = (C + T hm )/(C + D + T h + T m + T hm )" }, { "formula_coordinates": [ 4, 369.33, 247.72, 91.89, 27.17 ], "formula_id": "formula_2", "formula_text": "m 1 = [0, 0, 0, 0, 2, 1] m 2 = [0," }, { "formula_coordinates": [ 6, 91.94, 96.21, 176.13, 51.47 ], "formula_id": "formula_3", "formula_text": "ties precision T hm /(T hm + T m ) ties recall T hm /(T hm + T h ) correct-rank precision C/(C + D + T h ) correct-rank recall C/(C + D + T m )" }, { "formula_coordinates": [ 10, 394.14, 601.79, 131, 15.7 ], "formula_id": "formula_4", "formula_text": "{(h ij , m ij )} N,M i=1,j=1(1)" }, { "formula_coordinates": [ 10, 358.76, 663.89, 166.38, 33.71 ], "formula_id": "formula_5", "formula_text": "1 M M j=1 Corr {(h ij , m ij )} N i=1 (2)" }, { "formula_coordinates": [ 10, 359.17, 743.25, 165.97, 33.71 ], "formula_id": "formula_6", "formula_text": "1 N N i=1 Corr {(h ij , m ij )} M j=1 (3) Metric Notation < = > Human < C T m D = T h T hm T h > D T m C" }, { "formula_coordinates": [ 11, 126.88, 202.78, 92.07, 217.48 ], "formula_id": "formula_7", "formula_text": "τ 10 < = > Human < 1 -1 -1 = X X X > -1 -1 1 Metric τ 13 < = > Human < 1 X -1 = X X X > -1 X 1 Metric τ 14 < = > Human < 1 0 -1 = X X X > -1 0 1" }, { "formula_coordinates": [ 11, 115.48, 656.22, 174.39, 63.67 ], "formula_id": "formula_8", "formula_text": "τ = h,m∈{<,=,>} C h,m ̸ =X C h,m |S h,m | h,m∈{<,=,>} C h,m ̸ =X |S h,m |(4)" }, { "formula_coordinates": [ 11, 323.55, 86.45, 129.26, 156.36 ], "formula_id": "formula_9", "formula_text": "τ eq < = > Human < 1 -1 -1 = -1 1 -1 > -1 -1 1 Metric acc eq < = > Human < 1 0 0 = 0 1 0 > 0 0 1 Table 9:" }, { "formula_coordinates": [ 12, 306.14, 524.99, 102.2, 38.78 ], "formula_id": "formula_10", "formula_text": "C ← C + 1 32: else 33: D ← D + 1 34:" } ]
10.1111/josl.12080
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b7", "b31", "b30", "b36", "b26", "b51", "b43", "b47", "b38", "b45", "b14", "b52" ], "table_ref": [], "text": "Empowerment -the act of supporting someone's ability to make their own decisions, create change, and improve their lives -is a goal in many social interactions. For instance, teachers aim to empower their students, social workers aim to empower their clients, and politicians aim to empower their supporters. A growing body of psychology and linguistics research shows how empowerment -and disempowerment -can impact people by increasing their sense of self-efficacy and self-esteem (Chamberlin, 1997;Osborne, 1994).\nUnderstanding how empowerment is conveyed in language becomes more important as language technologies are increasingly being used in interactive contexts like education (Molnár and Szüts, 2018), workplace communication (Prabhakaran Figure 1: Two examples of annotated conversations in TalkUp. Post 1 is straightforwardly empowering, but Post 2 is inherently ambiguous and could either be interpreted as helpful advice or as a dismissive, belittling comment. Social context can also affect Post 2's implications: the post might elicit different reactions if it were written by a woman to a man or vice versa. and Rambow, 2014a; Prabhakaran et al., 2012), and healthcare (Locke et al., 2021;Sharma et al., 2021a). Whether we are building dialogue agents for mental health support, supplementing children's education, or analyzing managers' feedback to their employees, language that empowers or disempowers the reader can have drastically different effects.\nWith a few exceptions (Ziems et al., 2022;Sharma et al., 2023), prior NLP research has focused on flagging harmful text, but there has been much less investigation of what makes text helpful. Other works have studied related concepts like condescension (Wang and Potts, 2019) and implicit toxicity (Breitfeller et al., 2019a;Sap et al., 2020;Upadhyay et al., 2022), and we build off of these to construct a dataset that complements those tasks.\nConsider the two examples of potentially empowering interactions in Figure 1. Empowerment exhibits the importance of social context in understanding the pragmatics of language: whether an exchange is interpreted as empowering or disempowering may depend on the participants' social roles and the power dynamics implied by their identities, including race, age, socioeconomic class, and many other social dimensions. Furthermore, empowerment cannot be easily detected with sentiment or emotion analyzers, since interactions with negative implicatures can be empowering (e.g., you can quit!!!), and messages that are positive on the surface can be disempowering (e.g., you are so articulate for a girl!) (Field and Tsvetkov, 2020). Modern language technologies do not model social context or deeper pragmatic phenomena, and thus are unable to capture or control for empowerment. This work makes concrete steps towards understanding these linguistic phenomena by investigating the following research questions: [RQ1] What makes language empowering, and how is it manifested in language? 
[RQ2] Can empowerment be detected with computational approaches?\nOur contributions are threefold: (1) We introduce the new task of empowerment detection, grounding it in linguistic and psychology literature.\n(2) We create TalkUp, a novel dataset of Reddit posts labeled for empowerment, the fine-grained type of empowerment felt by the reader, and the social relationships between posters and readers. (3) We analyze the data and demonstrate how it can be used to train models that can capture empowering and disempowering language and to answer questions about human behavior.\nUltimately, TalkUp aims to assist future researchers in developing models that can detect, generate, and control for empowerment, and to facilitate broader exploration of pragmatics. We have by no means covered every possible social dimension, but by focusing on a few social factors in the simplified setting of two-turn dialogues, we hope that TalkUp's framework can make strides toward understanding language in more complex social interactions, such as conversations involv-ing intersectionality as well as longer multi-turn dialogues." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b7", "b28", "b7" ], "table_ref": [], "text": "We discuss empowerment following its definitions in clinical psychology (Chamberlin, 1997). We find this most appropriate for studying language because clinical psychology practice is usually centered around dialogue between clinician and patient, and because it involves concrete implications about individuals rather than vague cultural phenomena. Thus, summarizing the different characteristics of empowerment described in psychology literature, we define empowering text as text that supports the reader's rights, choices, selffulfillment, or self-esteem.\nIncorporating empowerment in dialogue agents, mental health support chatbots, educational assistants, and other social-oriented NLP applications is clearly a desirable goal. However, empowerment is inherently challenging to operationalize for several reasons. First, it is a flexible term that describes a wide range of behaviors across many domains -empowerment in economics, for example, looks very different from empowerment in a therapy session (McWhirter, 1991). We follow recent literature outside of NLP in trying to distill these varied interactions into a concrete definition. Second, empowerment is implicit: it is often read in between the lines rather than declared explicitly. Text might be empowering by reminding someone of their range of options to choose from, encouraging them to take action, asking for and valuing their opinion, or even validating their feelings (Chamberlin, 1997). Third, empowerment is heavily dependent on social context: whether or not a person is empowered depends on who is saying what to whom. We incorporate these consideration in our data collection process described next." }, { "figure_ref": [], "heading": "The TalkUp Dataset", "publication_ref": [ "b7", "b46", "b6" ], "table_ref": [], "text": "We now discuss the TalkUp dataset's construction.\nAnnotation Scheme Our annotation task 2 was shaped through multiple pilot studies, where we learned that context is useful for judging a post, annotators' free-response descriptions of social roles lack consistency, and posts are often inherently ambiguous. We elaborate on these findings in Appendix D. Based on these insights, the final task, which is illustrated in Figure 1, consists of three main parts:\n(1) Rating the post on an empowerment scale. 
This scale has \"empowering\" on one end, \"disempowering\" one the other, and \"neutral\" in the middle. We define text to be empowering if it supports the reader's rights, choices, self-fulfillment, or self-esteem, and disempowering if it actively denies or discourages these things. Notably, posts that discuss an external topic without making any implications about the conversants, such as a comment about a celebrity's lifestyle, are defined as neutral.\n(2) Selecting why a post is empowering or disempowering. We adopt the 15 points from Chamberlin (1997), with slight modifications to adapt them to written text, as reasons why a post can be empowering to a reader. Refer to Appendix E for the complete list of 15 reasons and corresponding definitions provided to annotators. If a post is empowering, it should imply one or more of these reasons (e.g. that the reader is capable of creating change), and if it is disempowering, it should imply the opposite (e.g. that the reader is not capable of creating change).\n(3) Selecting whether the poster and commenter have agreeing or disagreeing stances. We define \"agreeing\" and \"disagreeing\" loosely in order to accommodate a wide range of social relationships: \"agree\" means that the poster and reader support the same point of view on a topic, whether it be politics, sports teams, or music preferences. \"Disagree\" means that they take opposing sides.\nData Source TalkUp consists of English Reddit posts from RtGender (Voigt et al., 2018), a collection of 25M comments on posts from five different domains, each labeled with the genders of the commenter and the original poster. We take advantage of the fact that these conversations are already annotated for gender, which provides contextual information about who is speaking to whom and allows us to explore at least one dimension of social context. 3Though RtGender contains posts from several platforms, given our focus on conversational language, we specifically selected RtGender posts from Reddit because they were the most generalizable and contained natural-sounding conversations. We manually chose five subreddits, aiming to include (1) a diverse range of topics and user demographics, and (2) discussions that are personal rather than about external events unrelated to the conversants. The subreddits are listed in Table 1.\nWe filtered data from these subreddits to exclude posts or responses that exceeded 4 sentences in length or were shorter than 5 words. Additionally, we excluded posts with \"Redditisms\", and posts that were edited after they were initially posted (marked \"EDIT:\" by the original poster) and posts that began with quoted text (marked \">\") were removed.\nFrom pilot studies, we found that models can help to surface potentially empowering posts and help increase the yield of posts that were actually labeled as empowering by annotators. We trained a RoBERTa-based regression model with the data we collected from the pilot studies to predict the level of empowerment (0 for disempowering, 0.5 for neutral, 1 for empowering) in Reddit posts. We used this model to rank and select the top-k posts for annotation, and continually updated the model as we collected more data. 
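The surface-and-annotate loop described above can be sketched as a simple score-and-rank step. The snippet below is a hypothetical illustration with placeholder names, not the authors' code: the score function stands in for the RoBERTa-based regressor trained on the annotations collected so far, and the batch mixes its top-ranked posts with randomly sampled ones.

```python
import random

def next_annotation_batch(unlabeled_posts, score_fn, k, seed=0):
    """Build one annotation batch: the k posts the regression model scores as most
    likely to be empowering (0 = disempowering, 0.5 = neutral, 1 = empowering),
    plus k randomly sampled posts so the batch is not purely model-surfaced."""
    ranked = sorted(unlabeled_posts, key=score_fn, reverse=True)
    surfaced = ranked[:k]
    pool = ranked[k:]
    rng = random.Random(seed)
    random_half = rng.sample(pool, min(k, len(pool)))
    return surfaced + random_half

# After each batch is annotated, the regressor would be re-trained on all labels
# collected so far and next_annotation_batch() called again for the next batch.
```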
4 To ensure we annotate a diverse range of posts, our final annotation task was done with half model-surfaced posts and half randomly-sampled posts.\nAnnotation on Amazon Mechanical Turk With 1k model-surfaced posts and 1k randomly-sampled posts spread evenly among the five subreddits, we collected annotations via Amazon Mechanical Turk (AMT). Appendix F shows a screenshot of the user interface displayed to annotators. Each example was annotated by 3 different workers.\nTo ensure high quality annotations, we required annotators to have AMT's Masters Qualification,5 a task approval rate of at least 95%, and a minimum of 100 prior tasks completed. Additionally, since our task requires English fluency, we limited annotators to those located in the US or Canada. Workers were compensated at $15/hour, and we calculated the reward per task based on the average time spent on each annotation in our pilot studies.\nFollowing best practices to increase annotator diversity (Casey et al., 2017) of data to be released at different times of day over multiple days. After each batch was completed, we manually quality-checked the responses and computed each annotator's standard deviation. We discarded data from unreliable annotators, including those who straightlined through many annotations with the same answer, those who clearly had not read instructions, and those whose alignment scores were more than 2 standard deviations from the mean. Annotator alignment scores were calculated by dividing the number of disagreements by the number of agreements between their label and the majority vote. We subsequently released new batches to re-label data previously annotated by the identified unreliable annotators." }, { "figure_ref": [], "heading": "Data Statistics", "publication_ref": [], "table_ref": [], "text": "We combined the maybe empowering with the empowering label, and did the same for the disempowering labels. We then used majority voting to aggregate the three annotations into the final labels for empowerment, ambiguity, and stance for each post. When all three annotators disagreed on the empowerment label (i.e., one vote each for empowering, neutral, and disempowering), we marked it as No Consensus and considered it an ambiguous case. For reason labels, where annotators can mark more than one categories per example, we only kept the reason labels that were marked by at least two annotators.\nTable 1 shows the overall size of our dataset and the distribution of labels, the number of ambiguous cases, and percentage of posts made by women across the entire dataset and also by different subreddits. We annotated 400 posts from 5 different subreddits resulting in a total of 2000 samples. Of these, 962 were labeled as empowering, 129 as disempowering, and 267 as ambiguous, with 642 being labeled as neutral. We note that 265 out of the 962 empowering cases had no final reason marked, indicating that there was no reason category annotators agreed on.\nThe inter-annotator agreement, Krippendorff's alpha, was 0.457, and the percentage agreement was 65.2%. These agreement scores are reasonable given the complexity and nuance of this task -we would neither expect nor want to have perfect annotator agreement because it is an inherently ambiguous problem even for humans, and there is often no objective \"ground truth\" on whether a text is empowering or not. Our agreement scores are comparable to those of other computational social science papers on tasks of similar nature, especially when concerning pragmatics. 
For example, our percentage agreement is higher than that of ElSherief et al. ( 2021)'s dataset on latent hatred, and our Fleiss's kappa is similar to that of the Microaggression dataset (Breitfeller et al., 2019b)." }, { "figure_ref": [], "heading": "Data Analysis", "publication_ref": [], "table_ref": [], "text": "We present preliminary analyses of TalkUp. Empowerment is a nuanced phenomenon in pragmatics and deeper exploration of social and linguistic variables remains open for future work. The analyses we present here provide some initial, surface-level insights into what makes language empowering." }, { "figure_ref": [ "fig_0" ], "heading": "Characteristics of Empowering Language", "publication_ref": [ "b2" ], "table_ref": [], "text": "We use the LIWC-22 software to compute LIWC features for all annotated posts (Boyd et al., 2022). These features measure the percentage of word overlap between the text and predefined lexicons that capture different social and psychological characteristics of language, such as prosocial words or words associated with positive tone. For a more concise and generalized analysis, some related features are combined into compound features: the I and You features are grouped into one feature I+You, We and They into We+They6 , and male and female into gendered words. We standardize LIWC feature scores using the mean and variance calculated from TalkUp's randomly sampled posts. Model-surfaced posts are excluded as they may not reflect the distribution of Reddit posts in the wild.\nTo understand how each of these features contributes to empowerment in language, we train a linear regression model to predict the likelihood of a post being empowering. Figure 2 shows the regression coefficients assigned to each feature. Looking at the positive coefficients reveals that empowerment is associated with lexical features like clout, allure, prosocial words, and exclamation marks. Meanwhile, disempowerment is associated with features that have negative coefficients, such as big words and words-per-sentence, which may indicate sentence complexity. We expand on a few of the most notable findings below.\nTone vs. Emotion. We find that the tone of language is more influential to empowerment than the emotion conveyed. Positive tone has a significantly higher coefficient than positive emotion; likewise, negative tone is highly associated with disempowerment, while negative emotion is not statistically significant. This suggests that the concept of empowerment is distinct from sentiment and cannot be captured by sentiment analysis models alone.\nPower. Power is not a statistically significant feature in predicting empowerment. This corroborates the idea that empowerment is not the same as power -empowerment is a more nuanced and subtle concept that extends beyond power-related lexicons, relying more on the implications between the lines like the tone of the message.\nSingular vs. Plural Pronouns. Interestingly, empowerment and disempowerment tend to use different types of pronouns. Singular pronouns (I, you) are positively associated with empowering language, while plural pronouns (we, they) are linked to disempowering language. Our manual inspections suggest one possible explanation: people who write empowering posts tend to speak directly to the listener, and also include elements of their own personal experience, hence the prevalence of you and I pronouns. 
Disempowering conversations are less personal and individualized, often making generalized assumptions or judgments about people." }, { "figure_ref": [], "heading": "Empowering Language by Gender", "publication_ref": [ "b0", "b48", "b18" ], "table_ref": [], "text": "As a preliminary analysis of empowerment across one social dimension, we explore the differences in empowering posts written by men and women. First, we standardize the LIWC feature values for men and women's empowering language over the entire dataset. We find that women's empowering language displays significantly higher levels of positive tone and positive emotions than men. Women also use more exclamation points, while men use more swear words. These findings align with prior works in sociolinguistics that have associated exclamation points with higher expressiveness and excitability (Bamman et al., 2014;Waseleski, 2017;Güvendir, 2015), which is usually more socially acceptable for women. Meanwhile, men's use of strong or offensive language is linked with masculinity or aggressiveness, and is less socially accepted in women. Additionally, there are other features where women and men's empowering posts diverge -women use more present tense than men, and men are much less likely to use gendered words.\nWe then control for gender, comparing men's empowering language with all men's posts, and likewise for women. The results show that positive tone, positive emotions, and exclamation marks remain strongly correlated with empowering language even after accounting for gender. However, considering gender does impact the degree of positivity and the use of exclamation marks. Men's empowering language, when compared to men's average language, displays a greater increase in positive tone, positive emotions, and the use of exclamation marks compared to women's empowering language in relation to their average language. This suggests that men tend to exhibit a more pronounced shift towards positive and expressive language when expressing empowerment, whereas women's empowering language already aligns closely with their overall language patterns. Our findings highlight the complex interplay between language, gender, and empowerment, motivating future research into the influence of social factors on communication of empowerment. More detailed analyses on empowerment differences by gender and subreddit can be found in Appendix A." }, { "figure_ref": [ "fig_1" ], "heading": "Reasons Why Posts Are Empowering", "publication_ref": [], "table_ref": [], "text": "Figure 3 illustrates the distribution of reasons selected by at least two annotators for why a post was empowering/disempowering, broken down by subreddit. The most common reasons a post was considered empowering are encouraging expression of emotions (40.6%), supporting the reader's self-image (26.8%), and supporting the reader's ability to grow (21.1%) and change (18.8%).\nNotably, there are significant differences in the reasons most commonly used in different subreddits. For example, the teenagers and relationships subreddits tend to empower users by promoting expression of emotions, while empowerment in Fitness was more focused on encouraging people to improve themselves and make changes. The unique distributions of reasons among different communities and topics of discussion suggests that empowerment serves diverse purposes and implies different meanings depending on the context. 
Future work could explore which techniques should be used to empower people in specific contexts, such as empowering clients in clinical psychology or students in educational settings, based on the desired interaction goals. " }, { "figure_ref": [], "heading": "Empowerment and Poster-Commenter Alignment", "publication_ref": [], "table_ref": [], "text": "While a commenter can take either an agree, neutral, or disagree stance with the poster, most empowering posts were in conversations where the poster and commenter agreed (79.6%). Likewise, most disempowering posts occurred when the poster and commenter disagreed (45.5%). Intuitively, this makes sense for the majority of cases -people often respond agreeably to empowerment and negatively to disempowerment. Importantly, however, this is not always the case: empowering posts can sometimes have commenters who disagree, and disempowering posts can have commenters who agree. These cases often involve more complex pragmatics. Empowering posts that contain toxic positivity are frequently met with disagreement, and sometimes commenters will reject or minimize empowering compliments for the sake of politeness. Empowerment can also be met with antagonism from an ill-intentioned commenter, regardless of how genuine the original post may be. Disempowering posts that disparage a particular group might receive an agreeing comment from someone who also shares that view of the group. We elaborate on these conversational patterns in Appendix A.3. Overall, the empowering-disagree and disempowering-agree cases provide a rich corpus for studying implicature and interactions in social contexts." }, { "figure_ref": [], "heading": "Modeling Empowering Language", "publication_ref": [ "b25", "b5", "b12" ], "table_ref": [], "text": "To explore how well empowerment can be captured by computational methods, we present empowerment detection experiments with two large language models: fine-tuned RoBERTa and zeroshot GPT-3. 7 We note that our goal here is not to build a state-of-the-art model, but to give a general picture of how well existing models work and to illustrate the usefulness of our dataset.\nFine-tuned RoBERTa. We assess how well empowerment can be identified by a pre-trained RoBERTa model (Liu et al., 2019) fine-tuned on TalkUp, and we conduct an ablation study to examine the importance of contextual information in helping the model classify a post as empowering, disempowering, or neutral. We test four model variants: post, +response (post and response), +context (post, posters' gender, subreddit), +all (post, response, context). We divide 1733 unambiguous samples from TalkUp into 60:20:20 for train:validation:test sets and select the model with best validation macro-f1 score. 8 Table 2 presents the average macro-f1 scores across 10 separate runs using different random seeds on the test set. The results show that additional context improves model performance.\nZero-Shot GPT-3. Additionally, we evaluate GPT-3 Davinci's (Brown et al., 2020) ability to detect empowerment using prompts. We design seven different prompts for each of the four combinations of post+context, and generate responses. While most of GPT-3's responses are single word (e.g. \"empowering\"), some are longer. To map GPT-3's responses to empowerment labels, we use a simple lexical counting method: if the generated text contains more empowering-related words (e.g. empowering, empowered, empower) than words related to other labels, it is classified as empowering. 
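A minimal sketch of this response-to-label mapping, together with the per-post majority vote described next, might look like the following; the keyword lists and helper names are illustrative, not the exact ones used.

```python
import re
from collections import Counter

# Illustrative keyword lists: generated text is mapped to whichever label's
# related words appear most often.
LABEL_WORDS = {
    "neutral": ["neutral"],
    "empowering": ["empowering", "empowered", "empower"],
    "disempowering": ["disempowering", "disempowered", "disempower"],
}

def count_terms(text, words):
    # Word-boundary matching so "empowering" is not counted inside "disempowering".
    return sum(len(re.findall(rf"\b{re.escape(w)}\b", text)) for w in words)

def map_response(generated_text):
    """Map one generated response to a label by lexical counting."""
    text = generated_text.lower()
    counts = {label: count_terms(text, words) for label, words in LABEL_WORDS.items()}
    # If no label word appears, this falls back to "neutral" (listed first).
    return max(counts, key=counts.get)

def classify_post(responses):
    """Final label for a post is the majority vote over its prompt responses."""
    votes = Counter(map_response(r) for r in responses)
    return votes.most_common(1)[0][0]

# e.g. classify_post(["Empowering.", "This post is empowering.", "Neutral."])
```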
GPT-3's final classification for each post takes the majority vote over its responses to the seven prompts. A full list of our GPT-3 prompts can be found in Appendix C.2.\nOur results indicate that GPT-3 performs poorly in zero-shot settings compared to RoBERTa-based classifiers fine-tuned on TalkUp. This reveals that even large language models cannot effectively capture empowering language, highlighting the importance of having a carefully annotated dataset of nuanced examples like TalkUp.\nis impractical for most users, and because our preliminary experiments indicated that few-shot prompts resulted in lower performance than zero-shot. Although in-context examples often improve performance, there are cases in which few-shot underperforms zero-shot due to models becoming excessively fixated on the provided examples and struggling to generalize effectively. This phenomenon is documented in numerous previous studies (e.g. Fei et al., 2023), and we consistently observed this in our case. 8 Specific training details and hyper-parameters can be found in Appendix B.3." }, { "figure_ref": [], "heading": "Ambiguity of Empowering Language", "publication_ref": [], "table_ref": [], "text": "TalkUp contains 228 samples that either were labeled as \"ambiguous\" by at least two annotators, or were labeled \"no consensus\" because all three annotators marked different answers for the empowerment question. We qualitatively analyzed this subset of TalkUp, and we find that these ambiguous posts are not \"bad data,\" but rather are linguistically interesting because they are ambiguous -they are examples of language that could reasonably be interpreted in several different ways.\nFor example, the post \"Maybe call a relative or friend who has a car? Youll figure it out. I wish you luck, kid.\" was unanimously labelled as \"empowering\" and \"ambiguous\" by annotators. This makes sense -the post overall seems to provide a helpful suggestion, but calling the responder \"kid\" could be interpreted in different ways (e.g. as an endearing nickname vs. a condescending title) depending on the social relationship between the poster and the responder. Notably, many of the posts with inherent ambiguity display sarcasm, such as the posts \"i love you too?!\" and \"thats grimy as f*ck but sure you do that.\" Sarcasm, by design, disguises a negative message in positive words, and so a sarcastic post could be interpreted either way depending on whether the sarcasm was meant positively or negatively.\nWe also investigated how GPT-3 handles such ambiguous cases. We find that GPT-3 tends to classify them as neutral, even for explicitly empowering posts such the above example. Instances in which the posts carried a sarcastic tone were commonly interpreted by GPT-3 as neutral as well, indicating that simultaneously empowering and ambiguous language is poorly understood by the model. The fact that ambiguity is still challenging for large models motivates the need for further work in this area, and TalkUp provides diverse examples of ambiguous language that can be used to to work towards this end." }, { "figure_ref": [], "heading": "Example Application: Unearthing Empowerment Patterns on Reddit", "publication_ref": [], "table_ref": [ "tab_3", "tab_4" ], "text": "As a case study, we demonstrate how TalkUp and the trained empowerment classifier can be used to uncover interesting patterns in how people use empowering language. Specifically, we apply the trained classifier in §4. communicate. 
We analyze empowering and disempowering posts in different subreddits and by different genders of the poster and responder.\nBy Subreddit Table 3 shows the percentage of empowering and disempowering posts and responses in the five subreddits of TalkUp. The results indicate that the subreddits have significantly different degrees of empowerment, and that certain subreddits (e.g. relationship, Fitness) are significantly more empowering than others (e.g. AskReddit). Our model can be used to monitor the overall empowerment level of communities and identify unusual patterns, such as a significant rise in disempowerment. Furthermore, we find that there are more empowering responses than posts in total. On the contrary, there are more disempowering posts than responses across all subreddits. This may be because responses are often directed towards specific posts or users, and as a result, the writer may be more conscious of their tone and try to be more empowering compared to posts.\nBy poster and responder gender Table 4 shows the percentage of empowering and disempowering content by the gender of posters and responders. Overall, women seem to post and interact with more empowering content. Unsurprisingly, the results show that of all the posts predicted to be empowering, women wrote a considerably higher percentage of them than men. Interestingly, however, women are also responsible for a slightly higher percentage of disempowering posts than men. Another surprising finding is that posts written by men that were commented on by women tend to be more empowering or more disempowering than those commented on by men, suggesting that women not only post more empowerment-charged language, but they also engage with more empowerment-charged posts. This may be tied to factors like the topics or types of posts that women tend to engage with and could be used to answer sociological questions about gender and social media. 9 Given that responses are only available for the posts and not for the responses, and that some samples in the data do not provide the gender of the responder, we used a model that only incorporates subreddit information as additional context to the text itself." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b27", "b27", "b27", "b39", "b32", "b19" ], "table_ref": [], "text": "To our knowledge, Mayfield et al. (2013) is the only prior work exploring empowerment in NLP, but the contributions of our work are quite different. Mayfield et al. (2013) primarily focus on an algorithm for predicting rare classes and use empowerment as an example. In contrast, we focus on understanding empowering language itself, before developing automated detection tools. We explore the reasons behind empowerment, considering multiple dimensions of social context such as gender, topic, and poster-commenter alignment. Mayfield et al. (2013) use non-public data from a specific cancer support group, while TalkUp spans diverse topics and user bases, making our scope broader and more generalizable.\nAs empowering language is not well understood in NLP, our work has also drawn insights from research on related concepts:\nPower. Danescu-Niculescu-Mizil et al. (2011) develop a framework for analyzing power differences in social interactions based on how much one conversant echoes the linguistic style of the other. 
Prabhakaran and Rambow (2014a,b) predict power levels of participants in written dialogue from the Enron email corpus, and several of their other works explore power dynamics in other contexts, such as gender (Prabhakaran et al., 2014b) and political debates (Prabhakaran et al., 2014a).\nOur work studies empowerment rather than power. Power is certainly a closely related concept, but empowerment is a distinct linguistic phenomenon -it concerns not just static power levels, but interactions that increase or decrease a person's power, and it is also a broader concept that encompasses things like self-fulfillment and self-esteem. While power has primarily been analyzed at the word level, such as by examining connotations of particular verbs (Sap et al., 2017;Park et al., 2021), our work attempts to look at higher-level pragmatics -implications that may not be captured by word choice alone, but suggested between the lines.\nCondescension. The closest concept to empowerment that has been more thoroughly studied in NLP is condescension. Prior works have defined condescension as language that is not overtly negative, but that assumes a status difference between the speaker and listener that the listener disagrees with (Huckin, 2002). Intuitively, condescension can be interpreted as roughly the opposite of empowerment: it implicitly suggests that the listener has lower status or worth.\nOur work particularly builds upon Wang and Potts (2019): they develop TalkDown, a dataset of Reddit posts labeled as \"condescending\" or \"not condescending.\" Specifically, they identify condescending posts by looking for replies that indicate the original post is condescending. Our approach is parallel to this work: we likewise surface Reddit posts whose responses indicate that the original post is empowering (thus aligning with our definition of empowerment in §2 as an effect on the listener). TalkUp complements TalkDown by focusing on the positive aspect of such language: instead of only identifying text as condescending or not condescending, we distinguish between disempowering, empowering, and neutral posts." }, { "figure_ref": [], "heading": "Future Directions", "publication_ref": [ "b50", "b24", "b21", "b9", "b8", "b23" ], "table_ref": [], "text": "In this work, we focus only on empowerment classification and detection, with our primary contribution being the proposal of a new dataset to facilitate research in a new area of computational sociolinguistics. However, TalkUp can be used not only to detect empowerment, but also to generate more empowering language. As in Sharma et al. (2021b), we believe a classifier trained with our data can be used to assign rewards that tailor a generation model to produce more empowering outputs. An empowerment classifier can also be used for controllable text generation with constrained decoding, as in Yang and Klein (2021), Liu et al. (2021), and Kumar et al. (2021). Additionally, a model that can control for empowerment could be used to suggest edits to make human-written text more empowering, which has potential applications in real-world dialogue settings like education and psychotherapy.\nTalkUp focuses on simple two-turn interactions with 3 social variables (gender, alignment, and topic), but its framework can extend to more complex social interactions. For example, there are many other social roles that can influence power dynamics, including occupation (e.g. manager vs. employee), race (e.g. White vs Person of Color), and age (e.g. old vs. young person). 
Different combinations of these identities can result in further intersectional dynamics (Crenshaw, 1990;Collins and Bilge, 2020;Lalor et al., 2022). Additionally, since most real-world conversations are long back-and-forth exchanges, we encourage future work to explore empowerment in multi-turn dialogues." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We explore the problem of empowerment detection, grounding it in relevant social psychology and linguistics literature. To facilitate studies of empowerment, we create TalkUp, a high-quality dataset of Reddit posts labeled for empowerment and other contextual information. Our preliminary analyses demonstrate that empowerment is not captured by existing NLP methods and models, but that it can be detected with our dataset. Furthermore, we demonstrate the importance of social context in understanding empowering language with different genders, poster-commenter alignments, and topics of discussion. In studying empowerment, we work towards bigger open challenges in pragmatics, implicature, and social context in NLP." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [ "b44", "b29", "b16", "b1" ], "table_ref": [], "text": "In constructing our study, we took precautions to ensure the task design, data collection and handling are done ethically and according to current recommended practices and guidelines (Townsend and Wallace, 2016;Mislove and Wilson, 2018;Gebru et al., 2018;Bender and Friedman, 2018). Specifically, we ensured fair compensation by calculating the pay based on minimum wage in CA (higher than the average pay worldwide, including most U.S. states). To avoid exposing the annotators to potentially offensive or otherwise harmful content from social media, we manually checked every data sample. Beyond the scientific goal of our work to understand sociolinguistic characteristics of empowering language and open new directions to NLP research on deeper pragmatic phenomena, the practical goal is to advance NLP technologies with positive impact through understanding and incorporating empowerment in practical applications including education, therapy, medicine, and more." }, { "figure_ref": [ "fig_6", "fig_3" ], "heading": "Limitations", "publication_ref": [ "b17", "b13" ], "table_ref": [], "text": "We identify three primary limitations of our work. First, to protect the anonymity of annotators, we did not explicitly control for annotator demographics. It is thus possible that our annotator demographics are imbalanced, which can impact annotation decisions and potentially incorporate biases in NLP models built on the dataset (Geva et al., 2019).\nSecond, with the goal of incorporating social context, we relied on gender annotations from RtGender, the corpus we draw from to annotate empowering conversations. Thus, TalkUp only centers on binary gender identities and is limited by the scarcity of data on nonbinary identities in the RtGender dataset. Building resources and methods inclusive of queer identities is an important area for future work. Additionally, RtGender's gender labels were constructed by finding users who posted with a gender-indicating flair, which means that RtGender only contains posts from a subset of users who voluntarily disclosed their gender; this may silence the voices of users who are less likely to share their gender, including nonbinary users. Further, future work on empowerment should incorporate broader social contexts, e.g. 
relationships involving inherent power hierarchies (Prabhakaran and Rambow, 2014a), more dimensions of identity like race (Field et al., 2021), and others.\nFinally, TalkUp is limited to the Reddit domain and only includes English posts. This data may not be generalizable to other domains, such as clinical psychology or education." }, { "figure_ref": [], "heading": "A Empowering Language by Group", "publication_ref": [], "table_ref": [], "text": "Figure 4 illustrates the average standardized scores of empowering language by men and women. Figure 5 illustrates the average standardized scores of empowering language by men and women after controlling for gender. In other words, we compare men's empowering language with all men's posts, and likewise for women." }, { "figure_ref": [], "heading": "A.2 Empowering Language by Subreddit", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_5" ], "heading": "A.3 Empowering and Disempowering Language and Poster-Commenter Stance", "publication_ref": [ "b20", "b15" ], "table_ref": [], "text": "Empowering+Disagree. Some posts labeled as empowering had commenters who disagreed with the poster. Figure 7 shows some notable features of these posts. Through qualitative analysis of empowering+disagree posts, we observe a few conversation patterns:\n(1) Posts with toxic positivity, whether intentional or not, are often met with disagreement (Upadhyay et al., 2022). A post with a lot of encouragement or affirmations could come across as dismissive or invalidating of the recipient's struggles.\n(2) Commenters may disagree with an empowering post in an effort to be polite or humble rather than accepting the compliment. For example, one poster wrote, \"That's cool!,\" and a commenter replied with \"haha it's not as cool as it sounds.\" It is unlikely that the commenter actually thinks the topic of discussion is not that great; rather, rejecting compliments is a well-documented form of politeness that is most common in high-context languages (hui Eileen Chen, 2003;Gao et al., 2017). Reading between the lines to pick up on implications like this is an open area of research that involves cultural norms and values.\n(3) Some empowering posts are met with antagonism from the commenter -actively attacking the poster with insults like \"dummy\" or \"f*ck off\" without really engaging in conversation. This suggests that whether or not text is perceived as empowering depends partially on the attitude and intentions of the recipient. No matter how genuine an empowering post may be, a reader may still reject it for other contextual reasons, such as being unwilling to receive feedback or simply disliking the poster. Disempowering+Agree. Additionally, some disempowering posts had commenters who agreed with the poster. Figure 8 shows notable features of these posts, and we again inspect them qualitatively to identify two main patterns:\n(1) Some posts labeled as disempowering would certainly be disparaging to a particular audience (e.g. a post that makes fun of the eating habits of vegan people would likely be received negatively by a vegan person), but the particular commenter who responded happened to share their view and joined the poster in making fun of the other group together. 
This is manifested in the prevalence of the We+They feature -such posts include many \"we\" and \"they\" pronouns because they involve the poster and commenter taking the same side and making fun of some other group.\n(2) Other posts labeled as disempowering were instances where the poster was sharing very heavy or personal stories, and the commenter was validating their experience. This is exhibited particularly in the emotion and tone features: the emotion expressed in these posts is very negative because the topics themselves are heavy, but the tone is not negative because it is not negativity directed at the other person in the conversation. We note that some of these personal stories could be interpreted as neutral posts under our label definitions (i.e. the post only talks about the poster and is not relevant to the commenter), but these posts do not quite fall under this category because they were still direct conversations with the commenter. A commenter -or an annotator labeling the conversation after the fact -may feel disempowered by the contents of such posts because empowerment has less to do with the literal words spoken and more to do with the way text impacts the feelings of the recipient, resulting in a label of \"disempowering\" even if the commenter is supportive of the poster. " }, { "figure_ref": [], "heading": "A.4 Ambiguous and Unambiguous Language", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B Implementation Details B.1 Empowerment Regression Model for Sample Selection", "publication_ref": [ "b49" ], "table_ref": [], "text": "We trained a RoBERTa-based regression model, using the ROBERTA-BASE model on the Huggingface transformers library (Wolf et al., 2020), to rank the Reddit posts to surface more likely empowering examples in the data for annotation. We used the data we collected from pilot studies to train the first model and continually updated the model as we collected more data from AMT, resulting in a total of 9 updates. The data was split into train and test sets at an 8:2 ratio. In order to have float values to predict for the model, we mapped disempowering, neutral, empowering labels to 0, 0.5, 1, respectively. We only used the text of the post as an input to the model and we set maximum input length to 512. The batch size was fixed to 8. In every update, the hyper-parameters were tuned through a grid search (gradient accumulation count: {1,2,4}, warm-up ratio: {0.05, 0.1, 0.2}, learning rate: {1e-5, 1e-4, 5e-4})." }, { "figure_ref": [ "fig_0" ], "heading": "B.2 Linear Regression Experiments", "publication_ref": [ "b40" ], "table_ref": [], "text": "We used the statsmodels package (Seabold and Perktold, 2010) to fit an ordinary least squares linear model with intercept. Same as the RoBERTa-based empowerment regression model, we mapped empowerment labels to float values, and only used by annotators. The R² of the fitted model with features in Figure 2 was 0.29." }, { "figure_ref": [], "heading": "B.3 Empowerment Classifier Fine-tuning", "publication_ref": [], "table_ref": [], "text": "We used a ROBERTA-BASE checkpoint on Huggingface, which has 123 million parameters, to fine-tune an empowerment classifier, as discussed in §4.5. The data was split into train, development, and test sets in a 60:20:20 ratio and stratified by subreddit. We set the batch size to 32 and maximum input length to 300 even though the longest input in our data was shorter. All other hyper-parameters were set to the default values provided by the Trainer and TrainingArguments classes in the transformers library. We trained each model for 10 epochs and selected the model with the best F1 score on the development set as the final model for evaluation. The model was trained on one A6000 GPU and took about 15 minutes. We ran training with 10 different random seeds and averaged the test set performance for each model." }, { "figure_ref": [], "heading": "C Model Evaluation Details C.1 RoBERTa Input Type Examples", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "From preliminary experiments, we noticed that depending on how you format the additional input (e.g. response, subreddit, poster's gender) to RoBERTa, the performance varies. We used the input type with the best performance for each model in Section 4.5 and provide results of all templates we tried in Table 5." }, { "figure_ref": [], "heading": "C.2 GPT-3 Prompts", "publication_ref": [], "table_ref": [], "text": "As with all prompt-based language models, there is no straightforward way to determine the optimal prompt for a task, and the performance of GPT-3 can vary depending on the design of the prompt. To increase the robustness of the evaluation, we created seven templates for each model type and used the majority vote as the final output from GPT-3. We provide all templates and their corresponding performance in Table 6. While the performance of GPT-3 is not as high as the fine-tuned classifiers, practitioners can refer to this performance by template as a reference when using GPT-3 to probe empowerment in language." }, { "figure_ref": [], "heading": "D Pilot Studies", "publication_ref": [], "table_ref": [], "text": "Before crowdsourcing any data, we performed six internal pilot studies to iteratively refine our annotation task. 10 After each pilot, we computed annotator agreement and manually walked through every example that annotators disagreed on in order to clarify confusing aspects of our definitions. We summarize the key findings of these initial pilot studies.\nContext is useful for judging a post. Annotator confidence was higher when we provided not just the text of the post, but additional contextual information like the poster's gender and the subreddit. Additionally, including the responder's comment helped to provide useful context by revealing how a real reader reacted to the post. Our final annotation task incorporated this contextual information.\nAnnotators' free-response descriptions of social roles lack consistency. Early iterations of our pilot studies asked annotators to specify what social group would be empowered or disempowered by a post. Answers varied dramatically -from general groups like \"Democrats\" to extremely specific descriptions like \"a person who likes soccer and supports this sports team\" -and were difficult to organize in any consistent way. However, our manual inspections of data samples revealed that most fell into two categories: (1) conversations where the poster and commenter agree/share the same stance (such as being members of the same political party or supporting the same sports team), and (2) conversations where they disagree/have opposing stances. This generalization of social relationships, while quite broad, allowed us to capture the diversity of possible social roles, and we used this stance agreement/disagreement question in the final annotation task.\nModels can help to surface potentially empowering posts. 
By training a model on the pilot data collected so far, we were able to significantly increase the yield of posts that were actually labeled as empowering by annotators. To ensure we annotate a diverse range of posts, our final annotation task was done with half model-surfaced posts and half randomly-sampled posts.\nPosts are often inherently ambiguous. Even with additional context, many posts could be reasonably interpreted as either empowering or disempowering due to inherently ambiguous linguistic phenomena like sarcasm. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We gratefully acknowledge support from NSF CAREER Grant No. IIS2142739, the Alfred P. Sloan Foundation Fellowship, and NSF grants No. IIS2125201 and IIS2203097. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily state or reflect those of the funding agencies." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b7" ], "table_ref": [], "text": "Table 6: All prompts used to generate responses of GPT-3 and their macro F-1 performance over TalkUp. Chamberlin (1997) Element" }, { "figure_ref": [], "heading": "Adapted Definition Provided to Annotators", "publication_ref": [], "table_ref": [], "text": "Having decision-making power The reader has the power to make their own decisions or influence decisions that affect them.\nHaving a range of options from which to make choices The reader has a range of options from which to make choices. They are not restricted to only having a few limited options." }, { "figure_ref": [], "heading": "Assertiveness", "publication_ref": [], "table_ref": [], "text": "The reader can be assertive and confidently express what they want, need, prefer, like, or dislike." }, { "figure_ref": [], "heading": "A feeling that the individual can make a difference", "publication_ref": [], "table_ref": [], "text": "The reader feels that they can make a difference as an individual.\nLearning to think critically; unlearning the conditioning; seeing things differently The reader can think critically and see things from different perspectives.\nLearning about and expressing anger The reader can express their own emotions, like anger and sadness, in a healthy way." }, { "figure_ref": [], "heading": "Not feeling alone; feeling part of a group", "publication_ref": [], "table_ref": [], "text": "The reader feels that they are part of a group.\nUnderstanding that people have rights The reader has rights. It can also mean broader rights, such as the right for everyone to be treated with dignity and respect." }, { "figure_ref": [], "heading": "Effecting change in one's life and one's community", "publication_ref": [], "table_ref": [], "text": "The reader is capable of creating change in their life or community." }, { "figure_ref": [], "heading": "Learning skills that the individual defines as important", "publication_ref": [], "table_ref": [], "text": "The reader is able to learn new knowledge and skills." }, { "figure_ref": [], "heading": "Changing others' perceptions of one's competency and capacity to act", "publication_ref": [], "table_ref": [], "text": "The reader can change how others perceive them / their competency and capacity." 
}, { "figure_ref": [], "heading": "Coming out of the closet", "publication_ref": [], "table_ref": [], "text": "The reader can come out of the closet / express who they really are.\nGrowth and change that is never ending and self-initiated\nThe reader can grow and change continuously and on their own volition." }, { "figure_ref": [], "heading": "Increasing one's positive self-image and overcoming stigma", "publication_ref": [], "table_ref": [], "text": "The reader can increase their positive self-image and feel good about themselves. " } ]
Empowering language is important in many real-world contexts, from education to workplace dynamics to healthcare. Though language technologies are growing more prevalent in these contexts, empowerment has seldom been studied in NLP, and moreover, it is inherently challenging to operationalize because of its implicit nature. This work builds from linguistic and social psychology literature to explore what characterizes empowering language. We then crowdsource a novel dataset of Reddit posts labeled for empowerment, reasons why these posts are empowering to readers, and the social relationships between posters and readers. Our preliminary analyses show that this dataset, which we call TalkUp, can be used to train language models that capture empowering and disempowering language. More broadly, TalkUp provides an avenue to explore implication, presuppositions, and how social context influences the meaning of language.
TalkUp: Paving the Way for Understanding Empowering Language
[ { "figure_caption": "Figure 2 :2Figure 2: Weights of LIWC features with 90% and 95% confidence intervals assigned by linear regression model trained with TalkUp. All features except for Negative Emotion, Power, Focus Present have statistically significant weights (p < 0.1).", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Distribution of empowering reasons. One post can have more than one empowering reason.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Average of standardized LIWC score of empowering language by men and women. The error bar indicates the 90% confidence interval.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Average of standardized LIWC score of empowering language by men and women standardized by average of all men and women's post, respectively. The error bar indicates the 90% confidence interval.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :Figure 7 :67Figure 6: Average of standardized LIWC score of empowering language by subreddit. The error bar indicates the 90% confidence interval.", "figure_data": "", "figure_id": "fig_4", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Average of standardized LIWC score of disempowering language by stance of responder to the poster. The error bar indicates the 90% confidence interval.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Average of standardized LIWC score of samples that are ambiguous and unambiguous in their empowerment. The error bar indicates the 90% confidence interval.", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "The percentage of empowering and disempowering posts and responses in each subreddit.", "figure_data": "% Empower% DisempowerPost Response Post Responser/AskReddit12.014.16.85.6r/relationship38.727.212.711.4r/Fitness30.028.37.25.6r/teenager24.224.86.35.7CasualConversation 25.629.22.82.3Overall15.216.56.95.8PostResponsePoster Responder %E %D %E %DManMan Woman13.4 6.5 13.8 5.9 16.2 7.1 18.1 6.0WomanMan Woman16.5 6.9 16.7 6.3 20.2 7.3 20.4 6.4", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The percentage of empowering (%E) and disempowering (%D) posts and responses in RtGender classified by the model trained with TalkUp, broken down by the gender of both the poster and responder.", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Templates used to convert additional context as a text input to the classifier. The best-performing template for each model type was used in §4.5 These pilot studies were conducted with the authors and a small pool of computer scientists and NLP researchers.", "figure_data": "Input template", "figure_id": "tab_9", "figure_label": "5", "figure_type": "table" } ]
Lucille Njoo; Chan Young Park; Octavia Stappart; Marvin Thielk; Yi Chu; Yulia Tsvetkov
[ { "authors": "David Bamman; Jacob Eisenstein; Tyler Schnoebelen", "journal": "Journal of Sociolinguistics", "ref_id": "b0", "title": "Gender identity and lexical variation in social media", "year": "2014" }, { "authors": "Emily M Bender; Batya Friedman", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b1", "title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science", "year": "2018" }, { "authors": "Ashwini Ryan L Boyd; Sarah Ashokkumar; James W Seraj; Pennebaker", "journal": "University of Texas at Austin", "ref_id": "b2", "title": "The development and psychometric properties of liwc-22", "year": "2022" }, { "authors": "Luke Breitfeller; Emily Ahn; David Jurgens; Yulia Tsvetkov; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Finding microaggressions in the wild: A case for locating elusive phenomena in social media posts", "year": "2019" }, { "authors": "Luke Breitfeller; Emily Ahn; David Jurgens; Yulia Tsvetkov", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Finding microaggressions in the wild: A case for locating elusive phenomena in social media posts", "year": "2019" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; T J Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeff Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b5", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Logan S Casey; Jesse Chandler; Adam ; Seth Levine; Andrew Proctor; Dara Z Strolovitch", "journal": "SAGE Open", "ref_id": "b6", "title": "Intertemporal differences among mturk workers: Time-based sample variations and implications for online data collection", "year": "2017" }, { "authors": "Judi Chamberlin", "journal": "Psychiatric Rehabilitation Journal", "ref_id": "b7", "title": "A working definition of empowerment", "year": "1997" }, { "authors": "Patricia Hill; Collins ; Sirma Bilge", "journal": "John Wiley & Sons", "ref_id": "b8", "title": "Intersectionality", "year": "2020" }, { "authors": "Kimberle Crenshaw", "journal": "Stan. L. 
Rev", "ref_id": "b9", "title": "Mapping the margins: Intersectionality, identity politics, and violence against women of color", "year": "1990" }, { "authors": "Cristian Danescu-Niculescu-Mizil; Lillian Lee; Bo Pang; Jon Kleinberg", "journal": "", "ref_id": "b10", "title": "Echoes of power: Language effects and power differences in social interaction", "year": "2011" }, { "authors": "Mai Elsherief; Caleb Ziems; David Muchlinski; Vaishnavi Anupindi; Jordyn Seybolt; Munmun De Choudhury; Diyi Yang", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Latent hatred: A benchmark for understanding implicit hate speech", "year": "2021" }, { "authors": "Yu Fei; Yifan Hou; Zeming Chen; Antoine Bosselut", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Mitigating label biases for in-context learning", "year": "2023" }, { "authors": "Anjalie Field; Su Lin Blodgett; Zeerak Waseem; Yulia Tsvetkov", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "A survey of race, racism, and anti-racism in NLP", "year": "2021" }, { "authors": "Anjalie Field; Yulia Tsvetkov", "journal": "", "ref_id": "b14", "title": "Unsupervised discovery of implicit gender bias", "year": "2020" }, { "authors": "Ge Gao; Sun Young Hwang; Gabriel Culbertson; Susan R Fussell; Malte F Jung", "journal": "Proc. ACM Hum.-Comput. Interact", "ref_id": "b15", "title": "Beyond information content: The effects of culture on affective grounding in instant messaging conversations", "year": "2017" }, { "authors": "Timnit Gebru; Jamie Morgenstern; Briana Vecchione; Jennifer Wortman Vaughan; Hanna Wallach; Hal Daumé; Iii ; Kate Crawford", "journal": "", "ref_id": "b16", "title": "Datasheets for datasets", "year": "2018" }, { "authors": "Mor Geva; Yoav Goldberg; Jonathan Berant", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets", "year": "2019" }, { "authors": "Emre Güvendir", "journal": "Language Sciences", "ref_id": "b18", "title": "Why are males inclined to use strong swear words more than females? 
an evolutionary explanation based on male intergroup aggressiveness", "year": "2015" }, { "authors": "Thomas Huckin", "journal": "", "ref_id": "b19", "title": "Critical discourse analysis and the discourse of condescension", "year": "2002" }, { "authors": "Shu Hui; Eileen Chen", "journal": "Concentric: Studies in Linguistics", "ref_id": "b20", "title": "Compliment response strategies in mandarin chinese: Politeness phenomenon revisited", "year": "2003" }, { "authors": "Sachin Kumar; Eric Malmi; Aliaksei Severyn; Yulia Tsvetkov", "journal": "", "ref_id": "b21", "title": "Controlled text generation as continuous optimization with multiple constraints", "year": "2021" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b22", "title": "", "year": "" }, { "authors": "John Lalor; Yi Yang; Kendall Smith; Nicole Forsgren; Ahmed Abbasi", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Benchmarking intersectional biases in NLP", "year": "2022" }, { "authors": "Alisa Liu; Maarten Sap; Ximing Lu; Swabha Swayamdipta; Chandra Bhagavatula; Noah A Smith; Yejin Choi", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "DExperts: Decoding-time controlled text generation with experts and anti-experts", "year": "2021" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b25", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Saskia Locke; Anthony Bashall; Sarah Al-Adely; John Moore; Anthony Wilson; Gareth B Kitchen", "journal": "Trends in Anaesthesia and Critical Care", "ref_id": "b26", "title": "Natural language processing in medicine: A review", "year": "2021" }, { "authors": "Elijah Mayfield; David Adamson; Carolyn Penstein Rosé", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Recognizing rare social phenomena in conversation: Empowerment detection in support group chatrooms", "year": "2013" }, { "authors": "Ellen Hawley; Mcwhirter ", "journal": "Journal of Counseling & Development", "ref_id": "b28", "title": "Empowerment in counseling", "year": "1991" }, { "authors": "Alan Mislove; Christo Wilson", "journal": "Oxford University Press", "ref_id": "b29", "title": "A practitioner's guide to ethical web data collection", "year": "2018" }, { "authors": "György Molnár; Zoltán Szüts", "journal": "", "ref_id": "b30", "title": "The role of chatbots in formal education", "year": "2018" }, { "authors": "Stephen P Osborne", "journal": "International Journal of Public Sector Management", "ref_id": "b31", "title": "The language of empowerment", "year": "1994" }, { "authors": "Chan Young; Park ; Xinru Yan; Anjalie Field; Yulia Tsvetkov", "journal": "", "ref_id": "b32", "title": "Multilingual contextual affective analysis of lgbt people portrayals in wikipedia", "year": "2021" }, { "authors": "Ashima Vinodkumar Prabhakaran; Owen Arora; Rambow", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Power of confidence: How poll scores impact topic dynamics in political debates", "year": "2014" }, { "authors": "Vinodkumar Prabhakaran; Owen Rambow", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Predicting power relations between participants in written dialog from a single thread", "year": "2014" }, { "authors": "Vinodkumar Prabhakaran; Owen Rambow", "journal": 
"Association for Computational Linguistics", "ref_id": "b35", "title": "Predicting power relations between participants in written dialog from a single thread", "year": "2014" }, { "authors": "Owen Vinodkumar Prabhakaran; Mona Rambow; Diab", "journal": "", "ref_id": "b36", "title": "Predicting overt display of power in written dialogs", "year": "2012" }, { "authors": "Emily E Vinodkumar Prabhakaran; Owen Reid; Rambow", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Gender and power: How gender and gender environment affect manifestations of power", "year": "2014" }, { "authors": "Maarten Sap; Saadia Gabriel; Lianhui Qin; Dan Jurafsky; Noah A Smith; Yejin Choi", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "Social bias frames: Reasoning about social and power implications of language", "year": "2020" }, { "authors": "Maarten Sap; Marcella ; Cindy Prasettio; Ari Holtzman; Hannah Rashkin; Yejin Choi", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Connotation frames of power and agency in modern films", "year": "2017" }, { "authors": "Skipper Seabold; Josef Perktold", "journal": "", "ref_id": "b40", "title": "statsmodels: Econometric and statistical modeling with python", "year": "2010" }, { "authors": "Ashish Sharma; Inna W Lin; Adam S Miner; David C Atkins; Tim Althoff", "journal": "", "ref_id": "b41", "title": "Towards facilitating empathic conversations in online mental health support: A reinforcement learning approach", "year": "2021" }, { "authors": "Ashish Sharma; Inna W Lin; Adam S Miner; David C Atkins; Tim Althoff", "journal": "", "ref_id": "b42", "title": "Towards facilitating empathic conversations in online mental health support: A reinforcement learning approach", "year": "2021" }, { "authors": "Ashish Sharma; Kevin Rushton; Inna Wanyin Lin; David Wadden; Khendra G Lucas; Adam S Miner; Theresa Nguyen; Tim Althoff", "journal": "", "ref_id": "b43", "title": "Cognitive reframing of negative thoughts through humanlanguage model interaction", "year": "2023" }, { "authors": "Leanne Townsend; Claire Wallace", "journal": "", "ref_id": "b44", "title": "Social media research: A guide to ethics", "year": "2016" }, { "authors": "Ishan Sanjeev Upadhyay; Aditya Srivatsa; Radhika Mamidi", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "Towards toxic positivity detection", "year": "2022" }, { "authors": "Rob Voigt; David Jurgens; Dan Vinodkumar Prabhakaran; Yulia Jurafsky; Tsvetkov", "journal": "European Language Resources Association (ELRA", "ref_id": "b46", "title": "RtGender: A corpus for studying differential responses to gender", "year": "2018" }, { "authors": "Zijian Wang; Christopher Potts", "journal": "", "ref_id": "b47", "title": "Talkdown: A corpus for condescension detection in context", "year": "2019" }, { "authors": "Carol Waseleski", "journal": "Journal of Computer-Mediated Communication", "ref_id": "b48", "title": "Gender and the Use of Exclamation Points in Computer-Mediated Communication: An Analysis of Exclamations Posted to Two Electronic Discussion Lists", "year": "2017" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Remi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander Lhoest; Rush", "journal": "Association for 
Computational Linguistics", "ref_id": "b49", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Kevin Yang; Dan Klein", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "FUDGE: Controlled text generation with future discriminators", "year": "2021" }, { "authors": "Caleb Ziems; Minzhi Li; Anthony Zhang; Diyi Yang", "journal": "Association for Computational Linguistics", "ref_id": "b51", "title": "Inducing positive perspectives with text reframing", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b52", "title": "about the post? Choose between empowered, disempowered", "year": "" }, { "authors": " Text}", "journal": "", "ref_id": "b53", "title": "", "year": "" }, { "authors": " Text}", "journal": "", "ref_id": "b54", "title": "", "year": "" }, { "authors": " Text}", "journal": "", "ref_id": "b55", "title": "", "year": "" }, { "authors": "", "journal": "", "ref_id": "b56", "title": "Would a reader feel empowered", "year": "" }, { "authors": " Text}", "journal": "", "ref_id": "b57", "title": "", "year": "" } ]
[]
10.18653/v1/N19-1388
2024-03-23
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b22", "b41", "b23", "b27", "b8", "b7", "b10", "b5", "b16", "b2" ], "table_ref": [], "text": "Despite the notable success of MT systems in recent years, thanks to multilingual pre-trained language models (Aharoni et al., 2019), their performance when translating cultural-specific items (CSIs) remains poor. This is primarily due to the gap between the cultural differences associated with languages (Akinade et al., 2023;Liebling et al., 2022). The variation of cultural-specific items across cultures (Woolford, 1983) has been a long-standing aspect of translation studies (Newmark, 1988;Persson, 2015;Fernández Guerra, ). However, existing terminology translation datasets and methods mostly focus on popular domains (e.g., medicine, finance) (Dinu et al., 2019;Ghazvininejad et al., 2023), yet do not cover much on cultural-specific aspects. Distinguished from general terms, cultural-specific items are unique to particular cultural groups, making their literal translations difficult for people from other cultures to understand. Moreover, many CSIs have no translations into other languages, further increasing the difficulty of data collection and evaluation of the translation performance. For example, in Figure 1, a Taiwanese dish called \"五更肠旺\", which is chitterling in hot pot, has no well-known existing English translations. The translation of ChatGPT \"Wu Geng Intestine Soup\" is misleading and can not be easily understood by English native speakers. In contrast, a professional human translator would elicit an explanation to make the translation more understandable for readers. As a result, the nuanced characteristics of CSIs and the limited availability of data sources pose challenges in building cultureaware MT systems that can translate these terms into understandable ones in target languages.\nRecently, a new translation paradigm has emerged, which employs prompts to guide large language models (LLMs) to perform machine trans-lation (Brown et al., 2020) in a zero-shot or fewshot fashion. With this flexible paradigm, cultural knowledge can be seamlessly incorporated into LLM translation prompts. However, LLM-based translation is sensitive to prompting strategies and prone to generating hallucinations (Ji et al., 2023). As shown in Figure 1, the translation of \"A Few Good Men\" as \"壮志凌云\" (i.e., Top Gun) wrongly refers to a completely different film. Given that, the cultural sensitivity of LLM-based MT systems remains an open question, especially when dealing with culturally relevant content. Moreover, there is scant research comparing LLM-based translation versus NMT systems regarding their cultural awareness, partly due to the absence of a rich-annotated culturally sensitive parallel corpus and reliable automated metrics for evaluation. This scarcity further hinders the MT development in navigating cultural nuances, limiting the MT systems' efficacy in promoting cross-cultural communication.\nIn this study, we focus on evaluating the cultureawareness of MT systems, addressing two research questions: (1) NMT v.s. LLM-based MT, which group of MT systems embed more cultural awareness? (2) Diving into LLM-based MT, what prompting strategy benefits the most for guiding LLMs in cultural-specific translation? To this end, we propose a novel data curation pipeline to build a culture-centered parallel corpus with limited human annotation efforts. 
To capture challenging examples with geo-metadata, we further conduct cultural knowledge augmentation for CSIs. Our parallel corpora include 6 language pairs (i.e., English-Chinese, English-French, English-Spanish, English-Hindi, English-Tamil, and English-Telugu), covering 7,253 CSIs in 18 concept categories from over 140 countries and regions.\nTo assess the cultural nuance, we design new automatic metrics targeting the translation quality of cultural concepts, covering both translation accuracy and understandability. Different from existing evaluation metrics which mainly focus on semantic similarity with the reference (Anastasopoulos et al., 2021), we define a new metric called understandability to evaluate the CSI translation in a reference-free manner. We use our metrics to compare a series of MT systems, including LLM-based MT, traditional NMTs, and commercial MT systems, on translating cultural content. Moreover, we examine a set of prompting strategies to endow LLM-based MT with cultural knowledge. The traditional terminology translation method enhances the accuracy of translating CSIs that have well-known references but falls short of translating no-translation CSIs. However, incorporating CSI explanations in the prompt significantly improves the translation quality, especially for no-translation CSIs. Our further investigation into different MT systems' translation strategies reveals that employing more sophisticated approaches, such as describing CSIs and substituting them with similar content in the target language, effectively improves the readability of CSI translations. A wide range of well-structured multilingual knowledge gathered by various crowd workers in Wikipedia makes it an ideal source for collecting CSIs and their descriptions in aligned languages. The overall workflow of our data collection includes three stages: (1) building a wiki-centered cultural taxonomy ( §2.1); (2) curating parallel entities related to diverse sociocultures ( §2.2); (3) augmenting geo-metadata ( §2.3). The curation pipeline with one example is shown in Figure 2. We have more data examples in Appendix A." }, { "figure_ref": [], "heading": "Culturally Relevant Data Construction", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Cultural Taxonomy Extraction", "publication_ref": [ "b23", "b4" ], "table_ref": [], "text": "Given that culture is an abstract concept, it is hard to directly capture fine-level cultural characteristics from texts. With this consideration in mind, we referred to an existing CSI classification framework (Newmark, 1988), which has been popularly used in the study of human translations of cultural concepts, to identify culturally relevant texts from Wikipedia. Specifically, there are five categories in this framework, including: 1) ecology; 2) material culture; 3) social culture; 4) organizations, customs, ideas; 5) gestures and habits. Following a primary investigation of Wikipedia texts, we find that each entity-centered Wikipedia page is labeled by a variety of Wikipedia categories. To save the effort of matching each entity on Wikipedia with each CSI category, we turn to map CSI categories with Wikipedia categories (Asthana and Halfaker, 2018). To further guarantee the mapping quality, we manually create a mapping table for building connections between the categories from the two resources. The resulting mapping table and category classification tool are described in Appendix B."
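As an illustration of how such a mapping table can be applied, the sketch below classifies a Wikipedia entity into CSI categories via its Wikipedia categories. The mapping entries and function names are hypothetical examples for clarity, not the actual table described in Appendix B.

```python
# Hypothetical excerpt of the manually created mapping table:
# Wikipedia category keyword -> CSI category (Newmark, 1988).
WIKI_TO_CSI = {
    "Food and drink": "Material culture",
    "Clothing": "Material culture",
    "Festivals": "Organizations, customs, ideas",
    "Religious practices": "Organizations, customs, ideas",
    "Flora": "Ecology",
    "Fauna": "Ecology",
    "Sports": "Social culture",
    "Etiquette": "Gestures and habits",
}

def csi_categories(wikipedia_categories: list[str]) -> set[str]:
    """Map an entity's Wikipedia categories onto CSI categories; an empty set means it is not kept as a CSI."""
    matched = set()
    for cat in wikipedia_categories:
        for keyword, csi_cat in WIKI_TO_CSI.items():
            if keyword.lower() in cat.lower():
                matched.add(csi_cat)
    return matched

# Example: a page labeled with these categories would be kept as a CSI sentence source.
print(csi_categories(["Taiwanese cuisine", "Food and drink introduced in Taiwan"]))
```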
}, { "figure_ref": [], "heading": "Cultural Parallel Text Collection", "publication_ref": [ "b32", "b4" ], "table_ref": [], "text": "To construct a cultural parallel text corpus (e.g., for En-Zh), we collect public text articles from Wikipedia's translation tool that cover a wide range of cultural topics, and conduct sentence alignment to get a parallel corpus from them (the data collection details are in Appendix §C). To expand the language coverage in our corpora, we also reuse open-source parallel datasets from OPUS (Tiedemann, 2016). These include Wikipedia v1.0 for English-French and English-Spanish, as well as Samanantar v0.2 for English-Hindi, English-Tamil, and English-Telugu. To identify cultural-specific sentences, we perform entity-linking (Ringgaard et al., 2017) to identify Wikipedia entities on the source texts, and use the WikiProject classification tool (Asthana and Halfaker, 2018) to classify these entities into cultural categories which are further mapped to our CSI categories using the cultural taxonomy ( §2.1). Finally, we only keep the texts that contain items belonging to CSI categories." }, { "figure_ref": [], "heading": "Cultural Knowledge Augmentation", "publication_ref": [ "b3", "b14" ], "table_ref": [], "text": "Existing MT studies (Arthur et al., 2016;Hu et al., 2022) have been using external knowledge sources (e.g., Wikidata) to improve named entity translations. To enable future adaptations of these studies on our collected corpus, we parse Wikidata to extract the metadata of CSIs, which include their cultural labels, descriptions, and aliases in multiple languages. Furthermore, to identify items which are culture-specific, we collect information on the country of origin for each item and remove sentences containing items that do not have an associated origin country. This meticulous approach enabled us to enrich our dataset with supplementary information that can be utilized to evaluate the performance of machine translation models when handling culturally specific content." }, { "figure_ref": [], "heading": "Data Characteristic Summary", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Table 1 shows the statistics of our parallel corpora for the evaluation of MT systems on six language pairs. Particularly, for each language pair, we count the total number of detected CSIs as CSIs Counts and the number of unique CSIs as CSIs Types.\nNote that not all CSIs have translations on Wikidata, so we report the number of CSIs with translations in Wikidata as CSI Translations.\nConsidering that many CSIs exist only within a specific cultural group and cannot be located in the parallel corpus, CSIs without translations in other languages should account for an even higher proportion in real-world corpora than in our dataset." }, { "figure_ref": [], "heading": "Cultural Awareness Evaluation", "publication_ref": [ "b2" ], "table_ref": [], "text": "Existing evaluation methods for terminology machine translation have primarily focused on assessing the adequacy and fluency of the translated terms (Anastasopoulos et al., 2021). However, a CSI usually has more than one valid translation produced with various strategies, leading to challenges for fine-grained evaluation." }, { "figure_ref": [], "heading": "CSI-Match:", "publication_ref": [], "table_ref": [], "text": "To evaluate the accuracy of translations, we introduce the CSI-Match metric as a modification to the traditional Exact Match (EM) evaluation metric. 
CSI-Match measures the accuracy of term translation using a more nuanced, fuzzy matching approach. CSI-Match is the maximal partial similarity ratio (PSR) between the reference CSI translations {t_1, t_2, ..., t_n} and the system output sentence S. CSI-Match is calculated by Eq. (1), resulting in a value from 0 to 100. A higher value indicates a stronger similarity of the predicted CSIs over a set of translation references." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b21", "b29", "b23", "b27" ], "table_ref": [ "tab_2" ], "text": "\\text{CSI-Match} = \\max_{t \\in \\{t_1, t_2, \\ldots, t_n\\}} \\text{PSR}(t, S) \\quad (1)\nPSR measures the maximum similarity between string t and any substring in S:\n\\text{PSR}(t, S) = \\max_{s \\in P} \\left(1 - d(t, s)\\right) \\times 100 \\quad (2)\nP = \\{S_{i:j} \\mid 0 \\le i \\le j < |S|\\} \\quad (3)\nwhere S_{i:j} is a substring of S from word index i to j, and d(\\cdot, \\cdot) is the normalized Levenshtein distance (Levenshtein et al., 1966) between two strings.\nUnderstandability in Human Evaluation: In the context of evaluating CSI translation, we believe that, in addition to adequacy and fluency, the translated terms should also be comprehensible to speakers of the target language, considering that certain terms may not exist in the target-language group. We therefore design a human evaluation targeting understandability, which we define as the degree to which native target-language speakers can understand and explain the CSI translations. Specifically, in the human evaluation, we ask native target-language speakers to rank the understandability of the CSI translations generated by various MT systems and evaluate the quality of CSI translation accordingly.\nTo enhance our metric's practical application, we leverage GPT-4 to compare the understandability between the target and MT systems, which has been shown to be an effective way to evaluate generation performance (Rafailov et al., 2023). We calculate the understandability of each MT system as the win rate of the comparison. The evaluation prompt is detailed in Appendix §F.\nTranslation Strategies: Furthermore, we categorize the translation strategies of CSIs based on prior translation theories (Newmark, 1988;Persson, 2015). These theories define different categorizations of strategies to improve the understandability of CSIs while preserving their cultural specificity. We select 4 strategies that are common in our dataset, which are explained in Table 2. We then conduct a human evaluation to annotate the translation strategies employed by different MT models, and compare them with the strategies used for generating the reference translations." }, { "figure_ref": [], "heading": "Cultural Knowledge Prompting", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "In this section, we explore various prompting strategies to introduce cultural knowledge for LLM MT. We elucidate our strategies for generating in-context examples from external knowledge, which involve employing CSI translation pairs and CSI explanations. Additionally, we delve into several prompting strategies that elicit LLMs' internal knowledge. Table 3 shows examples of various prompting strategies." }, { "figure_ref": [], "heading": "External CSI Translation (CT)", "publication_ref": [ "b10" ], "table_ref": [], "text": "We examine an external CSI dictionary for improving LLM-based MT. In particular, bilingual translation dictionaries play a vital role in the workflow of human translators (Ghazvininejad et al., 2023).\nHere, we assess the impact of incorporating a CSI dictionary within the prompts. 
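As a minimal illustration (not the exact template from the paper), a CT prompt for a source sentence could be assembled along these lines, with the dictionary entries drawn from the collected CSI translations; the phrasing and example translation are assumptions for clarity.

```python
def build_ct_prompt(source: str, csi_dict: dict[str, str],
                    src_lang: str = "English", tgt_lang: str = "Chinese") -> str:
    """Prepend CSI translation pairs to a basic translation instruction (illustrative format)."""
    lines = [f"{term} means {translation} in {tgt_lang}."
             for term, translation in csi_dict.items()]
    lines.append(f"Translate the following {src_lang} sentence into {tgt_lang}: {source}")
    return "\n".join(lines)

# Hypothetical usage with one CSI and its dictionary translation.
prompt = build_ct_prompt(
    "Polenta is often served as a side dish.",
    {"Polenta": "波伦塔"},
)
print(prompt)
```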
Specifically, we incorporate CSIs along with their corresponding translations prior to a basic translation instruction." }, { "figure_ref": [], "heading": "External CSI Explanation (CE)", "publication_ref": [], "table_ref": [], "text": "CSIs may not have a direct equivalent in the target language's culture when the concepts are not commonly used by the target-language speakers. Therefore, it becomes necessary to translate based on the explanation of the CSI to assist the target audience in better understanding the content. To assess the impact of explanations, we include the CSI description obtained from Wikipedia in the prompt before the basic translation instructions. This allows us to study whether additional explanations of CSIs can enhance the MT performance." }, { "figure_ref": [], "heading": "Self-Explanation (SE)", "publication_ref": [ "b40", "b20", "b42" ], "table_ref": [], "text": "We also examine LLMs' internal knowledge for explaining the meaning of CSIs. Notably, chain-of-thought (CoT) prompting has been shown to be effective in eliciting LLMs' internal knowledge (Wei et al., 2022;Kojima et al., 2022). Inspired by this, we treat the explanation of CSIs in a source sentence as intermediate reasoning steps before translating the whole sentence. We then design the explanation prompting strategy in two steps for machine translation. In the first step, we prompt the LLM to explain the meaning of all CSIs in the source sentence. In the second step, we ask the LLM to translate the whole sentence by combining the LLM's explanation with another prompt instruction.\n4.4 Self-Ranking (SR)\nFinally, LLMs can be prompted to sample different translation outputs. Even for multiple semantically equivalent prompts, the word choice in the prompts can have a significant impact on the performance of LLM models in translation tasks. To achieve more consistent and reliable results, we examine a self-ranking method (Wu et al., 2023) to instruct LLMs to rank their translation outputs. Specifically, we prompt the model to first generate a fixed number of potential translations, and then rank these candidates to select the best one as the final output." }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b7" ], "table_ref": [], "text": "Methods in Comparison: To fully evaluate the efficacy of LLM translations for cultural nuances, we compare the different prompting strategies ( §4) on tuning-free LLMs, as well as an open-sourced NMT model and a commercial MT system.\n• Prompting LLMs: We examine the different prompting strategies on OpenAI's ChatGPT (gpt-3.5-turbo-1106), which is fine-tuned from GPT-3.5 with instructions. We also include LLaMA2-7B for comparison.\n• NLLB: We use the NLLB 1.3B (Team et al., 2022) model, which is a state-of-the-art multilingual MT model.\n• Commercial MT: We use the Google Translate engine in our comparison.\nFor LLaMA and NLLB, we experimented with two additional methods proven to be highly effective in previous research on terminology machine translation (Dinu et al., 2019). Specifically, we employ 1) Append: appending the CSI dictionary before the input, and 2) Replace: replacing the CSIs in the source sentence with their translations in the target language. Specifically, we examine the vanilla NLLB model and its two variations by Append (NLLB-A) and Replace (NLLB-R). For open-sourced LLMs, we use LLaMA2 (7B) with in-context learning to perform machine translations. 
For each language pair, we provide 8-shot in-context exemplars using the format \" " }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "Our analysis includes three parts: 1) automatic evaluations between LLM-based translation and popular NMT systems ( §6.1) on six language pair translations; 2) assessing LLM prompting strategies using ChatGPT and LLaMA for En-to-Zh translation ( §6.2); and 3) fine-grained human evaluation on a subset of En-to-Zh pairs ( §6.3)." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Overall Automatic Evaluation", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "Replacement and Append Work for NLLB and LLM-MT in CSI-Match. Figure 3 shows the CSI-Match scores ( §3) for eight methods across cultural-related parallel corpora in six language pairs, which are collected by our pipeline ( §2). 3a shows scores of LLM-based MT systems including LLaMA2, LLaMA2-A, LLaMA2-R and ChatGPT. We find that straightforward strategies using dictionaries of CSIs, such as Replace (the green solid line) and Append (the orange solid line), are effective on metrics that rely on string-matching, such as CSI-match, as well as other semantic matching metrics detailed in Appendix §G. However, the appending strategy significantly benefits LLaMA more than NLLB. This suggests that LLaMA's capacity of in-context learning and instruction-following enables the flexible integration of cultural knowledge at test time, a capability not presented in traditional NMT systems like NLLB. Additionally, we measure the CSI-Match of ChatGPT using basic instructions (BI), and Google Translate (the red solid line). We find that Google Translate's performance across different languages is generally more consistent compared to ChatGPT. Specifically, Google Translate performs better in non-Latin languages such as Chinese, Hindi, Tamil, and Telugu, when compared to ChatGPT. Understandability Gap on Non-translation CSIs Table 4 presents the understandability win rates assessed by GPT-4 for various MT systems compared to the reference in English-Chinese translation. Appendix §F has the detailed evaluation prompt. The evaluation metrics include both the overall win rates across the entire dataset (Overall-U) and the win rates of samples containing complex source items (CSIs) with no translations (NT-U).\nThe results indicate that traditional terminology translation methods, such as Appending and Replacing, effectively improve the overall understandability across the entire dataset. However, they still encounter challenges in improving the comprehensibility of translations that contain CSIs without a well-established reference. Notably, ChatGPT performs better than any other MT systems, which potentially suggests that the instruction-tuning of LLMs beyond the translation task may be beneficial for the model to generate explanation-based translations for these non-translation CSIs." }, { "figure_ref": [], "heading": "Prompting Strategy Evaluation", "publication_ref": [], "table_ref": [ "tab_6", "tab_8", "tab_8" ], "text": "As LLM-based MTs open up the opportunity to incorporate free-form external knowledge for improving the understandability of non-translation CSIs, we explore various prompting strategies by incorporating additional cultural information alongside dictionaries to improve translation. We compare different prompting strategies on ChatGPT, focusing on English-to-Chinese translation. 
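For reference, the understandability win rates used throughout this comparison (Overall-U over the full test set and NT-U over the examples whose CSIs lack a well-established translation, as in Table 4) can be aggregated from the GPT-4 pairwise judgments roughly as follows; the record fields and the "system" verdict label are illustrative assumptions, not the authors' code.

```python
# Sketch of aggregating GPT-4 pairwise judgments into Overall-U and NT-U win rates.
def win_rate(judgments: list[dict]) -> float:
    """Percentage of comparisons in which the MT output beats the reference."""
    if not judgments:
        return 0.0
    wins = sum(1 for j in judgments if j["verdict"] == "system")
    return 100.0 * wins / len(judgments)

def understandability_scores(judgments: list[dict]) -> tuple[float, float]:
    overall_u = win_rate(judgments)
    # NT-U: restrict to samples containing CSIs without an existing translation.
    nt_u = win_rate([j for j in judgments if j["csi_has_no_translation"]])
    return overall_u, nt_u
```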
We examine both zero-shot and 2-shot approaches to obtain results from ChatGPT. Especially for complex prompting strategies including CE, SE, and SR, we use 2-shot examples that are tailored for culturally-aware translation. Table 5 shows the results of different prompting methods.\nEnternal Knowledge Prompting Comparing the strategy of using external knowledge in prompts (i.e., CT and CE), we observe that LLMs can effec-tively leverage both direct translations and indirect descriptions. Specifically, CT greatly enhances the CSI-Match score for CSI translation. While 2-shot CE demonstrates limited improvements on CSI-Match, it improves the overall understandability of the entire dataset. More importantly, 2-shot CE significantly improves the non-translation CSIs' understandability to 73.0. This suggests that CSI explanations offer significant assistance in translating CSIs, especially for the ones without wellknown translations. In Table 6, we quantitatively show the output examples of \"Polenta,\" an Italian corn porridge. Interestingly, its Chinese translation on Wikidata is merely a transliteration. Under the CT strategy, ChatGPT directly copies this transliteration into the output, which may be considered correct yet not comprehensible for native speakers of the target language. In contrast, ChatGPT using CE integrates an explanation into the translation, describing the CSI as \"corn porridge\" in Chinese. This translation is much easier for readers to understand the nature of Polenta.\nInternal Knowledge Prompting Next, we compare prompting strategies to elicit LLMs' internal knowledge (i.e., SE, SR). We find that 2-shot SE improves translation performance on CSI-Match and Overall-U but doesn't improve the NT-U, which suggests that ChatGPT itself still lacks sufficient knowledge of non-translation CSIs which are typically very unique to the target-language culture. On the other hand, SR proves beneficial on Chat-GPT for CSI-Match but does not improve the understandability of CSI translations, indicating that LLMs inherently possess internal knowledge of low-frequency CSIs but still can not capture cultural nuances through the SR prompting strategy examined in this study. In Table 6, 2-shot SR translates \"Polenta\" as \"cornmeal,\" similar to the baseline (BI). However, SE successfully translates it into \"corn porridge.\" This suggests that ChatGPT possesses background knowledge of Polenta but struggles to spontaneously elicit it to generate and select the most understandable translations." }, { "figure_ref": [ "fig_4" ], "heading": "Human Evaluation", "publication_ref": [], "table_ref": [ "tab_9", "tab_2" ], "text": "GPT-4 and Human Evaluation Are Consistent.\nTable 7 shows the Overall-U evaluated by both GPT-4 and human annotators. Specifically, we randomly select 200 samples from the English-to-Chinese translation dataset and have human annotators evaluate the understandability of output Translation Strategies To explore potential factors benefiting understandability, we let a human annotator examine the models' translation strategies. In addition to the four strategies (mentioned in §3; Table 2), we propose three extra considerations 1) Copy that directly copies the source language of CSI into the target language; 2) Wrong that indicates entirely incorrect translations; and 3) Other that employs other strategies in translation. Figure 4 shows the ratio of each strategy in four MT systems. 
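The per-system strategy ratios plotted in Figure 4, and the correlation between automatic metrics and human judgments reported just below, can be computed along these lines. This is a sketch under our own assumptions: the data fields are invented, a translation counts as correct unless it is annotated Copy or Wrong (as stated below), and we show a per-example correlation although the paper does not specify whether it is computed per example or per system.

```python
# Sketch of the Figure 4 strategy ratios and a metric-vs-human Pearson correlation.
from collections import Counter
from scipy.stats import pearsonr

def strategy_ratios(strategy_labels: list[str]) -> dict[str, float]:
    """Share of each annotated strategy (Literal Translation, Transliteration,
    Description, Substitution, Copy, Wrong, Other) for one MT system."""
    counts = Counter(strategy_labels)
    total = sum(counts.values())
    return {strategy: count / total for strategy, count in counts.items()}

def metric_human_correlation(metric_scores: list[float],
                             strategy_labels: list[str]) -> float:
    """Correlate a metric (e.g., CSI-Match) with binary human correctness."""
    human_correct = [0.0 if label in ("Copy", "Wrong") else 1.0
                     for label in strategy_labels]
    correlation, _ = pearsonr(metric_scores, human_correct)
    return correlation
```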
We find that models with higher understandability (i.e., ChatGPT and Google translate) use description and substitution at a significantly higher rate, indicating that these two strategies help improve the understanding of CSI for target-language speakers. Notably, LLaMA2 incorporates a higher frequency of substitution and description methods compared to traditional NLLB. However, this increased diversity in translation output comes at the cost of reduced stability in the outputs. As a result, LLaMA2 tends to yield more inaccurate translations, whereas NLLB relies more on Literal Translation and Transliteration to translate CSIs.\nTo compare the consistency between automatic and human evaluation, we also calculate the correlation between automatic evaluation metrics and human evaluation, where we consider the translation correct in human evaluation if the translation is categorized into all the translation strategies except Copy and Wrong. The Pearson's cor- " }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b24", "b25", "b13", "b1", "b22", "b19", "b28", "b31", "b7", "b10", "b5", "b12", "b17", "b9", "b36", "b33", "b3", "b18", "b15", "b39", "b14" ], "table_ref": [], "text": "Cultural-aware Machine Translation: As languages and cultures are highly intertwined, there is a growing desire towards empowering cultural awareness of machine translation systems (Nitta, 1986;Ostler, 1999;Hershcovich et al., 2022). However, as cultural nuances are subtle, collecting culturally sensitive data (Akinade et al., 2023) remains costly and time-consuming. Besides, it is also challenging to perform a human-centered evaluation of the cultural nuances (Liebling et al., 2022). Existing studies have proposed strategies to evaluate cultural awareness of traditional MT systems by grounding images (Khani et al., 2021), adapting entities (Peskov et al., 2021) or targeting at dilates (Riley et al., 2022). Different from evaluating traditional MT systems, we focus on evaluating the cultural awareness of LLM-based translation.\nMT with Terminology Dictionaries Previous studies have proposed to integrate terminology dictionaries into NMT models (Dinu et al., 2019) and LLM-based MT systems (Ghazvininejad et al., 2023), proving effective for terminology translation. A major thrust of methods either modify the model architectures to integrate the dictionary or feed the translation dictionary as parts of inputs to NMT models. In this study, we focus on evaluating LLM-based MT methods with conventional NMT methods without modifying the underlining models, as the parameters of LLMs (e.g., Chat-GPT) may not be accessible. Moreover, compared to traditional terminology translation from popular domains (e.g., finance, medicine), translating cultural-specific items carries its unique challenge. CSIs are unique to specific cultural groups, leading to difficulty in understanding the cultural nuance of CSIs for individuals from other cultures. Thus, beyond accuracy-focused metrics, assessing CSI understandability is also crucial and underexplored in the literature.\nLLM-based MT: Large language models, such as GPT-3 (Brown et al., 2020), have proven effective in machine translation for various highresource languages (Hendy et al., 2023;Jiao et al., 2023). 
Particularly, a few recent studies have investigated the performance of LLM-based MT, including formality control of translation outputs (Garcia and Firat, 2022), in-context translation ability during pre-training (Shin et al., 2022), and multilingual translation (Scao et al., 2022). However, the exploration of LLM-based MT on culturally sensitive texts is still missing.\nExternal Knowledge for MT: There have been multiple threads of research efforts on integrating external knowledge such as bilingual translation lexicons for neural machine translation systems, including probability interpolation of lexicons (Arthur et al., 2016;Khandelwal et al., 2021), data augmentation by back-translations (Hu et al., 2019), decoding with a phrase memory (Wang et al., 2017), and pre-training with an entity-based denoising objective (Hu et al., 2022). Despite the effectiveness, these methods require further finetuning of the original MT systems, while we focus on tuning-free methods for integrating external knowledge for LLM-based MT in this study." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce a novel data curation pipeline aimed at constructing a culturally sensitive parallel corpus to assess the cultural awareness of MT systems. Additionally, we propose a referencefree metric to evaluate the understandability of translations for cultural-specific content by GPT-4. We also devise simple yet effective prompting strategies to enhance the understandability of LLMbased translations. Despite the effectiveness, several challenging questions remain open. First, it is non-trivial to incorporate cultural-specific information beyond a single entity such as discourse information. Besides, our prompting strategies leverage mostly external cultural knowledge in the form of texts. How to leverage multimodal knowledge from images and structured knowledge graphs to resolve cultural ambiguity deserves further investigation." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b13" ], "table_ref": [], "text": "Our work provides one step further toward understanding and evaluating the cultural awareness of LLM-based machine translation. We provide a culturally sensitive parallel corpus with rich annotations on cultural-specific items. Thereby, prompting strategies in our study, due to the cost ChatGPT API, is limited to the language pair (i.e., English-Chinese). To manually verify the quality of the data, we control the data size in a small one. We will provide our code repository to facilitate the future adaptation of our pipeline for more language pairs in diverse language families. Besides, as we focus on the evaluation of cultural-specific items in this study, the evaluation of cultural awareness beyond a single entity deserves further investigation. In addition to cultural-specific items, there are many other types of cultural errors that exist in the translation process, such as linguistic style and slang (Hershcovich et al., 2022). Our work aims to mitigate cultural errors by starting from CSI, in order to promote advancements in culturalaware machine translation datasets, models, and evaluation methods. This is of great importance for enabling machine translation to play a larger role in cross-cultural contexts. 
Further, we only try several promoting strategies in our study, which is because our work focuses on benchmarking the cultural awareness of current LLM-based MT systems, in the future, we'll test other methods such as instruction tuning to improve the performance." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "Although our study designs a suite of simple but effective prompting strategies to enhance the cultural awareness of LLM-based machine translation, we still observe the weakness of LLM-based translation on cultural concepts in certain regions (e.g., Asia) and hallucinations on low-frequency entities. Potential usage of these LLM translation outputs may still result in misinformation spread. Before deploying our methods to create reliable content such as creating translations of Wikipedia articles, practitioners should ensure another round of human post-editing. In the future, we'll make our dataset public to assist further research on cultural-awareness of MT systems. During the annotation process, the annotators (native speakers of the target languages) consist of the authors of this article, who know the goals of the study." }, { "figure_ref": [], "heading": "A Data Examples", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "In Table 8, we present a data example from the English-Chinese corpus. Each data point consists of a pair of sentences. We meticulously annotate all cultural-specific items (CSI) within the sentences.\nFor each cultural-specific item, we provide information including its category, country of origin, translations in the target language, descriptions in both the source and target languages, and an explanation. To illustrate the challenges that culturalspecific items pose for current Machine Translation (MT) systems, we provide translations from both Google Translate and ChatGPT for this example. It is noted that both Google and ChatGPT erroneously rendered the Chinese translation of \"Wiener Schnitzel\" as \"pork chops\" instead of the correct translation, which is \"steak.\" This misinterpretation not only misleads Chinese readers but also introduces confusion to the entire sentence. " }, { "figure_ref": [], "heading": "C Wikipedia Parallel Corpus Collection", "publication_ref": [ "b35" ], "table_ref": [], "text": "To collect English-Chinese parallel corpus from Wikipedia, we use the bilingual Wikipedia articles translated through Wikipedia's content translate tool4 . This tool allows confirmed editors to translate Wikipedia articles from a source language to a target language with a machine translation system. By tracking their editing logs, we obtain the text triples consisting of the original text in a source language, the machine-translated text, and the human post-edited text in a target language. We then use a sentence alignment tool bleu-align5 (Sennrich and Volk, 2010) to obtain a sentence-level parallel corpus.\nThen we use open-source data from OPUS, which includes Wikipediav1.06 for English-French and English-Spanish, as well as Samanantarv0.27 for English-Hindi, English-Tamil, and English-Telugu. Culture is intricately linked with specific regions, and its manifestations can exhibit substantial variations across diverse regions and categories. 
Therefore, our dataset encompasses cultural-specific elements sourced from a wide array of regions and categories, as illustrated in Figure 5, which shows the distribution of the English-Chinese corpus. This inclusive approach allows us to comprehensively evaluate the performance of machine translation models across a broad spectrum of cultural contexts." }, { "figure_ref": [], "heading": "D Data Characteristics", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "G Overall Automatic Evaluation", "publication_ref": [ "b26", "b34", "b30", "b11" ], "table_ref": [ "tab_0", "tab_0" ], "text": "Automatic Evaluation Metrics: We first evaluate the translation outputs using traditional automatic metrics such as BLEU (Papineni et al., 2002), BLEURT (Sellam et al., 2020), and COMET (Rei et al., 2020). To be consistent with the evaluation method of NLLB, we compute spBLEU (Goyal et al., 2022) for the BLEU scores. In addition to these traditional machine translation metrics, we also use CSI-Match to evaluate the translation quality of CSIs (described in §3). Table 10 shows the comparison of the six MT methods across six language pairs in both directions.

As shown in Table 10, NLLB displays high performance in both translation directions over the six language pairs. Both CSI dictionary incorporation (NLLB-A) and term replacement (NLLB-R) enhance the translation quality of CSIs for most language pairs, without significantly compromising overall sentence translation on the other metrics. Notably, NLLB-R outperforms the other MT systems on CSI-Match, even including LLM-based MT. Interestingly, LLaMA2-7B shows an obvious drop in both the traditional evaluation metrics and CSI-Match when translating English into the three Indian languages and vice versa. One possible explanation is the insufficient amount of Indian-language data during the pre-training of LLaMA2. Both CSI-involving translation strategies are beneficial for LLaMA-based translation. In non-Latin languages (i.e., Chinese, Hindi, Tamil, and Telugu), LLaMA2-A tends to yield better performance, whereas LLaMA2-R performs better in Latin languages (i.e., French and Spanish), which potentially suggests that injecting cultural knowledge through code-switching works better between similar Latin-script languages than between distant languages for LLM-based models. Furthermore, we assess the translation performance of ChatGPT and Google Translate. Both systems exhibit commendable performance in CSI translation, with Google Translate demonstrating superior results. Notably, Google Translate showcases consistent translation ability, particularly in handling relatively low-resource languages such as Tamil and Telugu." }, { "figure_ref": [ "fig_6" ], "heading": "H Automatic Evaluation of Prompting Strategies on LLaMA2", "publication_ref": [], "table_ref": [], "text": "Culture is often associated with a specific region, and its expressions can vary significantly across different regions and categories. To gain a deeper understanding of the influence of region on CSI translation, we categorized the CSIs into three groups: CSIs originating from countries primarily using the source language, countries predominantly using the target language, and countries using languages other than the source and target languages. On the six groups of English-to-XX translations, we calculated the average CSI-Match values of these three CSI groups respectively; the results are shown in Figure 6.

Given that target CSIs must have a translation in the target language, translating target CSIs is akin to back-translation. However, when translating source CSIs or other CSIs, the translation may either not exist in the target language or exist with lower word frequency. Consequently, the models are expected to yield better results for the target CSIs. Surprisingly, our analysis reveals that most models excel at translating the target CSIs back into the target language in Latin languages (i.e., French and Spanish). Notably, Google Translate consistently achieves superior translations across all languages. ChatGPT demonstrates better translation performance in Chinese and Tamil, while LLaMA2 succeeds in Hindi and Tamil for target CSI translation. In contrast, the traditional translation model NLLB struggles with all non-Latin languages, failing to outperform the source CSI translation. This suggests that LLMs may possess enhanced learning capabilities for translating culture-related content. However, it is important to note that the current translation performance is not consistently stable." }, { "figure_ref": [ "fig_7" ], "heading": "J Comparison of Translation Strategies", "publication_ref": [], "table_ref": [], "text": "To further compare the impact of the different translation strategies on understandability, we analyzed the strategies based on the ranking results of the human evaluation. Specifically, we rank the different translation strategies used by the MT systems according to the rank of each MT system's understandability given by humans, as shown in Figure 7. It shows that the winning rate of Description over all other methods surpasses 0.5, and the winning rate of Substitution over everything except Description also significantly exceeds 0.5. This implies that translations employing these two strategies are generally deemed more comprehensible by human annotators. Moreover, Literal Translation outperforms Transliteration, highlighting that transliteration may diminish the clarity of CSIs in translation compared to a literal approach. Notably, the winning rate of Copy relative to both Literal Translation and Transliteration hovers around 50%, indicating that these two methods may introduce confusion, and their readability underperforms directly copying the original word." }, { "figure_ref": [], "heading": "E Experiment Settings", "publication_ref": [], "table_ref": [], "text": "The experiment settings of the different models included in our paper are as follows:

• NLLB We use NLLB-200-1.3B-distilled (https://github.com/facebookresearch/fairseq/tree/nllb?tab=readme-ov-file) for our experiments. We use fairseq to conduct the inference. The beam size is set to 4, and the length penalty is set to 1.0.

• LLaMA2 We use LLaMA-2-7B-hf (https://huggingface.co./meta-llama/Llama-2-7b-hf) for testing. Sampling is set to True, leading to multinomial sampling search.

• ChatGPT We examine the version gpt-3.5-turbo-1106. We use the ChatCompletion API provided by OpenAI (https://platform.openai.com/docs/guides/text-generation/chat-completions-api). For generation, we keep the default parameters: a temperature of 1, top_p of 1, and frequency_penalty of 0.

• GPT-4 For GPT-4, we use the latest version gpt-4-0125-preview through the ChatCompletion API, again with the default parameters: a temperature of 1, top_p of 1, and frequency_penalty of 0.

• Google Translate We call the Google Translate API to obtain its translations." }, { "figure_ref": [], "heading": "F Evaluation Prompts of GPT-4", "publication_ref": [], "table_ref": [], "text": "GPT-4 has been shown to be an effective tool for evaluating generation quality in DPO; we use the same prompting approach, adapted towards the understandability of CSIs. The prompt begins as follows:

Assuming ..." } ]
Translating cultural-specific content is crucial for effective cross-cultural communication. However, many MT systems still struggle to translate sentences containing cultural-specific entities accurately and understandably. Recent advancements in in-context learning utilize lightweight prompts to guide large language models (LLMs) in machine translation tasks. Nevertheless, the effectiveness of this approach in enhancing machine translation with cultural awareness remains uncertain. To address this gap, we introduce a new data curation pipeline to construct a culturally relevant parallel corpus, enriched with annotations of cultural-specific items 1 . Furthermore, we devise a novel evaluation metric to assess the understandability of translations in a reference-free manner by GPT-4. We evaluate a variety of neural machine translation (NMT) and LLM-based MT systems using our dataset. Additionally, we propose several prompting strategies for LLMs to incorporate external and internal cultural knowledge into the translation process. Our results demonstrate that eliciting explanations can significantly enhance the understandability of cultural-specific entities, especially those without well-known translations.
Benchmarking LLM-based Machine Translation on Cultural Awareness
[ { "figure_caption": "Figure 1 :1Figure 1: Cultural translation errors made by Google Translate and ChatGPT systems 2012). However, existing terminology translation datasets and methods mostly focus on popular domains (e.g., medicine, finance)(Dinu et al., 2019;Ghazvininejad et al., 2023), yet do not cover much on cultural-specific aspects. Distinguished from general terms, cultural-specific items are unique to particular cultural groups, making their literal translations difficult for people from other cultures to understand. Moreover, many CSIs have no translations into other languages, further increasing the difficulty of data collection and evaluation of the translation performance. For example, in Figure1, a Taiwanese dish called \"五更肠旺\", which is chitterling in hot pot, has no well-known existing English translations. The translation of ChatGPT \"Wu Geng Intestine Soup\" is misleading and can not be easily understood by English native speakers. In contrast, a professional human translator would elicit an explanation to make the translation more understandable for readers. As a result, the nuanced characteristics of CSIs and the limited availability of data sources pose challenges in building cultureaware MT systems that can translate these terms into understandable ones in target languages.Recently, a new translation paradigm has emerged, which employs prompts to guide large language models (LLMs) to perform machine trans-", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2: A data curation pipeline for constructing a cultural-specific English-Chinese parallel corpus.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "[Source language]:[Source sentence]=[Target language]:[Target sentence].\" The two variants targeting CSI translations are: 1) LLaMA2-A that appends the CSI translation lexicon to the source sentence in the prompt, whose format is \"<CSI 1 >:<CSI 1 Translations>,...,<CSI n >:<CSI n Translations>[Source language]:[Source sentence] = [Target language]:[Target sentence]\"; and 2) LLaMA2-R that replaces the CSIs in the source sentence with their target translation.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: CSI-Match on Different Language Pair Translations: The coordinate scale range is from 40 to 100.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Percentage of Translation Strategies relation between the human evaluation and BLEU, BLEURT and COMET are 88.8, 90.5, and 89.5 respectively, whereas CSI-Match has the highest correlation score at 94.7 with human evaluation, suggesting that CSI-Match can be an efficient evaluation metric for CSIs with a reference translation.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Data characteristics on regions (Outside) and categories (Inside) for the English-Chinese corpus", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Avg.CSI-Match by Regions", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "WFigure 7 :7Figure 7: Comparison of Translation Strategies: The value in each grid represents the winning rate of 
the method on the x-axis in comparison to the method on the y-axis.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Dataset Statistics on Six Language Pairs.", "figure_data": "PairSent. CSIs Counts CSIs Types CSI TranslationsEn-Zh778794601730En-Fr 2,0732,2132,2131,130En-Es 1,5811,6521,652817En-Hi 1,0861,1271,127168En-Ta677695695118En-Te75469569566", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Translation Strategies", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The Chinese translation of culture entities in the sentence is as following: cannoli:里考塔芝士卷(Ricotta cheese rolls), 奶油甜馅煎饼卷 (Sweet Cream pancake rolls) Translate the following English text to Chinese CSI Explanation (CE)The explanation of culture entities in the sentence is as following: Cannoli are Italian pastries consisting of tube-shaped shells of fried pastry dough ... Please select the best translation result of entities: cannoli. The translation of entitis in the sentence should be the closest to its explanation, and is easy to be understood by Chinese readers.SourceThey are also commonly available at Italian-American bakeries in the United States, alongside other Italian pastries like cannoli and sfogliatelle.KnowledgeTranslations:cannoli:里考塔芝士卷(Ricotta cheese rolls), 奶油甜馅煎饼卷 (Sweet Cream pancake rolls) Explanation: Cannoli are Italian pastries consisting of tube-shaped shells of fried pastry dough ...", "figure_data": "StrategyPromptBasic Instruction (BI) Translate the following English text to ChineseCSI Translation (CT)Translate the following English text to ChineseUser: Please explain cannoli in [Source Sentence]Self-Explanation (SE)LLM: [Explanation]User: According to your explanations to cannoli, only translate the following English text to ChineseUser: Translate the following English text to Chinese: [Source Sentence].Please give [Generated Number] most likely translations, and ensure \"cannoli\" in each result to correspond to different translationsSelf-Ranking (SR)LLM: [Translations]User:", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Prompting strategy examples (Top) and a source with cultural knowledge for En-Zh translation (Bottom)", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "ModelOverall-U NT-U Overall-U NT-UEN-ZHZH-ENNLLB22.417.76.380NLLB-A23.1-11.1-NLLB-R22.9-17.3-LLaMA216.322.218.111.1LLaMA2-A20.2-29.3-LLaMA2-R19.6-26.8-ChatGPT55.861.954.472.2Google56.658.739.922.2", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Evaluation of different strategies by ChatGPT on English-Chinese translations.", "figure_data": "Shot Method CSI-Match Overall-U NT-UBI67.255.861.9CT84.059.4-ZeroCE66.655.558.7SE65.155.758.7SR69.258.547.6CE67.157.773.0TwoSE67.758.758.7SR68.256.657.1", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Output Examples of Prompting Strategy (Top)and a source-reference sentence pair with culturalknowledge for En-Zh translation (Bottom)translations by the four different methods. 
The re-sults indicate a high level of consistency betweenhuman annotations and GPT-4's evaluationsMethodGPT-4 Overall-U Human Overall-UNLLB19.016.0LLaMA213.014.0ChatGPT52.551.0Google54.055.5", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "The Shanghai-style Fried Pork Chop is a modification from Wiener Schnitzel the national dish of Austria, and a fried pork chop is more a street food than a beef steak.", "figure_data": "AspectContentSource (EN)Target (ZH)上海炸猪排的做法改良自奥地利国菜维也纳炸牛排 (Wiener fried steak),而炸猪排与牛排不同,它显得更加市井。Cultural-Specific Item Wiener SchnitzelCategoryCulture.Food and drinkCountry of OriginAustriaTranslation (ZH)维也纳炸牛排 (Wiener fried steak)Description (EN)breaded veal schnitzelDescription (ZH)面包屑小牛肉炸肉排ExplanationThe entity, sometimes spelled Wienerschnitzel,is a type of schnitzel made of a thin,breaded, pan-fried veal cutlet. It is one ofthe best known specialities of Viennese cuisine,and one of the national dishes of Austria.NLLB上海风格的炸猪肉切片是从奥地利国家菜的维也纳施尼切尔(transliteration)改制而成,LLaMA2上海炒猪排是一种来自奥地利的牛肉炒肉块 (Beef stir-fried cubes)的改良型,而炒猪排更像是一道街头小吃而非牛肉炒肉块.Google Translate海派炸猪排是奥地利国菜维也纳炸猪排 (Wiener fried pork chops)的改良版,炸猪排与其说是牛排,不如说是街头小吃。ChatGPT上海式炸猪排是从奥地利的国菜维也纳炸猪排 (Wiener fried pork chops)改编而来,而炸猪排比牛排,更像是街边食物。", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" }, { "figure_caption": "A Data Example of the English-Chinese Corpus: In parentheses, we explain what the Chinese translation means. ) are shown in Table9. Additionally, we provide illustrative examples for each category to clarify the respective meanings. The tool we used for Wikiproject category classification is drafttopic 3 .", "figure_data": "B CSI vs. Wikiproject Mapping TableThe mapping table between CSI definition (5 cate-gories in all) and Wikiproject categories 2 (18 cate-2 https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Categories", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Evaluation of different strategies by LLaMA2 on English-Chinese translations.", "figure_data": "Shot Method CSI-Match Overall-U NT-UBI35.116.322.2CT70.316.0-twoCE35.511.114.2SE35.111.815.8SR37.910.211.1", "figure_id": "tab_13", "figure_label": "11", "figure_type": "table" } ]
Binwei Yao; Ming Jiang; Diyi Yang; Junjie Hu
[ { "authors": "Roee Aharoni; Melvin Johnson; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Massively multilingual neural machine translation", "year": "2019" }, { "authors": "Idris Akinade; Jesujoba Alabi; David Adelani; Clement Odoje; Dietrich Klakow", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Varepsilon kú mask: Integrating Yorùbá cultural greetings into machine translation", "year": "2023" }, { "authors": "Antonios Anastasopoulos; Laurent Besacier; James Cross; Matthias Gallé; Philipp Koehn; Vassilina Nikoulina", "journal": "", "ref_id": "b2", "title": "On the evaluation of machine translation for terminology consistency", "year": "2021" }, { "authors": "Philip Arthur; Graham Neubig; Satoshi Nakamura", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Incorporating discrete translation lexicons into neural machine translation", "year": "2016" }, { "authors": "Sumit Asthana; Aaron Halfaker", "journal": "CSCW", "ref_id": "b4", "title": "With few eyes, all hoaxes are deep", "year": "2018" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b5", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b6", "title": "", "year": "" }, { "authors": "Georgiana Dinu; Prashant Mathur; Marcello Federico; Yaser Al-Onaizan", "journal": "", "ref_id": "b7", "title": "Training neural machine translation to apply terminology constraints", "year": "2019" }, { "authors": "Ana Fernández; Guerra ", "journal": "Sic: časopis za književnost, kulturu i književno prevo denje", "ref_id": "b8", "title": "Translating culture: problems, strategies and practical realities", "year": "2012" }, { "authors": "Xavier Garcia; Orhan Firat", "journal": "", "ref_id": "b9", "title": "Using natural language prompts for machine translation", "year": "2022" }, { "authors": "Marjan Ghazvininejad; Hila Gonen; Luke Zettlemoyer", "journal": "", "ref_id": "b10", "title": "Dictionary-based phrase-level prompting of large language models for machine translation", "year": "2023" }, { "authors": "Naman Goyal; Cynthia Gao; Vishrav Chaudhary; Peng-Jen Chen; Guillaume Wenzek; Da Ju; Sanjana Krishnan; Marc'aurelio Ranzato; Francisco Guzmán; Angela Fan", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b11", "title": "The flores-101 evaluation benchmark for low-resource and multilingual machine translation", "year": "2022" }, { "authors": "Amr Hendy; Mohamed Abdelrehim; Amr Sharaf; Vikas Raunak; Mohamed Gabr; Hitokazu Matsushita; Young ; Jin Kim; Mohamed Afify; Hany Hassan Awadalla", "journal": "", "ref_id": "b12", "title": "How good are gpt models at machine translation? 
a comprehensive evaluation", "year": "2023" }, { "authors": "Daniel Hershcovich; Stella Frank; Heather Lent; Mostafa Miryam De Lhoneux; Stephanie Abdou; Emanuele Brandl; Laura Cabello Bugliarello; Ilias Piqueras; Ruixiang Chalkidis; Cui", "journal": "", "ref_id": "b13", "title": "Challenges and strategies in cross-cultural nlp", "year": "2022" }, { "authors": "Junjie Hu; Hiroaki Hayashi; Kyunghyun Cho; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "DEEP: DEnoising entity pretraining for neural machine translation", "year": "2022" }, { "authors": "Junjie Hu; Mengzhou Xia; Graham Neubig; Jaime Carbonell", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Domain adaptation of neural machine translation by lexicon induction", "year": "2019" }, { "authors": "Ziwei Ji; Nayeon Lee; Rita Frieske; Tiezheng Yu; Dan Su; Yan Xu; Etsuko Ishii; Ye ; Jin Bang; Andrea Madotto; Pascale Fung", "journal": "ACM Computing Surveys", "ref_id": "b16", "title": "Survey of hallucination in natural language generation", "year": "2023" }, { "authors": "Wenxiang Jiao; Wenxuan Wang; Jen-Tse Huang; Xing Wang; Zhaopeng Tu", "journal": "", "ref_id": "b17", "title": "Is chatgpt a good translator? a preliminary study", "year": "2023" }, { "authors": "Urvashi Khandelwal; Angela Fan; Dan Jurafsky; Luke Zettlemoyer; Mike Lewis", "journal": "", "ref_id": "b18", "title": "Nearest neighbor machine translation", "year": "2021" }, { "authors": "Nikzad Khani; Isidora Tourni; Mohammad Sadegh Rasooli; Chris Callison-Burch; Derry Tanti; Wijaya ", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Cultural and geographical influences on image translatability of words across languages", "year": "2021" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "", "ref_id": "b20", "title": "Large language models are zero-shot reasoners", "year": "2022" }, { "authors": " Vladimir I Levenshtein", "journal": "Soviet Union", "ref_id": "b21", "title": "Binary codes capable of correcting deletions, insertions, and reversals", "year": "1966" }, { "authors": "Daniel Liebling; Katherine Heller; Samantha Robertson; Wesley Deng", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Opportunities for humancentered evaluation of machine translation systems", "year": "2022" }, { "authors": "Peter Newmark", "journal": "Prentice hall", "ref_id": "b23", "title": "A textbook of translation", "year": "1988" }, { "authors": "Yoshihiko Nitta", "journal": "Future Generation Computer Systems", "ref_id": "b24", "title": "Problems of machine translation systems: Effect of cultural differences on sentence structure", "year": "1986" }, { "authors": "Nicholas Ostler", "journal": "", "ref_id": "b25", "title": "the limits of my language mean the limits of my world\": is machine translation a cultural threat to anyone", "year": "1999" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Ulrika Persson", "journal": "", "ref_id": "b27", "title": "Culture-specific items: Translation procedures for a text about australian and new zealand children's literature", "year": "2015" }, { "authors": "Denis Peskov; Viktor Hangya; Jordan Boyd-Graber; Alexander Fraser", "journal": 
"Association for Computational Linguistics", "ref_id": "b28", "title": "Adapting entities across languages and cultures", "year": "2021" }, { "authors": "Rafael Rafailov; Archit Sharma; Eric Mitchell; Stefano Ermon; Christopher D Manning; Chelsea Finn", "journal": "", "ref_id": "b29", "title": "Direct preference optimization: Your language model is secretly a reward model", "year": "2023" }, { "authors": "Ricardo Rei; Craig Stewart; Ana C Farinha; Alon Lavie", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "COMET: A neural framework for MT evaluation", "year": "2020" }, { "authors": "Parker Riley; Timothy Dozat; Jan A Botha; Xavier Garcia; Dan Garrette; Jason Riesa; Orhan Firat; Noah Constant", "journal": "", "ref_id": "b31", "title": "Frmt: A benchmark for few-shot region-aware machine translation", "year": "2022" }, { "authors": "Michael Ringgaard; Rahul Gupta; Fernando Cn Pereira", "journal": "", "ref_id": "b32", "title": "Sling: A framework for frame semantic parsing", "year": "2017" }, { "authors": "Le Teven; Angela Scao; Christopher Fan; Ellie Akiki; Suzana Pavlick; Daniel Ilić; Roman Hesslow; Alexandra Castagné; François Sasha Luccioni; Matthias Yvon; Gallé", "journal": "", "ref_id": "b33", "title": "Bloom: A 176bparameter open-access multilingual language model", "year": "2022" }, { "authors": "Thibault Sellam; Dipanjan Das; Ankur Parikh", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "BLEURT: Learning robust metrics for text generation", "year": "2020" }, { "authors": "Rico Sennrich; Martin Volk", "journal": "Association for Machine Translation in the Americas", "ref_id": "b35", "title": "MT-based sentence alignment for OCR-generated parallel texts", "year": "2010" }, { "authors": "Seongjin Shin; Sang-Woo Lee; Hwijeen Ahn; Sungdong Kim; Hyoungseok Kim; Boseop Kim; Kyunghyun Cho; Gichang Lee; Woomyoung Park; Jung-Woo Ha; Nako Sung", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "On the effect of pretraining corpora on in-context learning by a large-scale language model", "year": "2022" }, { "authors": "Marta R Nllb Team; James Costa-Jussà; Onur Cross; Maha Çelebi; Kenneth Elbayad; Kevin Heafield; Elahe Heffernan; Janice Kalbassi; Daniel Lam; Jean Licht; Anna Maillard; Skyler Sun; Guillaume Wang; Al Wenzek; Bapi Youngblood; Loic Akula; Gabriel Mejia Barrault; Prangthip Gonzalez; John Hansanti; Semarley Hoffman; Jarrett; Ram Kaushik; Dirk Sadagopan; Shannon Rowe; Chau Spruit; Pierre Tran; Necip Andrews; Shruti Fazil Ayan; Sergey Bhosale; Angela Edunov; Cynthia Fan; Vedanuj Gao; Francisco Goswami; Philipp Guzmán; Alexandre Koehn; Christophe Mourachko; Safiyyah Ropers; Holger Saleem; Jeff Schwenk; Wang", "journal": "", "ref_id": "b37", "title": "No language left behind: Scaling humancentered machine translation", "year": "2022" }, { "authors": "Jörg Tiedemann", "journal": "Baltic Journal of Modern Computing", "ref_id": "b38", "title": "Opus-parallel corpora for everyone", "year": "2016" }, { "authors": "Xing Wang; Zhaopeng Tu; Deyi Xiong; Min Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Translating phrases in neural machine translation", "year": "2017" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b40", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Ellen Woolford", "journal": "Linguistic 
inquiry", "ref_id": "b41", "title": "Bilingual code-switching and syntactic theory", "year": "1983" }, { "authors": "Yang Wu; Yanyan Zhao; Zhongyang Li; Bing Qin; Kai Xiong", "journal": "", "ref_id": "b42", "title": "Improving cross-task generalization with step-by-step instructions", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 452.63, 758.17, 72.51, 13.23 ], "formula_id": "formula_0", "formula_text": "PSR(t, S)(1)" }, { "formula_coordinates": [ 4, 98.06, 112.9, 191.81, 32.96 ], "formula_id": "formula_1", "formula_text": "PSR(t, S) = max s∈P (1 -d(t, s)) × 100 (2) P = {S i:j | 0 ≤ i ≤ j < |S|} (3)" } ]
10.18653/v1/2021.naacl-main.46
2023-11-15
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b24", "b22", "b36", "b13", "b23", "b11", "b6", "b31", "b0" ], "table_ref": [], "text": "Open-Retrieval Question Answering (ORQA) delivers promising performance for informationseeking question answering in about 20 languages (Asai et al., 2021b(Asai et al., , 2022;;Muller et al., 2022). ORQA models typically consist of a retriever that retrieves documents in a large corpus, followed by a generator that generates a short answer based on the top-ranked documents.\nRecent work in ORQA reached a new state of the art by not only retrieving documents in the same language as the query but by also retrieving passages cross-lingually, in additional languages (Asai et al., 2021b). This approach is particularly beneficial for languages with limited online written content (Kornai, 2013;Valentim et al., 2021), potentially allowing users to access information that may not be available in their language.\nORQA models are typically evaluated with string-matching metrics (e.g., Exact-Match) based on extensive collections of question and answer pairs in multiple languages (Clark et al., 2020;Longpre et al., 2021). However, these metrics are limited in three ways. First, they are inherently hard to scale to real-world applications, as they require the collection of gold answers for all the queries. Second, in some cases, the answer can be correct without any overlap with a gold reference (Bulian et al., 2022). Third, short answers are usually not enough to provide a trustworthy answer, and users may prefer to access the underlying source document. To address this last challenge, Bohnet et al. (2022) framed a new task called Attributed Question Answering (AQA). Given a query, AQA consists of predicting a short answer along with a supporting document retrieved from a large corpus (e.g. Wikipedia).\nThis work is the first study on Attributed Question Answering in the cross-lingual setting. 2 Mea-Figure 1: Attribution scenarios for cross-lingual Open-Retrieval Question Answering (XORQA). Given a query in a source language, a retrieval model (MDPR) retrieves source language and cross-lingual documents, which are then used by a generation model, MGEN, to produce a source language answer. For S1, in-language annotators assess attribution directly in the user's language while for S2, annotators validate attribution in English. We collect data for both scenarios in Bengali, Finnish, Japanese, Russian and Telugu.\nsuring attribution (Rashkin et al., 2021) in the crosslingual setting is more complex than in the monolingual case. Indeed, in this case, the document supporting the generated answer may be in a language different from the query and answer. Hence, attribution can be defined in various ways depending on the query, document, and answer languages. In this work, we introduce two attribution scenarios, namely (S1) in-language attribution for which the attribution is measured in the language of the query, and (S2) in-English attribution for which attribution is measured in English. We note that both scenarios may require translating a portion of the documents, the query, or the answer. We illustrate these scenarios in Figure 1.\nBased on this framework, we first measure the attribution level of CORA (Asai et al., 2021b), a stateof-the-art cross-lingual ORQA system. We collect data in 5 languages: Bengali, Finnish, Japanese, Russian, and Telugu. 
To our surprise, a large portion of generated answers was found not attributable to any retrieved passage. For instance, in Japanese, up to 47% of answers exactly matching the gold reference are not attributable. This poor attribution may hurt the trust into our QA systems and limit their deployment.\nTo improve the attribution level of cross-lingual QA systems, we experiment with a wide range of attribution detection models. We show that PaLM 2 in any language, possibly in a language different from the query, as a cross-lingual QA system. (Anil et al., 2023) outperforms all the other models despite being fine-tuned on a very small sample of our collected data (250 examples). This result shows the potential of using large language models to create state-of-the-art cross-lingual attribution models using very little annotated data, allowing them to be inexpensively created for use in many of the languages of the world. Additionally, we find that for Bengali, Finnish, Japanese and Russian, a T5 model fine-tuned on a large natural language inference corpora reaches very high accuracy compared to the baselines.\nOur analysis shows that PaLM 2 can detect more than 86% of attributed answers that are not exactly matching the gold reference, showing that it is a useful alternative to exact match evaluation. These answers may be answers that are not captured by the gold references but that are alternative correct answers or they can be answers that are semantically equivalent to the gold reference but that do not overlap with the gold reference (e.g. different units). We discuss these cases in Section 4.2.\nIn summary, we make the following four contributions: (i) Our work is the first to study attribution for question answering in a cross-lingual framework. We define two attribution scenarios, inlanguage attribution and in-English Attribution and annotate approximately 10,000 examples in five languages; (ii) Using this data, we evaluate the attribution of CORA, a state-of-the-art cross-lingual QA system. We show that a large portion (7%-47%, depending on the language) of the answers are not attributable-neither to in-language passages nor to cross-language ones; (iii) We show that PaLM 2 and NLI models can accurately detect attribution in the cross-lingual setting, significantly outperforming all other baselines and reaching above 90% accuracy for all 5 languages. Moreover, our work is the first to approach it with large language models, and we show that with scarce amounts of data (250 examples), they can outperform NLI models trained on millions of examples; and (iv) Using our attribution detection model as a reranker, we show that we reach an average of +55% in attribution compared to a model with no reranking.\n2 Attribution for Cross-Lingual Question Answering" }, { "figure_ref": [], "heading": "Attribution of Generative Language Models", "publication_ref": [ "b28", "b29", "b9", "b12", "b5", "b7", "b38", "b31" ], "table_ref": [], "text": "Generative language models have made impressive progress in the past few years (Radford et al., 2019;Raffel et al., 2020;Brown et al., 2020;Chowdhery et al., 2022). They can now perform most NLP tasks with relatively high accuracy in zero-shot and few-shot settings. However, generating text without reference to human-written trusted sources can be harmful in the real world. Indeed, even the largest models may assign a high probability to false and potentially harmful utterances (Bender et al., 2021;Bommasani et al., 2021;Weidinger et al., 2022). 
To overcome this challenge, Rashkin et al. (2021) introduced the Attributable to Identified Sources (AIS), a human evaluation framework for identifying whether a source document supports the generated text, or in other words, whether the generations can be attributed to a given document." }, { "figure_ref": [], "heading": "Attributed Question Answering", "publication_ref": [ "b6", "b6", "b24" ], "table_ref": [], "text": "The need for attribution is particularly vivid for information-seeking use cases. To address this need, Bohnet et al. (2022) defined the Attributed Question Answering (AQA) task. Given a query q and a large corpus of text C (e.g. Wikipedia), AQA consists of predicting an answer a and a passage p ∈ C to attribute the predicted answer.\n(AQA) (q, C) -→ (a, p) (1) Bohnet et al. (2022) experimented with the AQA task where the questions, answers, and passages were in English. Our work is the first to study this task in a cross-lingual framework.\nWe build upon previous work that showed that for many languages (e.g., Japanese), crosslingual QA systems outperform monolingual systems (Asai et al., 2021b;Muller et al., 2022). In this setting, given a query in a language L, the goal is to generate an answer a using evidence passages from a multilingual corpus C." }, { "figure_ref": [], "heading": "2.3", "publication_ref": [ "b2", "b14", "b20", "b39" ], "table_ref": [], "text": "Modeling Attributed Cross-Lingual QA Asai et al. (2022) showed that the best systemsaccording to the Exact-Match metric-for question answering in languages other than English are based on a cross-lingual open-retrieval (XORQA) pipeline. We thus model attributed cross-lingual QA with CORA (Asai et al., 2021b), a state-of-theart XORQA model. Figure 1 (left-panel) illustrates a typical XORQA pipeline.\nCORA consists of two components: a multilingual dense retriever (MDPR) which is based on mBERT (Devlin et al., 2019) fine-tuned for dense passage retrieval (Karpukhin et al., 2020), and a multilingual generator (MGEN) based on mT5-Base (Xue et al., 2021) fine-tuned for question answering. Given a query, MDPR ranks all the passages from Wikipedia regardless of their language. In practice, most of the top-ranked passages are either in the same language as the query or in English (we report the language distribution in Table 8 in the Appendix). Then, the top passages are fed to MGEN, which generates the answer. Depending on the languages (and their associated subword tokenization), the average number of passages varies between 5 (for Bengali) and 10 (for Japanese).\nCORA is designed to generate a short answer using multiple passages. To use CORA for AQA, we must select a single passage supporting the answer. In this work, we consider the passages that have been ranked highest by MDPR and fed to the generator as our pool of potential attribution passages. We measure and report the attribution level of answers and passages (a, p) by taking the TOP-1-retrieved passage by MDPR as well as ALL the passages retrieved and fed to the generator. Finally, we report in Section 5.3 the attribution level of answers and passages (a, p) after reranking the top passages with our NLI-based attribution detection model.\nRecall that the selected passage can be in any language but is typically in English or in the query language (cf. Table 8). This leads us to define two attribution evaluation scenarios." 
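As a rough sketch (not the authors' code), the attributed question answering setting discussed in this section amounts to mapping a query and a corpus to an answer together with one supporting passage. The retriever, generator, and optional attribution scorer are abstracted as callables here; how each component is instantiated is a separate modeling choice.

```python
# Sketch of the attributed QA mapping (q, C) -> (a, p); all components are
# abstract callables supplied by the caller, an assumption for illustration.
from typing import Callable, Optional

def attributed_qa(query: str,
                  retrieve: Callable[[str], list[tuple[str, float]]],
                  generate: Callable[[str, list[str]], str],
                  attribution_score: Optional[Callable[[str, str, str], float]] = None,
                  top_k: int = 10) -> tuple[str, str]:
    """Return an answer and the single passage chosen to support it."""
    ranked = sorted(retrieve(query), key=lambda item: item[1], reverse=True)[:top_k]
    passages = [passage for passage, _ in ranked]
    answer = generate(query, passages)
    if attribution_score is None:
        chosen = passages[0]  # attribute the answer to the top-ranked passage
    else:
        # Rerank the candidate passages with an attribution detector.
        chosen = max(passages, key=lambda p: attribution_score(query, answer, p))
    return answer, chosen
```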
}, { "figure_ref": [], "heading": "Cross-Lingual QA Attribution Evaluation", "publication_ref": [ "b6", "b30", "b26" ], "table_ref": [], "text": "We introduce two attribution evaluation scenarios illustrated in Figure 1.\n(S1) In-Language Attribution Evaluation In this scenario, attribution is assessed in the language of the query, while the query, answer, and passage (q, a, p) are in the same language. From an application perspective, this scenario evaluates directly what a potential user of an attributed QA system would experience by receiving the answer to their question and the attributed source document in their language. As illustrated in Figure 1, this scenario involves automatically translating the portion of the passages retrieved in languages different from the query into the query language.\n(S2) In-English Attribution Evaluation In this scenario, the query, answer, and passage (q, a, p) are all in English during human annotation; we automatically translate the query and answer into English along with the passages retrieved in languages other than English (cf. Figure 1). We implement this scenario as it favors scalability, since collecting data in English is usually easier than in other languages due to the availability of raters. Moreover, a significant portion of the passages retrieved by cross-lingual QA systems are in English, so assessing attribution directly in English is most straightforward for these passages. For polyglot users, this scenario is also appealing as they may understand English and be interested in accessing the attributed document in English along with the answer in their language. 3 For both scenarios, translation is performed automatically using the Google Translate API. 4 Evaluation Metric For both scenarios, we collect evaluation data to assess if a predicted answer can be attributed to a retrieved passage. Following Bohnet et al. (2022), we measure the accuracy of a system by counting the proportion of answers with an attributed passage. We refer to this score as AIS.\nWe note that this evaluation method fundamentally differs from traditional QA system metrics, which are usually based on string-matching methods, e.g., Exact-Match (EM; Rajpurkar et al., 2016;Petroni et al., 2021). Indeed, given a query, answer 1: Each example (q, a, p) is annotated by 3 independent raters. We report the agreement with consensus which measures the proportion of examples that agrees with the majority vote. We report statistics on the in-language attribution scenario (S1) and in the in-English attribution scenario (S2). We also report in the ratings collected in (S1) and (S2) ((S1)̸ =(S2)) and the disagreement on the portion of examples that have been translated from English ((S1)̸ =(S2) TR.).\nand passage triplet (q, a, p), attribution evaluation measures the portion of answers a attributed to p.\nIn contrast, Exact-Match requires a gold answer ã to compare to the predicted answer a.\nIn Section 4.2, we show how Exact-Match differs from attribution. We show that some correct answers according to exact-match are not attributed to any retrieved passage, while some non-exactlymatching answers are legitimate answers attributed to reference passages." }, { "figure_ref": [], "heading": "The XOR-AttriQA Dataset", "publication_ref": [ "b31" ], "table_ref": [], "text": "To the best of our knowledge, our work is the first to study the problem of attribution in the crosslingual setting. To make this study feasible, we collect the first multilingual attribution dataset. 
We use the attribution evaluation framework defined by Rashkin et al. (2021). We hire Bengali, Finnish, Japanese, Russian, and Telugu-speaking raters for the in-language scenario (S1) and English-speaking raters for the in-English scenario (S2). Our analysis is based on the XOR-TyDiQA dataset (Asai et al., 2021a) in Bengali, Finnish, Japanese, Russian and Telugu. To limit cost, we randomly sample about 50% of the validation set except for Bengali and Telugu (S1) annotations for which we take the entire set. We retrieve the passages and predict the answers using the CORA system. We only evaluate the passages that are fed to the generator. For each (query, answer, passage) triplet, we ask three raters to answer \"Is the answer attributed to the passage?\". 5 To ensure the quality of the data collected we report in Table 1 the inter-annotator agreement (IAA). The agreement is above 90% for both the in-language scenario and the In-English scenario for all languages except Japanese. Appendix B provides more detail on the annotation process as well as the agreement with expert annotations on a small sample of the data (cf. Table 9). For each example, we assign the attribution label based on the majority vote of the three raters.\nIn-English vs. In-Language Attribution As reported in Table 1, the inter-annotator agreement observed is similar whether we collect the data in the in-English scenario (S1) compared to the in-language scenario (S2). The only large differences are observed for Telugu, for which the IAA is 8 points above when we collect the data in Telugu.\nIn consequence, we will use the annotation from the in-language scenario (S1) as the gold labels to evaluate all our models (cf. 4 and 5). Indeed, (S1) evaluates what a potential user may experience. So given the fact that (S1) is as good (or better) as (S2) concerning data quality, we decide to select the data from (S1).\nImpact of Translation on Attribution Assessment Both scenarios require translating a portion of the data automatically (cf. Fig. 1). We hypothesize that translating passages from English to the user language may lead, in some cases, to losing attribution. Indeed, assuming that a passage in English supports the answer, we can easily imagine cases in which translation errors could cause the translated passage not to carry the information that supports the answer. However, the disagreement between the annotation in (S1) and (S2) is not higher when we look at passages that have been translated compared to passages that have not for 4/5 languages (cf. comparison between row (S1)̸ =(S2) and row (S1)̸ =(S2) TR.) as reported in Table 1). In addition, after manually reviewing the data, we do not find any cases where translation errors cause disagreement.\n2021) for an exhaustive definition of interpretability." }, { "figure_ref": [], "heading": "Raters' Demographic and Cultural Background", "publication_ref": [], "table_ref": [], "text": "Even though translating passages does not lead to higher disagreement compared to the original passages, we do observe disagreement between (S1) and (S2) (between 4.1% and and 10.5% as reported in Table 1). We partially explain the disagreement by the demographic and cultural context of the raters. For instance, English speakers rated the example <Query: How many countries are there in the United States of America? Answer: 50> as attributed to the passage \"The United States of America (USA), commonly known as the United States (U.S. 
or US) or America, is a country composed of 50 states, a federal district, five major self-governing territories, and various possessions.\" while Telugu speakers did not do the same for the example translated into Telugu. We hypothesize that a familiarity with the USA and the concept of states made the raters understand the question more loosely and accept the \"50 states\" mention as supportive of the answer. We leave for future work the careful quantification of this phenomenon." }, { "figure_ref": [], "heading": "Attribution Evaluation of CORA", "publication_ref": [], "table_ref": [], "text": "Based on our newly collected XOR-AttriQA dataset, we now evaluate the attribution level of CORA." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Lack of Attribution of XORQA Predictions", "publication_ref": [ "b27", "b32" ], "table_ref": [ "tab_1" ], "text": "We start by focusing on the subset of answers that match the gold reference based on Exact Match (EM). We hypothesize that these answers are attributable in most cases and that non-attributable answers should be the exception, not the rule. Indeed, by design, CORA uses retrieved passages to generate answers. Intuitively, it is hard to conceive how the model could generate a correct answer without supporting passages. However, it is known that language models \"memorize\" knowledge in their parameters (Petroni et al., 2019;Roberts et al., 2020), which could enable this ability. We report in Table 2 the proportion of answers that match the gold reference and that are attributable to a retrieved passage. To our surprise, we find a very large number of non-attributable answers. For Japanese, only 53.1% of the answers are attributed to at least one passage.\nWe provide examples of non-attributed exactlymatching answers in Figure 2. We find that these non-attributed answers exactly-matching the refer- Burmese Gold Answer: Burmese Passage:\nThe Burmese language or the language of Myanmar is a language of the Lolo-Burmese sub-branch of the Tibeto-Burmese branch of the Sino-Tibetan language family. Exactly when the Burmese people came to Myanmar cannot be said. However, the oldest religious texts written in Burmese date back to the 10th century AD. Standard Burmese is thought to have originated from a dialect of the lower valleys of central Myanmar. Most people in present-day Myanmar speak some regional dialect of this Burmese language. Burmese was influenced first by Pali and then by Mon (12th-13th centuries). Then, from the 16th to the 19th century, the language came into contact with various European languages, such as Portuguese, Dutch, English and French. this ence are of various types. Some of these answers seem to be random guesses from the generator that happen to be matching the gold reference regardless of the quality of the retrieved passages. This is usually the case for Yes/No answers. Some answers are correct and seem to be using the information in the passage provided to the generator. However, in most cases, the information provided in the passage is incomplete to support the answer. This is the case for the second example in Figure 2: \"What is the name of the mother tongue of the Marma people?\" was answered with \"Burmese\". 
While the passage contains relevant information about the Burmese language, it does not draw a connection with the \"Marma people\" mentioned in the question.\nThese results show-without ambiguity-that ORQA systems, even when they generate correct answers, do not always provide a relevant source passage to support the generated answer. In other words, this means that for a significant portion of answers, ORQA systems are right but without any evidence-they are right for the wrong reasons." }, { "figure_ref": [], "heading": "Analysis of CORA's Attribution Level", "publication_ref": [], "table_ref": [ "tab_2", "tab_2", "tab_2" ], "text": "We now analyze the attribution level of all answers predicted by CORA, not only the correct ones. We report in Table 3 the attribution level of CORA. Depending on the language, between 11.8% (for JA) and 38.7% (for FI) of answers are attributed to the TOP-1 passage retrieved by MDPR. In addition, for all languages, we find that between 31.7-50.9% of answers are attributed to at least one passage in the ones provided to the generator (ALL).\nImpact of Cross-Language Attribution One of the key ingredients of the performance of CORA is its ability to use passages cross-lingually (mainly in English) (Asai et al., 2021b). We now look at how often the generated answers are attributable to these cross-lingually retrieved passages. We find that between 0.3% and 4.0% of answers in Telugu and Finnish respectively can be attributed to an English passage (while not being attributed to any passage in the same language as the query; cf. EN row in Table 3).\nAttribution vs. Exact-Match In Section 4.1, we found that a large portion of answers exactly matching the gold reference are not attributable. We now look at the answers that are not exactly matching the reference (cf. column non-EM in Table 3). We hypothesize that attribution can potentially complement string-matching metrics and find answers that otherwise would be considered incorrect. In Telugu, we find that 13.2% of such answers are attributed to the TOP-1 passage. We provide such examples in the Appendix in Figure 3. Some answers are semantically equivalent to the gold reference but are spelled differently or employ different measuring units (e.g., \"crore\" used in Telugu vs. \"ten million\"). Some answers are semantically different for the gold reference but are attributable to a passage (e.g., the liver as the largest organ)." }, { "figure_ref": [], "heading": "Attribution Detection for XORQA", "publication_ref": [ "b17", "b0" ], "table_ref": [], "text": "So far, we found that state-of-the-art cross-lingual question answering systems lack attribution. We showed that a large portion of answers are not attributed to any passages, by collecting a large collection of attribution data in five languages.\nHowever, collecting attribution data is costly and time consuming. In a deployment setting, it would simply be infeasible to annotate every (query, answer, passage) triplet. In order to address this issue, we explore automatic attribution detection techniques. We build upon previous work on grounding and factual consistency in English (Honovich et al., 2022). We also experiment with PaLM 2 (Anil et al., 2023) a new state-of-the-art multilingual large language model (LLM) in few-shot and scarce data (250 examples) settings." 
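For reference, the attribution levels reported above (Tables 2 and 3) reduce to simple counting over the rated (query, answer, passage) triplets; the sketch below assumes an illustrative record layout rather than the released data schema.

```python
from typing import Dict, List

def attribution_levels(examples: List[Dict]) -> Dict[str, float]:
    """Compute AIS statistics from rated (query, answer, passage) triplets.

    Each record is assumed to look like
        {"exact_match": bool,         # answer string-matches the gold reference
         "passage_ais": [bool, ...]}  # majority-vote rating per passage, in retriever order
    (illustrative field names, not the released data schema).
    """
    def ais(subset: List[Dict], top1_only: bool) -> float:
        if not subset:
            return float("nan")
        hits = sum(
            any(ex["passage_ais"][:1] if top1_only else ex["passage_ais"])
            for ex in subset
        )
        return 100.0 * hits / len(subset)

    em = [ex for ex in examples if ex["exact_match"]]
    non_em = [ex for ex in examples if not ex["exact_match"]]
    return {
        "AIS@top1": ais(examples, True),
        "AIS@all": ais(examples, False),
        "AIS@all, EM answers only": ais(em, False),        # cf. Table 2
        "AIS@top1, non-EM answers only": ais(non_em, True),  # cf. Table 3, non-EM column
    }
```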
}, { "figure_ref": [], "heading": "Attribution Detection Models", "publication_ref": [ "b17", "b13", "b16", "b39", "b17", "b6", "b17", "b39", "b0", "b37", "b39", "b0", "b18", "b34" ], "table_ref": [], "text": "Given a query q, a short answer a and a passage candidate p, we frame attribution detection as a binary classification task:\n(q, a, p) - → ais(2)\nwith ais ∈ {0, 1}. 1 corresponds to the attributed class (i.e., the answer is attributed to the passage) and 0 corresponds to the non-attributed class. We note that query and answers are always in the same language (in Bengali, Finnish, Japanese, Russian or Telugu), while the passage may be in a different language (mainly English). Following Honovich et al. (2022), we model this task by prompting the models as follows: premise: \"$p\" hypothesis: the answer to the question \"$q\" is \"$a\" where p, q, and a are inserted appropriately.\nMT5-QA We use the training splits of the TyDi QA dataset (Clark et al., 2020) to train the attribution detection model. We employ the query, passage, answer triplets from TyDi QA as our attributed examples (our positive class). For nonattributed examples, we mine negative passages as follows: given a query, we start with the entire Wikipedia document from TyDi QA that answers the query. We sample from this document 10 passages that are different from the positive passage (i.e. the passage that answers the query). This technique provides strong negative passages by providing passages that are topically closely related to the positive passage but that do not answer the question. It was used successfully by Garg et al. (2020). We fine-tune mT5-XXL (Xue et al., 2021) on the concatenation of the training data in English, Bengali, Finnish, Japanese, Russian and Telugu.\n(M)T5-NLI Following Honovich et al. (2022) who found that NLI-fine-tuned T5 is accurate for factual consistency detection, we experiment with several English and multilingual NLI models. Similar to Bohnet et al. (2022), we make use of the best English NLI model from Honovich et al. (2022), a T5-11B model fine-tuned on a mixture of natural language inference datasets, fact verification, and paraphrase detection datasets. 6 We experiment with it in the translate-test setting (noted T5-NLI TRANSLATE-TEST) for which we translate the queries, passages, and answers to English. 7 To model attribution detection in multiple languages, we fine-tuned the mT5-XXL model (Xue et al., 2021) on translations of the mixture of the NLI datasets to the non-English languages (noted MT5-NLI TRANSLATE-TRAIN). To better model the portion of passages in a language different from the query, we also fine-tune the model by adding examples for which only the hypothesis has been translated while the premise is kept in English (noted MT5-NLI X-TRANSLATE-TRAIN).\nPALM 2 FEW SHOT To avoid costly fine-tuning, we experiment with in-context learning using the 83.7 / 86.8 78.8 / 85.3 71.7 / 80.4 81.9 / 88.0 84.7 / 88.6 Table 4: Performance of Attribution detection models. We report the Accuracy / ROC AUC scores on the XOR-AttriQA dataset. The Accuracy is computed with the probability threshold that maximizes it on an independent set. For each language, the best Accuracy / ROC AUC scores are bolded and the second best scores are underlined.\nPaLM 2 large language model (Anil et al., 2023). We use the Small version and evaluate the model after prompting the model with 4-shots with and without chain-of-thought prompting (Wei et al., 2022). 
Each language is evaluated with its own prompts, and two negative examples and two positive examples are sampled for each language. For each pair, one passage is chosen to be in-language while the other is chosen to be in-English. Chain-of-thought is done by manually writing a rationale that explains the attribution (or lack of attribution) of a given answer in English.\nMT5 / PALM 2 -ATTRIBUTION Finally, we experiment with fine-tuning directly on a small sample of the attribution data we collected. We sample 250 examples in the 5 languages and finetune mT5-XXL (Xue et al., 2021) and PaLM 2 Small (Anil et al., 2023). For mT5 we fine-tune the entire model, while for PaLM 2, we both fine-tune on the whole dataset and also fine-tune with Low-Rank Adaptation (LoRA) (Hu et al., 2021) to avoid overfitting and reduce fine-tuning cost. For these experiments, we use the same constant learning rate of 0.0001, dropout rate (Srivastava et al., 2014) of 0.1, and batch size of 128 for tuning both mT5 and PaLM 2. For fine-tuning with LoRA, we used a learning rate of 0.00005 and tuned the model over ranks in {4, 16, 64, 256}. For all models, we used the validation set for checkpoint selection." }, { "figure_ref": [], "heading": "STRING-MATCH", "publication_ref": [], "table_ref": [], "text": "We define a simple baseline. For answers that are not \"Yes\"/\"No\", if the string a is included in the passage p, we predict 1, otherwise 0. This means that we consider the answer to be attributed to the passage if it is included in it. For Yes/No answers, we predict 0 (the majority class). We also use it after translating the query, answer and passage to English. 8" }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b15" ], "table_ref": [ "tab_4" ], "text": "We report the accuracy and ROC-AUC (Flach et al., 2011) scores in Table 4. We compute the prediction with a decision threshold tuned on an independent validation dataset on which we measure the accuracy of the model. PaLM 2 outperforms all the other models despite being fine-tuned on a very small sample of data, which is encouraging as it shows we can leverage LLMs for crosslingual attribution with very little annotated data that is expensive to produce. We also found fewshot performance to also have strong results, that could probably be improved with more shots and leveraging larger LLMs. NLI fine-tuned models outperform MT5-QA and STRING-MATCH for all the languages in the translate-test setting. Finally, we report in Table 5 the portion of attributed answers not matching the gold references that the best AIS detection model accurately predicts. We find that PaLM 2 accurately predicts more than 86% of these answers (between 86.3 and 96.4% depending on the language). This shows the potential of using attribution detection to expand the space of legitimate answers beyond relying only on string-matching metrics. " }, { "figure_ref": [], "heading": "NLI Model for Reranking", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "Using our best T5-based attribution detection model (T5-NLI TRANSLATE-TEST), we now come back to our original goal of improving the attribution level of our cross-lingual question answering system. We leave for future work the use of PaLM 2 for reranking. Given our pool of candidate passages, we use our attribution detection model as a reranker and select the passage which is the most likely to attribute the answer according to the model. We report the reranking attribution score in Table 6. 
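A minimal sketch of this reranking step is given below, reusing the premise/hypothesis prompt format described above; entail_prob is a placeholder for whichever attribution detector is plugged in (here the T5-NLI model), not its actual API.

```python
from typing import Callable, List, Tuple

def attribution_prompt(passage: str, query: str, answer: str) -> str:
    # Prompt format used for attribution detection: the passage is the premise,
    # and the (query, answer) pair is rewritten as a declarative hypothesis.
    return (f'premise: "{passage}" '
            f'hypothesis: the answer to the question "{query}" is "{answer}"')

def rerank_by_attribution(
    query: str,
    answer: str,
    candidates: List[str],                 # passages fed to the generator (assumed non-empty)
    entail_prob: Callable[[str], float],   # stand-in scorer, e.g. an NLI entailment probability
) -> Tuple[str, List[float]]:
    """Score every candidate passage with the attribution detector and return the
    passage most likely to support the answer, along with all scores."""
    scores = [entail_prob(attribution_prompt(p, query, answer)) for p in candidates]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best], scores
```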
We find that our NLI model can accurately rerank the passages. For instance for Telugu, we are able to increase the top-1 performance from 23.3 to 31.7, an improvement of +30.0%. Across all the languages, reranking with our NLI model leads to am average relative increase of 55.9% across the 5 languages." }, { "figure_ref": [], "heading": "Discussion and Future Directions", "publication_ref": [ "b31", "b32", "b19" ], "table_ref": [], "text": "Language model-based NLP systems are making fast and continuous progress in generating fluent and helpful text. Despite this progress, these models still make a lot of factual errors, specifically in languages different from English. Attribution is the most promising approach in addressing this issue (Rashkin et al., 2021). Our work finds that even the best XORQA system predictions lack attribution. These results can be explained by the tendency of these models to memorize facts (Roberts et al., 2020) and to hallucinate answers (Ji et al., 2023), which are in some cases correct. This shows that we need to make progress in detecting and selecting attributed sources that support the generated answer of cross-lingual QA systems. In this work, we proposed to use a large language model (PaLM 2) and natural language inference models to detect and 8 Using translation ensures that everything is in the same language potentially improving the string-matching accuracy. rerank passages to improve the attribution-level of a state-of-the-art XORQA system.\nOur result points to two critical research directions to further make progress in informationseeking QA. First, we observed that in some languages (e.g., Telugu), cross-lingual passages contribute very moderately to the attribution level. This shows that more progress is needed in cross-lingual retriever systems. Second, we showed that stringmatching metrics based on gold references are inherently imperfect evaluation methods for QA and showed that PaLM 2 can be used to detect relevant attributed passages accurately with only small amounts of training data. This means that large language models-based attribution detection can potentially be used as evaluation metrics for QA in multiple languages. Further work is needed to design robust LLM-based metrics for cross-lingual information-seeking QA." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "By ensuring that the model predictions are supported by human-written text, attribution is one of the most promising ways to deploy NLP systems safely. In this work, we introduced and released the XOR-AttriQA dataset that includes approximately 10,000 examples in Bengali, Finnish, Japanese, Russian and Telugu. Thanks to XOR-AttriQA , we observe that state-of-the-art QA systems lack attribution in the cross-lingual setting. We showed that PaLM 2 and NLI models are promising methods to detect attributed passages in 5 typologically diverse languages for information-seeking QA. Having provided evidence for the lack of attribution in academic generative QA system, built tooling to detect and mitigate these issues, and releasing our collected attribution data in 5 languages, we hope to enable trustworthy cross-lingual QA systems to meet the information needs of people around the world." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b32" ], "table_ref": [], "text": "Our work focused on evaluating and improving the attribution level of a state-of-the-art XORQA pipeline. 
Given recent progress, LLMs are now increasingly used in a closed-book setting (Roberts et al., 2020) for question answering, i.e., without relying on any retrieved passages to answer a question. Attributing the generations of these models is therefore becoming critical. In addition to improving the attribution level of open-retrieval question answering pipeline, we hope XOR-AttriQA and the attribution detection experiments we presented will also be used to design attribution detection models for closed-book QA systems." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank the raters involved in the data collection process for their work. In addition, we want to thank Michael Collins, Dipanjan Das, Vitaly Nikolaev, Jason Riesa, and Pat Verga for the valuable discussion and feedback they provided on this project." }, { "figure_ref": [], "heading": "Contributions", "publication_ref": [], "table_ref": [], "text": "In this section, we provide more detail about the contributions of each author." }, { "figure_ref": [], "heading": "General Overview Primary Contributors Benjamin Muller, John Wieting", "publication_ref": [ "b39", "b13" ], "table_ref": [], "text": "Major Contributors Jonathan H. Clark To run inference with CORA (Asai et al., 2021b), we used the original codebase released by the authors available at https://github.com/ AkariAsai/CORA. To build attribution detection models with T5, we used the original checkpoints from (Xue et al., 2021) All our experiments are based on the XOR-TyDiQA dataset (Asai et al., 2021a) available at https://nlp.cs.washington.edu/xorqa/. We focused on Bengali, Finnish, Japanese, Russian and Telugu data. We only used the query and gold answers from XOR-TyDiQA (and ignored the gold passages for which we use a retriever). XOR-TyDiQA answers are of two types: Yes/No answers or short spans extracted from a Wikipedia passages (Clark et al., 2020). We report in Table 7 the difference in attribution between Yes/No and short span answers. We find that for most languages, Yes/No answers are less attributable compared to short answer spans. We report in Table 9 the agreement between expert raters and hired raters on a small number of examples. We find that this agreement is above 90% for all languages in the in-language scenario (S1)." }, { "figure_ref": [], "heading": "A.3 Languages Distribution of MDPR", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B.2 AIS Score", "publication_ref": [], "table_ref": [], "text": "The AIS data collection framework (Rashkin et al., 2021) consists of two annotation steps. First, the raters are shown a question and answer and asked \"Is the answer interpretable to you\". If the response is positive, the rater is shown the source passage and asked \"Is the answer attributed to the passage\". At each step, the rater is asked to answer Yes, or No or to flag the example if it is corrupted. For each question, answer, and passage triplet (q, a, p), we collect the rating of three raters (for each annotation scenarios). These three ratings are aggregated to get a single label 0 or 1 for each (q, a, p) triplet with the following criterion:\n• We only keep the examples that received at least two ratings. 
This means that we exclude examples flagged by two raters or more.\n• We assign the label \"attributable\" (1) to the triplet if the example received at least two votes to the question \"Is the answer attributed to the passage\"; otherwise, we set the label to non-attributable (0).\nThe number of examples collected is available in Table 1." }, { "figure_ref": [], "heading": "C Examples of Attribution without Exact-Match", "publication_ref": [], "table_ref": [], "text": "Query:\nWhat is the capital of Kenya? Answer Nairobi Gold answer: Nairobi Passage:\nKenya (English Republic of Kenya) The Republic of Kenya is a country in East Africa. It is bordered by Ethiopia to the north, Somalia to the northeast and Tanzania to the south. Its capital is Nairobi. Query: How many people died on average in World War II? Answer:\nSix crores\nGold answer: 70-85 millions Passage:\ncrores. The countries involved faced a kind of perfect war situation (ie, all available, regardless of military-civilian distinctions, were involved in the war in some way). As a result, all the economic, industrial and technological resources of the respective countries had to be used for war purposes. This war is known as the bloodiest in the history of the world, which caused the death of about six crore people." }, { "figure_ref": [], "heading": "Query:", "publication_ref": [], "table_ref": [], "text": "Which is the largest organ in the human body? Answer:\nthe skin Gold answer: the liver Passage:\nSkin is the largest organ in our body. It has three important layers. The skin covers the entire body and protects the internal parts. Skin is lacking at the pores. It comes in different colors. The science of skin is called 'Dermatology'. The skin mainly has two layers namely epidermis and dermis. The epidermis is formed from the epidermis. Hairs and sweat glands belong to the epidermis. Nails are also formed from it. Africans are black. Northern Europeans are white. The people of some other parts of Asia are in between the two. The cause of these color differences is the pigment called 'melanin' in the skin. low melanin is called 3). We display a single passage fed to the generator, to which the answer is attributed." } ]
Trustworthy answer content is abundant in many high-resource languages and is instantly accessible through question answering systems, yet this content can be hard to access for those that do not speak these languages. The leap forward in cross-lingual modeling quality offered by generative language models offers much promise, yet their raw generations often fall short in factuality. To improve trustworthiness in these systems, a promising direction is to attribute the answer to a retrieved source, possibly in a content-rich language different from the query. Our work is the first to study attribution for cross-lingual question answering. First, we introduce the XOR-AttriQA dataset to assess the attribution level of a state-of-the-art cross-lingual question answering (QA) system in 5 languages. To our surprise, we find that a substantial portion of the answers is not attributable to any retrieved passages (up to 47% of answers exactly matching a gold reference) despite the system being able to attend directly to the retrieved text. Second, to address this poor attribution level, we experiment with a wide range of attribution detection techniques. We find that Natural Language Inference models and PaLM 2 fine-tuned on a very small amount of attribution data can accurately detect attribution. With these models, we improve the attribution level of a cross-lingual QA system. Overall, we show that current academic generative cross-lingual QA systems have substantial shortcomings in attribution and we build tooling to mitigate these issues. The XOR-AttriQA dataset is available at https://github.com/google-research/google-research/tree/master/xor_attriqa. XOR-AttriQA includes approximately 10,000 annotated examples to foster research in the modeling and evaluation of attribution in cross-lingual settings.
Evaluating and Modeling Attribution for Cross-Lingual Question Answering
[ { "figure_caption": "Figure 2 :2Figure 2: Examples of CORA correct answers not attributed to any passage. We illustrate how the model can be guided to generate correct answers which are not fully supported by the passage.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "% of answers exactly-matching the gold answer attributed to at least one passage fed to the MGEN.", "figure_data": "% Attributable predictions (AIS) of EMBNFIJARUTE67.380.453.167.593.1", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "% of attributed answers to the TOP-1/ALL passages fed to MGEN. We report AIS (cf. sec. 2.4) on XOR-TyDiQA validation split. of EM corresponds to the % of attributable answers among the Exact-Matched answers. non-EM corresponds to the % of attributable answers among the non-Exact-Matched answers (i.e. that differ from the gold answer). We report the attribution-level considering passages in any languages (row ANY), only the English passages as our candidates (row EN), only the in-language passages (row LANG.) as our candidates.", "figure_data": "BNFIJARUTEAISof EMnon-EMAISof EMnon-EMAISof EMnon-EMAISof EMnon-EMAISof EMnon-EMANY27.9/45.6 41.8/67.3 25.2/40.5 38.7/50.9 67.9/80.4 27.1/39.6 11.8/37.3 22.4/53.1 8.2/23.7 27.5/40.9 45.0/67.5 24.8/37.9 23.3/31.7 72.4/93.1 13.2/19.2LANG 25.0/40.2 41.8/65.5 22.3/36.1 36.5/46.0 64.3/75.0 25.7/34.7 11.8/34.8 22.4/51.0 8.2/20.6 26.4/39.8 45.0/67.5 23.4/36.6 22.9/31.4 69.0/93.1 13.2/18.9EN2.3/3.30.0/0.02.6/3.81.0/4.03.6/5.30.0/3.50.0/2.00.0/0.00.0/3.11.1/1.10.0/0.01.4/1.40.3/0.31.7/0.00.0/0.3Passage:マルクス主義(マルクスしゅぎ、)とは、カール・マルクスとフリードリヒ・エンゲルスによって展開された思想をベースとして確立された社会主義思想体系の一つである。しばしば科学的社会主義(かがくてきしゃかいしゅぎ)とも言われる。マルクス主義は、資本を社会の共有財産に変えることによって、労働者が", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "% of attributed non-EM examples that are accurately detected by PALM 2.", "figure_data": "BNFIJARUTEAcc. 88.2 94.5 86.3 96.4 91.6", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": % of attributed answers based on the top-1MDPR-retrieved passage, ALL the passages retrievedfed to the generator, and the TOP-1 reranked passage(T5-NLI reranked) with T5-NLI-TRANSLATE-TEST,our best NLI fine-tuned model.", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
Benjamin Muller; John Wieting; Jonathan H Clark; Tom Kwiatkowski; Sebastian Ruder; Livio Baldini Soares; Roee Aharoni; Jonathan Herzig; Xinyi Wang
[ { "authors": "Rohan Anil; Andrew M Dai; Orhan Firat; Melvin Johnson; Dmitry Lepikhin; Alexandre Passos; Siamak Shakeri; Emanuel Taropa; Paige Bailey; Zhifeng Chen", "journal": "", "ref_id": "b0", "title": "Palm 2 technical report", "year": "2023" }, { "authors": "Akari Asai; Jungo Kasai; Jonathan Clark; Kenton Lee; Eunsol Choi; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "XOR QA: Cross-lingual open-retrieval question answering", "year": "2021" }, { "authors": "Akari Asai; Shayne Longpre; Jungo Kasai; Chia-Hsuan Lee; Rui Zhang; Junjie Hu; Ikuya Yamada; Jonathan H Clark; Eunsol Choi", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "MIA 2022 shared task: Evaluating cross-lingual openretrieval question answering for 16 diverse languages", "year": "2022" }, { "authors": "Akari Asai; Xinyan Yu; Jungo Kasai; Hanna Hajishirzi", "journal": "", "ref_id": "b3", "title": "One question answering model for many languages with cross-lingual dense passage retrieval", "year": "2021" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b4", "title": "", "year": "" }, { "authors": "Emily M Bender; Timnit Gebru; Angelina Mcmillan-Major; Shmargaret Shmitchell", "journal": "Association for Computing Machinery", "ref_id": "b5", "title": "On the dangers of stochastic parrots: Can language models be too big?", "year": "2021" }, { "authors": "Bernd Bohnet; Pat Vinh Q Tran; Roee Verga; Daniel Aharoni; Andor; Baldini Livio; Jacob Soares; Kuzman Eisenstein; Jonathan Ganchev; Kai Herzig; Hui", "journal": "", "ref_id": "b6", "title": "Attributed question answering: Evaluation and modeling for attributed large language models", "year": "2022" }, { "authors": "Rishi Bommasani; Drew A Hudson; Ehsan Adeli; Russ Altman; Simran Arora; Sydney Von Arx; Jeannette Michael S Bernstein; Antoine Bohg; Emma Bosselut; Brunskill", "journal": "", "ref_id": "b7", "title": "On the opportunities and risks of foundation models", "year": "2021" }, { "authors": "R Samuel; Gabor Bowman; Christopher Angeli; Christopher D Potts; Manning", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "A large annotated corpus for learning natural language inference", "year": "2015" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b9", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b10", "title": "", "year": "" }, { "authors": "Jannis Bulian; Christian Buck; Wojciech Gajewski; Benjamin Börschinger; Tal Schuster", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Tomayto, tomahto. 
beyond token-level answer equivalence for question answering evaluation", "year": "2022" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b12", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Jonathan H Clark; Eunsol Choi; Michael Collins; Dan Garrette; Tom Kwiatkowski; Vitaly Nikolaev; Jennimaria Palomaki", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b13", "title": "TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Peter A Flach; José Hernández-Orallo; C Ferri", "journal": "", "ref_id": "b15", "title": "A coherent interpretation of auc as a measure of aggregated classification performance", "year": "2011" }, { "authors": "Siddhant Garg; Thuy Vu; Alessandro Moschitti", "journal": "", "ref_id": "b16", "title": "Tanda: Transfer and adapt pre-trained transformer models for answer sentence selection", "year": "2020" }, { "authors": "Or Honovich; Roee Aharoni; Jonathan Herzig; Hagai Taitelbaum; Doron Kukliansy; Vered Cohen; Thomas Scialom; Idan Szpektor; Avinatan Hassidim; Yossi Matias", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "TRUE: Re-evaluating factual consistency evaluation", "year": "2022" }, { "authors": "Edward Hu; Yelong Shen; Phil Wallis; Zeyuan Allen-Zhu; Yuanzhi Li; Lu Wang; Weizhu Chen", "journal": "", "ref_id": "b18", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Ziwei Ji; Nayeon Lee; Rita Frieske; Tiezheng Yu; Dan Su; Yan Xu; Etsuko Ishii; Ye ; Jin Bang; Andrea Madotto; Pascale Fung", "journal": "ACM Comput. 
Surv", "ref_id": "b19", "title": "Survey of hallucination in natural language generation", "year": "2023" }, { "authors": "Vladimir Karpukhin; Barlas Oguz; Sewon Min; Patrick Lewis; Ledell Wu; Sergey Edunov; Danqi Chen; Wen-Tau Yih", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Dense passage retrieval for opendomain question answering", "year": "2020" }, { "authors": "Tushar Khot; Ashish Sabharwal; Peter Clark", "journal": "", "ref_id": "b21", "title": "Scitail: A textual entailment dataset from science question answering", "year": "2018" }, { "authors": "András Kornai", "journal": "PLOS ONE", "ref_id": "b22", "title": "Digital language death", "year": "2013" }, { "authors": "Shayne Longpre; Yi Lu; Joachim Daiber", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b23", "title": "MKQA: A linguistically diverse benchmark for multilingual open domain question answering", "year": "2021" }, { "authors": "Benjamin Muller; Luca Soldaini; Rik Koncel-Kedziorski; Eric Lind; Alessandro Moschitti", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Cross-lingual open-domain question answering with answer sentence generation", "year": "2022" }, { "authors": "Nikita Nangia; Adina Williams; Angeliki Lazaridou; Samuel Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "The RepEval 2017 shared task: Multi-genre natural language inference with sentence representations", "year": "2017" }, { "authors": "Fabio Petroni; Aleksandra Piktus; Angela Fan; Patrick Lewis; Majid Yazdani; Nicola De Cao; James Thorne; Yacine Jernite; Vladimir Karpukhin; Jean Maillard; Vassilis Plachouras; Tim Rocktäschel; Sebastian Riedel", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "KILT: a benchmark for knowledge intensive language tasks", "year": "2021" }, { "authors": "Fabio Petroni; Tim Rocktäschel; Sebastian Riedel; Patrick Lewis; Anton Bakhtin; Yuxiang Wu; Alexander Miller", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Language models as knowledge bases?", "year": "2019" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b28", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. 
Res", "ref_id": "b29", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Pranav Rajpurkar; Jian Zhang; Konstantin Lopyrev; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "year": "2016" }, { "authors": "Vitaly Hannah Rashkin; Matthew Nikolaev; Michael Lamm; Dipanjan Collins; Slav Das; Gaurav Petrov; Iulia Singh Tomar; David Turc; Reitter", "journal": "", "ref_id": "b31", "title": "Measuring attribution in natural language generation models", "year": "2021" }, { "authors": "Adam Roberts; Colin Raffel; Noam Shazeer", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "How much knowledge can you pack into the parameters of a language model", "year": "2020" }, { "authors": "Tal Schuster; Adam Fisch; Regina Barzilay", "journal": "", "ref_id": "b33", "title": "Get your vitamin C! robust fact verification with contrastive evidence", "year": "2021" }, { "authors": "Nitish Srivastava; Geoffrey Hinton; Alex Krizhevsky; Ilya Sutskever; Ruslan Salakhutdinov", "journal": "The journal of machine learning research", "ref_id": "b34", "title": "Dropout: a simple way to prevent neural networks from overfitting", "year": "2014" }, { "authors": "James Thorne; Andreas Vlachos; Christos Christodoulopoulos; Arpit Mittal", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "FEVER: a large-scale dataset for fact extraction and VERification", "year": "2018" }, { "authors": "Rodolfo Vieira Valentim; Giovanni Comarela; Souneil Park; Diego Sáez-Trumper", "journal": "", "ref_id": "b36", "title": "Tracking knowledge propagation across wikipedia languages", "year": "2021" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed Huai Hsin Chi; F Xia; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b37", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Laura Weidinger; Jonathan Uesato; Maribeth Rauh; Conor Griffin; Po-Sen Huang; John Mellor; Amelia Glaese; Myra Cheng; Borja Balle; Atoosa Kasirzadeh; Courtney Biles; Sasha Brown; Zac Kenton; Will Hawkins; Tom Stepleton; Abeba Birhane; Lisa Anne Hendricks; Laura Rimell; William Isaac; Julia Haas; Sean Legassick; Geoffrey Irving; Iason Gabriel", "journal": "Association for Computing Machinery", "ref_id": "b38", "title": "Taxonomy of risks posed by language models", "year": "2022" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "mT5: A massively multilingual pre-trained text-to-text transformer", "year": "2021" }, { "authors": "Yuan Zhang; Jason Baldridge; Luheng He", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "PAWS: Paraphrase adversaries from word scrambling", "year": "2019" } ]
[ { "formula_coordinates": [ 7, 146.85, 644.2, 143.02, 9.81 ], "formula_id": "formula_0", "formula_text": "(q, a, p) - → ais(2)" } ]
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b44", "b4", "b34", "b32", "b7", "b1", "b37", "b22", "b20", "b5", "b24" ], "table_ref": [], "text": "Much of the recent progress in computer vision has been facilitated by representations learned by deep models. Features from ConvNets [45,5,35,33], Vision Transformers [8,2], and GANs [38,23] have demonstrated great utility in a number of applications, even when compared to hand-crafted feature descriptors [21,6]. Recently, diffusion models have shown impressive results for image generation, suggesting that they too contain rich internal representations that can be used for downstream tasks. Existing works that use features from a diffusion model typically select a particular subset of layers and timesteps that best model the properties needed for a given task (e.g., features for semantic-level correspondence may be most prevalent in middle layers, whereas textural content may be at later layers). This selection not only requires a laborious discovery process hand-crafted for a specific task, but it also leaves behind potentially valuable information distributed across other features in the diffusion process.\nIn this work, we propose a framework for consolidating all intermediate feature maps from the diffusion process, which vary both over scale and time, into a single per-pixel descriptor which we dub Diffusion Hyperfeatures. In practice, this consolidation happens through a feature aggregation network that takes as input the collection of intermediate feature maps from the diffusion process and produces as output a single descriptor map. This aggregation network is interpretable, as it learns mixing weights to identify the most meaningful features for a given task (e.g., semantic correspondence). Extracting Diffusion Hyperfeatures for a given image is as simple as performing the diffusion process for that image (the generation process for synthetic images, and inversion for real images) and feeding all the intermediate features to our aggregator network.\nFigure 1: Unlike prior work that hand-selects a subset of raw diffusion features, we extract all feature maps from the diffusion process, varying across both timesteps and layers, and use a lightweight aggregation network to consolidate them into Diffusion Hyperfeatures. For real images, we extract these features from the inversion process, and for synthetic images we extract these features from the generation process. Given a pair of images, we find semantic correspondences by performing a nearest-neighbor search over their Diffusion Hyperfeatures.\nWe evaluate this approach by training and testing our descriptors on the task of semantic keypoint correspondence, using real images from the SPair-71k benchmark [25]. We present an analysis of the utility of different layers and timesteps of diffusion model features. Finally, we evaluate our trained feature aggregator on synthetic images generated by the diffusion model and show that our Diffusion Hyperfeatures generalize to out-of-domain data." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b14", "b11", "b21", "b38", "b16", "b11", "b42", "b23", "b26", "b36", "b25", "b0" ], "table_ref": [], "text": "Hypercolumn Features. The term hypercolumn, originally from neuroscience literature [15], was first coined for neural network features by Hariharan et al. 
[12] to refer to the set of activations corresponding to a pixel across layers of a convolutional network, an idea that has also been studied in the context of texture [22], optical flow [39], and stereo [17]. One central idea of this line of work is that for precise localization tasks such as keypoint detection and segmentation [12,43], it is essential to reason at both coarse and fine scales rather than the typical coarse-to-fine setting. The usage of hypercolumns has also been popular for the task of semantic correspondence, where approaches must be coarse enough to be robust to illumination and viewpoint changes and fine enough to compute precise matches [24,27,37,26,1]. Our work revisits the idea of hypercolumns to leverage the features of a recently popular network architecture, diffusion models, which primarily differ from prior work in that the underlying feature extractor is trained on a generative objective and offers feature variation along the axis of time in addition to scale." }, { "figure_ref": [ "fig_1" ], "heading": "Deep Features for Semantic Correspondence.", "publication_ref": [ "b28", "b19", "b28", "b35", "b3", "b1", "b10", "b8", "b27", "b9", "b35", "b41", "b2", "b40", "b40" ], "table_ref": [], "text": "There has been a recent interest in transferring representations learned by large-scale models for the task of semantic correspondence. Peebles et al. [29] addressed the task of congealing [20] by supervising a warping network with synthetic GAN data produced from a learned style vector representing shared image structure. While prior work [29,36] has demonstrated that generative models can produce aligned image outputs, which is useful supervision for semantic correspondence, we study how well the raw intermediate representations themselves would perform for the same task. Utilizing self-supervised representations, in particular from DINO [4], has also been especially popular. Prior work has demonstrated that it is possible to extract high-quality descriptors from DINO that are robust and versatile across domains and challenging instances of semantic correspondence [2]. These descriptors have been used for downstream tasks such as semantic segmentation [11], relative camera pose estimation [9], and dense visual alignment [28,10]. While DINO does indeed contain a rich visual representation, diffusion features are under-explored for the task of semantic correspondence and likely contain enhanced semantic representations due to training on image-text pairs. Diffusion Model Representations. There have been a few works that have analyzed the underlying representations in diffusion models and proposed using them for downstream tasks. Plug-and-Play [36] injects intermediate features from a single layer of the diffusion UNet during a second generation process to preserve image structure in text-guided editing. FeatureNeRF [42] distills diffusion features from this same layer into a neural radiance field. DDPMSeg [3] and ODISE [41] aggregate features from a hand-selected subset of layers and timesteps for semantic and panoptic segmentation respectively. While these works also consider a subset of features across layers and/or time, our work primarily differs in the following ways: first, rather than hand-selecting a subset of features, we propose a learned feature aggregator building on top of Xu et al. [41] that weights all features and distills them into a concise descriptor map of a fixed channel size and resolution. 
Furthermore, all of these methods solely use the generation process to extract representations, even for real images. In contrast, we use the inversion process, where we are able to extract higher-quality features at these same timesteps, as seen in Figure 3." }, { "figure_ref": [], "heading": "Diffusion Hyperfeatures", "publication_ref": [ "b24", "b39" ], "table_ref": [], "text": "In a diffusion process, one makes multiple calls to a UNet to progressively noise the image (in the case of inversion) or denoise the image (in the case of generation). In either case, the UNet produces a number of intermediate feature maps, which can be cached across all steps of the diffusion process to amass a set of feature maps that vary over timestep and layers of the network. This feature set is rather large and unwieldy, varying in resolution and detail across different axes, but contains rich information about texture and semantics. We propose using a lightweight aggregation network to learn the relative importance of each feature map for a given task (in this paper we choose semantic correspondence) and consolidate them into a single descriptor map (our Diffusion Hyperfeatures). To assess the quality of this descriptor map, we compute semantic correspondences for an image pair, by using a nearest-neighbors search on the extracted descriptors, and compare these correspondences to the ground-truth annotations [25,40]. An overview of our method is shown in Figure 1.\nOur approach is composed of two core components. Extraction (Section 3.1): We formulate a simplified and unified extraction process that accounts for both synthetic and real images, which means we are able to use the same aggregation network on features from both image types. Aggregation (Section 3.2): We propose an interpretable aggregation network that learns mixing weights across the features, which highlights the layers and timesteps that provide the most useful features unique to the underlying model and task." }, { "figure_ref": [ "fig_0", "fig_1", "fig_1" ], "heading": "Diffusion Process Extraction", "publication_ref": [ "b33" ], "table_ref": [], "text": "One popular sampling procedure for a trained diffusion model is DDIM sampling [34] of the form\nx t = √ α t x 0 + √ 1 -α t ϵ t where ϵ t ∼ N (0, 1)\nwhere x 0 is the clean image, ϵ t is the noise prediction from the diffusion model conditioned on the timestep t and noisy image x t+1 from the previous timestep, and x t is the prediction for the next timestep. To run generation, one runs the reverse process from t = T to 0, with the input x T set to pure noise sampled from N (0, 1). To run inversion, one runs the forward process from t = 0 to T , with the input x 0 set to the clean image.\nGeneration. When synthesizing an image, we can cache the intermediate feature maps across the generation process, which already contain shared representations that can be used to relate the image to other synthetic images, as seen in the PCA visualization of the feature maps in Figure 2. In this example, we see that the head and body of the two cats share a corresponding latent representation throughout almost the entire generation process, even as early as the first ten steps, where the inputs to the UNet are almost pure noise. This can be explained by the preliminary prediction for the final image x 0 , which already lays out the general idea of the image including the structure, color, and main subject. 
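As a sketch of the extraction step, assuming a Stable-Diffusion-style UNet whose forward pass takes the noisy latent, the timestep, and the text-encoder states, and assuming the latent trajectory has already been produced by the scheduler (inversion for real images, generation for synthetic ones), the intermediate maps can be cached with ordinary forward hooks:

```python
import torch
from typing import Dict, List, Tuple

def cache_diffusion_features(
    unet: torch.nn.Module,
    layer_modules: List[torch.nn.Module],              # e.g. the UNet decoder residual blocks
    latents_per_step: List[Tuple[int, torch.Tensor]],  # (timestep, noisy latent) per step
    text_states: torch.Tensor,                         # prompt embedding for conditioning
) -> Dict[Tuple[int, int], torch.Tensor]:
    """Run one UNet call per diffusion step (generation or inversion) and cache the
    intermediate activation of every chosen layer at every step."""
    features: Dict[Tuple[int, int], torch.Tensor] = {}
    state = {"t": -1}  # current timestep, visible to the hooks

    def make_hook(layer_idx: int):
        def hook(_module, _inputs, output):
            # Residual blocks may return tuples; keep the main activation.
            fmap = output[0] if isinstance(output, tuple) else output
            features[(state["t"], layer_idx)] = fmap.detach()
        return hook

    handles = [m.register_forward_hook(make_hook(i)) for i, m in enumerate(layer_modules)]
    try:
        with torch.no_grad():
            for t, latent in latents_per_step:
                state["t"] = t
                # One denoising / noising prediction; the hooks record the features.
                unet(latent, t, encoder_hidden_states=text_states)
    finally:
        for h in handles:
            h.remove()
    return features
```

The returned dictionary, indexed by (timestep, layer), is the pool of raw feature maps that the aggregation network described in Section 3.2 consumes.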
As generation progresses, the principal components of the features also evolve, with Layer 4 changing from displaying coarse to more refined common semantic sub-parts and Layer 10 changing from displaying no shared characteristics to high-frequency image gradients. These observations indicate that the diffusion model provides coarse and fine features that capture different image characteristics (i.e. semantic or textural information) throughout different combinations of layers and timesteps. Hence, we find it important to extract features from all layers and timesteps in order to adequately tune our final descriptor map to represent the appropriate level of granularity needed for a given task.\nInversion. These same useful features can be extracted for real images through the inversion process.\nAlthough inversion is a process that destructs the real image into noise, we observe that its features contain useful information akin to the generation process for synthetic images. In Figure 3, we can see that our inversion features are able to reliably capture the full body of both cats and their common semantic subparts (head, torso, legs) in Layer 4 and their edges in Layer 10 even at a timestep when the input to the model is relatively noisy. In contrast, using the generation process to analyze real images (as done in prior work) leads to hyperparameter tuning and tradeoffs. For example, at timesteps close to t = T where in-distribution inputs are close to noise, the features start to diverge from information present in the real image and may even hallucinate extraneous details, as seen in Figure 3. In this example, because the color of the top cat's stomach is white like the background, the generation features from Layer 4 merge the stomach with the background. Similarly, because there is low contrast between the bottom cat and the background, the generation features from Layer 10 fail to capture the silhouette of the cat and instead depict random texture edges. Intuitively, inversion features are more trustworthy because of the notion of chaining, where at every step the input is some previous output of the model rather than a random mixture of an image and noise, and every extracted feature map is therefore interrelated. Extracting features from a continuous inversion process is also nice because it induces symmetry with the generation process, which in Section 4.3 we demonstrate allows us to use both feature types interchangeably with the same aggregation network." }, { "figure_ref": [], "heading": "Diffusion Hyperfeatures Aggregation", "publication_ref": [ "b12", "b40", "b29" ], "table_ref": [], "text": "Given a dense set of feature maps from the diffusion process, we now must efficiently aggregate them into a single descriptor map without omitting the information from any layers or timesteps.\nA naive solution would be to simply concatenate all feature maps into one very deep feature map, but this proves to be too high dimensional for most applications. We address this issue with our aggregation network, which standardizes the feature maps with tuned bottleneck layers and sums them according to learned mixing weights. Specifically, for a given feature map r we upsample it to a standard resolution, pass through a bottleneck layer B [13,41] to a standard channel count, and weight it with a mixing weight w. The final Diffusion Hyperfeatures take on the form\nS s=0 L l=1 w l,s • B l (r l,s )\nwhere L is the total number of layers and S is the total number of timesteps. 
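A minimal PyTorch sketch of this aggregation is given below; the 1x1-convolution bottleneck, output channel count, output resolution, and weight initialization are illustrative assumptions, since only the overall structure (per-layer bottlenecks shared across time, plus L x S scalar mixing weights) is prescribed above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffusionHyperfeatureAggregator(nn.Module):
    """One bottleneck B_l per layer (shared across timesteps) and one learned
    scalar mixing weight w_{l,s} per (layer, timestep) pair."""

    def __init__(self, layer_channels, out_channels=384, out_size=64, num_steps=11):
        super().__init__()
        # layer_channels[l] is the channel count of UNet decoder layer l.
        self.bottlenecks = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in layer_channels]
        )
        # L x S mixing weights, initialized uniformly (illustrative choice).
        init = torch.full((len(layer_channels), num_steps),
                          1.0 / (len(layer_channels) * num_steps))
        self.mixing = nn.Parameter(init)
        self.out_size = out_size

    def forward(self, feats):
        # feats[s][l]: raw feature map r_{l,s}, shape (B, layer_channels[l], h_l, w_l).
        out = 0.0
        for s, per_layer in enumerate(feats):
            for l, r in enumerate(per_layer):
                # Upsample to a common resolution, project to a common channel
                # count, and accumulate with the learned weight w_{l,s}.
                r = F.interpolate(r, size=(self.out_size, self.out_size),
                                  mode="bilinear", align_corners=False)
                out = out + self.mixing[l, s] * self.bottlenecks[l](r)
        return out  # Diffusion Hyperfeatures, shape (B, out_channels, out_size, out_size)
```

During tuning, it is these bottlenecks and mixing weights that receive the task-specific supervision, while the diffusion features themselves are simply extracted and cached.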
Note that we run the diffusion process for a total number of T timesteps but only select a subsample of S timesteps for aggregation to conserve memory. We opt to share bottleneck layers across timesteps, meaning we use a total of L bottleneck layers. However, we learn L • S unique mixing weights for every combination of layer and timestep. We then tune these bottleneck layers and mixing weights using task-specific supervision. For semantic correspondence, we flatten the descriptor maps for a pair of images and compute the cosine similarity between every possible pair of points. We then supervise with the labeled corresponding keypoints using a symmetric cross entropy loss in the same fashion as CLIP [30]. During training, we downscale the labels according to the resolution of our descriptor maps. When running inference, we upsample the descriptor maps before performing nearest neighbors matching to predict semantic keypoints. We demonstrate that this lightweight parameterization of our aggregation network is performant (see Section 4.1) and interpretable (see Section 4.2) without degrading the open-domain knowledge represented in the diffusion features (see Section 4.3)." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b3", "b6", "b1", "b25", "b24", "b12", "b30", "b31", "b17", "b1", "b1" ], "table_ref": [], "text": "Baselines. We compare against descriptors derived from DINO [4], a self-supervised model trained on ImageNet [7]. Specifically, we compare against the method from Amir et al. [2], which extracts key features from specific DINO ViT layers. We also compare against hypercolumn features from DHPF [26], a supervised model for semantic correspondence trained on SPair-71k [25]. The method composes features across relevant layers of a ResNet-101 [13] backbone. Our method extracts descriptors from Stable Diffusion v1-5 [31], a generative latent diffusion model trained on LAION-5B [32]. We extract Diffusion Hyperfeatures aggregated from features across UNet layers and timesteps. We tune our bottleneck layers and mixing weights on SPair-71k for up to 5000 steps with a learning rate of 1e-3 and Adam optimizer [18]. This means that we tune on at most 5k out of the 53k possible training samples, where we find that training on additional samples does not further improve performance. Experimental Details. We extract features from the UNet decoder layers (denoted as Layers 1 to 12), specifically the outputs of the residual block before the self-and cross-attention blocks. When running either the inversion or generation process we use T = 50 scheduler timesteps, except when we denote that the process is \"one-step\" (T = 1). We subsample every 5 timesteps when aggregating features across time to conserve memory. Metrics. We report results in terms of PCK@α, or the percentage of correct keypoints at the threshold Otherwise we run a fifty-step inversion process and aggregate features from S = 11 timesteps. We ablate pruning to the single feature map with the highest mixing weight selected by our method, either as the raw feature map (SD-Layer-Pruned) or after the bottleneck layer (Ours-Pruned). We also ablate tuning our method with only one timestep (One-Step) or features from another Stable Diffusion variant (SDv2-1). Otherwise we extract features from SDv1-5.\nα. 
The predicted keypoint is considered to be correct if it lies within a radius of α * max(h, w) of the ground truth annotation, where h, w denote the dimensions of the entire image (α_img) or object bounding box (α_bbox) in the original image resolution. Following Amir et al. [2], we use the threshold α = 0.1." }, { "figure_ref": [ "fig_2" ], "heading": "Semantic Keypoint Matching on Real Images", "publication_ref": [ "b24", "b1", "b39", "b18", "b1", "b25", "b35" ], "table_ref": [ "tab_0" ], "text": "We evaluate on the semantic keypoint correspondence benchmark SPair-71k [25], which is composed of image pairs from 18 object categories spanning animals, vehicles, and household objects. Following prior work [2], we evaluate on 360 random image pairs from the test split, with 20 pairs per category. We also evaluate on CUB [40], which is composed of semantic keypoints on a variety of bird species. We evaluate on 360 random image pairs from the validation split used by Kulkarni et al. [19]. The CUB dataset is completely unseen to all methods, meaning that we only tune with SPair-71k and evaluate transferring the same model onto CUB.\nWe evaluate our method in Table 1, comparing against DINO descriptors [2] and task-specific hypercolumns [26]. We also compare against two Stable Diffusion baselines, selecting a single feature map from a known semantic layer [36] (SD-Layer-4) and concatenating feature maps from all layers (SD-Concat-All). For our diffusion baselines we perform one-step inversion, meaning we collapse the time dimension by setting the total number of scheduler timesteps to 1. Selecting a single diffusion feature map (SD-Layer-4) already outperforms DINO and DHPF by a margin of at least 4% in PCK@0.1_img, likely due to the underlying model's larger and more diverse training set. Conversely, naively concatenating all feature maps (SD-Concat-All) degrades this improvement, likely because each map captures a different level of granularity and therefore should not be equally weighted for a coarse-level semantic similarity task. When aggregating all feature maps across time and space using our Diffusion Hyperfeatures, we see a significant boost of 14% in PCK@0.1_img. This sizeable performance improvement indicates that (1) while a single layer may capture a good deal of semantic information, the other layers also capture complementary information that can be useful for disambiguating points on a finer level and (2) the diffusion model's internal knowledge about the image is not fully captured by a single pass through the UNet but is rather spread out over multiple timesteps.\nIn Figure 4, we show qualitative examples of our predicted semantic correspondences compared with DINO and SD-Layer-4. In general, while both baselines are able to relate broad semantic regions such as the head, arms, or legs, they struggle with fine-grained subparts and confuse them with other visually similar regions. For example, SD-Layer-4 confuses the headlight (yellow) of the blue car with the rear light of the red car, whereas our method is able to correctly reason about the front vs. back of the cars. DINO confuses the nose (blue) and tail (cyan) of the miniature pinscher dog with the ears and knee of the greyhound, whereas our method finely localizes these subparts. 
Both baselines confuse the tail (cyan) of the seagull with the beak of the northern flicker bird, likely because both are distinctive corners, whereas our method is able to correctly place the tail and beak (yellow) on opposite sides." }, { "figure_ref": [ "fig_0" ], "heading": "Ablations", "publication_ref": [ "b29", "b15" ], "table_ref": [ "tab_0", "tab_0" ], "text": "Number of Diffusion Steps. We ablate the importance of the time dimension by tuning a variant of our method on feature maps from a one-step inversion process (Ours-One-Step), where there is only one possible timestep to extract from. While this variant that aggregates over all layers performs better than single layer selection (SD-Layer-4) or simple concatenation (SD-Concat-All) by a margin of at least 5% in PCK@0.1_img, it also heavily lags behind our full method. As indicated by our mixing weights in Figure 5, the most useful information for semantic correspondence is concentrated in the early timesteps of the diffusion process, where the input image is relatively clean but contains some noise. Timestep selection can be thought of as a knob for the amount of high frequency detail present in the image to analyze, where at these early timesteps the model is implicitly mapping the noisy input to a smoother image, similar to the oversmoothed cats for the predicted x 0 in Figure 2.\nEvidently not all texture and detail is necessary for the task of semantic correspondence, and the model is able to produce more useful features at different timesteps which highlight different image frequencies.\nPruning. Since our method employs interpretable mixing weights, we have a ranking of the relative importance of each layer and timestep combination for the task of semantic correspondence. To verify this ranking, we ablate pruning to the single feature map with the highest mixing weight. As seen in Figure 5, for Ours-SDv1-5 this is the feature map associated with Layer 5, Timestep 10. Referring to Table 1, this automatically selected raw feature map (SD-Layer-Pruned) performs comparably to the feature map manually selected by prior work (SD-Layer-4). Interestingly, if only viewing features from a one-step inversion process it would seem that Layer 5 features are significantly worse than Layer 4 features, as discussed further in the Supplemental, but the story changes after tuning selection over both time and space. After passing this pruned feature map to our learned bottleneck layer, the feature map performs comparably on CUB and 6% better in PCK@0.1_img on SPair-71k, as seen when comparing SD-Layer-Pruned and Ours-Pruned in Table 1.\nFigure 5: The learned mixing weights for the variant of our method that aggregates SDv1-5 vs. SDv2-1 features across multiple layers and timesteps. Bright yellow denotes a high weighting, and dark blue denotes a low weighting. We also depict an example of predicted correspondences from SDv2-1-Layer-4 and Ours-SDv2-1. While raw features from SDv1-5-Layer-4 perform well in semantic correspondence, these same features of SDv2-1 perform extremely poorly. On the other hand, our method is able to aggregate different feature combinations depending on the model variant.\nThis trend validates that our bottleneck layer does not degrade the power of the original representation, but rather refines it in a way that is likely helpful for complex object categories present in SPair-71k beyond birds. Considering the 9% gap in PCK@0.1_img between our pruned and full method (Ours-Pruned vs. 
Ours), it becomes evident that it is not a single feature map that drives our strong performance but rather the soft mixing of many feature maps.\nModel Variant. We also ablate using a different model variant for our diffusion feature extraction, namely SDv2-1. Most notably, SDv2-1 differs from SDv1-5 in its large-scale text encoder, which scales from CLIP [30] to OpenCLIP [16]. The mixing weights learned for both model variants, depicted in Figure 5, showcase the same high-level trends, where the features found to be the most useful for semantic correspondence are concentrated in middle Layers 4-9 and early Timesteps 5-15. However, on a more nuanced level, the behavior starts to diverge with regards to the relative importance of layers and timesteps within this range. Namely, layer selection moves from Layers 4-5 to higher resolution Layers 5-7 from SDv1-5 to v2-1. This behavior is confirmed by our early hyperparameter sweeps of raw feature maps across model variants discussed in the Supplemental, where in fact the Layer 4 feature map of SDv2-1 performs extremely poorly for the task of semantic correspondence. Timestep selection also moves from Timestep 10 to Timestep 5 from SDv1-5 to v2-1, which is surprising because this means that SDv2-1 tends to select higher resolution feature maps from timesteps with higher frequency inputs for a task where it is essential to abstract fine-grained details into semantically meaningful matches. These trends seem to imply that a more powerful text encoder produces shared semantic representations at increased levels of detail, possibly because the model is better able to connect more distantly related visual concepts via text instead of giving them completely disjoint representations. Hence, our aggregation network is able to dynamically adjust to the representations being aggregated and the task at hand, both of which influence the most important set of features to select from the diffusion process." }, { "figure_ref": [], "heading": "Transfer on Synthetic Images", "publication_ref": [], "table_ref": [], "text": "In addition to evaluating the transfer of our aggregation network to other datasets such as CUB, we also evaluate on synthetic images. Specifically, we take the same aggregation network tuned on inversion features and simply flip the timestep ordering to operate on generation features. Therefore in this setting, we are testing our network's ability to generalize (1) to a completely unseen feature type from a different diffusion process (inversion vs. generation) and ( 2) out-of-domain object categories that are not present in SPair-71k. Surprisingly, our network generalizes well, outperforming predictions from both DINO and the raw feature map from the last step of the generation process (SD-Layer-4) as seen in Figure 6. Although DINO and SD-Layer-4 are generally able to correspond broad semantic regions correctly, they sometimes have difficulty with relative placement of subparts. In the case of the rungs of the Eiffel Tower (purple, pink, green), DINO collapses all of its predictions onto the middle rung and SD-Layer-4 collapses them onto the left rung, whereas our method is able to correctly correspond the middle and side rungs of the tower in a triangle formation. The baselines can also be distracted by other objects in the scene that are visually similar. In the case of the mermaid's legs (red, yellow), both baselines incorrectly correspond certain points with the rock in the right image, whose contours and silhouette resemble the legs. 
On the other hand, our method is able to Figure 6: Example synthetic images and their text prompts, the ground-truth user-annotated correspondences, and predicted correspondences from DINO, SD-Layer-4, and our method. Note that we transfer the aggregation network tuned on inversion features of real images, and we apply it on generation features of these synthetic images that are completely out-of-domain compared to the SPair-71k categories.\npredict more reliable correspondences for fine-grained subparts, even in these challenging cases with unseen textures (lego, snow) and categories (hat, cactus). The ability of our aggregation network to extend to open-domain synthetic images opens up the exciting possibility of generating custom synthetic datasets with pseudo ground-truth semantic correspondences, which we demonstrate are more precise than correspondences derived from the raw feature maps." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We demonstrate that our Diffusion Hyperfeatures are able to distill the information distributed across time and space from a diffusion process into a single descriptor map. Our intepretable aggregation network also enables automatic analysis of the most useful layers and timesteps based on both the underlying model and task of interest. We outperform methods that use supervised hypercolumns or self-supervised descriptors by a large margin on a semantic keypoint correspondence benchmark comprised of real images. Although we tune on a small set of real images with limited categories, we demonstrate that our method is able to retain the open-domain capabilities of the underlying diffusion features by demonstrating strong performance in predicting semantic correspondences in challenging synthetic images, especially compared to using the raw feature maps. Our ability to predict high-quality correspondences derived from the same feature maps used to produce the synthetic image could potentially be employed to create synthetic image sets with pseudo-labeled semantic keypoints, which would be valuable for downstream tasks such as image-to-image translation or 3D reconstruction. " }, { "figure_ref": [], "heading": "Supplementary Material 6.1 Computational Resources", "publication_ref": [], "table_ref": [], "text": "Our final aggregation network takes one day to train on one Nvidia Titan RTX GPU. Inference with our method is fast and uses a reasonable amount of memory, and it can run on a Nvidia T4 GPU on a Google Colab notebook." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Stable Diffusion Model Variant", "publication_ref": [ "b1", "b35", "b13" ], "table_ref": [ "tab_0" ], "text": "In Figure 7, we ablate the behavior of individual raw feature maps from each layer across multiple variants of Stable Diffusion. We extract these features from a one-step inversion process. We report the semantic keypoint matching accuracy on real images from SPair-71k according to [email protected] img . Due to limited computational resources, in this experiment we performed nearest neighbor matching on 64x64 resolution feature maps (the maximum possible resolution of Stable Diffusion) and rescaled our predictions to coordinates in the original image resolution. Therefore, we also include a DINO baseline [2] that uses the same procedure as reference for this experimental setting.\nViewing Figure 7, for Stable Diffusion models that share the same broader model variant (e.g., SDv1-3 vs. SDv1-4 vs. 
SDv1-5), the behavior across layers is similar. In contrast, there is a larger difference in layer behavior when comparing SDv1 (pink) and SDv2 (blue). For SDv1 Layer 4 outshines all other layers, consistent with observations from prior work [36], but this layer actually performs extremely poorly in SDv2. In fact, for SDv2 it is Layers 5 and 6 that are the layers that are strong at semantic correspondence. As seen in Figure 8, SDv2-1's Layer 4 features seem to perform poorly at semantic correspondence because they also strongly encode positionality; while they are able to disambiguate the birds and the backgrounds, they also separate the top left (green), top right (blue), bottom left (orange), and bottom right (pink) of the image. Perhaps SDv2 also encodes positionality in the Layer 4 features because this information is relevant when synthesizing images from prompts that describe relations or more complex object compositions, which SDv1's CLIP struggles with representing [14]. Finally, the behavior when concatenating feature maps from all layers (Concat All) is also very different between SDv1 and SDv2 when viewing Figure 7. While for SDv1 Concat All performs reasonably well, slightly lagging behind its single best feature map, for SDv2 it exhibits subpar performance. This trend is better understood when examining the PCA of these feature maps for two images in Figure 8, where for SDv1-5 Concat All produces a meaningful feature map that delineates the bird, branch, and background and for SDv2-1 Concat All produces a muddy feature map that only delinates the top vs. bottom of the image. This phenomenon where SDv2 produces a low-quality aggregated feature map in the case of simple concatenation is likely also because its stronger encoding of positionality dominates the encoding of semantics across the features. On the other hand, our method is able to meaningfully aggregate features across layers for both SDv1 and SDv2, as demonstrated by the strong keypoint matching performance from both variants in Table 1. Our method is also able to reflect the differing layer behaviors across different Stable Diffusion variants, as seen by the consistency between the the trends observed in Figure 7 and the learned mixing weights in Figure 5." }, { "figure_ref": [], "heading": "Validation Performance", "publication_ref": [ "b1", "b25" ], "table_ref": [ "tab_1", "tab_0" ], "text": "In Table 2 we report our performance compared with DINO [2], DHPF [26], and the Stable Diffusion baselines on 360 image pairs from SPair-71k's validation split. The trends are similar to our observations in Table 1, where SD-Layer-4 outperforms DINO by 5% [email protected] img and our method outperforms the baselines by at least 15% [email protected] img ." }, { "figure_ref": [ "fig_5", "fig_6" ], "heading": "Additional Examples", "publication_ref": [ "b43" ], "table_ref": [], "text": "In Figure 9, we show additional examples of real image pairs from each of the 18 object categories in SPair-71k and our method's predicted correspondences. Our method is able to handle a variety of difficult cases such as large viewpoint transformations (e.g., the side and front views of the cow or aeroplane) and occlusions from other objects (e.g., the people on top of the motorbike or bars in front of the potted plant).\nIn Figure 10, we show additional examples of synthetic image pairs and our method's predicted correspondences. 
Many of the prompts were inspired by objects and compositions from PartiPrompts [44].\nIn the same setting as Section 4.3, we transfer the aggregation network tuned on real images to make these predictions. Our method is able to produce high-quality correspondences for these out-of-domain synthetic images, such as the astronaut riding a horse or raccoon playing chess. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Angjoo Kanazawa, Yossi Gandelsman, Norman Mu, and David Chan for helpful discussions. This work was supported in part by DoD including DARPA's SemaFor, PTG and/or LwLL programs, as well as BAIR's industrial alliance programs." } ]
Diffusion models have been shown to be capable of generating high-quality images, suggesting that they could contain meaningful internal representations. Unfortunately, the feature maps that encode a diffusion model's internal information are spread not only over layers of the network, but also over diffusion timesteps, making it challenging to extract useful descriptors. We propose Diffusion Hyperfeatures, a framework for consolidating multi-scale and multi-timestep feature maps into per-pixel feature descriptors that can be used for downstream tasks. These descriptors can be extracted for both synthetic and real images using the generation and inversion processes. We evaluate the utility of our Diffusion Hyperfeatures on the task of semantic keypoint correspondence: our method achieves superior performance on the SPair-71k real image benchmark. We also demonstrate that our method is flexible and transferable: our feature aggregation network trained on the inversion features of real image pairs can be used on the generation features of synthetic image pairs with unseen objects and compositions. Our code is available at https://diffusion-hyperfeatures.github.io.
Diffusion Hyperfeatures: Searching Through Time and Space for Semantic Correspondence
[ { "figure_caption": "Figure 2 :2Figure 2: We show an example pair of synthetic images for the prompt \"cat sitting in a living room\" and the PCA of the features from Layers 4, 10 during both an early and late generation step. While different layers capture different image characteristics (here Layer 4 delineates the face vs. body and Layer 10 captures the edges), these features also evolve and become more fine-grained over time.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: We show an example pair of real images from SPair-71k and the PCA of the features from Layers 4, 10 when extracted at the middle timestep t = 25. While prior work extracts generation features by noising and denoising the image independently at the specific timestep (left), in our approach we extract inversion features from one continuous chain (right). Extracting features from the same timestep of the inversion chain can produce features more true to original image content.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Example images from SPair-71k and CUB, the ground-truth user-annotated correspondences, and predicted correspondences from DINO, SD-Layer-4, and our method.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "1 Figure 7 :17Figure 7: We report the behavior of individual layers across different variants of Stable Diffusion, extracting the raw feature map from a one-step inversion process and computing the semantic keypoint matching performance on real images from SPair-71k. Note that for efficiency reasons in this experiment we compute nearest neighbors matches on 64x64 resolution feature maps and rescale the predictions to the original image resolution.", "figure_data": "", "figure_id": "fig_3", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: We show an example pair of real images from SPair-71k and the PCA of the features from Layers 4-6 and Concat All extracted from a one-step inversion process for SDv1-5 and SDv2-1.", "figure_data": "", "figure_id": "fig_4", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Additional examples of predicted correspondences from our method on real images from each of the 18 categories in SPair-71k.", "figure_data": "", "figure_id": "fig_5", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Additional examples of predicted correspondences from our method on synthetic images from a diverse set of prompts. Note that for synthetic images we transfer the aggregation network tuned on real images from SPair-71k.", "figure_data": "", "figure_id": "fig_6", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "We evaluate our semantic keypoint matching performance on real images from SPair-71k and CUB. For our CUB evaluation, we transfer the model tuned on Spair-71k. 
For our Stable Diffusion baselines, we extract features from a single layer (SD-Layer-4) or concatenation of all layers (SD-Concat-All) from a one-step inversion process, selecting features from S = 1 timesteps.", "figure_data": "SPair-71kCUB# Layers# Timesteps↑ [email protected][email protected][email protected][email protected] [2]1-51.6841.0472.7255.90DHPF [26]34-55.2842.6377.3061.42SD-Layer-41158.8046.5878.4361.22SD-Concat-All12152.1241.8370.2254.05Ours121172.5664.6182.2969.42Ours-One-Step12163.7454.6976.5962.11SD-Layer-Pruned1157.6948.1680.6767.21Ours-Pruned1164.0253.7479.1063.95Ours-SDv2-1121170.7464.8580.3968.04", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "We evaluate our keypoint matching performance on 360 random pairs of real images from SPair-71k's validation split.", "figure_data": "SPair-71k", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Grace Luo; Lisa Dunlap; Dong Huk Park; Aleksander Holynski; Trevor Darrell
[ { "authors": "Jing Kfir Aberman; Mingyi Liao; Dani Shi; Baoquan Lischinski; Daniel Chen; Cohen-Or", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b0", "title": "Neural best-buddies: Sparse cross-domain correspondence", "year": "2018" }, { "authors": "Shir Amir; Yossi Gandelsman; Shai Bagon; Tali Dekel", "journal": "", "ref_id": "b1", "title": "Deep vit features as dense visual descriptors", "year": "2022" }, { "authors": "Dmitry Baranchuk; Andrey Voynov; Ivan Rubachev; Valentin Khrulkov; Artem Babenko", "journal": "", "ref_id": "b2", "title": "Labelefficient semantic segmentation with diffusion models", "year": "2022" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b3", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Qifeng Chen; Vladlen Koltun", "journal": "", "ref_id": "b4", "title": "Photographic image synthesis with cascaded refinement networks", "year": "2017" }, { "authors": "Navneet Dalal; Bill Triggs", "journal": "Ieee", "ref_id": "b5", "title": "Histograms of oriented gradients for human detection", "year": "2005" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "Ieee", "ref_id": "b6", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b7", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Walter Goodwin; Sagar Vaze; Ioannis Havoutis; Ingmar Posner", "journal": "", "ref_id": "b8", "title": "Zero-shot category-level object pose estimation", "year": "2022" }, { "authors": "Kamal Gupta; Varun Jampani; Carlos Esteves; Abhinav Shrivastava; Ameesh Makadia; Noah Snavely; Abhishek Kar", "journal": "", "ref_id": "b9", "title": "Asic: Aligning sparse in-the-wild image collections", "year": "2023" }, { "authors": "Mark Hamilton; Zhoutong Zhang; Bharath Hariharan; Noah Snavely; William T Freeman", "journal": "", "ref_id": "b10", "title": "Unsupervised semantic segmentation by distilling feature correspondences", "year": "2022" }, { "authors": "Bharath Hariharan; Pablo Arbeláez; Ross Girshick; Jitendra Malik", "journal": "", "ref_id": "b11", "title": "Hypercolumns for object segmentation and fine-grained localization", "year": "2015" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b12", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Ziqi Huang; Tianxing Wu; Yuming Jiang; Kelvin C K Chan; Ziwei Liu", "journal": "", "ref_id": "b13", "title": "ReVersion: Diffusion-based relation inversion from images", "year": "2023" }, { "authors": "D H Hubel; T N Wiesel", "journal": "The Journal of Physiology", "ref_id": "b14", "title": "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex", "year": "1962" }, { "authors": "Gabriel Ilharco; Mitchell Wortsman; Ross Wightman; Cade Gordon; Nicholas Carlini; Rohan Taori; Achal Dave; Vaishaal Shankar; Hongseok Namkoong; John Miller; Hannaneh Hajishirzi; Ali Farhadi; Ludwig Schmidt; Openclip", "journal": "", "ref_id": "b15", "title": "", "year": "2021-07" }, { "authors": "D G Jones; J 
Malik", "journal": "", "ref_id": "b16", "title": "Determining three-dimensional shape from orientation and spatial frequency disparities", "year": "1992" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b17", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Nilesh Kulkarni; Abhinav Gupta; David F Fouhey; Shubham Tulsiani", "journal": "", "ref_id": "b18", "title": "Articulation-aware canonical surface mapping", "year": "2020" }, { "authors": "Erik G Learned-Miller", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b19", "title": "Data driven image models through continuous joint alignment", "year": "2006" }, { "authors": "G David; Lowe", "journal": "Int. J. Comput. Vision", "ref_id": "b20", "title": "Distinctive image features from scale-invariant keypoints", "year": "2004-11" }, { "authors": "J Malik; P Perona", "journal": "Journal of the Optical Society of America A", "ref_id": "b21", "title": "Preattentive texture discrimination with early vision mechanisms", "year": "1990" }, { "authors": "Xin Mao; Zhaoyu Su; Pin Siang Tan; Jun Kang Chow; Yu-Hsing Wang", "journal": "", "ref_id": "b22", "title": "Is discriminator a good feature extractor?", "year": "2019" }, { "authors": "Juhong Min; Jongmin Lee; Jean Ponce; Minsu Cho", "journal": "", "ref_id": "b23", "title": "Hyperpixel flow: Semantic correspondence with multi-layer neural features", "year": "2019" }, { "authors": "Juhong Min; Jongmin Lee; Jean Ponce; Minsu Cho", "journal": "", "ref_id": "b24", "title": "Spair-71k: A large-scale benchmark for semantic correspondence", "year": "2019" }, { "authors": "Juhong Min; Jongmin Lee; Jean Ponce; Minsu Cho", "journal": "", "ref_id": "b25", "title": "Learning to compose hypercolumns for visual correspondence", "year": "2020" }, { "authors": "David Novotny; Diane Larlus; Andrea Vedaldi", "journal": "", "ref_id": "b26", "title": "Anchornet: A weakly supervised network to learn geometry-sensitive features for semantic matching", "year": "2017" }, { "authors": "Dolev Ofri-Amar; Michal Geyer; Yoni Kasten; Tali Dekel", "journal": "", "ref_id": "b27", "title": "Neural congealing: Aligning images to a joint semantic atlas", "year": "2023" }, { "authors": "William Peebles; Jun-Yan Zhu; Richard Zhang; Antonio Torralba; Alexei Efros; Eli Shechtman", "journal": "", "ref_id": "b28", "title": "Gan-supervised dense visual alignment", "year": "2022" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b29", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b30", "title": "High-resolution image synthesis with latent diffusion models", "year": "2021" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Ross Cade W Gordon; Mehdi Wightman; Theo Cherti; Aarush Coombes; Clayton Katta; Mitchell Mullis; Patrick Wortsman; Schramowski; Katherine Srivatsa R Kundurthy; Ludwig Crowson; Robert Schmidt; Jenia Kaczmarczyk; Jitsev", "journal": "", "ref_id": "b31", "title": "LAION-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "", "ref_id": "b32", "title": "Very deep convolutional networks for 
large-scale image recognition", "year": "2014" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b33", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Christian Szegedy; Vincent Vanhoucke; Sergey Ioffe; Jon Shlens; Zbigniew Wojna", "journal": "", "ref_id": "b34", "title": "Rethinking the inception architecture for computer vision", "year": "2016" }, { "authors": "Narek Tumanyan; Michal Geyer; Shai Bagon; Tali Dekel", "journal": "", "ref_id": "b35", "title": "Plug-and-play diffusion features for textdriven image-to-image translation", "year": "2023" }, { "authors": "Nikolai Ufer; Bjorn Ommer", "journal": "", "ref_id": "b36", "title": "Deep semantic feature matching", "year": "2017" }, { "authors": " Ting-Chun; Ming-Yu Wang; Jun-Yan Liu; Andrew Zhu; Jan Tao; Bryan Kautz; Catanzaro", "journal": "", "ref_id": "b37", "title": "Highresolution image synthesis and semantic manipulation with conditional gans", "year": "2018-06" }, { "authors": "J Weber; J Malik", "journal": "International Journal of Computer Vision", "ref_id": "b38", "title": "Robust computation of optical flow in a multi-scale differential framework", "year": "1995" }, { "authors": "P Welinder; S Branson; T Mita; C Wah; F Schroff; S Belongie; P Perona", "journal": "", "ref_id": "b39", "title": "", "year": "2010" }, { "authors": "Jiarui Xu; Sifei Liu; Arash Vahdat; Wonmin Byeon; Xiaolong Wang; Shalini De Mello", "journal": "", "ref_id": "b40", "title": "Odise: Openvocabulary panoptic segmentation with text-to-image diffusion models", "year": "2022" }, { "authors": "Jianglong Ye; Naiyan Wang; Xiaolong Wang", "journal": "", "ref_id": "b41", "title": "Featurenerf: Learning generalizable nerfs by distilling pre-trained vision foundation models", "year": "2023" }, { "authors": "Fisher Yu; Dequan Wang; Evan Shelhamer; Trevor Darrell", "journal": "", "ref_id": "b42", "title": "Deep layer aggregation", "year": "2018" }, { "authors": "Jiahui Yu; Yuanzhong Xu; Jing Yu Koh; Thang Luong; Gunjan Baid; Zirui Wang; Vijay Vasudevan; Alexander Ku; Yinfei Yang; Burcu Karagol Ayan; Ben Hutchinson; Wei Han; Zarana Parekh; Xin Li; Han Zhang; Jason Baldridge; Yonghui Wu", "journal": "", "ref_id": "b43", "title": "Scaling autoregressive models for content-rich text-to-image generation", "year": "2022" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b44", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" } ]
[ { "formula_coordinates": [ 4, 212.08, 392.6, 187.84, 17.63 ], "formula_id": "formula_0", "formula_text": "x t = √ α t x 0 + √ 1 -α t ϵ t where ϵ t ∼ N (0, 1)" }, { "formula_coordinates": [ 5, 262.44, 312.32, 87.39, 30.55 ], "formula_id": "formula_1", "formula_text": "S s=0 L l=1 w l,s • B l (r l,s )" } ]
2023-05-23
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b6", "b7", "b8", "b11", "b12", "b7" ], "table_ref": [], "text": "P OINT cloud semantic segmentation aims to classify every point in a given 3D point cloud representation of a scene [1], [2]. It is one of the essential and fundamental tasks in the field of computer vision and is in intense demand for many real-world applications, e.g., virtual reality, self-driving vehicles, and robotics. Driven by the large-scale datasets and powerful deep learning technologies, fully supervised 3D point cloud semantic segmentation methods [3]- [7] have demonstrated significant achievements in recent years. Nevertheless, it is laborious and expensive to build large-scale Fig. 1: Point-wise annotated masks are necessary for previous few-shot 3D point cloud segmentation methods. In this work, we introduce a semantic projection network, which generates prototypes for target categories via texts, as an alternative choice to make up for the limitations of the previous methods that cannot work without masks. segmentation datasets with point-level annotations. Besides, additional annotated samples for novel categories and finetuning/re-training operations are required when extending the trained segmentation model to novel categories. To address these issues, few-shot 3D point cloud semantic segmentation is proposed [8] and has attracted lots of attention. Few-shot point cloud segmentation aims to generate mask for the unlabeled point cloud of query sample based on the clues of a few labeled support samples. It greatly eases the heavy demand for largescale datasets and demonstrates good generalization capability on novel categories.\nA critical challenge of few-shot 3D point cloud segmentation lies in how to effectively classify every point by the limited support information. Methods in 2D few-shot segmentation [9]- [12] extract discriminative and representative support features as prototypes (feature vectors) to guide the segmentation of query images, which has achieved significant results. However, the success of few-shot semantic segmentation in 2D computer vision is driven by the pre-training on large-scale datasets like imagenet [13]. The feature extractor pretrained on large-scale datasets greatly helps the few-shot learning by generating a good feature representation of objects. However, the development of 3D deep learning is hindered by the limited volume and instance modality of current datasets, due to the significant cost of 3D data collection and annotation. This results in less representative features and large feature variation in few-shot 3D point cloud segmentation, even for intra-class samples. Therefore, the prototypical methods that work well in 2D few-shot segmentation are ineffective for the less-well pre-trained networks for 3D point cloud. To address this issue, we propose a Query-Guided Prototype Adaptation (QGPA) module to modify features of support sample prototypes to feature space of query sample. Specifically, the proposed QGPA leans the feature distribution mapping via cross attention between support and query features on the channel dimension, which produces a feature channel adaptation to convert the prototypes from support feature space to query feature space. Propagating the channel-wise distribution from support feature space to query feature space for prototypes smooths the channel distribution discrepancy. 
With such prototype adaptation, we greatly alleviate the feature variation issue in point clouds and significantly improve the performance of few-shot 3D point cloud semantic segmentation.\nMoreover, optimizing prototype generation is worth enhancing the category-related features, as more representative and discriminative prototypes are the foundation for the success of subsequent adaptation and segmentation. If the prototype obtained from the support feature is not an apposite representative, it can hardly transfer informative clues to the query sample. Meantime, the usage of prototype adaptation reduces the influence of query mask supervision on prototype generation from the support set, despite the proposed QGPA having greatly narrowed down the feature gap between prototypes and query features. Hence, we propose a Self-Reconstruction (SR) module, which enables prototypes to reconstruct the support masks, to strengthen the representation of prototypes. Specifically, after obtaining the prototype by an average pooling over the features of points indicated by the support mask, we apply the prototype back to the support features to reconstruct the support mask and employ explicit supervision on this selfreconstruction process. Such a simple self-reconstruction plays an important regularization role in the whole few-shot point cloud segmentation task to enhance the discriminative and semantic information embedded in prototypes.\nFinally, although previous approaches [8] and the aboveproposed method reduce the number of required annotated samples via meta-learning, point-wise segmentation masks are still necessary. In some practical application scenarios, we may only have the category name of interest but have no corresponding images or masks. Thus, in this work, we propose to step forward further and discard support masks, i.e., jointly considering few-shot and zero-shot 3D point cloud segmentation, as shown in Fig. 1. To this end, we introduce the semantic information, e.g., words of category name, to indicate the target categories and propose a Semantic Projection Network that bridges the semantic and visual features. Our projection network takes the semantic embedding as input and outputs a projected prototype, supervised by real prototypes from point clouds. During testing, besides obtaining prototypes via support branch with dense-annotated support masks, prototypes can be alternatively obtained by inputting semantic words with our proposed projection network.\nIn a nutshell, the main contributions of our work are:\n• We propose an efficient and effective Query-Guided Prototype Adaption (QGPA) that propagates the channelwise features from support sample prototypes to query feature space, which maps prototypes into query feature space. Prototypes are thus endowed with better adaption ability to mitigate channel-wise intra-class sample variation.\n• We introduce Self-Reconstruction (SR) module that enforces the prototype to reconstruct the support mask generating this prototype, which greatly helps the prototype preserve discriminative class information. • We design a semantic projection network to produce prototypes with the input of semantic words, which facilitates the inference without the use of support information. • We achieve new state-of-the-art performance on two fewshot point cloud segmentation benchmarks, S3DIS and ScanNet. 
Specifically, our method significantly outperforms state-of-the-arts by 7.90% and 14.82% under the challenging 2-way-1-shot setting on S3DIS and ScanNet benchmarks, respectively." }, { "figure_ref": [], "heading": "II. RELATED WORK A. 3D Point Cloud Semantic Segmentation", "publication_ref": [ "b13", "b15", "b1", "b3", "b16", "b20", "b1", "b5" ], "table_ref": [], "text": "3D point cloud semantic segmentation aims to label each point in a given 3D point cloud by the most appropriate semantic category from a set of predefined categories. Thanks to the great success of deep neural networks, most recent deeplearning-based approaches have achieved impressive improvements in point cloud segmentation performance. There are two mainstreams in point cloud segmentation: voxel-based [14]- [16] and point-based methods [2], [4], [17]- [21]. The pointbased methods have attracted more and more attention because of its simplicity and effectiveness. PointNet [2], a first pointbased method, proposes a novel neural network to segment point clouds directly, which preserves the permutation invariance of the input well. DGCNN [6] utilizes EdgeConv module to capture local structures which is neglected in PointNet. Despite these approaches achieved promising segmentation performance, they cannot easily segment unseen categories without being fine-tuned on enough labeled data. In this work, we follow the structure of DGCNN to capture local structure feature and propose our method to generalize to new classes with only a few of annotated samples." }, { "figure_ref": [], "heading": "B. Few-shot 3D Point Cloud Semantic Segmentation", "publication_ref": [ "b7", "b7" ], "table_ref": [], "text": "Few-shot 3D Point Cloud Semantic Segmentation put the general 3D point cloud semantic segmentation in a few-shot scenario, where model is endowed the ability to segment novel classes with only a handful support data. Zhao et al. [8] propose attention-aware multi-prototype transductive inference to segment novel classes with a few annotated examples for few-shot point cloud semantic segmentation for the first time. However, attMPTI is very complicated and time-consuming due to exploiting multiple prototypes and establishing graph construction for few-shot point cloud segmentation and cannot achieve the impressive result. In this work, we deal with the few-shot point cloud semantic segmentation following the paradigm of [8]. We explore to mitigate the feature variation for the objects with same label but from different images via a simple and effective transformer design." }, { "figure_ref": [], "heading": "C. Few-shot Learning and Zero-shot Learning", "publication_ref": [ "b21", "b24", "b25", "b27", "b22", "b21", "b28", "b31", "b32", "b33", "b34", "b35", "b36" ], "table_ref": [], "text": "Few-shot learning focuses on learning a new paradigm for a novel task with only a few annotated samples. Existing work can be grouped into two main categories, which are based respectively on metric learning [22]- [25], and meta-learning network [26]- [28]. The core concept in metric learning is distance measurement between images or regions. For example, Vinyals et al. [23] design matching networks to embed image into an embedded feature and implement a weighted nearest neighbor matching for classifying unlabelled samples. Snell et al. [22] introduce a prototypical network to build a metric space where an input is identified in line with its distance from the class prototypes. 
Our work is in conformity with the prototypical network while we use it for more challenging segmentation tasks with a simple yet effective design.\nZero-shot learning [29]- [32] aims to classify images of unseen categories with no training samples via utilizing semantic descriptors as auxiliary information. There are two main paradigms: classifier-based methods and instance-based methods. Classifier-based methods aim to learn a good projection between visual and semantic spaces [33], [34] or transfer the relationships obtained in semantic space to visual feature space [35], [36]. Another main branch is instance-based methods [37] that synthesize some fake samples for unseen classes. The proposed semantic projection network bridges semantic prototypes and visual prototypes, and combines zeroshot learning with few-shot learning to flexibly handle cases with masks and without masks, which greatly eases the heavy demand for large-scale 3D datasets." }, { "figure_ref": [], "heading": "D. Few-shot Segmentation", "publication_ref": [ "b37", "b40", "b41", "b42", "b43", "b44", "b45", "b46", "b8", "b47", "b9", "b11", "b48" ], "table_ref": [], "text": "As an extension of few-shot classification, few-shot segmentation performs a challenging task of predicting dense pixel classification [38]- [41] on new classes with only a handful of support examples [42], [43]. Shaban et al. [44] introduce this research problem for the first time and design a classical two-branch network following the pipeline of Siamese network [45]. Later on, PL [46] introduces the concept of prototypical learning into segmentation task for the first time. where predictions is generated according to the cosine similarity between pixels in query image and prototypes generated from support image and support mask. SG-One [47] designs masked average pooling to generate object related corresponding prototype, which has become the cornerstone of subsequent methods. PANet [9] extends this work to a more efficient design and propose a prototype alignment regularization to make better use of support set information, achieving better generalization capability. CANet [48] designs two-branch architecture to perform multi-level feature comparison and embed iterative optimization module to get refined predicted results. PPNet [10] and PMMs [12] have similar idea to decompose objects into multiple parts and are capable of obtaining diverse and fine-grained representative features. PFENet [49] generates training-free prior masks utilizing a pre-trained backbone, and alleviated the spatial inconsistency through enhancing query features with prior masks and support features.\nHowever, existing methods have a common characteristic, i.e. the feature extractor pretrained on large-scale datasets greatly affects the performance of few-shot learning. Therefore, the feature extractor for few-shot 3D point cloud cannot provide representative features for objects because of lacking of pretraining on large-scale datasets hindered by the limiting volume and instance modality. Consequently, existing popular prototypical methods in 2D few-shot classification/segmentation do not work well in the field of 3D point cloud segmentation. In this work, we tackle this issue by proposing a prototype adapter and self-reconstruction to project the prototype from support point clouds feature space to query point clouds feature space effectively." }, { "figure_ref": [], "heading": "III. 
APPROACH", "publication_ref": [], "table_ref": [], "text": "In this section, we present the proposed approach. We first give the task definition in Sec. III-A and the architecture overview of our proposed model in Sec. III-B. Then the proposed Query-Guided Prototype Adaption (QGPA), Self-Reconstruction, and Semantic Prototype Projection are presented in Sec. III-C, Sec. III-D, and Sec. III-E, respectively." }, { "figure_ref": [], "heading": "A. Problem Definition", "publication_ref": [ "b7", "b22" ], "table_ref": [], "text": "We follow previous work [8] to define the training and testing protocols for few-shot point cloud semantic segmentation. Each point cloud I ∈ R N ×(3+f0) contains N points associated with the coordinate information ∈ R 3 and an additional feature ∈ R f0 , e.g., color. Suppose we have point cloud samples from two non-intersecting sets of classes C seen and C unseen . We train our model on a train set D train that is constructed from C seen and evaluate on the test set D test built from C unseen . The widely used episodic paradigm [23] in few-shot learning is adopted to construct the train set D train and test set D test .\nEach training/testing episode in few-shot point cloud segmentation instantiates a C-way K-shot segmentation learning task. There are K ⟨point cloud, mask⟩ pairs for each of the C different categories in the support set, and every point in the query point cloud is classified to one of the C categories or \"background\", which does not belong to any of the C categories. We denote the support set as S = {(I c,k S , M c,k S )}, where I is a point cloud, M is a mask, k ∈ {1, • • • , K}, and c ∈ {1, • • • , C}. We further introduce semantic words as an alternative choice to provide support information for target categories. In our training stage, the support set is reformulated as S = {(I c,k S , M c,k S , W S )}, where W S is the semantic word set for the support point cloud. In the testing stage, different from previous methods that compulsorily require point-level annotated masks, it is acceptable for our approach to only input the semantic word related to the target-of-interest, i.e., S = {W c S }. The goal of our approach is summarized as: to train a model that, when given a support set S with either annotated masks or semantic words that provide support information for C classes, generates a segmentation mask with point-wise labels from the supported C classes (and \"background\") for the input query point cloud Q." }, { "figure_ref": [ "fig_0" ], "heading": "B. Architecture Overview", "publication_ref": [ "b5" ], "table_ref": [], "text": "The overall training architecture of our proposed approach is shown in Fig. 2. For each episode in the training stage, point clouds of the support set and query set are processed by a DGCNN [6] backbone and mapped to deep features. To obtain prototypes from the support set, masked average pooling (MAP) is applied over the support features. Then, Query-Guided Prototype Adaption (QGPA) is utilized to rectify the feature channel distribution gap between query and support point clouds. The cosine similarity is employed between prototype and query feature to produce the score maps for generating the final mask prediction. Every point in the query point cloud is assigned the label of the most similar prototype. To preserve the class-related discriminative information embedded in prototypes, a Self-Reconstruction (SR) module is introduced to obtain self-consistent, high-quality prototypes. 
What's more, a semantic projection network is proposed to project the semantic word embeddings to visual prototypes under a regression loss. During the inference stage, the proposed semantic projection network can take the place of the visual support branch to provide prototypes when no point-wise annotated masks are available as support information." }, { "figure_ref": [], "heading": "C. Query-Guided Prototype Adaption", "publication_ref": [ "b49", "b5" ], "table_ref": [], "text": "We generate prototypes for every target category in the support set by conducting masked average pooling over the support features with corresponding support masks. Given a support set S = {(I c,k S , M c,k S )}, where k ∈ {1, ..., K} and c ∈ {1, ..., C} are the K-shot and C-way indexes respectively, and its feature F c,k S ∈ R N ×d , where N is the number of points and d is the feature channel number, the prototype (feature vector) of category c is obtained by:\n$p^c = \frac{1}{K}\sum_{k}\frac{\sum_{x} F^{c,k}_{S,x}\,\mathbb{1}(M^{c,k}_{S,x}=c)}{\sum_{x}\mathbb{1}(M^{c,k}_{S,x}=c)}, \quad (1)$\nwhere x ∈ {1, ..., N } denotes the coordinate positions and 1( * ) is the binary label indicator that outputs 1 when * is true. Besides the target categories, we compute a background prototype p 0 to represent the points that do not belong to any of the C target categories:\n$p^0 = \frac{1}{CK}\sum_{c,k}\frac{\sum_{x} F^{c,k}_{S,x}\,\mathbb{1}(M^{c,k}_{S,x}\notin\{1,...,C\})}{\sum_{x}\mathbb{1}(M^{c,k}_{S,x}\notin\{1,...,C\})}. \quad (2)$\nNow we have a set of prototypes P = {p 0 , p 1 , ..., p C }.\nPrototypes obtained from the support point clouds have a channel-wise feature distribution gap with features of query point clouds, as we discussed in Sec. I. Each sample has a different feature channel response distribution [50]. This feature distribution gap is more obvious in 3D point clouds than in 2D segmentation at the image level, due to the lack of large-scale 3D datasets for pretraining the feature extractor. To rectify the feature distribution gap, we design Query-Guided Prototype Adaption (QGPA) that maps the prototype to the query feature space under the guidance of query-support feature interaction, as shown in Fig. 3.\nFig. 3: The architecture of our proposed Query-Guided Prototype Adaption (QGPA) module. Query Features and Support Features are extracted from the query point clouds and support point clouds by DGCNN [6] (see Fig. 2).\nIn detail, given a prototype p i ∈ R 1×d , i ∈ {0, 1, ..., C}, its support features that generate this prototype, and a query feature F Q ∈ R N ×d , we calculate the channel-wise cross attention between support and query features, producing a projection attention matrix. The prototype is then processed by the projection attention matrix to fit the query feature distribution. We first average these support features for each prototype:\n$F^i_S = \begin{cases} \frac{1}{K}\sum_{k} F^{c,k}_S, & i \in \{1,...,C\}, \\ \frac{1}{CK}\sum_{c,k} F^{c,k}_S, & i = 0, \end{cases} \quad (3)$\nwhere F i S ∈ R N ×d represents the averaged support feature for prototype p i . Then the input to our QGPA is designed as:\n$\mathrm{Query} = F_Q^{\top} W_q, \quad \mathrm{Key} = {F^i_S}^{\top} W_k, \quad \mathrm{Value} = p^i W_v, \quad (4)$\nwhere W q , W k ∈ R N ×N ′ and W v ∈ R d×d are the learnable parameters for fully connected layers that project the features and the prototype to the corresponding latent space, respectively. N ′ ≤ N is a reduced hidden point number used to save computing resources. 
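As an illustration of this adaptation, the following is a minimal sketch in PyTorch of the channel-wise cross-attention that QGPA applies to a single prototype; the attention map and residual update it computes are spelled out in Eqs. (5)-(6) below. The class and parameter names, the hidden point number, and the single-prototype, batch-free interface are assumptions made for illustration, not the released implementation.

```python
# Illustrative sketch (assumptions: PyTorch, one prototype at a time, no batch dimension;
# not the authors' released code). Follows Eqs. (4)-(6): channel-wise cross-attention
# between support and query features adapts a prototype to the query feature space.
import torch
import torch.nn as nn

class QGPA(nn.Module):
    def __init__(self, num_points, num_hidden_points, dim):
        super().__init__()
        self.w_q = nn.Linear(num_points, num_hidden_points, bias=False)  # W_q: N -> N'
        self.w_k = nn.Linear(num_points, num_hidden_points, bias=False)  # W_k: N -> N'
        self.w_v = nn.Linear(dim, dim, bias=False)                       # W_v: d -> d
        self.w_p = nn.Linear(dim, dim, bias=False)                       # W_p: d -> d
        self.dim = dim

    def forward(self, prototype, support_feat, query_feat):
        # prototype: (1, d); support_feat / query_feat: (N, d)
        query = self.w_q(query_feat.t())    # (d, N')  <- F_Q^T W_q
        key = self.w_k(support_feat.t())    # (d, N')  <- F_S^T W_k
        value = self.w_v(prototype)         # (1, d)   <- p W_v
        attn = torch.softmax(query @ key.t() / self.dim ** 0.5, dim=-1)  # (d, d) channel map
        adapted = self.w_p((attn @ value.t()).t())                       # (1, d)
        return prototype + adapted          # residual update, as in Eq. (6)

# Example with N = 2048 points, N' = 128 hidden points, d = 192 channels.
qgpa = QGPA(num_points=2048, num_hidden_points=128, dim=192)
p = torch.randn(1, 192)
f_s, f_q = torch.randn(2048, 192), torch.randn(2048, 192)
p_adapted = qgpa(p, f_s, f_q)  # torch.Size([1, 192])
```

In practice the same module is applied to every prototype in P, including the background prototype p 0.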
Hence, we get Query ∈ R d×N ′ , Key ∈ R d×N ′ and Value ∈ R 1×d for the transformer attention.\nThen, we calculate a matrix multiplication between the Query and the transpose of Key, and add a softmax layer to obtain the channel attention map Attn ∈ R d×d as:\n$\mathrm{Attn} = \mathrm{softmax}\left(\frac{\mathrm{Query} \cdot \mathrm{Key}^{\top}}{\sqrt{d}}\right), \quad (5)$\nwhere softmax(•) is a row-wise softmax function for attention normalization. This cross-attention map Attn establishes the channel-to-channel correspondence between query and support features, which guides the channel distribution propagation. Finally, a matrix multiplication is conducted between Attn and the transpose of Value to rectify the prototype to the query feature channel distribution:\n$\dot{p}^i = p^i + W_p (\mathrm{Attn} \cdot \mathrm{Value}^{\top})^{\top}, \quad (6)$\nwhere the original prototype is added as a residual connection for more stable training convergence, and W p ∈ R d×d is the parameter of a fully connected layer. All the C + 1 prototypes P = {p 0 , p 1 , ..., p C } are processed by Eq. (3) to Eq. (6), generating C + 1 refined prototypes denoted as Ṗ = { ṗ0 , ṗ1 , ..., ṗC }. With Eq. (6), the prototype generated by the support set is in line with the feature channel distribution of the query sample, which contributes to mitigating feature variation between point cloud samples. Therefore, when we perform cosine similarity between query features and refined prototypes, channel-wise inconsistency will be alleviated, which greatly contributes to getting a better result.\nWe employ cosine similarity with a softmax function between the query feature F Q and the prototypes Ṗ to get the probability score maps. For each prototype ṗi we have the score map:\n$S^i_{Q,x} = \frac{\exp(-\alpha \langle F_{Q,x}, \dot{p}^i \rangle)}{\sum_{\dot{p}^j \in \dot{P}} \exp(-\alpha \langle F_{Q,x}, \dot{p}^j \rangle)}, \quad (7)$\nwhere ⟨a, b⟩ represents the computation of cosine similarity between a and b, and α is an amplification factor. The predicted segmentation mask is then given by\n$\hat{M}_{Q,x} = \arg\max_{i} S^i_{Q,x}. \quad (8)$\nLearning proceeds by minimizing the negative log-probability\n$\mathcal{L}_{seg} = -\frac{1}{N} \sum_{x} \sum_{i} \mathbb{1}(M_{Q,x}=i) \log S^i_{Q,x}, \quad (9)$\nwhere N is the total number of points, and M Q is the ground truth mask of the query point cloud I Q ." }, { "figure_ref": [], "heading": "D. Self-Reconstruction", "publication_ref": [], "table_ref": [], "text": "Although through Section III-C we can obtain refined prototypes that better fit the distribution of query features, they may lose the original critical class and semantic information that was learned from the support set. Additionally, it's crucial to extract more representative and discriminative prototypes from the support set, as this is the foundation for the success of subsequent adaptation and segmentation. Discriminative prototypes should carry the category information of the support point cloud, i.e., prototypes need to have the capability to reconstruct ground-truth masks from themselves.\nFor each support feature F c,k S and corresponding support mask M c,k S , we calculate cosine similarity with a softmax function between the support feature F c,k S and the prototypes {p 0 , p c } to get the score map S c,k S :\n$S^{c,k,i}_{S,x} = \frac{\exp(-\alpha \langle F^{c,k}_{S,x}, p^i \rangle)}{\sum_{p^j \in \{p^0, p^c\}} \exp(-\alpha \langle F^{c,k}_{S,x}, p^j \rangle)}. \quad (10)$\nThe reconstructed support mask is given by\n$\hat{M}^{c,k}_{S,x} = \arg\max_{i} S^{c,k,i}_{S,x}. \quad (11)$\nThis reconstructed support mask $\hat{M}^{c,k}_{S}$ is expected to be consistent with the original support point cloud ground truth mask M c,k S . We call this process Self-Reconstruction (SR). The Self-Reconstruction loss L sr is computed by minimizing the negative log-probability, similar to Eq. 
( 9):\nL sr = - 1 CKN c,k,x i 1(M c,k S,x = i) log S c,k,i S,x .(12)\nThe final segmentation loss is sum of L seg and L sr :\nL total = L seg + L sr .(13)\nWithout this Self-Reconstruction as a constraint, prototypes may lose the original critical class and semantic information when aligning with the distribution of the query feature. On the other hand, it does not adequately utilize the support information for few-shot learning. The proposed Self-Reconstruction module serves as an important regularization role in the whole few-shot point cloud segmentation task to preserve discriminative and semantic information embedded in prototypes. Besides, it provides a mechanism to balance prototype adaptation while maintaining the original separability, adding an additional level of refinement to the process." }, { "figure_ref": [], "heading": "E. Semantic Prototype Projection", "publication_ref": [ "b50", "b51", "b52", "b54", "b55" ], "table_ref": [], "text": "Few-shot learning methods though reduce the number of required annotated samples to a certain degree, point-level segmentation masks are still compulsory during inference, as in Eq. ( 1) and in Eq. (2). To relieve the demanding requirements for point-level annotations, we step forward further and explore discarding support masks in this work. To this end, we introduce the semantic information to indicate the target categories and propose a semantic projection network that projects the semantic words from semantic space to visual space, i.e., from semantic words to visual prototypes. Given a set of semantic words W S = {w 0 S , w1 S , ..., w C S }, where w i S represents the name word for class i, we first employ a text-encoder, e.g., word2vec [51] or CLIP [52], to produce corresponding semantic embeddings, denoted as E S = {e 0 S , e 1 S , ..., e C S }, so that we have e i S = textencoder(w i S ). A dump of the Wikipedia corpus containing 3 billion words is utilized to train the word2vec model and 400 million textimage pairs are used in training CLIP. Such text encoder has the ability to produce corresponding embeddings for words that have the corresponding relationships in context [53]- [55]. Hence, the semantic embeddings from word2vec/CLIP is expected to have obtained the semantic relationship among the classes. We utilize Linear+LeakyReLU+Dropout to form a projection block, whose output is produced by another Linear transformation to generate the projected prototype. We denote the projected prototypes as P = {p 0 , p1 , ..., pC }, then we have pi = P rojection(e i S ). We follow [56] to use a differential criterion to compare the target prototypes Ṗ and the projected prototypes P by minimize the maximum mean discrepancy:\nL reg = ṗ, ṗ′ ∈ Ṗ G( ṗ, ṗ′ ) + p, p′ ∈ P G(p, p′ ) -2 ṗ∈ Ṗ p∈ P G( ṗ, p),(14) where\nG(a, b) = exp(-1 2σ 2 ∥ a -b ∥ 2\n) is a Gaussian function with bandwidth parameter σ. The notation ṗ′ represents one of the points in the sample Ṗ that is being compared to the point ṗ via the kernel function. When the training is over, the mapping from semantic embedding to visual prototype has been established. During the testing process, the proposed projection network can take place of the support branch to provide prototypes when no point-level annotated masks are provided. As long as the name of the category of interest is given, the corresponding prototype with semantic features is generated, and then query point cloud can be segmented without point-wise annotated masks as support information." 
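To make the regularisation in Eq. (14) concrete, a schematic implementation of the Gaussian-kernel maximum mean discrepancy between the adapted prototypes and the word-projected prototypes is given below. This is not the released code: averaging over prototype pairs (rather than summing) and summing the kernels over several bandwidths are our simplifications, the latter reflecting the multiple σ values reported in the implementation details of Sec. IV-A.

```python
import torch

def gaussian_kernel(a, b, sigma):
    # G(a, b) = exp(-||a - b||^2 / (2 * sigma^2)) for every pair of rows in a and b.
    dist2 = torch.cdist(a, b, p=2) ** 2
    return torch.exp(-dist2 / (2.0 * sigma ** 2))

def mmd_loss(adapted_protos, projected_protos, sigmas=(2, 5, 10, 20, 40, 60)):
    """Maximum mean discrepancy between the two prototype sets (Eq. 14).

    adapted_protos:   (C+1, d) prototypes produced by QGPA.
    projected_protos: (C+1, d) prototypes produced by the semantic projection network.
    """
    loss = adapted_protos.new_zeros(())
    for sigma in sigmas:
        k_pp = gaussian_kernel(adapted_protos, adapted_protos, sigma).mean()
        k_qq = gaussian_kernel(projected_protos, projected_protos, sigma).mean()
        k_pq = gaussian_kernel(adapted_protos, projected_protos, sigma).mean()
        loss = loss + k_pp + k_qq - 2.0 * k_pq
    return loss
```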
}, { "figure_ref": [], "heading": "IV. EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we report the experimental results of the proposed approach in comparison with previous state-of-theart methods and ablation studies that verify the effectiveness of our proposed modules." }, { "figure_ref": [], "heading": "A. Implementation Details", "publication_ref": [], "table_ref": [], "text": "Our approach is implemented based on the public platform Pytorch 1 . During training, we employ a variety of techniques to augment the training samples, such as Gaussian jittering, random shift, random scale, and random rotation around the zaxis. The training samples are then randomly sampled at every episode of each iteration. The feature extractor is initially pretrained on the training set D train for a total of 100 epochs, using Adam as the optimizer. We set the learning rate and batch size to 0.001 and 32, respectively. Following this pre-training stage, we proceed to train our proposed model with the pretrained weights as initialization weights. For the convenience of explanation, we divide the network into two sub-networks, the segmentation part (including feature extractor, Query-Guided Prototype Adaption, and Self-Reconstruction) and the semantic projection part (Semantic Projection Network). To ensure that the performance of segmentation part is not affected by the semantic projection part, we employ two independent optimizers during joint training. Adam is utilized as the optimizer to train the segmentation part, with an initial learning rate of 0.001 for the newly added layers, and 0.0001 for the feature extractor. It should be noted that the Self-Reconstruction does not introduce any additional parameters. The learning rates decay by 0.5 after every 5K iterations. The semantic projection component is trained using the Adam optimizer with a consistent learning rate of 0.0002 throughout the entire training process. As for hyper-parameter settings, we set the σ of G used in Eq. ( 14) to {2, 5, 10, 20, 40, 60}. We set the number of Transformer layers and attention head to 1 for simplicity. Quey-Guided Prototype Adaption and Self-Reconstruction, respectively. S i denotes the split i is used for testing." }, { "figure_ref": [], "heading": "B. Datasets and Evaluation Metrics a) Datasets:", "publication_ref": [ "b56", "b57", "b1", "b5", "b7", "b8", "b60" ], "table_ref": [ "tab_2", "tab_2" ], "text": "We perform experimental evaluation on two public 3D point clouds datasets S3DIS [57] and ScanNet [58]. S3DIS is composed of 271 point clouds from Matterport scanners in six different areas from three buildings. The annotation for the point clouds has 12 semantic classes in addition to one background class annotated with clutter. ScanNet is made up of 1,513 point clouds of scans from 707 unique indoor scenes. The annotation for the point clouds has 20 semantic classes in addition to one background class annotated with unannotated space. Since the original scene is too large to process, we need to split it into smaller blocks. As a result, S3DIS and ScanNet contain 7,547 and 36,350 blocks through data pre-processing strategy utilized in [2], [6], respectively. N = 2, 048 points are randomly sampled for each block and each point is represented by a 9D vector (XYZ, RGB and normalized spatial coordinates).\nFollowing [8], semantic classes in each dataset is evenly split into two non-overlapping subsets . 
For both S3DIS and ScanNet, when testing the model with test class set C unseen on one fold, we use the other fold to train the model with train class set C seen for cross-validation.\nIn training process, an episode is constructed using the following procedure. First, C classes from C seen is randomly chosen which should meet the criterion N < |C seen |; Next, random choose sample from support set S and a query set Q based on the chosen C classes. Finally, the ground-truth mask M S for the support set and M Q for the query set are generated from the original mask annotation as the binary mask according to the chosen classes. The episodes for testing are built in a similar form. Except for one difference, we traverse N classes out of C unseen classes instead of randomly choosing N classes to get more fair result. 100 episodes are sampled for evaluation.\nb) Evaluation Metrics: Following conventions in the point cloud semantic segmentation community, we evaluate all methods with Mean Intersection-over-Union (mean-IoU). The per-class Intersection over Union (IoU) is defined as T P T P +F N +F P , where the T P , F N , and F P is the count of true positives, false negatives and false positives, respectively. For few-shot setting, mean-IoU is calculated by averaging over all the classes in testing classes C unseen . in TABLE I. The experiments are conducted on both S3DIS and ScanNet under 2-way 1-shot and 2-way 5-shot settings. We adopt ProtoNet [9], [61] as our baseline. First of all, when adding the proposed Query-Guided Prototype Adaption (QGPA) module on our baseline, a performance gain of 3.97% and 10.66% in terms of mIoU under 1-shot settings is observed over the baseline on S3DIS and ScanNet, respectively, as shown in TABLE I. The performance gain is because of the benefit of our effective transformer design and the adaption of prototypes from support feature space to query feature space. The superior result demonstrates that the capacity of transformer in adapting feature channel-wise correlations between samples, which is important in point cloud scenery, especially for few-shot learning with only a handful of training samples." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_1" ], "heading": "C. Ablation", "publication_ref": [ "b60", "b61", "b62", "b8", "b60", "b5", "b8", "b58", "b59", "b7", "b7" ], "table_ref": [ "tab_6", "tab_6", "tab_10" ], "text": "Then, by introducing Self-Reconstruction (SR) as auxiliary supervision, we further obtain significant improvement over the QGPA, e.g., 3.78% and 4.66% performance gain under 2-way 1-shot setting on S3DIS and ScanNet, respectively, as shown in the last row in TABLE I. The proposed Self-Reconstruction forces the prototypes to restore the support information generating them, which give constraints on the prototypes to retrain the class-related clues. With SR, better prototypes that contain discriminative feature representations are produced. Meantime, the gradients from QGPA may make the class-related support clues less pronounced while the proposed SR protect and enhance such clues. Meanwhile, we observe that the performance gain by Self-Reconstruction over baseline without QGPA is less than with QGPA, e.g., 0.17% and 0.11% under 2-way-1-shot setting on S3DIS and ScanNet, respectively, as shown in the third row in extra constraints on this basis does not play a big role. 
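For reference, the channel-wise cross-attention at the core of QGPA (Eqs. 4-6) can be written as a single-head module as sketched below. This is a simplification, not the released implementation: the layer names, the default dimensions, and the omission of dropout and multi-layer stacking are our choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QGPASketch(nn.Module):
    """Single-head sketch of Query-Guided Prototype Adaption (Eqs. 4-6)."""

    def __init__(self, num_points=2048, hidden_points=512, dim=192):
        super().__init__()
        self.w_q = nn.Linear(num_points, hidden_points, bias=False)   # W_q in R^{N x N'}
        self.w_k = nn.Linear(num_points, hidden_points, bias=False)   # W_k in R^{N x N'}
        self.w_v = nn.Linear(dim, dim, bias=False)                    # W_v in R^{d x d}
        self.w_p = nn.Linear(dim, dim, bias=False)                    # W_p in R^{d x d}
        self.dim = dim

    def forward(self, prototype, support_feat, query_feat):
        """prototype: (1, d); support_feat: (N, d) averaged support feature (Eq. 3);
        query_feat: (N, d). Returns the adapted prototype of shape (1, d)."""
        q = self.w_q(query_feat.t())                                  # Query = F_Q^T W_q, (d, N')
        k = self.w_k(support_feat.t())                                # Key   = F_S^T W_k, (d, N')
        v = self.w_v(prototype)                                       # Value = p_i W_v,   (1, d)

        attn = F.softmax(q @ k.t() / self.dim ** 0.5, dim=-1)         # channel attention map, Eq. (5)
        return prototype + self.w_p((attn @ v.t()).t())               # residual update, Eq. (6)
```

Each of the C + 1 prototypes is passed through the same module with the same query feature, so the extra cost is dominated by the two N→N′ projections.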
When combining these two modules together, our full method achieves the best results that improve substantially over the baseline, which demonstrates that the proposed SR and QGPA are mutually beneficial. It is worth noting that when SR is added on the model with QGPA, it can play a better positive role than on the model without QGPA, which is in line with our original motivation. When we utilize QGPA to align prototypes with the query feature distribution, it may lose the original discriminative and semantic characteristics deviate from the original prototype. With our proposed SR as supervision, prototypes can preserve these informative clues by reconstructing support gourd-truth masks from themselves.\nb) Ablation Study for Baseline Configuration: We study the effects of various designs of the ProtoNet [61] since it is the baseline of our method. The results of different variants are listed in TABLE II. The results are improved consistently with the help of data augmentation (AUG), multi-scale feature aggregation (MS) [62], [63], and align loss (AL) [9]. As shown in TABLE II, based on the vanilla ProtoNet [61], with random scale and random shift to augment training samples, an improvement of 0.81% on S3DIS and 2.2% on ScanNet is achieved. Then, by aggregating multi-scale features from the backbone DGCNN [6], we further improve the results by 2.05% on S3DIS and 3.33% on ScanNet. Then by introducing align loss [9] with a performance gain of around 1%, the baseline achieves 52.17% on S3DIS and 41.03% on ScanNet. c) Ablation Study for QGPA Configuration: In Fig. 4, we illustrate the effects of three hyper-parameters in our proposed QGPA configuration (i.e., number of transformer block layer, dropout rate, hidden point number N ′ ) under the setting of 2-way 1-shot point cloud semantic segmentation on one split of S3DIS and ScanNet. As shown in Fig. 4 (a), increasing the layers of QGPA achieves better results, but overly large layers consume more computing resources and slow down the inference speed. Thus, we choose a single layer to achieve a good balance of accuracy and efficiency. As Fig. 4 (b) reveals that increasing drop out rate degrades the result a lot, the drop rate of 0.0 gives the best result on both datasets. While in Fig. 4 (c), with the increase of hidden point number, the result first rises and then becomes flat. Therefore, our transformer's hidden point number N ′ is set to 512 to achieve robust performance and keep efficient. d) Comparison with Other Designs for QGPA: To verify the superiority of our proposed QGPA, we list several other SOTA transformer-like modules design in TABLE III. For a fair comparison, except for the transformer design, these methods are trained with the same experimental configuration and we conduct experiments based on the same baseline without Self-Reconstruction. Classifier Weight Transformer (CWT) [59] is a few-shot segmentation transformer architecture which modifies classifier weight adapting to each query feature. Classifier weight can be regarded as prototypes, hence here we set the prototype as query; query feature as the key and value input of CWT transformer, please refer to CWT's paper for CWT architecture details. Besides, we use DETR [60] decoder branch which contains self-and cross-attention block design. We set the prototype as query; query feature as the key and value, similar with CWT. It is worth noting that the number of prototypes varies with the number of ways C. For instance, in a 2-way setting, there would be three prototypes. 
Therefore, the self-attention block in DETR is able to work as intended. As the experimental results shown in Fig. 5: Qualitative results of our method in 2-way 1-shot point cloud few-shot segmentation on the S3DIS dataset in comparison to the ground truth and attMPTI [8]. Five combinations of 2-way are illustrated from the top to bottom rows, i.e., \"ceiling, board\" (first row), \"chair, column\" (second row), \"door, table\" (third row), \"floor, wall\" (fourth row).\"window, sofa\" (last row).\nto our proposed Query-Guided Prototype Adaption (QGPA), where performance drops of -3.98%/-6.5% on S3DIS and -9.80%/-7.66% on ScanNet under 1-shot settings are observed, respectively. This confirms that simply applying Transformers for the 3D point cloud segmentation task is not effective, because they ignore the discrepancy in the distribution of features in the channel-wise space. D-QGPA represents the degradation of our proposed QGPA which replaces support feature with query feature for key input. It will lead to that attention value is calculated from query feature itself and cannot get knowledge from support feature. We find that D-QGPA does not lead to improvement but a large drop (-14.18% under 1-shot On ScanNet) compared to QGPA. This indicates the essential importance of adaption from support to query feature space. Without support feature, the prototype adaption lacks the information of the source, only the information of the target, which makes the adaptation process out of action.\nThe model parameters and inference speed of these methods are also listed in TABLE III. As can be seen, our method is more lightweight and efficient. e) Ablation Study for Different Choices of Prototypes in Self-Reconstruction: Upon transferring the prototype to the query feature space, the feature gap between the refined prototype and the original support feature arises. Consequently, utilizing the refined prototype in query feature space to reconstruct the support mask in support feature space is Fig. 6: Qualitative results of our method in 2-way 1-shot point cloud few-shot semantic segmentation setting on the ScanNet dataset in comparison to the ground truth and attMPTI [8].\nFive combinations of 2-way are illustrated from the top to bottom rows, i.e., \"desk, bed\" (first row), \"chair, cabinet\" (second row), \"table, sofa\" (third row), \"window, wall\" (fourth row), \"toilet, sink\" (last row). unreasonable. In contrast, using constraints on the original prototype can facilitate the extraction of a more discriminative representation from the support set. This serves as a foundation for the subsequent adaptation's success, allowing the refined prototype to indirectly retain critical class and semantic information. Furthermore, we conduct extensive experiments to assess the choice of original prototypes or refined prototypes in conducting reconstructed support masks, as shown in TABLE IV. Utilizing the original prototypes yields superior performance, exhibiting an increase of up to 4.05% in mIoU compared to the refined prototype, which is in accordance with our previous analysis." }, { "figure_ref": [], "heading": "D. Qualitative Result", "publication_ref": [ "b7", "b7", "b7" ], "table_ref": [], "text": "To qualitatively demonstrate the performance of our proposed approach, we visualize some point cloud segmentation result on the S3DIS and ScanNet in Fig. 5 and Fig. 6, respectively. In both Fig. 5 and Fig. 
6, the first column is visualization of input point clouds, the second column is ground-truth masks, the third and fourth columns are predicted masks by attMPTI [8] and our proposed approach, respectively. Our approach achieves better results than attMPTI [8]. For example, in the last row of Fig. 5, our approach segments the \"sofa\" very well, while attMPTI identifies part of the \"sofa\" to \"background\". In the last row of Fig. 6, attMPTI incorrectly classifies \"sink\" to \"toilet\" and \"background\", while our approach generates high-quality mask for \"sink\". The qualitative results in Fig. 5 and Fig. 6 demonstrate the effectiveness of our proposed approach and our approach's superior over previous state-of-the-art method attMPTI [8]." }, { "figure_ref": [], "heading": "E. Comparison with State-of-the-Art Methods", "publication_ref": [ "b56", "b57", "b7", "b7", "b60" ], "table_ref": [], "text": "In TABLE V and TABLE VI, we compare with previous state-of-the-art methods and report our quantitative results on S3DIS [57] and ScanNet [58] datasets, respectively. Our proposed method significantly outperforms previous state-ofthe-art method by a large margin. We outperforms attMPTI [8] in all settings. For example, our proposed approach is 7.90% and 3.5% better than attMPTI [8] under 2-way 1-shot and 2-way 5-shot settings on S3DIS, and is 22.46% and 12.32% better than attMPTI under 3-way 1shot, 3-way 5-shot settings on ScanNet. Compared to ProtoNet [61] which has a similar design paradigm to us, our method achieves up to 13.57% and 24.07% gains on S3DIS and ScanNet, respectively. The huge improvements demonstrate that our method can obtain more dicriminative and adaptive prototypes from not only support samples but also support-query feature adaption. The superior results obtained by our method show that the intraclass-sample-variations problem is critical in 3D point cloud scenery, and our proposed Query-Guided Prototype Adaption and Self-Reconstruction are effective to address this problem." }, { "figure_ref": [], "heading": "F. Comparison with State-of-the-Art Zero-shot Methods", "publication_ref": [ "b50", "b51", "b63", "b64" ], "table_ref": [], "text": "We further evaluate our model with semantic prototype projection branch, as shown in TABLE VII. During testing, the point-level annotations of the support set are replaced by semantic prototypes which are generated from our semantic branch. We report our results with both word2vec [51] and CLIP [52], [64] as text encoder. On the one hand, Compared with TABLE V, our text-based model achieves competitive results compared to the ones with visual support samples. Therefore, the introduction of semantic projection network is able to bridge the gap between visual support and semantic words and establish a generalized framework for few-and zero-shot learning that achieves superior performance regardless of whether the input is in the form of semantic words or visual support samples. It is worth noting that our zeroshot segmentation model achieves better results under 5-shot training than 1-shot. This is because we jointly train our fewshot model and zero-shot model, and use the visual prototypes from support samples as the ground truth of our wordprojected prototypes during training. More support samples produce better visual prototypes, which contribute to training a better text-vision projection network that can generate more accurate word-projected prototypes. 
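For completeness, the projection branch that supplies these word-projected prototypes (Sec. III-E) is small enough to sketch in full. The hidden width, dropout rate, and the 300-dimensional text embedding assumed below (word2vec; CLIP text features would use a different width) are illustrative choices rather than reported values.

```python
import torch
import torch.nn as nn

class SemanticProjection(nn.Module):
    """Maps class-name embeddings to visual prototypes: one Linear+LeakyReLU+Dropout
    block followed by a final Linear transformation, as described in Sec. III-E."""

    def __init__(self, embed_dim=300, hidden_dim=512, proto_dim=192, p_drop=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.LeakyReLU(inplace=True),
            nn.Dropout(p_drop),
            nn.Linear(hidden_dim, proto_dim),
        )

    def forward(self, word_embeddings):
        # word_embeddings: (C+1, embed_dim) text-encoder outputs for the class names
        # (including background). Returns (C+1, proto_dim) prototypes that replace the
        # visual prototypes at test time when no annotated support masks are available.
        return self.net(word_embeddings)
```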
On the other hand, to provide a fair comparison with other zero-shot methods, we follow their official code and evaluate their method under our experimental settings for data augmentations for the training and the training and testing subset splitting of the data to get the results in TABLE VII. The results show that our method has significant improvement over 3DGenZ [65], the only opensource 3D zero-shot segmentation method to the best of our knowledge. This comparison further validates the effectiveness of our method. " }, { "figure_ref": [], "heading": "G. Computational Complexity", "publication_ref": [ "b7" ], "table_ref": [], "text": "In TABLE VIII, we present the number of parameters and computational complexity of our proposed model and previous SOTA method attMPTI [8]. The Query-Guided Prototype Adaption (QGPA) introduces two linear layers that map the point cloud from 2048 to 512, resulting in a moderate increase in the number of parameters. With the addition of Self-Reconstruction (SR), we only need to calculate an additional loss item, and no additional parameters are introduced, thus keeping the computational complexity unchanged. Finally, we integrate our Semantic Project Network (SPN) to obtain the final model. As we only need to learn the mapping from semantic words to visual prototypes, the increase in the number of parameters and computational complexity is minimal. Our model demonstrates strong performance while maintaining a relatively low number of parameters and computational complexity, particularly in terms of FPS. Although attMPTI's Transductive Inference process doesn't increase the parameter count, it significantly slows down inference speed. As a result, our approach is a highly effective and efficient solution for few-and zero-shot 3D point cloud semantic segmentation, delivering superior results and faster FPS. " }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "We propose a prototype adaption and projection network for few-shot and zero-shot point cloud semantic segmentation. By analyzing the feature channel distribution of 2D images and 3D point clouds, we have observed that the feature intraclass variation of 3D point clouds is worse than 2D due to the lack of pre-training on large-scale datasets. We hence propose a Query-Guided Prototype Adaption (QGPA) module to map the prototypes extracted in support feature space to the query feature space, which greatly improves the few-shot segmentation performance. To preserve more class-specific clues in prototypes, we introduce Self-Reconstruction (SR) that enables the prototype to reconstruct the corresponding mask as well as possible. Furthermore, a semantic projection network is proposed to deal with the zero-shot learning cases where no annotated sample is provided but just category names. The semantic projection network makes our model more practical in the real-world. We evaluate the proposed approach on two popular 3D point cloud segmentation datasets, which show new state-of-the-art performances with significant improvement over previous methods." } ]
In this work, we address the challenging task of few-shot and zero-shot 3D point cloud semantic segmentation. The success of few-shot semantic segmentation in 2D computer vision is largely driven by pre-training on large-scale datasets such as ImageNet: a feature extractor pre-trained on large-scale 2D datasets greatly benefits 2D few-shot learning. However, the development of 3D deep learning is hindered by the limited volume and instance modality of datasets, owing to the significant cost of 3D data collection and annotation. This results in less representative features and large intra-class feature variation for few-shot 3D point cloud segmentation. As a consequence, directly extending popular prototypical methods from 2D few-shot classification/segmentation to 3D point cloud segmentation does not work as well as it does in the 2D domain. To address this issue, we propose a Query-Guided Prototype Adaption (QGPA) module that adapts prototypes from the support point cloud feature space to the query point cloud feature space. With such prototype adaption, we greatly alleviate the issue of large intra-class feature variation in point clouds and significantly improve the performance of few-shot 3D segmentation. Besides, to enhance the representation of prototypes, we introduce a Self-Reconstruction (SR) module that enables each prototype to reconstruct the corresponding support mask as well as possible. Moreover, we further consider zero-shot 3D point cloud semantic segmentation, where no support sample is available. To this end, we introduce category words as semantic information and propose a semantic-visual projection model to bridge the semantic and visual spaces. Our proposed method surpasses state-of-the-art algorithms by a considerable 7.90% and 14.82% under the 2-way 1-shot setting on the S3DIS and ScanNet benchmarks, respectively. Code is available at https://github.com/heshuting555/PAP-FZS3D.
Prototype Adaption and Projection for Few- and Zero-shot 3D Point Cloud Semantic Segmentation
[ { "figure_caption": "Fig. 2 :2Fig. 2: Architecture overview of our training model. We embed the support and query point clouds into deep features by DGCNN with shared weights. The prototypes are generated by masked average pooling (MAP) over the support features. We further introduce Query-Guided Prototype Adaption (QGPA) and Self-Reconstruction (SR) to enhance the discriminative and representative of prototypes. A semantic projection network is proposed to replace visual prototypes with semantic prototypes to get rid of support branch during inference. The query images are segmented by computing the pixel-wise similarity between the adapted prototypes and feature maps. \"cos\" denotes cosine similarity.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Effects of three hyper-parameters, number of layers, drop out rate, and hidden point number N ′ , under 2-way 1-shot setting on S3DIS (S 0 ) and ScanNet (S 0 ) datasets.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Results under 2-way setting on S3DIS and ScanNet dataset using mean-IoU metric (%). QGPA and SR are our proposed", "figure_data": "", "figure_id": "tab_2", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Effects of different baseline network configuration under2-way 1-shot setting on S3DIS (S 0 ) and ScanNet (S 0 ) datasets.The symbols ✓ and ✗ indicate that the corresponding setting is included or excluded, respectively. The abbreviations AUG, MS, AL denote augmentation including random scale and random shift augmentations, multi-scale feature, align loss[9], respectively.", "figure_data": "", "figure_id": "tab_4", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "This phenomenon suggests that original prototypes extract discriminative clues from support samples well and adding", "figure_data": "S3DISScanNetMethod#Params. FLOPs1-shot5-shot1-shot5-shotS 0S 1meanS 0S 1meanS 0S 1meanS 0S 1meanBaseline352.19K 7.12G52.17 57.85 55.01 63.82 67.60 65.71 41.03 41.36 41.19 54.98 51.10 53.04CWT [59]685.74K 8.16G52.14 57.86 55.00 61.64 66.48 64.06 42.33 41.78 42.05 55.60 53.77 54.68DETR [60]2.62M8.20G50.87 54.09 52.48 57.29 65.31 61.30 43.32 45.07 44.19 55.42 53.49 54.45D-QGPA2.48M7.48G51.55 54.05 52.80 58.45 65.97 62.21 36.08 39.26 37.67 54.07 51.19 52.63QGPA2.48M7.48G56.77 61.19 58.98 64.57 69.16 66.87 52.66 51.05 51.85 62.19 57.23 59.71", "figure_id": "tab_5", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Comparison of different transformer-like modules under 2-way 1-shot setting on S3DIS and ScanNet dataset using mean-IoU metric (%). S i denotes the split i is used for testing.", "figure_data": "(a) num layer(b) drop out rate(c) hidden point number ! 
!", "figure_id": "tab_6", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "CWT and the vanilla DETR transformer perform inferior", "figure_data": "ceilingboardchaircolumndoor tablefloor wallwindowsofabackgroundInput Point CloudGround TruthattMPTIOurs", "figure_id": "tab_7", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "Effect of different choices (original or refined prototype) to conduct Self-Reconstruction under 2-way setting on S3DIS and ScanNet dataset using mean-IoU metric (%).", "figure_data": "", "figure_id": "tab_10", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "Results on S3DIS[57] dataset using mean-IoU metric (%). S i denotes the split i is used for testing.", "figure_data": "2-way3-wayMethod1-shot5-shot1-shot5-shotS 0S 1meanS 0S 1meanS 0S 1meanS 0S 1meanFT [8]31.5528.9430.2542.7137.2439.9823.9919.1021.5534.9328.1031.52ProtoNet [8]33.9230.9532.4445.3442.0143.6828.4726.1327.3037.3634.9836.17AttProtoNet [8]37.9934.6736.3352.1846.8949.5432.0828.9630.5244.4939.4541.97MPTI [8]39.2736.1437.7146.9043.5945.2529.9627.2628.6138.1434.3636.25attMPTI [8]42.5540.8341.6954.0050.3252.1635.2330.7232.9846.7440.8043.77Ours57.0855.9456.5164.5559.6462.1055.2755.6055.4459.0253.1656.09", "figure_id": "tab_12", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "Results on ScanNet[58] dataset using mean-IoU metric (%). S i denotes the split i is used for testing.", "figure_data": "", "figure_id": "tab_13", "figure_label": "VI", "figure_type": "table" }, { "figure_caption": "Comparison of zero-shot 3D semantic segmentation results on S3DIS and ScanNet dataset using mean-IoU (%).", "figure_data": "", "figure_id": "tab_15", "figure_label": "VII", "figure_type": "table" }, { "figure_caption": "Analysis of computation cost of the proposed method and the results are under the 2-way 1-shot setting.", "figure_data": "", "figure_id": "tab_17", "figure_label": "VIII", "figure_type": "table" } ]
Shuting He; Xudong Jiang; Wei Jiang; Henghui Ding
[ { "authors": "L Landrieu; M Simonovsky", "journal": "", "ref_id": "b0", "title": "Large-scale point cloud semantic segmentation with superpoint graphs", "year": "2018" }, { "authors": "C R Qi; H Su; K Mo; L J Guibas", "journal": "", "ref_id": "b1", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2017" }, { "authors": "Q Huang; W Wang; U Neumann", "journal": "", "ref_id": "b2", "title": "Recurrent slice networks for 3d segmentation of point clouds", "year": "2018" }, { "authors": "Y Li; R Bu; M Sun; W Wu; X Di; B Chen", "journal": "", "ref_id": "b3", "title": "Pointcnn: Convolution on x-transformed points", "year": "2018" }, { "authors": "X Li; H Ding; Z Tong; Y Wu; Y M Chee", "journal": "", "ref_id": "b4", "title": "Primitive3d: 3d object dataset synthesis from randomly assembled primitives", "year": "2022" }, { "authors": "Y Wang; Y Sun; Z Liu; S E Sarma; M M Bronstein; J M Solomon", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b5", "title": "Dynamic graph cnn for learning on point clouds", "year": "2019" }, { "authors": "X Li; H Ding; W Zhang; H Yuan; J Pang; G Cheng; K Chen; Z Liu; C C Loy", "journal": "", "ref_id": "b6", "title": "Transformer-based visual segmentation: A survey", "year": "2023" }, { "authors": "N Zhao; T.-S Chua; G H Lee", "journal": "", "ref_id": "b7", "title": "Few-shot 3d point cloud semantic segmentation", "year": "2021" }, { "authors": "K Wang; J H Liew; Y Zou; D Zhou; J Feng", "journal": "IEEE", "ref_id": "b8", "title": "Panet: Few-shot image semantic segmentation with prototype alignment", "year": "2019" }, { "authors": "Y Liu; X Zhang; S Zhang; X He", "journal": "Springer International Publishing", "ref_id": "b9", "title": "Part-aware prototype network for few-shot semantic segmentation", "year": "2020" }, { "authors": "W Liu; C Zhang; H Ding; T.-Y Hung; G Lin", "journal": "IEEE Trans. 
Multimedia", "ref_id": "b10", "title": "Few-shot segmentation with optimal transport matching and message flow", "year": "2022" }, { "authors": "B Yang; C Liu; B Li; J Jiao; Q Ye", "journal": "Springer International Publishing", "ref_id": "b11", "title": "Prototype mixture models for few-shot semantic segmentation", "year": "2020" }, { "authors": "J Deng; W Dong; R Socher; L.-J Li; K Li; L Fei-Fei", "journal": "", "ref_id": "b12", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "B Graham; M Engelcke; L Van Der Maaten", "journal": "", "ref_id": "b13", "title": "3d semantic segmentation with submanifold sparse convolutional networks", "year": "2018" }, { "authors": "B Graham; L Van Der Maaten", "journal": "", "ref_id": "b14", "title": "Submanifold sparse convolutional networks", "year": "2017" }, { "authors": "C Choy; J Gwak; S Savarese", "journal": "", "ref_id": "b15", "title": "4d spatio-temporal convnets: Minkowski convolutional neural networks", "year": "2019" }, { "authors": "C R Qi; L Yi; H Su; L J Guibas", "journal": "", "ref_id": "b16", "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "W Wu; Z Qi; L Fuxin", "journal": "", "ref_id": "b17", "title": "Pointconv: Deep convolutional networks on 3d point clouds", "year": "2019" }, { "authors": "H Zhao; L Jiang; J Jia; P H Torr; V Koltun", "journal": "", "ref_id": "b18", "title": "Point transformer", "year": "2021" }, { "authors": "X Lai; J Liu; L Jiang; L Wang; H Zhao; S Liu; X Qi; J Jia", "journal": "", "ref_id": "b19", "title": "Stratified transformer for 3d point cloud segmentation", "year": "2022" }, { "authors": "T Vu; K Kim; T M Luu; X T Nguyen; C D Yoo", "journal": "", "ref_id": "b20", "title": "Softgroup for 3d instance segmentation on point clouds", "year": "2022" }, { "authors": "J Snell; K Swersky; R Zemel", "journal": "", "ref_id": "b21", "title": "Prototypical networks for fewshot learning", "year": "2017" }, { "authors": "O Vinyals; C Blundell; T Lillicrap; D Wierstra", "journal": "", "ref_id": "b22", "title": "Matching networks for one shot learning", "year": "2016" }, { "authors": "F Sung; Y Yang; L Zhang; T Xiang; P H Torr; T M Hospedales", "journal": "", "ref_id": "b23", "title": "Learning to compare: Relation network for few-shot learning", "year": "2018" }, { "authors": "C Zhang; Y Cai; G Lin; C Shen", "journal": "", "ref_id": "b24", "title": "Deepemd: Few-shot image classification with differentiable earth mover's distance and structured classifiers", "year": "2020" }, { "authors": "C Finn; P Abbeel; S Levine", "journal": "JMLR. 
org", "ref_id": "b25", "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "year": "2017" }, { "authors": "A A Rusu; D Rao; J Sygnowski; O Vinyals; R Pascanu; S Osindero; R Hadsell", "journal": "", "ref_id": "b26", "title": "Meta-learning with latent embedding optimization", "year": "2018" }, { "authors": "Q Cai; Y Pan; T Yao; C Yan; T Mei", "journal": "", "ref_id": "b27", "title": "Memory matching networks for one-shot image recognition", "year": "2018" }, { "authors": "C H Lampert; H Nickisch; S Harmeling", "journal": "", "ref_id": "b28", "title": "Learning to detect unseen object classes by between-class attribute transfer", "year": "2009" }, { "authors": "S He; H Ding; W Jiang", "journal": "", "ref_id": "b29", "title": "Semantic-promoted debiasing and background disambiguation for zero-shot instance segmentation", "year": "2023" }, { "authors": "H Zhang; H Ding", "journal": "", "ref_id": "b30", "title": "Prototypical matching and open set rejection for zero-shot semantic segmentation", "year": "2021" }, { "authors": "S He; H Ding; W Jiang", "journal": "", "ref_id": "b31", "title": "Primitive generation and semantic-related alignment for universal zero-shot segmentation", "year": "2023" }, { "authors": "B Demirel; R Gokberk Cinbis; N Ikizler-Cinbis", "journal": "", "ref_id": "b32", "title": "Attributes2classname: A discriminative model for attribute-based unsupervised zero-shot learning", "year": "2017" }, { "authors": "Y Li; Z Jia; J Zhang; K Huang; T Tan", "journal": "", "ref_id": "b33", "title": "Deep semantic structural constraints for zero-shot learning", "year": "2018" }, { "authors": "C Gan; M Lin; Y Yang; Y Zhuang; A G Hauptmann", "journal": "", "ref_id": "b34", "title": "Exploring semantic inter-class relationships (sir) for zero-shot action recognition", "year": "2015" }, { "authors": "Z Zhang; V Saligrama", "journal": "", "ref_id": "b35", "title": "Zero-shot learning via joint latent similarity embedding", "year": "2016" }, { "authors": "F X Yu; L Cao; R S Feris; J R Smith; S.-F Chang", "journal": "", "ref_id": "b36", "title": "Designing category-level attributes for discriminative visual recognition", "year": "2013" }, { "authors": "H Ding; X Jiang; B Shuai; A Q Liu; G Wang", "journal": "", "ref_id": "b37", "title": "Context contrasted feature and gated multi-scale aggregation for scene segmentation", "year": "2018" }, { "authors": "B Shuai; H Ding; T Liu; G Wang; X Jiang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b38", "title": "Toward achieving robust low-level and high-level scene parsing", "year": "2018" }, { "authors": "H Ding; X Jiang; B Shuai; A Q Liu; G Wang", "journal": "", "ref_id": "b39", "title": "Semantic correlation promoted shape-variant context for segmentation", "year": "2019" }, { "authors": "H Ding; X Jiang; A Q Liu; N M Thalmann; G Wang", "journal": "", "ref_id": "b40", "title": "Boundary-aware feature propagation for scene segmentation", "year": "2019" }, { "authors": "H Ding; H Zhang; X Jiang", "journal": "Pattern Recognition", "ref_id": "b41", "title": "Self-regularized prototypical network for few-shot semantic segmentation", "year": "2023" }, { "authors": "W Liu; Z Wu; H Ding; F Liu; J Lin; G Lin", "journal": "", "ref_id": "b42", "title": "Few-shot segmentation with global and local contrastive learning", "year": "2021" }, { "authors": "A Shaban; S Bansal; Z Liu; I Essa; B Boots", "journal": "BMVA Press", "ref_id": "b43", "title": "One-shot learning for semantic segmentation", "year": "2017" }, { 
"authors": "G Koch; R Zemel; R Salakhutdinov", "journal": "", "ref_id": "b44", "title": "Siamese neural networks for one-shot image recognition", "year": "2015" }, { "authors": "N Dong; E Xing", "journal": "BMVA Press", "ref_id": "b45", "title": "Few-shot semantic segmentation with prototype learning", "year": "2018" }, { "authors": "X Zhang; Y Wei; Y Yang; T S Huang", "journal": "IEEE Trans. Cybern", "ref_id": "b46", "title": "Sg-one: Similarity guidance network for one-shot semantic segmentation", "year": "2020" }, { "authors": "C Zhang; G Lin; F Liu; R Yao; C Shen", "journal": "", "ref_id": "b47", "title": "Canet: Class-agnostic segmentation networks with iterative refinement and attentive few-shot learning", "year": "2019" }, { "authors": "Z Tian; H Zhao; M Shu; Z Yang; R Li; J Jia", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b48", "title": "Prior guided feature enrichment network for few-shot segmentation", "year": "2022" }, { "authors": "X Chu; W Ouyang; H Li; X Wang", "journal": "", "ref_id": "b49", "title": "Structured feature learning for pose estimation", "year": "2016" }, { "authors": "T Mikolov; K Chen; G Corrado; J Dean", "journal": "", "ref_id": "b50", "title": "Efficient estimation of word representations in vector space", "year": "2013" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "", "ref_id": "b51", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "H Ding; C Liu; S Wang; X Jiang", "journal": "", "ref_id": "b52", "title": "Vision-language transformer and query generation for referring segmentation", "year": "2021" }, { "authors": "H Ding; S Cohen; B Price; X Jiang", "journal": "Springer", "ref_id": "b53", "title": "Phraseclick: toward achieving flexible interactive segmentation by phrase and click", "year": "2020" }, { "authors": "H Ding; C Liu; S Wang; X Jiang", "journal": "IEEE TPAMI", "ref_id": "b54", "title": "VLT: vision-language transformer and query generation for referring segmentation", "year": "2023" }, { "authors": "Y Li; K Swersky; R Zemel", "journal": "", "ref_id": "b55", "title": "Generative moment matching networks", "year": "2015" }, { "authors": "I Armeni; O Sener; A R Zamir; H Jiang; I Brilakis; M Fischer; S Savarese", "journal": "", "ref_id": "b56", "title": "3d semantic parsing of large-scale indoor spaces", "year": "2016" }, { "authors": "A Dai; A X Chang; M Savva; M Halber; T Funkhouser; M Niessner", "journal": "", "ref_id": "b57", "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes", "year": "2017" }, { "authors": "Z Lu; S He; X Zhu; L Zhang; Y.-Z Song; T Xiang", "journal": "", "ref_id": "b58", "title": "Simpler is better: Few-shot semantic segmentation with classifier weight transformer", "year": "2021" }, { "authors": "N Carion; F Massa; G Synnaeve; N Usunier; A Kirillov; S Zagoruyko", "journal": "Springer International Publishing", "ref_id": "b59", "title": "End-to-end object detection with transformers", "year": "2020" }, { "authors": "V Garcia; J Bruna", "journal": "", "ref_id": "b60", "title": "Few-shot learning with graph neural networks", "year": "2018" }, { "authors": "T.-Y Lin; P Dollár; R Girshick; K He; B Hariharan; S Belongie", "journal": "", "ref_id": "b61", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "H Ding; X Jiang; B Shuai; A Q Liu; G Wang", "journal": "IEEE Transactions on Image Processing", "ref_id": 
"b62", "title": "Semantic segmentation with context encoding and multi-path decoding", "year": "2020" }, { "authors": "R Zhang; Z Guo; W Zhang; K Li; X Miao; B Cui; Y Qiao; P Gao; H Li", "journal": "", "ref_id": "b63", "title": "Pointclip: Point cloud understanding by clip", "year": "2022" }, { "authors": "B Michele; A Boulch; G Puy; M Bucher; R Marlet", "journal": "IEEE", "ref_id": "b64", "title": "Generative zero-shot learning for semantic segmentation of 3d point clouds", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 311.98, 604.66, 251.06, 23.66 ], "formula_id": "formula_0", "formula_text": "S = {(I c,k S , M c,k S )}, where I is point cloud, M is mask, k ∈ {1, • • • , K}," }, { "formula_coordinates": [ 4, 364.58, 549.32, 198.46, 31.05 ], "formula_id": "formula_1", "formula_text": "p c = 1 K k x F c,k S,x 1(M c,k S,x = c) x 1(M c,k S,x = c) ,(1)" }, { "formula_coordinates": [ 4, 343.2, 650.99, 219.84, 31.06 ], "formula_id": "formula_2", "formula_text": "p 0 = 1 CK c,k x F c,k S,x 1(M c,k S,x / ∈ {1, ..., C}) x 1(M c,k S,x / ∈ {1, ..., C}) .(2)" }, { "formula_coordinates": [ 5, 71.41, 57.78, 200.3, 155.53 ], "formula_id": "formula_3", "formula_text": "ℝ !×! ℝ !×# ! ℝ !×# ! ℝ $×! Linear Support Features Query Features ℝ $×!" }, { "formula_coordinates": [ 5, 93.35, 476.29, 206.67, 56.01 ], "formula_id": "formula_4", "formula_text": "F i S =          1 K k F c,k S , i ∈ {1, ..., C}, 1 CK c,k F c,k S , i = 0,(3)" }, { "formula_coordinates": [ 5, 54.84, 569.4, 245.18, 14.84 ], "formula_id": "formula_5", "formula_text": "Query = F Q ⊤ W q , Key = F i S ⊤ W k , V alue = p i W v ,(4)" }, { "formula_coordinates": [ 5, 103.29, 706.57, 196.74, 25.97 ], "formula_id": "formula_6", "formula_text": "Attn = softmax( Query • Key ⊤ √ d ),(5)" }, { "formula_coordinates": [ 5, 371.81, 133.68, 191.23, 12.07 ], "formula_id": "formula_7", "formula_text": "ṗi = p i + W p (Attn • V alue ⊤ ) ⊤ ,(6)" }, { "formula_coordinates": [ 5, 365.46, 339.9, 197.57, 24.75 ], "formula_id": "formula_8", "formula_text": "S i Q,x = exp(-α⟨F Q,x , ṗi ⟩) ṗi ∈ Ṗ exp(-α⟨F Q,x , ṗi ⟩) ,(7)" }, { "formula_coordinates": [ 5, 393.72, 412.12, 169.32, 18.89 ], "formula_id": "formula_9", "formula_text": "MQ,x = arg max i S i Q,x .(8)" }, { "formula_coordinates": [ 5, 350.13, 452.67, 212.91, 26.65 ], "formula_id": "formula_10", "formula_text": "L seg = - 1 N x i 1(M Q,x = i)logS i Q,x ,(9)" }, { "formula_coordinates": [ 5, 353.85, 722.57, 205.03, 28.89 ], "formula_id": "formula_11", "formula_text": "S c,k,i S,x = exp(-α⟨F c,k S,x , p i ⟩) p i ∈{p 0 ,p c } exp(-α⟨F Q,x , p i ⟩) . (10" }, { "formula_coordinates": [ 5, 558.89, 733.81, 4.15, 8.64 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 6, 129.96, 73.57, 170.07, 19.12 ], "formula_id": "formula_13", "formula_text": "M c,k S,x = arg max i S c,k,i S,x .(11)" }, { "formula_coordinates": [ 6, 70.49, 178.9, 229.53, 27.61 ], "formula_id": "formula_14", "formula_text": "L sr = - 1 CKN c,k,x i 1(M c,k S,x = i) log S c,k,i S,x .(12)" }, { "formula_coordinates": [ 6, 131.23, 232.92, 168.79, 9.65 ], "formula_id": "formula_15", "formula_text": "L total = L seg + L sr .(13)" }, { "formula_coordinates": [ 6, 311.98, 88.93, 251.06, 43.8 ], "formula_id": "formula_16", "formula_text": "L reg = ṗ, ṗ′ ∈ Ṗ G( ṗ, ṗ′ ) + p, p′ ∈ P G(p, p′ ) -2 ṗ∈ Ṗ p∈ P G( ṗ, p),(14) where" }, { "formula_coordinates": [ 6, 339.79, 121.9, 124.19, 13.47 ], "formula_id": "formula_17", "formula_text": "G(a, b) = exp(-1 2σ 2 ∥ a -b ∥ 2" } ]
10.18653/v1/N19-1423
2024-03-12
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b27", "b20" ], "table_ref": [], "text": "Vast quantities of data are locked away in tables found in scientific literature, webpages, and more. These tables are primarily designed for visual presentation, and the underlying data is typically not available in any structured format, such as a relational or graph database. Some table collections have simple or uniform structures (Cafarella et al., 2008), making them easy to convert to relational data, for example, Wikipedia tables (Lebret et al., 2016;Iyyer et al., 2017), however a lot of information is stored in tables with complex and varied layouts, such as tables of experimental data presented in scientific literature.\nPrior work on extracting structured data from tables has focused on developing custom pipelines 1 Code and data are available at our GitHub repository." }, { "figure_ref": [ "fig_0" ], "heading": "Cell Type Attributes", "publication_ref": [ "b23", "b5", "b23" ], "table_ref": [], "text": "Result task (string), model (string), metric (string), training data (string), test data (string)\nExtraction Schema Output {\"value\": \"95.7\", \"type\": \"Result\", \"task\": \"Named Entity Recognition\", \"model\": \"ELMo\",\"metric\": \"F1\", \"training data\": \"CoNLL-2003\", \"test data\": \"CoNLL-2003 Dev\"} ... {\"value\": \"92.4\", \"type\": \"Result\", \"task\": \"Named Entity Recognition\", \"model\": \"BERT base\",\"metric\": \"F1\", \"training data\": \" CoNLL-2003\", \"test for each new table format or domain, for example extracting machine learning leaderboards from L A T E X result tables (Kardas et al., 2020). Importantly, the development of these specialized pipelines necessitates domain-specific labeled data, which not only incurs a significant cost in collection for every new extraction task but also constrains their applicability outside the originating domain.\nIn this paper, we demonstrate that large language models can enable accurate domain-independent extraction of data from heterogeneous tables. To show this, we present a new formulation of the table extraction problem, which we refer to as Schema-Driven Information Extraction. In Schema-Driven IE, the only human supervision provided is a schema that describes the data model, including the target attributes and their data types, formulated in a JSON format. 2 Given an extraction schema, and a table as input, the model then outputs a sequence of JSON objects, each of which describes a single cell in the table and adheres to the user-provided schema. For example, as demonstrated in Figure 1, a domain expert outlines the attributes of interest related to experimental result cells in a machine learning table, and the model extracts JSON objects following this schema.\nTo evaluate the ability of LLMs to perform Schema-Driven IE, we introduce a new benchmark consisting of table extraction datasets in four diverse domains: machine learning papers, chemistry literature, material science journals, and webpages -each of which has a different data format (L A T E X, XML, CSV, and HTML, respectively). We curate and annotate data for the first two domains, while adapting existing datasets for the latter two.\nWe then use this benchmark to analyze the performance of open-source and proprietary LLMs. We find that proprietary models perform well across diverse domains and data formats. 
For example, GPT-4 (OpenAI, 2023) and code-davinci (Chen et al., 2021), are capable of accurate table extraction (ranging from 74.2 to 96.1 F 1 ), given only a relevant data schema as supervision. This performance is comparable to fully supervised models, which operate at an F 1 range of about 64.1 to 96.1. We also present a number of analyses on various factors that are key to achieving good performance while minimizing inference costs, including retrieving text from outside the table, in addition to an iterative error recovery strategy. Moreover, we demonstrate the utility of Schema-Driven IE by evaluating performance on the downstream task of leaderboard extraction from machine learning papers (Kardas et al., 2020).\nWhile open-source models have yet to match the performance of their proprietary counterparts, our analysis reveals that recent models like CodeLlama-instruct-13B (Rozière et al., 2023) show significant progress, e.g., on ML tables, its performance is comparable to that of Furthermore, we conduct comprehensive ablation studies and analyses to explore model performance nuances, and demonstrate the feasibility of distilling efficient table extraction models without compromising performance. By introducing a new benchmark and a baseline for Schema-Driven IE, our goal is to encourage the creation of future opensource models capable of executing this task independently of proprietary APIs." }, { "figure_ref": [ "fig_0", "fig_2" ], "heading": "Schema-Driven Information Extraction", "publication_ref": [ "b4", "b7" ], "table_ref": [], "text": "We present a new task for extracting structured records from tables. As shown in Figure 1, the task input contains two elements: 1) a table with numerous cells, optionally supplemented with contextual text, e.g., retrieved paragraphs from the same document; and 2) an extraction schema that outlines target attributes and their data types for various record types (implemented as JSON templates). Given the input, the model generates a sequence of JSON objects, where each object corresponds to a cell in the table and contains key-value pairs for the pre-defined attributes of a specific record type.\nConsider a table in an ML paper that displays various models' results. Our proposed task enables the extraction of result records from each cell in the table. These records include relevant attributes such as the evaluation metric, task, etc, which are structured in corresponding JSON objects and could facilitate meta-analysis of experiments or support research on reproducibility.\nTo demonstrate the feasibility of Schema-Driven IE on tables, we introduce INSTRUCTE, a method to extract structured records from a broad range of semi-structured data, using only task-specific instructions. INSTRUCTE uses a template-based approach to information extraction (Chambers and Jurafsky, 2011;Chen et al., 2023), where the extraction schema is represented as a series of JSON templates. The underlying LLM is instructed to select the appropriate template and populate it with extracted values for each cell in an input table, following a specified cell traversal order. As illustrated in Figure 2 (left), the prompt used by INSTRUCTE consists of four key components: an input table (optionally) supplemented with contextual text, an extraction schema, task-specific instructions, and an initial record for starting the process. 
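To illustrate how these four components fit together, a paraphrased prompt builder is sketched below. The exact wording, the attribute lists of the non-Result templates, and the function names are ours, not the released INSTRUCTE prompt; only the "Result" attributes follow Figure 1.

```python
import json

# One JSON template per record type; "Result" follows Figure 1, the others are abbreviated.
SCHEMA = {
    "Result": {"value": "xx", "type": "Result", "task": "xx", "model": "xx",
               "metric": "xx", "training data": "xx", "test data": "xx"},
    "Data Stat.": {"value": "xx", "type": "Data Stat.", "dataset": "xx", "attribute": "xx"},
    "Hyper-param.": {"value": "xx", "type": "Hyper-param.", "model": "xx", "parameter": "xx"},
    "Other": {"value": "xx", "type": "Other"},
}

INSTRUCTION = (
    "Please describe all numeric cells in the above table following the JSON templates "
    "(proceeding by row in a left-right, top-down direction). "
    "For each cell, output one JSON description per line."
)

def build_prompt(table_source, context_paragraphs, first_cell_value):
    """Assemble the four prompt components: table (+ retrieved context), extraction schema,
    task-specific instruction, and a partial initial record that starts generation."""
    templates = "\n".join(json.dumps(t) for t in SCHEMA.values())
    context = "\n\n".join(context_paragraphs)
    return (
        f"{context}\n\n{table_source}\n\n"
        f"Here are JSON templates for {len(SCHEMA)} types of numeric cells:\n{templates}\n\n"
        f"{INSTRUCTION}\n\n"
        f'Cell Description:\n{{"value": "{first_cell_value}", "type": "'
    )
```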
Here are JSON templates for four types of numeric cells: \"Other\", \"Result\", \"Data Stat.\", and \"Hyper-param.\": {\"value\": \"xx\", \"type\": \"Result\", \"task\": \"xx\", … {\"value\": \"xx\", \"type\":\"Hyper-param.\", \"model\": ... ..." }, { "figure_ref": [], "heading": "Extraction Schema", "publication_ref": [], "table_ref": [], "text": "Please describe all numeric cells in the above latex table following the JSON templates (proceeding by row in a left-right, top-down direction). For each cell, output one JSON description ..." }, { "figure_ref": [], "heading": "Task-specific Instruction", "publication_ref": [], "table_ref": [], "text": "Cell Description: {\"value\": \"345M\", \"type\": {\"value\": \"345M\", \"type\": \"Hyper-params.\", ...} {\"value\": \"1.3B\", \"type\": \"Hyper-params.\", ...} {\"value\": \"5B\", \"type\": \"Hyper-params.\", ...} {\"value\": \"0.755\", \"type\": \"Result\", ...} {\"value\": \"0.907\", \"type\": \"Result\", ...} {\"value\": \"0.953\", \"type\": \"Result\", ...} c {\"value\": \"345M\", \"type\": \"Hyper-params.\", ...} {\"value\": \"1.3B\", \"type\": \"Hyper-params.\", ...} {\"value\": \"5B\", \"type\": \"Hyper-params.\", ...} {\"value\": \"0.755\", \"type\": \"Result\", ...} {\"value\": \"0.907\", \"type\": \"Result\", ...} {\"value\": \"0.953\", \"type\": \"Result\", ...} c {\"value\": \"345M\", \"type\": \"Hyper-params.\", ...} {\"value\": \"1.3B\", \"type\": \"Hyper-params.\", ...} {\"value\": \"5B\", \"type\": \"Hyper-params.\", ...} {\"value\": \"0.755\", \"type\": \"Result\", ...} {\"value\": \"0.762\", \"type\":" }, { "figure_ref": [ "fig_2" ], "heading": "Error Recovery", "publication_ref": [], "table_ref": [], "text": "Does not follow the instructed order (truncated) Append the next cell (following the instructed order) and re-prompt the model Follows the instructed \"left-right, top-down\" order Despite explicit instructions, we found that models often fail to generate JSON records for all the cells in a single inference pass. Instead, models deviate from the instructed cell traversal order, leading to partial extraction of the input table's cells.\nTo mitigate this, we use an iterative error recovery strategy. As shown on the right side of Figure 2, we detect deviations from the instructed left-right, top-down order by comparing predicted cell values with those from a rule-based cell detector. Then, we truncate the LLM's output to the point of deviation, and re-prompt the model with the truncated sequence, adding the value of the next target cell. This strategy guides the model to adhere to the instructed order, and continues iteratively until all records are generated. In Section 4.4, we show that this approach achieves comparable performance to cell-by-cell prompting, while significantly reducing inference costs. For more details on IN-STRUCTE, including prompt formulation and cell detectors, please refer to Appendix A." }, { "figure_ref": [ "fig_0" ], "heading": "The SCHEMA-TO-JSON Benchmark", "publication_ref": [ "b24", "b10", "b40", "b11", "b11", "b12" ], "table_ref": [], "text": "We now present the details of our benchmark, SCHEMA-TO-JSON, which is designed to assess the capabilities of LLMs to extract data from tables, adhering to a predefined schema. This benchmark contains tables from four diverse domains: machine learning papers, chemistry literature, material science journals, and webpages. 
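Returning briefly to the error-recovery strategy described above before the benchmark details continue: its truncate-and-re-prompt loop can be paraphrased as below. Here `llm_complete` and the pre-computed `target_cells` (from a rule-based cell detector) are placeholders, and the JSON handling is simplified relative to the actual system.

```python
import json

def extract_with_recovery(prompt_prefix, target_cells, llm_complete, max_calls=50):
    """Generate one JSON record per target cell, re-prompting whenever the model
    deviates from the instructed left-right, top-down cell order.

    prompt_prefix : assembled prompt ending with "Cell Description:".
    target_cells  : cell values in reading order, from a rule-based cell detector.
    llm_complete  : callable mapping a prompt string to a completion string.
    """
    records, kept_text, i, calls = [], "", 0, 0
    while i < len(target_cells) and calls < max_calls:
        calls += 1
        # Truncate to the verified records so far and seed the next expected cell value.
        seed = f'{{"value": "{target_cells[i]}", "type": "'
        completion = seed + llm_complete(prompt_prefix + "\n" + kept_text + seed)
        for line in completion.splitlines():
            line = line.strip()
            if not (line.startswith("{") and line.endswith("}")):
                break
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                break
            # Keep records only while they follow the instructed cell order.
            if i >= len(target_cells) or str(record.get("value")) != str(target_cells[i]):
                break
            records.append(record)
            kept_text += line + "\n"
            i += 1
    return records
```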
Each domain fea-tures a unique textual format, namely, L A T E X, XML, CSV, and HTML, requiring models to generalize across domains and formats. For ML tables, we add relevant paragraphs from the same documents to provide additional context, testing the models' capacity to jointly understand tabular and textual data. We manually annotate datasets for the first two domains and adapt pre-existing datasets into our unified format for the latter two. Statistics of the four datasets are summarized in Table 1.\narXiv Machine Learning Tables We create a manually annotated dataset focused on tables from arXiv ML papers, emphasizing numeric cells that are classified into four categories: Experimental Results, Hyper-parameters, Data Statistics, or Other. Extraction attributes are pre-defined for the first three categories; for instance, Result records incorporate textual attributes such as evaluation metric (e.g., F 1 ) and dataset (e.g., SQuAD), as shown in Figure 1. We collect papers from three subfields: Machine Learning, Computer Vision, and Natural Language Processing, and randomly select five tables from each paper (including those in appendices) for budget reasons. 3 We employ computer scientists with ML backgrounds for annotation, and evaluate inter-annotator agreement (IAA) score by calculating Table-F 1 (a metric detailed in Section 4.1) on double-annotated tables, treating one set of annotations as gold labels and the other as predictions. This method yields a Table-F 1 score of 96.6 when applying thresholded token-level F 1 for attribute matching. For additional information on the ML tables, including predefined attributes and the annotation process, please refer to Appendix B.\nPubMed Chemistry Tables We also annotate a new dataset of PubMed tables describing the physical properties of chemical compounds. The automated extraction of physical properties from such tables could provide substantial real-world benefits, for example collecting much-needed data for training ML models that can support inverse molecular design (Kim et al., 2018) and thus accelerating the drug design process (Fields, 2019;Stokes et al., 2020). Here, we focus on cells concerning five important physical properties identified by chemists: IC 50 , EC 50 , GI 50 , CC 50 , and MIC.4 Three common attributes are manually extracted from tables for all properties: unit, treatment (experimental compound), and target (measured biological entity, e.g., a gene expression). Similar to the ML tables, domain experts annotate JSON records for relevant cells, and Table-F 1 calculated on double-annotated tables is used as the IAA score. A Table-F 1 score of 91.0 is achieved when applying thresholded tokenlevel F 1 for attribute matching, underscoring the reliability of the dataset.\nDISCOMAT (Gupta et al., 2022) We incorporate DISCOMAT, an existing dataset focusing on glass composition tables from Elsevier material science journals. The task is to extract tuples comprising (material, constituent, percentage, unit) from given tables. We adapt DISCOMAT to fit our Schema-Driven IE framework by grounding the percentage element to numeric cells in the table and considering the other elements as attributes. The model is tasked to identify numeric cells representing constituent percentages and predict the associated three attributes. We refer readers to Gupta et al. 
(2022) for more details of DISCOMAT.5 SWDE (Hao et al., 2011) Finally, we add SWDE (Structured Web Data Extraction) as a fourth dataset, aimed at extracting pre-defined at- " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We evaluate the capability of various LLMs to perform schema-driven information extraction, in addition to full fine-tuning using our benchmark. For ML and chemistry tables, we use a subset of 10 and 7 randomly sampled papers separately for model development, which facilitates the training of supervised models, thereby enabling comparison with a schema-driven approach. For the two pre-existing datasets, we follow the data splits used in the original experiments." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b11", "b11", "b12", "b12" ], "table_ref": [], "text": "We introduce For DISCOMAT and SWDE, we use the metrics specified in the original papers to support comparisons with prior work. In the case of DISCOMAT, we report Tuple-F 1 (Gupta et al., 2022), where a predicted 4-element tuple is considered correct only We employ Table-F 1 for our two newly annotated datasets and provide a measure of human performance.\nFor DISCOMAT (Gupta et al., 2022) and SWDE (Hao et al., 2011), we adhere to their original evaluation metrics, i.e., Tuple-F 1 and Page-F 1 respectively, to support comparisons with established methods. In SWDE experiments, k represents the number of trained websites from each vertical. Due to API cost constraints, *INSTRUCTE's results are computed on a 1,600 webpage sample, with bootstrap confidence intervals calculated to validate the reliability of these performance estimates (margin of error for 95% confidence interval with 1000 samples is 0.00995.)\nif it exactly matches the gold tuple. For SWDE, we report Page-F 1 (Hao et al., 2011), which measures the number of pages where the attributes are accurately predicted. 6" }, { "figure_ref": [], "heading": "Baselines & Implementation Details", "publication_ref": [ "b29", "b41", "b36", "b15", "b30", "b49", "b15", "b45", "b28" ], "table_ref": [], "text": "We evaluate the capability of multiple LLMs to perform Schema-Driven IE, including API-based GPT-4 and GPT-3.5 models and open-source models, such as Llama2-Chat-13B (Touvron et al., 2023b), CodeLlama-instruct-13B (Rozière et al., 2023), StarCoder-15.5B (Li et al., 2023), LLaMA-7B (Touvron et al., 2023a), and Alpaca-7B (Taori et al., 2023). We also frame Schema-Driven IE as a TableQA problem, applying multi-choice and extractive QA prompts for template selection and cell attribute prediction, respectively. Furthermore, we also evaluate T5-11B (Raffel et al., 2020) and TaPas (Herzig et al., 2020), a table-specialized LM.\nFor implementation details of INSTRUCTE and other methods, see Appendix D.\n6 Notably, SWDE primarily focuses on identifying textual HTML nodes containing attribute values rather than exact text spans, so we use token-level F1 to identify the most relevant HTML node for each extracted attribute.\nFor DISCOMAT and SWDE, we compare IN-STRUCTE with established baselines, which either design task-specific architectures, such as Free-Dom (Lin et al., 2020) and LANTERN (Zhou et al., 2022), or use LMs pretrained on tables or web pages, like TaPas (Herzig et al., 2020), TaBERT (Yin et al., 2020), and MarkupLM (Li et al., 2022)." 
}, { "figure_ref": [ "fig_3" ], "heading": "Main Results", "publication_ref": [], "table_ref": [], "text": "Figure 3 presents the main results from the comparison between INSTRUCTE and other methods on our SCHEMA-TO-JSON benchmark. We observe that INSTRUCTE, in conjunction with APIbased models, achieves strong performance across domains and input formats, without any domainspecific labels. With GPT-4, INSTRUCTE can outperform fine-tuned models on ML and chemistry tables. However, a substantial disparity remains compared to human performance, e.g., the Table-F 1 on double-annotated examples for ML tables stands at 96.6 when applying thresholded tokenlevel F 1 for attribute matching, which is 22.4 F 1 points higher than GPT-4.\nFor DISCOMAT and SWDE, GPT-4 performs on par or slightly trails behind the fully supervised Page-F 1 on ML tables and SWDE, respectively. This success is likely due to these domains being more represented in the model's pre-training corpus, suggesting that enhancing the pre-training data in less represented domains, such as chemistry and material science, may be an avenue for narrowing the gap with API-based models." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "We assess the impact of different components of INSTRUCTE, including task formulation and error recovery, using ML tables.\nLLMs & Task Formulation In Table 2, we compare different LLMs, leading to two principal observations. First, code models show strong performance on Schema-Driven IE. This is evident from several key comparisons, such as the performance similarity between code-davinci-002 and GPT-4, the superior performance of code-davinci-002 compared to other GPT-3.5 models, and the fact that CodeLlama-instruct-13B significantly outperforms Llama2-chat-13B, approaching the performance of gpt-3.5-turbo. This superiority of code models might be attributed to their alignment with Schema-Driven IE, which involves converting table source code into JSON records. Second, non-code open-source models with similar sizes (for instance, those in the 6-7B range) tend to achieve comparable fine-tuning performance, though they might exhibit variations in prompting performance.\nSubsequently, we compare three task formulations: SCHEMA-TO-JSON, TableQA, and Function Calling, which is a feature provided by the OpenAI API. 7 In Function Calling, the schema is formatted as function definitions with attributes serving as arguments. The LM is then tasked with selecting the function and generating JSON objects for extracted arguments on a cell-by-cell basis. From the T5-11B fine-tuning experiments, we observe that SCHEMA-TO-JSON attains better performance than TableQA, demonstrating the value of integrating task-specific instructions and extraction schema in the input. Function Calling with gpt-3.5-turbo shows limited effectiveness, and error analysis suggests that this shortfall primarily stems from the model's struggle in selecting the correct function.8 We use code-davinci-002 for these experiments considering API budget limitations and its resemblance to GPT-4 in terms of performance and context length. We observe that removing supplementary text degrades performance. ). Both conversion noise and the model's format-specific processing capabilities could contribute to these differences. 
The optimal performance on original formats underlines the necessity of developing models adept at handling diverse data formats directly, rather than relying on format conversion tools." }, { "figure_ref": [], "heading": "Prompt Components & Error Recoverery", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Knowledge Distillation", "publication_ref": [ "b26", "b22", "b48", "b46", "b39", "b35", "b6", "b8", "b15", "b45", "b19", "b0", "b13", "b3", "b11", "b33", "b31", "b32", "b23", "b16", "b11" ], "table_ref": [ "tab_6" ], "text": "Considering the strong performance of API-based models on Schema-Driven IE, we now show that it is possible to use knowledge distillation (Le et al., 2022;Kang et al., 2023) to build a cost-efficient compact model, using ML tables as a demonstration. Specifically, this process first generates synthetic data by performing inference on unlabeled tables using code-davinci-002, followed by finetuning a smaller model (e.g., 7B parameters) using the synthetic data. We compile a collection of 979 arXiv ML papers, submitted between 2008 and 2019, yielding 3,434 tables (containing a total of 100K cells). In Table 3, we can see that LLaMA-7B and Alpaca-7B demonstrate similar performance as seen in the fine-tuning results (Table 2). While fine-tuning LLaMA with LoRA (Hu et al., 2022) presents noticeable computational efficiency, full-parameter fine-tuning of T5-11B et al., 2016;Zhong et al., 2017;Yu et al., 2018;Schlichtkrull et al., 2021). Various approaches have emerged, such as semantic parsing for compositional question answering (Pasupat and Liang, 2015), and symbolic reasoning for fact verification (Chen et al., 2020). In contrast, our work transforms tables into structured JSON records, where a data schema is the only supervision provided.\nPre-training on Semi-structured Data The rise of pre-trained language models (Devlin et al., 2019), has stimulated interest in pre-training on semi-structured data. TaPas (Herzig et al., 2020) and TaBERT (Yin et al., 2020) pre-train on linearized tables with a specialized cell index embedding. TABBIE (Iida et al., 2021) employs dual transformers for separate row and column encoding. HTLM (Aghajanyan et al., 2022) uses an HTML-specialized pre-training objective, facilitating a novel structured prompting scheme. Similar to our work, TabLLM (Hegselmann et al., 2023) uses general-purpose LLMs to process linearized tables, but we focus on schema-driven IE rather than table classification or question answering.\nIE from Semi-structured Data Information extraction from semi-structured data has gained increasing interest (Carlson and Schafer, 2008;Dong et al., 2020;Gupta et al., 2022;Lou et al., 2023). Innovations such as OpenCeres (Lockard et al., 2019) and ZeroShotCeres (Lockard et al., 2020) highlight open-domain extraction from web data, while Ax-Cell (Kardas et al., 2020) and TDMS-IE (Hou et al., 2019) focus on leaderboard extraction from ML tables. DisCoMat (Gupta et al., 2022) showcases material composition extraction from scientific tables. Despite these advancements, most existing methods require supervised datasets or specialized models for task-specific fine-tuning. Our approach stands out by using LLMs to accurately extract data across various formats and domains without relying on labels or custom extraction pipelines. Beyond the model's inherent limitations, the availability of specific API-based backbones like GPT-4 and code-davinci-002 may change, impacting reliance on these resources. 
To reduce this dependency, we include results from opensource models and investigate knowledge distillation as a viable alternative, showing promising results. Our benchmark aims to facilitate future research focused on enhancing smaller, openly accessible models, recognizing the importance of such developments for practical application and broader accessibility." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "Our use of OpenAI's API-based models to distill open-source table extractors complies with Ope-nAI's terms of service, as we do not \"use the output from the Services to develop models that compete with OpenAI\". Regarding licenses of four datasets in our SCHEMA-TO-JSON benchmark, the arXiv ML tables align with the licenses of their original papers. The PubMed Chemistry tables, sourced from the PMC Open Access Subset, conform to Creative Commons or equivalent licenses. For the other two datasets, we adapt pre-existing datasets released by the NLP research community, abiding by their respective original licenses." }, { "figure_ref": [], "heading": "A INSTRUCTE", "publication_ref": [ "b11" ], "table_ref": [ "tab_10", "tab_11" ], "text": "Prompt Formulation Our proposed prompt consists of four components: 1) \"Input Table (w/ supp. text)\" includes the table source code paired with supplementary text from the document; 2) \"Extraction Schema\" defines the JSON formats for extracted records, encompassing the record type, attribute names, and associated data types; 3) \"Taskspecific Instructions\" outline the task execution process, addressing both the extraction process from individual cells and the traversal strategy across cells, such as \"left-right, top-down\"; 4) \"Initial Record\" is used to jump-start the prompting process, including the partial record of the first cell.\nFor \"Input Table (w/ supp. text)\", we employ the BM25 algorithm to retrieve the most relevant paragraphs for each table. For \"Extraction Schema\", we propose two guidelines for schema design: 1) Attribute names should be specific, which decreases the probability of the model generating incorrect attributes, or hallucinations. For instance, when extracting relevant attributes about a movie from a movie webpage, it's advisable to use specific terms such as \"movie name\" or \"director name\", rather than the generic \"name\"; 2) Attributes should be strategically ordered, placing simpler attributes ahead of more complex ones as errors in preceding attributes can adversely affect the prediction of subsequent ones due to the autoaggressive nature of LMs. The exact INSTRUCTE prompts used in our experiments are shown in Table 5 andTable 6. Cell Detector We develop a rule-based method to identify numeric cells for both the ML and chemistry tables. Specifically, for the ML tables, we use the row separator \"\\\\\" and the column separator \"&\" to divide the table into cells. We then loop over each cell, checking for numeric values after stripping away any stylized text. In cases, where a cell contains multiple numeric values, such as \"0 ± 0\", we consistently choose the first numeric value. For the chemistry tables, the parsing process is more straightforward, owing to the structured XML format of the table. Here, we iterate over each cell, verifying if it contains a numeric value once stylized text has been removed. 
The performance of our rule-based cell detector on two datasets is presented in Table 4. In the case of DISCOMAT, we use the cell detector provided by the original paper Gupta et al. (2022). " }, { "figure_ref": [], "heading": "B arXiv Machine Learning Tables", "publication_ref": [ "b16", "b23" ], "table_ref": [], "text": "Extraction Attributes We design a set of extraction attributes for each of the three primary types of numeric cells in ML tables: \"Result\", \"Hyperparameter\", and \"Data Statistics\". These attributes are outlined in detail below.\n• \"Result\" includes seven attributes: training data, test data, task, metric, model, model settings and experimental settings. The first five attributes are fixed, with answers being text spans in the paper. The last two attributes, model settings and experimental settings, are free-form attributes, with answers being JSON objects. For example, the experimental settings attribute may be {\"number of training examples\": \"0\"} for a zero-shot setting. This scheme is more detailed than previous approaches (Hou et al., 2019;Kardas et al., 2020) and can accommodate a broader range of ML paradigms and provide more granular information.\n• \"Hyper-parameter\" includes optimization parameters like learning rate and batch size, as well as numeric descriptions of model architectures such as layer count. The three fixed attributes for this category are: model, parameter/architecture, and dataset.\n• \"Data Stat.\" covers four attributes: dataset, dataset attribute, sub-set/group, and dataset features. The sub-set/group specifies a dataset subset (e.g., \"train\" or \"test\"), while dataset features, a free-form attribute, captures various dataset characteristics like the language or domain.\nAnnotation Process We sample 10 papers from each of three pertinent arXiv fields: Machine Learning, Computer Vision, and Natural Language Processing. After removing papers without L A T E X source code or any tables, a total of 25 papers Dataset Full Prompt" }, { "figure_ref": [], "heading": "ML Tables [Retrieve paragraphs] [Input table]", "publication_ref": [], "table_ref": [], "text": "Here are JSON templates for four types of numeric cells: \"Other\", \"Result\", \"Data Stat.\", and \"Hyper-parameter/Architecture\": {\"value\": \"xx\", \"type\": \"Other\"} {\"value\": \"xx\", \"type\": \"Result\", \"task\": \"xx\", \"metric\": \"xx\", \"training data/set\": \"xx\", \"test data/set\": \"xx\", \"model/method\": \"xx\", \"model/method settings\": {\"xx\": \"yy\"}, \"experimental settings\": {\"xx\": \"yy\"}} {\"value\": \"xx\", \"type\": \"Data Stat.\", \"dataset\": \"xx\", \"attribute name\": \"xx\", \"sub-set/group name\": \"xx\", \"dataset features\": {\"xx\": \"yy\"}} {\"value\": \"xx\", \"type\": \"Hyper-parameter/Architecture\", \"model\": \"xx\", \"parameter/architecture name\": \"xx\", \"dataset\": \"xx\"} Please describe all numeric cells in the above latex table following the JSON templates (proceeding by row in a left-right, top-down direction). For each cell, output one JSON description per line. For any unanswerable attributes in the templates, set their value to the placeholder \"xx\" if it is of string type and {\"xx\": \"yy\"} if it is of dictionary type.\nCell Description: {\"value\": \"[Query cell]\", \"type\":\nChem. 
Tables [Input table]\nHere are JSON templates for six types of numeric cells: \"Other\", \"IC50\", \"EC50\", \"CC50\", \"MIC\", and \"GI50\": {\"value\": \"xx\", \"type\": \"Other\"} {\"value\": \"xx\", \"type\": \"IC50\", \"unit\": \"xx\", \"treatment compound\": \"xx\", \"target compound\": \"xx\"} {\"value\": \"xx\", \"type\": \"EC50\", \"unit\": \"xx\", \"treatment compound\": \"xx\", \"target compound\": \"xx\"} {\"value\": \"xx\", \"type\": \"CC50\", \"unit\": \"xx\", \"treatment compound\": \"xx\", \"target compound\": \"xx\"} {\"value\": \"xx\", \"type\": \"MIC\", \"unit\": \"xx\", \"treatment compound\": \"xx\", \"target compound\": \"xx\"} {\"value\": \"xx\", \"type\": \"GI50\", \"unit\": \"xx\", \"treatment compound\": \"xx\", \"target compound\": \"xx\"} Please describe all numeric cells in the above XML table following the JSON templates (proceeding by row in a left-right, top-down direction). For each cell, output one JSON description per line. For any unanswerable attributes in the templates, set their value to the placeholder \"xx\".\nCell Description: {\"value\": \"[Query cell]\", \"type\": are covered in our dataset. To optimize the annotation budget and the dataset diversity, we cap the number of annotated tables to five per paper.\nRecognizing the domain-specific expertise needed, we employ expert annotators with backgrounds in ML research, who are provided with tables in both L A T E X and PDF formats and encouraged to thoroughly read the paper before annotation. The annotation process comprises two steps: 1) identifying the numeric cells and their record types, and 2) filling in the slots of pre-determined attributes, forming a JSON record with keys as attribute names and values as extracted content, in a text editor. Conse-quently, the dataset contains 122 tables, with 3,792 cells and 21K attributes annotated." }, { "figure_ref": [], "heading": "C Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "Comparing an LLM-predicted JSON object with a gold JSON object is a non-trivial task, as those generative LLMs may produce text spans that do not exactly exist in the input table. Consequently, we devote substantial effort to examining various metrics to determine the one best suited for our task using ML tables. Here, we consider three metrics: the standard token-level F 1 to capture the level of lexical overlap between the predicted and Dataset Full Prompt" }, { "figure_ref": [], "heading": "DISCOMAT [Input table]", "publication_ref": [], "table_ref": [], "text": "Here are JSON templates for two types of numeric cells: \"Other\" and \"Glass_Compound_Amount\": {\"value\": \"xx\", \"type\": \"Other\"} {\"value\": \"xx\", \"type\": \"Glass_Compound_Amount\", \"constituent compound name\": \"xx\", \"unit\": \"xx\", \"glass material/sample name/id/code\": \"xx\"} Please describe all numeric cells in the above table following the JSON templates (proceeding by row in a left-right, top-down direction). For each cell, output one JSON description per line. 
For any unanswerable attributes in the templates, set their value to the placeholder \"xx\".\nCell Description: {\"value\": \"[Query cell]\", \"type\":" }, { "figure_ref": [], "heading": "SWDE-auto [Input webpage]", "publication_ref": [ "b37", "b47" ], "table_ref": [], "text": "Here is the JSON template for automobile attribute extraction: {\"webpage title\": \"xx\", \"automobile model (year)\": \"xx\", \"price\": \"xx\", \"engine type\": \"xx\", \"fuel economy\": \"xx\"} Please extract the automobile's attributes from the HTML code above following the JSON template. For any unanswerable attributes in the template, set their value to the placeholder \"<NULL>\". {\"webpage title\": \"[webpage title]\", \"automobile model (year)\": gold attributes, and two semantic similarity metrics, SBERT (Reimers and Gurevych, 2019) and BERTScore (Zhang et al., 2020), to identify semantically similar expressions (e.g., # params vs. the number of parameters).\nMeta Evaluation To assess how accurate each metric is compared to human evaluation, we manually annotated predicted-gold attribute pairs as to whether or not each pair matches. We consider a given pair to \"match\" if they are semantically equivalent, meaning they can be used interchangeably. For attributes that encapsulated multiple subattributes, we consider a pair to match if at least half of the sub-attributes are matched (i.e., F 1 score ≥ 0.5), with the decision for each sub-attribute being based on the same as in the text-span attributes. For the set of pairs to annotate and use as a test set, we sample a total of 100 cell pairs (i.e., 677 attribute pairs) according to the following process: 1) we first uniformly sample a table from the development set (containing 10 papers); and 2) we then sample a random cell from the table, ensuring there were no duplicate cells. For each pair of predictedgold attributes, each metric's decision (1 or 0) is made using a specific threshold. For example, if\nIn this section, we present four methods, which we call strategies, that aim to improve zero-shot hate speech detection … {\"value\": \"+100\", \"type\": all metrics for our task. This might suggest that discerning subtle differences is more crucial than identifying different phrases with the same meaning for this task. Based on these empirical findings, we opt for the token-level F 1 for automatic evaluation at the attribute level. This choice is highly desirable not only because of its high accuracy but also due to its simplicity." }, { "figure_ref": [], "heading": "D Implementation Details", "publication_ref": [], "table_ref": [ "tab_15" ], "text": "Considering the lengthy source code for tables, we employ different strategies to encode the input table and perform Schema-Driven IE, based on the context length of the chosen LLM. For LLMs with a larger context length, such as GPT-4, code-davinci-002, and CodeLlama, we input the full table and conduct the proposed error recovery process. For LLMs with a more limited context length, such as LLaMA and T5-11B, we query each target cell individually. The input table is condensed by rows, retaining the first two rows, typically containing headers, and the row with the query cell, with the token <select> pinpointing the position of the query cell. We use greedy decoding to maximize the reproducibility of our results.\nFor the TableQA setting, we divide the problem into two steps: selecting the record type and predicting the relevant attributes. 
For T5 and Flan-T5, the first step is modeled as a multi-choice QA problem, where the model chooses the type of the query cell from a list of provided options. The second step is modeled as an extractive QA task, asking the model to pinpoint the answer spans for the attributes associated with the selected type. For TaPas, the initial step is treated as a classification problem, whereas the latter one is handled as a cell selection problem. The hyper-parameters used for fine-tuning T5 and TaPas are presented in Table 9. " }, { "figure_ref": [], "heading": "E Error Analysis of Caption", "publication_ref": [], "table_ref": [], "text": "In Section 4.4, we observe an unexpected finding that table captions do not enhance performance, but rather seem to detract from it, which is counterintuitive. To delve deeper into this observation, we conduct an error analysis. This involves comparing the performances of our INSTRUCTE system with and without captions at the table level. This analysis uncovers a few outliers (3 out of 68) where including a caption leads to a 0 F 1 score, whereas the score is near perfect when the caption is excluded. For instance, as depicted in Figure 6, the predictions all fall into the \"Other\" category when a caption is included, leading to a 0 F 1 score in these outlier instances. Conversely, removing the caption results in an F 1 score of 89.3. This high score is due to the fact that retrieved paragraphs provide ample contextual information (e.g., \"hate speech detection\") without the presence of a caption.\nWe hypothesize that the model's inclination to predict \"Other\" in the presence of a caption may be a consequence of the captions' lack of specificity with respect to the attributes relevant to the table cells (for example, \"hate speech detection\"). This lack of explicit, relevant details could create confusion in associating the caption with the retrieved paragraphs, thereby misleading the model. To test our hypothesis, we manually adjust the captions to include more specific attributes, such as \"hate speech detection\" and \"T5-Base.\" As a result, we observe an improvement in the model's performance with the revised caption, with the total F 1 score even exceeding that achieved without a caption. This outcome partially supports our hypothesis and suggests that carefully crafted captions could indeed be beneficial, aligning with our initial expectations. However, this investigation also points to the fact that the model currently lacks robustness in handling these outlier scenarios." }, { "figure_ref": [], "heading": "F Positive Externalities of INSTRUCTE F.1 Extraction from Table Images", "publication_ref": [ "b25" ], "table_ref": [], "text": "One practical challenge with INSTRUCTE is the need for tables in a textual format, while many tables are available only as PDFs or images. To address this, we integrate INSTRUCTE with multimodal models to extract structured data from table images. Specifically, we experiment with two strategies: 1) direct extraction from table images, and 2) a pipeline that first employs multi-modal models to transform table images into text, and then run INSTRUCTE on the textual tables.\nIn a preliminary study with ML tables, we use GPT-4V as the backbone for INSTRUCTE. We find that the pipeline method yields a Additionally, we test IDEFICS-80b-instruct (Laurençon et al., 2023), a leading open-source multi-modal model, which unfortunately could not perform the table-text conversion or direct extraction. 
13 This suggests a clear avenue for future research to enhance multi-modal models' ability to accurately process image-based tables." }, { "figure_ref": [], "heading": "F.2 Leaderboard Extraction from ML Papers", "publication_ref": [ "b16", "b23", "b23" ], "table_ref": [], "text": "Task Definition & SOTA Methods The task of leaderboard extraction (Hou et al., 2019;Kardas et al., 2020) entails extracting leaderboard tuples (task, dataset, metric, score) from tables in ML papers. Unlike our proposed Schema-Driven IE, which requires open-domain span identification, leaderboard extraction presumes prior knowledge of all leaderboards, represented as pre-defined (task, dataset, metric) tuples, and centers on linking numeric cells to these leaderboards.\nThe state-of-the-art leaderboard extraction method, AXCELL (Kardas et al., 2020), is a comprehensive pipeline system comprising four components: Table Type Classification, Table Segmentation, Cell Linking, and Filtering. For each component, except the last one, AXCELL employs a supervised model. It starts with table type classi-fication to identify result-related tables, which are then passed to the table segmenter responsible for annotating the header cells of the table. Following this step, a retrieval model links numeric cells in the table to pre-defined leaderboards using humanengineered features. Lastly, AXCELL filters and selects the best record based on the leaderboard taxonomy criteria, such as retaining higher values for \"Accuracy\" and lower ones for \"error rate\"." }, { "figure_ref": [], "heading": "Application of INSTRUCTE", "publication_ref": [], "table_ref": [], "text": "To extract leaderboards from an ML paper, we consider all tables that contain numeric cells, instead of selecting tables via a trained classifier as in AXCELL. For each table, we run INSTRUCTE using a customized leaderboard extraction JSON template. This template resembles the ML-table template with two additional fixed attributes: eval split and eval class in the \"Result\" cell template. We add the eval split attribute because the evaluated split is essential information for this task; for instance, \"dev F 1 \" and \"test F 1 \" are treated as different metrics in the leaderboard taxonomy. The eval class attribute is used to exclude sub-set or sub-class results that are typically present in analysis tables.\nAfter generating all predicted cell descriptions, we filter them based on three criteria: 1) the type attribute must be \"Result\"; 2) the eval class attribute must be \"all\" or \"Null\" as observed on the development set; and 3) the cell must be bolded in the table, as this usually indicates its superior performance and possible relevance to the leaderboard. For papers without any bolded cells, we experiment with two strategies: 1) include all the remaining cells in the table that meet the first two criteria; 2) use cells selected by AXCELL, as its engineered features for cell selection may be useful. This hybrid system is referred to as INSTRUCTE+. We then use the predicted task, dataset, and metric attributes in each JSON record to match with the pre-defined leaderboards using token-level F 1 , and we select the leaderboard with the highest average score over three attributes. Finally, following AXCELL, we choose the best record based on the leaderboard taxonomy criteria, e.g., retaining higher values for \"Accuracy\" and lower ones for \"error rate\"." 
}, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b23" ], "table_ref": [], "text": "We compare INSTRUCTE with AXCELL on PWC LEADERBOARDS (Kardas et al., 2020), the largest dataset for leaderboard extraction. For INSTRUCTE, we use code-davinci-002 given its excellent performance on SCHEMA-TO-JSON. Ta- ble 10 presents the results of both methods. We can see that INSTRUCTE achieves competitive performance compared to the supervised AXCELL, highlighting the efficacy of our proposed approach. When we enhance INSTRUCTE with AXCELL's cell selection capabilities to create INSTRUCTE+, it outperforms AXCELL, demonstrating the promising potential of combining these two approaches." }, { "figure_ref": [], "heading": "GPT-4V", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "We would like to thank Azure's Accelerate Foundation Models Research Program and OpenAI's Researcher Access Program for graciously providing access to API-based models, such as GPT-4. This research is supported in part by the NSF (IIS-2052498), ODNI and IARPA via the HIATUS program (2022-22072200004), and the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001119C0108. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. This work is approved for Public Release, Distribution Unlimited." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "F la n -T 5 G P T -3 .5 A lp a c a { \"value\": \"+100\", \"type\": \"Other\" } … { \"value\": \"+0.0\", \"type\": \"Other\" } …" }, { "figure_ref": [], "heading": "{", "publication_ref": [], "table_ref": [], "text": "\"value\": \"+100\", \"type\": \"Result\", \"task\": \"hate speech detection\", \"model/method\": \"FCS\" … } … { \"value\": \"+0.0\", \"type\": \"Result\", \"task\": \"hate speech detection\", \"model/method\": \"FCS$_{p_1}$\" … } … Edge cases F1: 89.3\nTotal F1: 75.5\nEdge cases F1: 0.0 To tal F1: 72.3 \\caption{Evaluation of FCS variants on hate speech detection. 
…} { \"value\": \"+100\", \"type\": \"Result\", \"task\": \"hate speech detection\", \"model/method\": \"FCS\" … } … { \"value\": \"+0.0\", \"type\": \"Result\", \"task\": \"hate speech detection\", \"model/method\": \"FCS$_{p_1}$\" … } … Here are JSON templates for four types of numeric cells: \"Other\", \"Result\", \"Data Stat.\", and \"Hyper-parameter/Architecture\": {\"value\": \"xx\", \"type\": \"Result\", \"task\": \"xx\", \"metric\": \"xx\",…" }, { "figure_ref": [], "heading": "Cell description templates", "publication_ref": [], "table_ref": [], "text": "Please describe all numeric cells in the above latex table following the JSON templates …" }, { "figure_ref": [], "heading": "Task-specific Instruction", "publication_ref": [], "table_ref": [], "text": "Cell Description: {\"value\": \"0.755\", Initial cell description Output { \"value\": \"0.755\", \"type\": \"Result\", \"training data/set\": \"SGD\", \"test data/set\": \"SGD Bus Booking\", \"task\": \"Intent Classification\", \"metric\": \"Accuracy\", \"model\": \"Megatron-GPT\", \"experimental settings\": { \"mode\": \"Zero Shot\", \"out-of-domain\": \"true\" " }, { "figure_ref": [], "heading": "Manually specifed caption", "publication_ref": [], "table_ref": [], "text": "Figure 6: An error analysis of edge cases in which the predictions made by INSTRUCTE with captions default to \"Other\" (resulting in an 0 F 1 ). Our hypothesis that this issue may stem from the caption's lack of specificity is tested by manually expanding the caption (displayed on the right). This amendment significantly improves the performance on these edge cases, increasing the F 1 score to 92.3.\n{\"value\": \"95.7\", \"type\": \"Result\", \"task\": \"Named Entity Recognition\", \"model\": \"Elmo\",\"metric\": \"F1\", \"training data\": \"CoNLL 2003\", \"test data\": \"CoNLL 2003\"} {\"value\": \"96.4\", \"type\": \"Result\", \"task\": \"Named Entity Recognition\", \"model\": \"BERT base\",\"metric\": \"F1\", \"training data\": \"CoNLL 2003\", \"test data\": \"CoNLL Gold {\"value\": \"95.7\", \"type\": \"Result\", \"task\": \"Entity Recognition\", \"model\": \"Elmo\",\"metric\": \"Accuracy\", \"training data\": \"CoNLL\", \"test data\": \"CoNLL\"} {\"value\": \"96.4\", \"type\": \"Result\", \"task\": \"Entity Recognition\", \"model\": \"BERT base\",\"metric\": \"Accuracy\", \"training data\": \"CoNLL 03\", \"test data\": \"CoNLL 03\"} Predicted JSON Records" }, { "figure_ref": [], "heading": "Eval", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Table-F1", "publication_ref": [], "table_ref": [], "text": "Gold the token-level F 1 's score for paired attributes is 0.4 and the threshold is 0.5, then the decision would be 0, indicating no match. The decisions over the test set containing 677 attribute pairs are then compared to human evaluation. In this binary classification problem, F 1 is used to evaluate the performance of the metrics.\nIn Table 8, we present the performances of each metric with the optimal threshold for each. Surprisingly, we find that the token-level F 1 (with a threshold of 0.25) decision aligns nearly perfectly with human judgment, and performs the best among" } ]
In this paper, we explore the question of whether large language models can support cost-efficient information extraction from tables. We introduce schema-driven information extraction, a new task that transforms tabular data into structured records following a humanauthored schema. To assess various LLM's capabilities on this task, we present a benchmark comprised of tables from four diverse domains: machine learning papers, chemistry literature, material science journals, and webpages. We use this collection of annotated tables to evaluate the ability of open-source and API-based language models to extract information from tables covering diverse domains and data formats. Our experiments demonstrate that surprisingly competitive performance can be achieved without requiring task-specific pipelines or labels, achieving F 1 scores ranging from 74.2 to 96.1, while maintaining cost efficiency. Moreover, through detailed ablation studies and analyses, we investigate the factors contributing to model success and validate the practicality of distilling compact models to reduce API reliance.
Schema-Driven Information Extraction from Heterogeneous Tables
[ { "figure_caption": "TableFigure 1 :1Figure 1: Overview of Schema-Driven Information Extraction. The input includes two elements: the source code of a table and a human-authored extraction schema, outlining the target attributes and their data types. The output consists of a sequence of JSON records that conform to the extraction schema.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "c", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Left: Prompt formulation of our proposed method INSTRUCTE. Right: Illustration of our error-recovery strategy, which ensures the model compliance of the instructed cell traversal order and reduces inference costs.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Capability of various LLMs to perform Schema-Driven IE, measured using the SCHEMA-TO-JSON benchmark. We employ Table-F 1 for our two newly annotated datasets and provide a measure of human performance. For DISCOMAT(Gupta et al., 2022) and SWDE(Hao et al., 2011), we adhere to their original evaluation metrics, i.e., Tuple-F 1 and Page-F 1 respectively, to support comparisons with established methods. In SWDE experiments, k represents the number of trained websites from each vertical. Due to API cost constraints, *INSTRUCTE's results are computed on a 1,600 webpage sample, with bootstrap confidence intervals calculated to validate the reliability of these performance estimates (margin of error for 95% confidence interval with 1000 samples is 0.00995.)", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 shows INSTRUCTE's performance subject to the exclusion of varying prompt components.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure5: Results of comparing various metrics, including token-level F 1 , SBERT, and BERTScore, to human judgment over different thresholds on ML tables. Numbers are computed over 677 sampled attributes that are paired with respective gold references.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "le s w / n u m . c e ll s P r o m p t In th is w o rk , w e e x p lo re th e ta sk o f in te n t c la ss if ic a ti o n u si n g th e se la rg e la n g u a g e m o d e ls a n d p tu n in g . G e n e ra ti v e m e th o d s … R e tr ie v e d p a ra g ra p h s \\b e g in {t a b le }[ !h tb p ] \\b e g in {t a b u la r} {@ {} ll ll ll l@ {} } M o d e & \\m u lt ic o lu m n {3 }{ l} {B u s B o o k in g } & \\m u lt ic o lu m n {3 }{ l} {H o te l R e se rv a ti o n } \\\\ \\m id ru le & {t a b u la r} \\c a p ti o n {Z e ro -s h o t a n d F e w S h o t (F S ) p e rf o rm a n c e o n th e h e ld o u t d o m a in s … \\e n d {t a b le } In p u t T a b le H e re a re J S O N te m p la te s fo r fo u r ty p e s o f n u m e ri c c e ll s: \" O th e r\" , \" R e su lt \" , \" D a ta S ta t. 
\" , a n d \" H y p e r-p a ra m e te r/ A rc h it e c tu re \" : {\" v a lu e \" : \" x x \" , \" ty p e \" : \" R e su lt \" , \" ta sk \" : \" x x \" , \" m e tr ic \" : \" x x \" ,… C e ll d e sc ri p ti o n te m p la te s P le a se d e sc ri b e a ll n u m e ri c c e ll s in th e a b o v e la te x ta b le fo ll o w in g th e J S O N te m p la te s … T a sk -s p e c if ic In st ru c ti o n C e ll D e sc ri p ti o n : {\" v a lu e \" : \" 0 .7 5 5 \" , In it ia l c e ll d e sc ri p ti o n", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "\\textbf{1.3B} & \\textbf{5B} & \\textbf{345M} & \\textbf{1.3B} & \\textbf{5B} \\\\ \\hline Zero Shot & 0.755 & 0.762 & 0.787 & 0.379 & 0.448 & 0.467 \\\\ FS -10 samples & 0.907 & 0.789 & 0.942 & 0.793 & 0.720 & 0.939 \\\\ FS -50 samples & 0.953 & 0.965 & 0.975 & 0.957 & 0.968 & 0", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Generate L A T E X code for image tables using GPT-4V.", "figure_data": "", "figure_id": "fig_10", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Table-F 1 , a new reference-based evaluation metric gauging attribute prediction performance within a table. Table-F 1 represents the harmonic mean of precision and recall, with precision being the ratio of correctly predicted attributes to total predicted attributes. At the attribute level, we consider two metrics: token-level F 1 and exact match (EM). For token-level F 1 , a prediction is deemed correct if the score exceeds a specific threshold, which is determined by maximizing the alignment between model predictions and human judgments on the dev set (see Appendix C). An example of Table-F 1 calculation is shown in Figure 7 in the appendix. We report macro-averaged Table-F 1 given the wide variance in table sizes.", "figure_data": "", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Figure4: Ablation studies on various components of our INSTRUCTE (w/ code-davinci-002) on the ML tables. Interestingly, excluding the table caption improves performance. Our detailed analysis in Appendix E reveals that low-quality captions (e.g., lack of specificity) may confuse the model, leading to inaccurate predictions. state-of-the-art methods, signifying the potential of LLMs to act as flexible, powerful tools for extracting information from tables across diverse data formats and domains.Despite a noticeable gap when compared to APIbased LLMs, open-source models, like CodeLlamainstruct-13B, show promising results in ML and web domains, achieving 60.0 Table-F 1 and 91.7", "figure_data": "Schema/InstructionPrompting Strategy(Full) w/o header w/o caption w/o paragraphs w/o caption & paragraphsInstrucTE (Full) w/o schema w/o instructionInstrucTE (Full) Cell-by-cell prompting w/o error recovery w/o error recovery (+ ordered cell list)72.372.3F123.3F151.472.769.964.357.657.6EM22.2EM42.054.354.548.3020406080020406080", "figure_id": "tab_4", "figure_label": "InstrucTE", "figure_type": "table" }, { "figure_caption": "Table headers con-TEST set performance on ML tables with different LLMs and task formulations.F1 score). Yet, the challenge lies in quantifying the familiarity of terminology and content, given the model's black-box nature, and addressing it represents a compelling direction for future research.", "figure_data": "Exp. 
SetupFormulation ModelToken-F 1 EMTableQATaPas (large) T5 (11B)27.7 61.221.6 46.2Fine-tuning (# Train=1169)SCHE2JSONGPT-J (6B) LLaMA (7B) Alpaca (7B)49.6 51.3 50.238.4 38.0 39.4T5 (11B)64.150.2TableQAFlan-T5 (11B)36.927.7Func. Calling gpt-3.5-turbo (0613)22.418.4GPT-J (6B)18.616.2LLaMA (7B)13.511.5Alpaca (7B)26.821.1No Fine-tuningLlama2-chat (13B)31.523.0SCHE2JSONStarCoder (15.5B) CodeLlama-instruct (13B)41.2 60.032.3 44.0gpt-3.5-turbo (0613)64.147.9text-davinci-00367.450.4code-davinci-00272.357.6gpt-4 (0613)74.258.1", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Experimental results for knowledge distillation on the ML tables. Student models are trained on 3,434 tables labeled by the teacher model. GPU hours refers to the training time (× number of GPUs) of student models for one epoch. matches the teacher model's performance. 12 4.7 Positive Externalities of INSTRUCTE To further validate INSTRUCTE's practicality, we integrate it with multi-modal models, like GPT4-V, for extracting data from table images. In an initial study with ML tables, it yields a Table-F 1 of 70.2, approaching the 74.2 Table-F 1 achieved with the original text inputs. Additionally, we explore INSTRUCTE's application to the task of Leaderboard Extraction, where it shows competitive performance against leading supervised systems. Due to space constraints, details on these explorations are provided in Appendix F.", "figure_data": "Model (GPU hours)Token-Level F 1EMPRF 1PRF 1Teacher code-davinci-002 74.1 71.8 72.3 59.4 56.9 57.6LLaMA-7B (50h)74.1 67.6 69.1 56.8 53.4 54.3StudentAlpaca-7B (50h)72.7 64.8 67.5 56.1 50.0 52.0T5-11B (380h)75.8 71.4 73.2 60.3 56.7 58.1", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "INSTRUCTE prompts used for ML and chemistry tables.", "figure_data": "", "figure_id": "tab_10", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "INSTRUCTE prompts used for DISCOMAT and SWDE. For SWDE, we use the \"Auto\" vertical as an illustrative example, and the prompts for other verticals differ only in attribute names (refer to Table7for the attributes of each vertical).", "figure_data": "Vertical# Sites # PagesAttributesAuto1017,923 model, price, engine, fuel-economyBook1020,000title, author, ISBN-13, publisher, publish-dateCamera105,258model, price, manufacturerJob1020,000title, company, location, dateMovie1020,000title, director, genre, ratingNBA Player104,405name, team, height, weightRestaurant1020,000name, address, phone, cuisineUniversity1016,705name, phone, website, type", "figure_id": "tab_11", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "SWDE statistics.", "figure_data": "", "figure_id": "tab_12", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Hyper-parameters used for fine-tuning T5 and TaPas.", "figure_data": "T5 (11B) TaPaslearning rate1e-45e-5batch size832# epoches510", "figure_id": "tab_15", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Table-F 1 score of 70.2 from image inputs, approaching the 74.2 Table-F 1 achieved with the original text inputs. 
It outperforms direct extraction using GPT-4V, which attains only a Table-F 1 score of 46.4, as the pipeline can capitalize on INSTRUCTE's error recovery capabilities, resulting in more thorough and accurate extractions.", "figure_data": "", "figure_id": "tab_16", "figure_label": "", "figure_type": "table" }, { "figure_caption": "\"Please generate LaTex code for the uploaded image table.\"", "figure_data": "", "figure_id": "tab_17", "figure_label": "Prompt", "figure_type": "table" }, { "figure_caption": "Leaderboard extraction results on the PWC LEADERBOARDS dataset.", "figure_data": "", "figure_id": "tab_18", "figure_label": "10", "figure_type": "table" } ]
Fan Bai; Junmo Kang; Gabriel Stanovsky; Dayne Freitag; Alan Ritter
[ { "authors": "Armen Aghajanyan; Dmytro Okhonko; Mike Lewis; Mandar Joshi; Hu Xu; Gargi Ghosh; Luke Zettlemoyer", "journal": "", "ref_id": "b0", "title": "Htlm: Hyper-text pre-training and prompting of language models", "year": "2022" }, { "authors": "Taylor Berg-Kirkpatrick; David Burkett; Dan Klein", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "An empirical investigation of statistical significance in NLP", "year": "2012" }, { "authors": "Alon Michael J Cafarella; Daisy Zhe Halevy; Eugene Wang; Yang Wu; Zhang", "journal": "", "ref_id": "b2", "title": "Webtables: exploring the power of tables on the web", "year": "2008" }, { "authors": "Andrew Carlson; Charles Schafer", "journal": "", "ref_id": "b3", "title": "Bootstrapping information extraction from semi-structured web pages", "year": "2008" }, { "authors": "Nathanael Chambers; Dan Jurafsky", "journal": "", "ref_id": "b4", "title": "Templatebased information extraction without the templates", "year": "2011" }, { "authors": "Mark Chen; Jerry Tworek; Heewoo Jun; Qiming Yuan; Henrique Ponde De Oliveira Pinto; Jared Kaplan; Harri Edwards; Yuri Burda; Nicholas Joseph; Greg Brockman", "journal": "", "ref_id": "b5", "title": "Evaluating large language models trained on code", "year": "2021" }, { "authors": "Wenhu Chen; Hongmin Wang; Jianshu Chen; Yunkai Zhang; Hong Wang; Shiyang Li; Xiyou Zhou; William Yang; Wang ", "journal": "", "ref_id": "b6", "title": "Tabfact: A large-scale dataset for table-based fact verification", "year": "2020" }, { "authors": "Yunmo Chen; William Gantt; Tongfei Chen; Aaron Steven White; Benjamin Van Durme", "journal": "", "ref_id": "b7", "title": "A unified view of evaluation metrics for structured prediction", "year": "2023" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Xin Luna; Dong ; Hannaneh Hajishirzi; Colin Lockard; Prashant Shiralkar", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Multi-modal information extraction from text, semi-structured, and tabular data on the web", "year": "2020" }, { "authors": "B Gregg; Fields", "journal": "Cells", "ref_id": "b10", "title": "The rebirth of matrix metalloproteinase inhibitors: Moving beyond the dogma", "year": "2019" }, { "authors": "Tanishq Gupta; Mohd Zaki; N M Anoop Krishnan; Mausam ", "journal": "", "ref_id": "b11", "title": "Discomat: Distantly supervised composition extraction from tables in materials science articles", "year": "2022" }, { "authors": "Qiang Hao; Rui Cai; Yanwei Pang; Lei Zhang", "journal": "Association for Computing Machinery", "ref_id": "b12", "title": "From one tree to a forest: A unified solution for structured web data extraction", "year": "2011" }, { "authors": "Stefan Hegselmann; Alejandro Buendia; Hunter Lang; Monica Agrawal; Xiaoyi Jiang; David Sontag", "journal": "", "ref_id": "b13", "title": "Tabllm: Few-shot classification of tabular data with large language models", "year": "2023" }, { "authors": " Pmlr", "journal": "", "ref_id": "b14", "title": "", "year": "" }, { "authors": "Jonathan Herzig; Krzysztof Pawel; Thomas Nowak; Francesco Müller; Julian Piccinno; Eisenschlos", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "TaPas: Weakly supervised table parsing via pre-training", "year": "2020" }, { 
"authors": "Yufang Hou; Charles Jochim; Martin Gleize; Francesca Bonin; Debasis Ganguly", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Identification of tasks, datasets, evaluation metrics, and numeric scores for scientific leaderboards construction", "year": "2019" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b17", "title": "LoRA: Low-rank adaptation of large language models", "year": "2022" }, { "authors": "Hanxu Hu; Yunqing Liu; Zhongyi Yu; Laura Perezbeltrachini", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Improving user controlled table-totext generation robustness", "year": "2023" }, { "authors": "Hiroshi Iida; Dung Thai; Varun Manjunatha; Mohit Iyyer", "journal": "", "ref_id": "b19", "title": "Tabbie: Pretrained representations of tabular data", "year": "2021" }, { "authors": "Mohit Iyyer; Wen-Tau Yih; Ming-Wei Chang", "journal": "", "ref_id": "b20", "title": "Search-based neural structured learning for sequential question answering", "year": "2017" }, { "authors": "Sujay Kumar; Jauhar ; Peter Turney; Eduard Hovy", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Tables as semi-structured knowledge for question answering", "year": "2016" }, { "authors": "Junmo Kang; Wei Xu; Alan Ritter", "journal": "", "ref_id": "b22", "title": "Distill or annotate? cost-efficient fine-tuning of compact models", "year": "2023" }, { "authors": "Marcin Kardas; Piotr Czapla; Pontus Stenetorp; Sebastian Ruder; Sebastian Riedel; Ross Taylor; Robert Stojnic", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "AxCell: Automatic extraction of results from machine learning papers", "year": "2020" }, { "authors": "Kyungdoc Kim; Seokho Kang; Jiho Yoo; Youngchun Kwon; Youngmin Nam; Dongseon Lee; Inkoo Kim; Youn-Suk Choi; Yongsik Jung; Sangmo Kim", "journal": "npj Computational Materials", "ref_id": "b24", "title": "Deep-learning-based inverse design model for intelligent discovery of organic molecules", "year": "2018" }, { "authors": "Lucile Hugo Laurençon; Léo Saulnier; Stas Tronchon; Amanpreet Bekman; Anton Singh; Thomas Lozhkov; Siddharth Wang; Alexander M Karamcheti; Douwe Rush; Matthieu Kiela; Victor Cord; Sanh", "journal": "", "ref_id": "b25", "title": "Obelics: An open web-scale filtered dataset of interleaved image-text documents", "year": "2023" }, { "authors": "T Nghia; Fan Le; Alan Bai; Ritter", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Fewshot anaphora resolution in scientific protocols via mixtures of in-context experts", "year": "2022" }, { "authors": "Rémi Lebret; David Grangier; Michael Auli", "journal": "", "ref_id": "b27", "title": "Neural text generation from structured data with application to the biography domain", "year": "2016" }, { "authors": "Junlong Li; Yiheng Xu; Lei Cui; Furu Wei", "journal": "", "ref_id": "b28", "title": "MarkupLM: Pre-training of text and markup language for visually rich document understanding", "year": "2022" }, { "authors": "Raymond Li; Loubna Ben Allal; Yangtian Zi; Niklas Muennighoff; Denis Kocetkov; Chenghao Mou; Marc Marone; Christopher Akiki; Jia Li; Jenny Chim", "journal": "", "ref_id": "b29", "title": "Starcoder: may the source be with you!", "year": "2023" }, { "authors": "Ying Bill Yuchen Lin; Nguyen Sheng; Sandeep Vo; Tata", "journal": "Association for Computing 
Machinery", "ref_id": "b30", "title": "Freedom: A transferable neural architecture for structured information extraction on web documents", "year": "2020" }, { "authors": "Colin Lockard; Prashant Shiralkar; Xin Luna; Dong ", "journal": "", "ref_id": "b31", "title": "Openceres: When open information extraction meets the semi-structured web", "year": "2019" }, { "authors": "Colin Lockard; Prashant Shiralkar; Xin ; Luna Dong; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "ZeroShotCeres: Zeroshot relation extraction from semi-structured webpages", "year": "2020" }, { "authors": "Yuze Lou; Bailey Kuehl; Erin Bransom; Sergey Feldman; Aakanksha Naik; Doug Downey", "journal": "OpenAI", "ref_id": "b33", "title": "S2abel: A dataset for entity linking from scientific tables", "year": "2023" }, { "authors": "Ankur Parikh; Xuezhi Wang; Sebastian Gehrmann; Manaal Faruqui; Bhuwan Dhingra; Diyi Yang; Dipanjan Das", "journal": "", "ref_id": "b34", "title": "Totto: A controlled table-to-text generation dataset", "year": "2020" }, { "authors": "Panupong Pasupat; Percy Liang", "journal": "", "ref_id": "b35", "title": "Compositional semantic parsing on semi-structured tables", "year": "2015" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b36", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b37", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" }, { "authors": "Jonas Baptiste Rozière; Fabian Gehring; Sten Gloeckle; Itai Sootla; Gat; Ellen Xiaoqing; Yossi Tan; Jingyu Adi; Tal Liu; Jérémy Remez; Artyom Rapin; Ivan Kozhevnikov; Joanna Evtimov; Manish Bitton; Cristian Canton Bhatt; Aaron Ferrer; Wenhan Grattafiori; Alexandre Xiong; Jade Défossez; Faisal Copet; Hugo Azhar; Louis Touvron; Nicolas Martin; Thomas Usunier; Gabriel Scialom; Synnaeve", "journal": "", "ref_id": "b38", "title": "Code llama: Open foundation models for code", "year": "2023" }, { "authors": "Vladimir Michael Sejr Schlichtkrull; Barlas Karpukhin; Mike Oguz; Wen-Tau Lewis; Sebastian Yih; Riedel", "journal": "", "ref_id": "b39", "title": "Joint verification and reranking for open fact checking over tables", "year": "2021" }, { "authors": "Jonathan M Stokes; Kevin Yang; Kyle Swanson; Wengong Jin; Andres Cubillos-Ruiz; Nina M Donghia; Craig R Macnair; Shawn French; Lindsey A Carfrae; Zohar Bloom-Ackermann; Victoria M Tran; Anush Chiappino-Pepe; Ahmed H Badran; Ian W Andrews; Emma J Chory; George M Church; Eric D Brown; Tommi S Jaakkola; Regina Barzilay; James J Collins", "journal": "Cell", "ref_id": "b40", "title": "A deep learning approach to antibiotic discovery", "year": "2020" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b41", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b42", "title": "Llama: Open and efficient foundation language models", 
"year": "2023" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale; Dan Bikel; Lukas Blecher; Cristian Canton Ferrer; Moya Chen; Guillem Cucurull; David Esiobu; Jude Fernandes; Jeremy Fu; Wenyin Fu; Brian Fuller; Cynthia Gao; Vedanuj Goswami; Naman Goyal; Anthony Hartshorn; Saghar Hosseini; Rui Hou; Hakan Inan; Marcin Kardas; Viktor Kerkez; Madian Khabsa; Isabel Kloumann; Artem Korenev; Punit Singh Koura; Marie-Anne Lachaux; Thibaut Lavril; Jenya Lee; Diana Liskovich; Yinghai Lu; Yuning Mao; Xavier Martinet; Todor Mihaylov; Pushkar Mishra; Igor Molybog; Yixin Nie; Andrew Poulton; Jeremy Reizenstein; Rashi Rungta; Kalyan Saladi; Alan Schelten; Ruan Silva; Eric Michael Smith; Ranjan Subramanian; Ellen Xiaoqing; Binh Tan; Ross Tang; Adina Taylor; Jian Williams; Puxin Xiang Kuan; Zheng Xu; Iliyan Yan; Yuchen Zarov; Angela Zhang; Melanie Fan; Sharan Kambadur; Aurelien Narang; Robert Rodriguez; Sergey Stojnic; Thomas Edunov; Scialom", "journal": "", "ref_id": "b43", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "2023" }, { "authors": "Fei Wang; Zhewei Xu; Pedro Szekely; Muhao Chen", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Robust (controlled) table-to-text generation with structure-aware equivariance learning", "year": "2022" }, { "authors": "Pengcheng Yin; Graham Neubig; Wen-Tau Yih; Sebastian Riedel", "journal": "", "ref_id": "b45", "title": "Tabert: Pretraining for joint understanding of textual and tabular data", "year": "2020" }, { "authors": "Tao Yu; Rui Zhang; Kai Yang; Michihiro Yasunaga; Dongxu Wang; Zifan Li; James Ma; Irene Li; Qingning Yao; Shanelle Roman; Zilin Zhang; Dragomir Radev", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task", "year": "2018" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b47", "title": "Bertscore: Evaluating text generation with bert", "year": "2020" }, { "authors": "Victor Zhong; Caiming Xiong; Richard Socher", "journal": "", "ref_id": "b48", "title": "Seq2sql: Generating structured queries from natural language using reinforcement learning", "year": "2017" }, { "authors": "Yichao Zhou; Ying Sheng; Nguyen Vo; Nick Edmonds; Sandeep Tata", "journal": "Association for Computing Machinery", "ref_id": "b49", "title": "Learning transferable node representations for attribute extraction from web documents", "year": "2022" } ]
[]
10.18653/v1/D19-3009
2024-01-31
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b34", "b57", "b26", "b48", "b17", "b13", "b39", "b44", "b49", "b21", "b49", "b13", "b49", "b44", "b57", "b28", "b20", "b2", "b16", "b40", "b36", "b66", "b12", "b65", "b41" ], "table_ref": [], "text": "Plain language summaries of scientific information are important to make science more accessible (Kuehne and Olden, 2015;Stoll et al., 2022) and inform public decision-making (Holmes-Rovner et al., 2005;Pattisapu et al., 2020). Recently, generative models have made gains in translating scientific information into plain language approachable to lay audiences (August et al., 2022b;Goldsack et al., 2023;Devaraj et al., 2021). Despite these gains, the field has not reached consensus on effective automated evaluation metrics for plain Figure 1: We present APPLS, the first granular testbed for analyzing evaluation metric performance for plain language summarization (PLS). We assess performance of 15 existing metrics and our new metric POMME. language summarization (PLS) (Luo et al., 2022;Ondov et al., 2022) due to the multifaceted nature of the PLS task. Removal of unnecessary details (Pitcher et al., 2022), adding relevant background explanations (Guo et al., 2021), jargon interpretation (Pitcher et al., 2022), and text simplification (Devaraj et al., 2021) are all involved in PLS, posing challenges for comprehensive evaluation.\nWe aim to assess how well existing metrics capture the multiple criteria of PLS. We define four criteria, informed by prior work (Pitcher et al., 2022;Ondov et al., 2022;Stoll et al., 2022;Jain et al., 2022), that a PLS metric should be sensitive to: informativeness, simplification, coherence, and faithfulness. We introduce a set of perturbations to probe metric sensitivity to these criteria, where each perturbation is designed to affect a single criterion with ideally minimal impact to others. 2 We produce the APPLS meta-evaluation testbed by incrementally introducing perturbations to the texts of two scientific PLS datasets, CELLS (Guo et al., 2022) and PLABA (Attal et al., 2023).\nUsing APPLS, we analyze 15 metrics, including the most widely used metrics in text simplification and summmarization, and recently-proposed prompt-based evaluation (Gao et al., 2023;Luo et al., 2023). We find that while established metrics like ROUGE (Lin, 2004), BERTScore (Zhang et al., 2019), and QAEval (Deutsch et al., 2021) demonstrate mixed sensitivities to perturbations associated with informativeness, coherence, and faithfulness; all tested metrics, including those explicitly crafted for text simplification (Xu et al., 2016;Maddela et al., 2022), display a lack of sensitivity towards simplification perturbations.\nIn response to the lack of effective metrics for simplification, we introduce POMME, a new metric that evaluates text simplicity by calculating normalized perplexity differences between language models (LMs) trained on in-domain (i.e., scientific) and out-of-domain (i.e., web) text. We show POMME's effectiveness at capturing differences in text simplicity through extensive experiments on APPLS and other text simplification datasets. 
Because POMME is normalized to a reference dataset, it also allows text simplicity to be compared across different text corpora.\nOur main contributions are as follows: • We present APPLS, the first granular testbed for analyzing evaluation metric performance for plain language summarization( §3, 4); • We assess the performance of existing evaluation metrics, demonstrating mixed effectiveness in evaluating informativeness, coherence, faithfulness, and simplification ( §5, 7); • We introduce a new metric, POMME, which employs language model perplexity to assess text simplicity, and validate its performance in our testbed and two additional datasets ( §6, 7)." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b27", "b44", "b36", "b59", "b46", "b40", "b16", "b24", "b52", "b11", "b42", "b51", "b58" ], "table_ref": [], "text": "Limitations of Existing Metrics The primary approach for evaluating plain language summaries incorporates evaluation metrics for summarization and simplification, and human evaluation (Jain et al., 2021;Ondov et al., 2022). While ROUGE (Lin, 2004) and BLEU (Sulem et al., 2018) are frequently employed in PLS assessment, their efficacy is limited due to the reliance on high-quality reference summaries, which are often challenging to obtain for PLS. Further, these metrics struggle to accurately identify hallucinations, especially crucial for PLS in the health domain to accurately inform health decisions (Wallace et al., 2021;Pagnoni et al., 2021). Though human evaluation offers thorough assessment, the high costs and time needed impede scalability for larger datasets. While recent progress in prompt-based evaluation shows potential for assessing factuality (Luo et al., 2023) and summarization quality (Gao et al., 2023), their efficacy for PLS is yet to be validated. Our work aims to fill these gaps through a systematic examination of these metrics within the PLS context.\nRobust Analysis with Synthetic Data Synthetic data has been widely used in NLP tasks to evaluate metrics, including text generation (He et al., 2022;Sai et al., 2021), natural language inference (Chen and Eger, 2022;McCoy et al., 2019), question answering (Ribeiro et al., 2019), and reading comprehension (Sugawara et al., 2020). Yet, no prior work has focused on the PLS task or incorporated simplification into their benchmarks. Additionally, previous studies lack granular analyses to capture the nuanced relationship between text changes and score changes. Our research endeavors to bridge these gaps by crafting perturbations that mirror real-world errors within the PLS context." }, { "figure_ref": [], "heading": "Criteria-Specific Perturbation Design", "publication_ref": [ "b53", "b49", "b44", "b57", "b28", "b15", "b55", "b6" ], "table_ref": [ "tab_0" ], "text": "We define four criteria that an effective PLS evaluation metric should be sensitive to based on both abstractive summarization (Sai et al., 2022) and plain language summarization paradigms (Pitcher et al., 2022;Ondov et al., 2022;Stoll et al., 2022;Jain et al., 2022). To assess metric sensitivity, we develop perturbations for each criteria (illustrative examples in Table 1, experimental details in App. A). We define sensitivity similar to prior work (Gabriel et al., 2020) as being correlated in the expected direction with the amount of perturbation. 
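One way to operationalize this notion of sensitivity is to correlate a metric's scores with the amount of perturbation applied and check the sign of the correlation. The sketch below illustrates such a check; the use of Spearman's rank correlation via SciPy and the direction-checking helper are illustrative assumptions rather than the exact analysis procedure used in this work.

```python
from scipy.stats import spearmanr

def sensitivity(perturb_fractions, metric_scores, expected_direction):
    """Correlate metric scores with perturbation magnitude.

    perturb_fractions: fraction of the text perturbed, in [0, 1].
    metric_scores: metric value computed on each perturbed hypothesis.
    expected_direction: -1 if the metric should drop as perturbation grows
        (e.g., ROUGE under sentence deletion), +1 if it should rise.
    """
    rho, p_value = spearmanr(perturb_fractions, metric_scores)
    return {"rho": rho, "p_value": p_value,
            "moves_as_expected": rho * expected_direction > 0}

# Example: ROUGE should decrease as more sentences are deleted.
fractions = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
rouge_scores = [0.42, 0.37, 0.31, 0.24, 0.18, 0.11]
print(sensitivity(fractions, rouge_scores, expected_direction=-1))
```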
These criteria and our designed perturbations are:\n• Informativeness measures the extent to which a PLS covers essential information from the paper (e.g., methods, main findings) and incorporates relevant background information (Smith et al., 2021;Beck et al., 1991). We perturb the text by deleting/adding sentences and adding definitions. ground explanation). We replace sentences in the original text with simpler text. • Coherence describes the logical arrangement of a plain language summary. We perturb the text by randomly reordering sentences. • Faithfulness denotes how well the summary aligns factually with the source text. Perturbations are swapping numbers, noun phrases, synonyms, antonyms, and negating sentences. Designed perturbations allow us to control each criteria. We ensure perturbed text quality by manually validating a subsample of perturbations ( §4.3)." }, { "figure_ref": [], "heading": "Constructing the APPLS Testbed", "publication_ref": [], "table_ref": [], "text": "We implement our perturbations in two existing large-scale PLS datasets ( §4.1). We describe how perturbations are incorporated into the dataset and our approach for managing perturbation magnitude ( §4.2) and validating perturbation quality ( §4.3). We employ this testbed in an analysis of existing ( §5) and novel ( §6) metrics for PLS ( §7)." }, { "figure_ref": [], "heading": "Diagnostic datasets", "publication_ref": [ "b20", "b2", "b20" ], "table_ref": [], "text": "For our experiments, we use the CELLS (Guo et al., 2022) and PLABA (Attal et al., 2023) CELLS (Guo et al., 2022) is a parallel corpus of scientific abstracts (source texts) and their corresponding plain language summaries (target texts), which are written by the abstract authors or by other domain experts. CELLS aggregates papers from for controlled perturbations, but is nonetheless not used as our primary dataset in APPLS due to the simplifications being relatively contrived. Moreover, PLABA's pronounced n-gram overlap between sources and summaries tend to skew results for evaluation metrics that prioritize n-gram overlap, potentially compromising the generalizability of the assessment. Therefore, PLABA serves as an auxiliary dataset to CELLS, helping to address its limitations, discussed in Sections §4.2 and §4.3. We report simplification perturbation results for PLABA in the main paper and remaining perturbation results in App. G." }, { "figure_ref": [], "heading": "Applying perturbations to datasets", "publication_ref": [ "b21", "b45", "b7", "b20", "b20", "b38", "b60" ], "table_ref": [ "tab_2" ], "text": "While the majority of metrics we assess only require targets and model-generated text (hypothesis), SARI and LENS additionally make use of the source text. For the APPLS testbed, we propose an oracle hypothesis, a reasonable extractive hypothesis that summarizes the source text with lexical variations while minimizing factual inaccuracies (Guo et al., 2021). For CELLS, the oracle hypothesis is created by selecting the set of source sentences yielding the highest ROUGE-L score when compared to the target summary and then introducing lexical variability through round-trip translation (Ormazabal et al., 2022). 3 Because PLABA is sentence-aligned, no extraction is needed, and the oracle hypothesis is created simply by round-trip translating the target. Details are in App. B. 
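As a rough illustration of how such an oracle extractive hypothesis can be built, the sketch below greedily selects source sentences that maximize ROUGE-L against the target summary. The greedy strategy, the sentence budget, and the use of the rouge-score package are assumptions made for illustration; the subsequent round-trip translation step (en-de-en) described in App. B is omitted here.

```python
from rouge_score import rouge_scorer

def oracle_extract(source_sentences, target_summary, max_sentences=5):
    """Greedily pick source sentences that maximize ROUGE-L against the target."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    selected, best_score = [], -1.0
    while len(selected) < max_sentences:
        best_candidate = None
        for sent in source_sentences:
            if sent in selected:
                continue
            candidate = selected + [sent]
            score = scorer.score(target_summary, " ".join(candidate))["rougeL"].fmeasure
            if score > best_score:
                best_score, best_candidate = score, candidate
        if best_candidate is None:  # no remaining sentence improves ROUGE-L
            break
        selected = best_candidate
    return selected
```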
We apply all perturbations to the oracle hypotheses, where each perturbation introduces a change (e.g., add/swap sentences) at some magnitude (e.g., replace 50% of sentences). Given costs associated with some of our perturbations (e.g., LLMbased simplification), we restrict perturbation experiments to dataset test splits (stats in Table 2).\nFor informativeness, we add sentences to the oracle hypothesis from ACL papers (Bird et al., 2008) to simulate out-of-domain hallucinations and Cochrane abstracts4 for in-domain hallucinations. For sentence addition, we add up to the same number of sentences as in the oracle hypothesis. For sentence deletion, we delete sentences until a single sentence remains. For keyword definitions, we add up to three definitions, the average number of nouns explained in CELLS abstracts (Guo et al., 2022), i.e., 100% perturbed adds three definitions.\nFor simplification, for CELLS, we first generate an LLM-simplified summary from the oracle extractive hypothesis. We then align sentences between the oracle hypothesis and LLM-simplified summary using the sentence alignment algorithm from Guo et al. (2022). We perturb the text by replacing hypothesis sentences with their corresponding LLM-simplified sentences randomly until full replacement. We use GPT-3 (Brown et al., 2020) to generate LLM-simplified text due to its accessibility and demonstrated proficiency in text simplification (Lu et al., 2023). To ensure that our findings are not specific to the chosen model, we conduct additional experiments using Llama2 (Touvron et al., 2023) and Claude5 (details in App. §A). For PLABA, we perturb text by replacing source sentences with round-trip translated versions of their aligned simplified targets (no LLM is used). Source and target lengths in PLABA are roughly equivalent, allowing us to evaluate metric response when there are minimal changes in text length.\nFor coherence, we shuffle sentences in the hypothesis and quantify perturbation percentage as the distance between the original and shuffled hypotheses in terms of absolute difference in sentence order. A document with reversed sentence order would be 100% perturbed.\nFor faithfulness, perturbation percentage of number, entity, and verb swaps is determined by comparing the count of altered spans to the total number of eligible spans in the hypothesis. Full perturbation means all eligible spans are swapped. For sentence negation, we constrain the maximum number of negations to the sentence count in the hypothesis, allowing for a max of one negation per sentence. Therefore, full perturbation is achieved when each sentence contains a negation.\nTo mitigate the effects of randomness, we use two random seeds to produce perturbations." }, { "figure_ref": [], "heading": "Human validation of oracle extractive hypotheses and GPT-simplified summaries", "publication_ref": [ "b1" ], "table_ref": [], "text": "We assess the quality of oracle extractive hypotheses and GPT-simplified summaries through human evaluation. We sample 100 pairs each of (i) preand post-round-trip translation (RTT) oracle hypotheses and (ii) GPT-simplified summaries paired with oracle hypotheses. Annotators were asked to assess content alignment (defined as having identi-cal relation triples) and rate informativeness, simplification, faithfulness, and coherence on 5-point Likert scales. Annotations were performed by two independent annotators, both with doctorates in the biological sciences, who were hired on UpWork and compensated at 21 USD/hr. 
Each annotator reviewed all sampled pairs for both evaluation tasks.\nInter-rater agreement measured by Cohen's Kappa was 0.29, implying fair agreement (Artstein and Poesio, 2008). For task details, refer to App. C. Human annotators affirmed that RTT text retained its informativeness (98%), faithfulness (83%), coherence (100%), and simplicity (96%) compared to the original. Evaluators considered GPT-simplified sentences more simplified (98%), informative (63%), faithful (61%), and coherent (99%). In this context, neutral implies the same level of informativeness/simplicity between the two texts, so we report annotations equal to or better than neutral as positive. We observe that the alignment algorithm employed for simplification can lead to decreased informativeness and faithfulness; to mitigate the impact of such misalignment, we utilize the PLABA dataset for auxiliary diagnostics because it contains sentence-level alignments." }, { "figure_ref": [], "heading": "Existing Metrics", "publication_ref": [], "table_ref": [], "text": "Our analysis spans 8 established evaluation metrics, including the 5 most commonly reported in ACL'22 summarization/generation papers (empirical results in App. D). We also assess 5 lexical features associated with text simplification ( §5.2) and LLM-based evaluations ( §5.3). Details for all metrics are available in App. E." }, { "figure_ref": [], "heading": "Existing automated evaluation metrics", "publication_ref": [ "b36", "b47", "b5", "b65", "b66", "b41", "b12" ], "table_ref": [], "text": "Overlap-based metrics measure n-gram overlap. We report ROUGE (average of ROUGE-1, ROUGE-2, and ROUGE-L) (Lin, 2004), BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), and SARI (Xu et al., 2016). Model-based metrics use pretrained models to evaluate text quality. We adopt GPT-PPL, BERTScore (Zhang et al., 2019), and LENS (Maddela et al., 2022). QA-based metrics capture content quality using a question-answering approach. We report QAEval (Deutsch et al., 2021) scores here." }, { "figure_ref": [], "heading": "Lexical features", "publication_ref": [ "b30", "b35", "b32", "b30", "b43" ], "table_ref": [], "text": "We also assess lexical features that have been shown to be associated with text simplicity:\n• Length: Shorter sentences are easier to understand (Kauchak et al., 2017). We report both sentence length and paragraph length. • Familiarity: Simple text contains more common words (Leroy et al., 2018). We compute the percentage of text that is made up of the 1,000 most common English words.6 • Specificity: Specificity quantifies the level of detail in the text. We use Speciteller (Ko et al., 2019) to compute the domain agnostic specificity of terms in the paragraph. • Phrase Transitions: Conjunctions (e.g., therefore) are important for flow and can assist with comprehension (Kauchak et al., 2017). We report the number of conjunctions. • Function Words: Simple text contains more verbs and fewer nouns (Mukherjee et al., 2017).\nWe report the number of verbs, nouns, adjectives, adverbs, and numbers." }, { "figure_ref": [], "heading": "LLM prompt-based evaluations", "publication_ref": [ "b16", "b40", "b16" ], "table_ref": [], "text": "Prompting LLMs for text generation evaluation has been explored in recent work (Gao et al., 2023;Luo et al., 2023). We adopt the prompt template from Gao et al. 
(2023) to have GPT-3 (text-davinci-003) evaluate each hypothesis on four criteriainformativeness, simplification, coherence, and faithfulness-and provide an overall quality score. All scores range from 0 (worst) to 100 (best). We supply definitions for each criterion in the prompt. We evaluate under two settings: providing only the source abstract (reference-free) and providing both source and target (reference-provided). Model configurations and prompts are available in App. F." }, { "figure_ref": [], "heading": "Novel Metric: POMME", "publication_ref": [ "b67", "b29", "b62", "b23", "b37", "b27", "b54" ], "table_ref": [ "tab_4", "tab_5" ], "text": "We introduce a novel, lightweight metric (POMME) to assess text simplification by leveraging pretrained LMs. LMs like GPT-2 have been used to assess readability through perplexity scores (Zhao et al., 2022;Kanthara et al., 2022), but these measures exhibit considerable sensitivity to text length (Wang et al., 2022), which is undesirable for PLS evaluation. Our own investigation corroborates this, showing divergent raw perplexity scores for different simplification datasets (Tables 3;4). POMME addresses this issue by employing the difference in perplexity scores from an in-domain and out-of-domain LM, leveraging the inherent domain shift from scientific to plain language in PLS and minimizing sensitivity to text length. The perplexity scores from these LMs are normalized relative to a reference dataset of complex and plain language texts, which addresses differences in magnitude when comparing perplexity scores from models with distinct vocabulary sizes. Specifically, POMME is computed by taking the difference of perplexity Z-scores rather than using raw values. POMME is computed as:\nZ(x) = log(x) -µ ref σ ref POMME = Z (PPL id ) -Z (PPL ood )\nwhere µ is the mean and σ the standard deviation of the perplexity of texts in the reference dataset (we use CELLS). We use BioMedLM (Bolton et al.) as our in-domain (\"scientific\") LM and T5 (Raffel et al., 2020) as our out-of-domain (\"plain\"). BioMedLM was trained exclusively on PubMed abstracts (matching the domain of our source texts) while T5 was trained primarily on general-domain data like web text and Wikipedia (more closely matching our target texts).\nThe core idea is that scientific LMs should assign lower perplexity scores to scientific texts than general English LMs, with the opposite holding true for plain language (Harris et al., 2012). Similar logic has been used successfully for controlled text generation (Liu et al., 2021;August et al., 2022a). POMME, by quantifying a text's perplexity within a perplexity score distribution from a reference dataset, guarantees the compatibility of POMME scores across varied datasets. An advantage of POMME is its model-agnosticism, enabling any two models to serve as in-and out-of-domain LMs. Thus, POMME could be adapted to evaluate text simplification in other fields such as law (Jain et al., 2021) or finance (Salo et al., 2016). In this work, we limit POMME evaluation to biomedical text, given the available pretrained models and paired PLS datasets in this domain." }, { "figure_ref": [ "fig_5", "fig_3" ], "heading": "Analysis Results", "publication_ref": [ "b67", "b10", "b63", "b25", "b22", "b31", "b35", "b30", "b43", "b30", "b40" ], "table_ref": [ "tab_5", "tab_11", "tab_4", "tab_5", "tab_4" ], "text": "Metric responses to perturbations are presented in Figure 2. All score trends are consistent across two random seeds. 
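To make the POMME definition above concrete, a minimal sketch of its computation is shown below. The helper names are our own; the reference statistics (a mean and standard deviation per model) are assumed to be precomputed over the CELLS reference set, and the Z-normalization is assumed to operate on log-perplexities as in the definition. In practice, the raw perplexities would come from the in-domain (BioMedLM) and out-of-domain (T5) language models described above.

```python
import math

def z_score(log_ppl, mu_ref, sigma_ref):
    """Normalize a log-perplexity against reference-set statistics."""
    return (log_ppl - mu_ref) / sigma_ref

def pomme(ppl_in_domain, ppl_out_of_domain, ref_stats):
    """POMME = Z(PPL_id) - Z(PPL_ood); higher values indicate simpler text.

    ref_stats maps each model to the (mean, std) of log-perplexities of the
    reference corpus (CELLS in this work) under that model.
    """
    z_id = z_score(math.log(ppl_in_domain), *ref_stats["id"])
    z_ood = z_score(math.log(ppl_out_of_domain), *ref_stats["ood"])
    return z_id - z_ood

# Hypothetical reference statistics and perplexities, for illustration only.
ref_stats = {"id": (3.8, 0.6), "ood": (4.4, 0.7)}
print(pomme(ppl_in_domain=95.0, ppl_out_of_domain=40.0, ref_stats=ref_stats))
```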
For contextualizing metric performance in APPLS, we survey reported metric changes in ACL'22 papers on text generation and summarization (full results in App. D). The median reported improvements are: ROUGE (+0.89), BLEU (+0.69), METEOR (+0.50), SARI (+1.71), BERTScore (+0.55), and PPL (-2.06). We summarize our main findings below.\nCurrent metrics exhibit shortcomings in evaluating simplicity. Metrics that are sensitive to simplification should consistently distinguish between more and less simplified text. As shown in Figure 2, the only metric that exhibits appropriate sensitivity to simplification perturbations is GPT-PPL (decreasing as more perturbations are introduced; lower PPL is better). However, in follow-up evaluations with other datasets (discussed below and shown in Table 4), we see that GPT-PPL has undesirable sensitivity to text length, as found in prior work (Zhao et al., 2022). ROUGE, BLEU, METEOR, SARI, BERTScore, and QAEval decrease in response to the simplification perturbation. While their response is consistent, we posit this is due to sensitivity to n-gram overlap rather than text simplicity. To confirm, we report metric changes when reversing sources and targets (perturbing simplified texts to increase complexity). Metrics also decrease in this case (App. Figure 14), suggesting that they are not sensitive to text simplicity. LENS and LLM prompt-based evaluations (App. Table 6) are erratic or insensitive to simplification perturbations.\nPOMME is sensitive to simplification perturbations. In Figure 2, we observe that POMME increases with simplification perturbations. We further validate POMME in PLABA (perturbation results in App. Figure 11) and two other text simplification datasets: MSD (Cao et al., 2020) and WikiSimple (Woodsend and Lapata, 2011). Using POMME to compare source and target texts from these datasets (Table 3), we observe consistently higher POMME for the source texts compared to the target texts (∆ is positive). We also present single-model PPL scores as computed by the inand out-of-domain LMs used to compute POMME, and find that inconsistency is evident. For instance, BioMedLM-PPL∆ for MSD is 0.0 and T5-PPL∆ for PLABA is 0.02, suggesting incorrectly that the source texts (scientific abstracts) are simpler or as simple as the targets (plain language summaries).\nTo further validate the sensitivity of POMME to simplification, we show results over CELLS and PLABA by % perturbed in Table 4. Especially for PLABA, GPT2-PPL is insensitive to perturbations, potentially due to similar text lengths between the scientific abstracts and plain text in PLABA. Conversely, POMME consistently reacts to perturba- (Holm, 1979).\ntions, producing higher scores for more extensively altered text. To ascertain that POMME responds to text simplification itself and not merely to the characteristics of GPT-simplified text, we conduct further tests by generating simplified text using Llama2 and Claude. These results, illustrated in App. Figure 13, reveal that POMME trends across these three models exhibit similar patterns.\nUsing a reference dataset to normalize perplexity enables POMME to be used for cross-dataset comparisons of text simplicity. In Table 3, we observe that based on POMME, the source and target texts of both MSD and WikiSimple are much simpler than those of CELLS. 
This aligns with their content: MSD contains consumer-health information, usually simpler than plain language summaries of research papers, and WikiSimple is sourced from English and Simple Wikipedias, both of which feature language suited for the general public. This supports the use of POMME to compare text simplicity across corpora. Domain adaptation can be further enabled by selecting a domainspecific reference dataset and domain-adapted LMs (Gururangan et al., 2020) Lexical features are useful measures of text simplicity. Figure 3 illustrates the response of lexical features to degrees of text simplification in CELLS, confirming trends observed in previous studies (Kauchak et al., 2014;Leroy et al., 2018;Kauchak et al., 2017;Mukherjee et al., 2017). As simplification increases, sentence length decreases; common words and verbs increase; and nouns, adjectives, and term specificity decrease. Although prior work emphasizes the importance of conjunctions for comprehension (Kauchak et al., 2017), our study reveals a reduction rather than increase in conjunctions as texts become simpler. Overall, these trends demonstrate that lexical features are valuable indicators for text simplification. Results on PLABA are similar, with an inverse trend for paragraph length (App. Figure 12).\nLLM prompt-based evaluations do not distinguish between PLS criteria. Prompt-based evaluations are insensitive to simplification perturbations, and in most cases, do not distinguish between the four criteria when scoring summaries (App. Figure 10). Despite findings from Luo et al. (2023) showing agreement between ChatGPT scores and human ratings, our results suggest that the capacity of LLMs for generative text evaluation warrants further examination. We also note that the referencefree and reference-provided settings yield very different scores along all four criteria, indicating that scores produced with this method are difficult to compare across settings and datasets. Detailed results are provided in App. F." }, { "figure_ref": [], "heading": "Discussion & Conclusion", "publication_ref": [], "table_ref": [], "text": "Recent advances point to the possibility of automated plain language summarization (PLS); however, the multifaceted nature of PLS makes evaluation challenging. We introduce the first-to our knowledge-meta-evaluation testbed, APPLS, for evaluating PLS metrics. In APPLS, we apply controlled text perturbations to existing PLS datasets based on several criteria (informativeness, simplification, coherence, and faithfulness). Using AP-PLS, we find that while some metrics reasonably capture informativeness, faithfulness, and coherence, they face challenges assessing simplification.\nMost metrics decrease, contrary to expectation, when computed for more simplified text. GPT-2 perplexity, the sole metric sensitive to simplification, exhibits inconsistencies across datasets.\nIn response to these shortcomings, we propose POMME. By using normalized perplexity differences between in and out-of-domain language models, POMME maintains the desirable qualities of language model perplexity while being robust and comparable across datasets. It is worth noting, though, that while POMME is sensitive to simplification, it is less sensitive to other PLS criteria. In other words, no single metric is capable of capturing all desired PLS criteria and a holistic evaluation will necessitate a combination of metrics.\nThe primary advantage of our testbed and metric is their extensibility. 
Using the perturbation pipeline, APPLS can transform any PLS dataset into a granular meta-evaluation testbed. Similarly, POMME can be easily adapted to other domains, requiring only a domain-specific dataset and two language models representing the source and target domains. Our testbed and metric lay the groundwork for further advancements in automated PLS, aiming to foster more impactful, accessible, and equitable scientific communication." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b33", "b18" ], "table_ref": [], "text": "Our perturbations use synthetic data to simulate real-world textual phenomena seen in PLS. Although our approach is informed by theory and provides valuable insights into metric behavior, further exploration of more sophisticated methods to simulate changes in these criteria is warranted. This is especially true for aligning sentences between scientific abstracts and plain language summaries, as sentence-level alignment for scientific summaries is still an open problem (Krishna et al., 2023).\nWe also acknowledge that text quality may deteriorate with synthetic perturbations in a way that affects multiple PLS criteria. However, by using synthetic data, we are benefiting from the ability to control our perturbations and extend our testbed creation framework to any dataset. It is infeasible to find naturally occurring text with the same controlled levels of each perturbation, with minimal changes to other aspects. Our aim is not to produce perfect outputs, but rather to establish a robust baseline that enables controlled text perturbations, assisting in evaluating shifts in metric scores. The results of our analysis complement qualitative examinations of model output conducted in other work, which further suggests that automated text generation evaluation metrics may be limited in their ability to assess generation performance of post-GPT-3 LLMs (Goyal et al., 2022).\nWe have also focused our analysis on commonly used metrics reported in prior work on simplification, summarization, and generation. Investigating the performance of metrics not included in this work, as well as the generalizability of our methods to meta-evaluation for other generative NLP tasks, is a future goal. minimizing LM perplexity. This results in a fluent sentence that contradicts the original one.\nNegate sentences We negate sentences by identifying verbs and adding negation terms (e.g., not) preceding them. The goal of this perturbation is to create sentences similar to the original but communicating the opposite information." }, { "figure_ref": [], "heading": "B Round-trip translation for oracle extractive hypothesis", "publication_ref": [], "table_ref": [], "text": "We use round-trip translation to introduce lexical variation into our oracle extractive summaries. This is important when computing metrics such as SARI, which exhibit degenerate behavior when the hypothesis is an extractive subset of the source. We examine two languages for round-trip translation: German and Russian. By employing the BLEU score as a performance metric for the round-trip generated text relative to the original source, we find that the English-German-English (en-de-en) Table 5: Counts of human evaluation ratings on each matched sentence for each criteria. For round trip translation and GPT simplification, there are a total of 400 ratings (2 annotators rating 200 pairs each). 
Overall, we see that round trip translation maintains strong faithfulness to the original, does not remove important information, and remains equally simple and coherent (shown by a majority of neutral ratings for the simplification and coherence criteria). For GPT simplification, we see that the simplification perturbation leads to substantially more simple text, while also maintaining faithfulness and informativeness. translation sequence yields superior BLEU scores (Figure 4), and therefore, select the en-de-en sequence to produce the oracle extractive hypothesis for our testbed.\nTo scrutinize the introduced variation through this extractive and round-trip translation pipeline, we evaluate the BLEU score. As depicted in Figure 5, the BLEU score for the oracle extractive hypothesis is lower than that of the oracle extractive summary. This suggests the successful introduction of text variations. Augmented by human evaluation results in Table 5, with 152 out of 198 raters indicat-ing comparable simplification levels between the oracle extractive hypothesis and its extractive counterparts, we conclude that our extractive and roundtrip translation approach successfully introduces lexical variation in our oracle extractive summaries without altering their simplicity level." }, { "figure_ref": [ "fig_1" ], "heading": "C Details of human evaluation", "publication_ref": [], "table_ref": [], "text": "To validate the quality of oracle extractive hypotheses and GPT-simplified summaries, we randomly select 100 summary pairs from each corpus for human evaluation. Each pair in the oracle extractive hypotheses consists of an oracle extractive sentence and its respective en-de-en round-trip-translation sentence. Similarly, each pair in the GPT-simplified summaries contains a hypothesis chunk along with its corresponding GPT-simplified summary chunk.\nEach pair is reviewed by two independent annotators. Annotators were hired through UpWork and have Bachelors and Doctorate degrees in the biological sciences. In the evaluation, the text pairs are labeled as Text A and Text B, without any indication that either text is generated. The annotators are first asked to assess whether the content of Text A matches the content of Text B, where a match is defined as containing the same relation tuples.\nIf the texts match, the annotators further evaluate Text B in relation to Text A, assessing whether Text B encapsulates key points (informativeness), is more comprehensible (simplification), maintains factual integrity (faithfulness), and exhibits a wellstructured layout (coherence). All facets are assessed using a 1-5 Likert scale (1-strongly disagree, 5-strongly agree). Representative questions can be found in Figure 6. This research activity is exempt from institutional IRB review." }, { "figure_ref": [], "heading": "D Empirical Study of Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "Reported in ACL 2022 Publications\nOur study undertakes a comprehensive analysis of scores reported in the long papers of ACL 2022 to identify the most prevalently reported metrics in summarization and simplification tasks. We primarily concentrate on tasks related to generation, summarization, and simplification. Our inclusion criteria are: 1) long papers with 'generat,' 'summar,' or 'simpl' in the title; and 2) papers that report scores for both the current model and at least one baseline model in the main text. 
We exclude scores from ablation studies.\nOf the 601 long papers accepted to ACL 2022, 109 satisfy our inclusion criteria, which we categorize into 31 summarization and 78 generation papers, with no qualified papers related to simplification tasks. Considering the significance of simplification in PLS, we expanded our search to all ACL 2022 papers, including long, short, system demonstration, and findings papers. This led to the identification of 2 out of 22 papers with 'simpl' in the title that reported SARI scores. As illustrated in Figure 7, the five most frequently reported automated evaluation metrics are ROUGE, BLEU, GPT-PPL, METEOR, and BERTScore. This investigation provides insight into the current adoption of evaluation metrics in natural language generation, summarization, and simplification tasks. We observe that a majority of papers employ the same metrics across these tasks, and the reported improvements are often relatively small compared to the overall ranges for each measure. We also underscore the difficulty of interpreting changes in some of these metrics, especially modelbased metrics, which lack grounding to lexical differences in text such as n-gram overlap.\nBy presenting the reported score differences from ACL papers, we hope to contextualize the metric changes observed through testing in our meta-evaluation testbed. Median reported improvements for the most commonly reported metrics and SARI are: ROUGE (+0.89), BLEU (+0.69), PPL (-2.06), METEOR (+0.50), BERTScore (+0.55), and SARI (+1.71), as shown in Figure 8. We report the median of BERTScore values and deltas as re-ported in these publications, without considering the usage of different models or settings." }, { "figure_ref": [], "heading": "E Details on existing automated evaluation metrics", "publication_ref": [ "b36", "b47", "b66", "b41", "b12", "b25" ], "table_ref": [], "text": "Overlap-based metrics measure n-gram overlaps, and are popular due to their ease of use.\n• ROUGE 9 (Lin, 2004) measures n-gram overlap between generated and reference summaries, focusing on recall. We report the average of ROUGE-1, ROUGE-2, and ROUGE-L. • BLEU 9 (Papineni et al., 2002) • GPT-PPL, 11 usually computed with GPT-2, measures fluency and coherence by calculating the average log probability assigned to each token by the GPT model, with lower scores indicating higher fluency and coherence. • BERTScore 9 (Zhang et al., 2019) quantifies the similarity between hypothesis and targets using contextualized embeddings from the BERT model, computing the F1-score between embeddings to capture semantic similarity beyond ngram matching. • LENS (Maddela et al., 2022) employs an adaptive ranking loss to focus on targets closer to the system output in edit operations (e.g., splitting, paraphrasing, deletion). QA-based metrics capture content quality using a question-answering approach.\n• QAEval (Deutsch et al., 2021) generates question-answer pairs from the target text, then uses a learned QA model to answer these questions using the generated text. The score is computed as the proportion of questions answered correctly. We report QAEval LERC scores. (Holm, 1979)." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "F LLM Prompt-Based Evaluation", "publication_ref": [], "table_ref": [], "text": "We use GPT-3 for LLM evaluation. The generation process is configured with a temperature parameter of 0, a maximum length of 100, and a penalty value of 0. For each input, the top-ranked text is selected as the GPT-simplified output. 
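A minimal sketch of issuing one such evaluation call is shown below; it assumes the legacy (pre-1.0) OpenAI Python SDK, which exposed text-davinci-003 through the completions endpoint, and the abbreviated prompt constant is a stand-in for the full templates shown in Figure 9.

```python
import openai  # assumes openai<1.0 and OPENAI_API_KEY set in the environment

EVAL_PROMPT = (
    "Imagine you are a human annotator now. ...\n"  # abbreviated; see Figure 9
    "Scientific abstract: {abstract}\n"
    "Generated plain language summary: {summary}\n"
)

def llm_evaluate(abstract: str, summary: str) -> str:
    """Score one hypothesis with the reference-free prompt, using the settings above."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=EVAL_PROMPT.format(abstract=abstract, summary=summary),
        temperature=0,
        max_tokens=100,
        frequency_penalty=0,
        presence_penalty=0,
    )
    return response.choices[0].text
```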
Example prompts used for evaluation are provided in Figure 9.\nFigure 10 shows the results for GPT-3 LLM evaluation, for both the reference-free and referenceprovided settings. Though the evaluation is sensitive to some perturbations (deletion, addition, negation), it is insensitive to other perturbations (coherence, swaps) and sensitive to simplification in the inverse direction as would be expected (simplification score drops when more source text is replaced by simplified text). Additionally, the LLM evaluation is generally unable to distinguish between the four criteria, as most perturbations lead to the same score trends for simplification, coherence, faithfulness, and to a lesser degree informativeness. These patterns are similar to those observed in the overall score, indicating that the LLM evaluation as performed is not useful for providing facet-based judgments.\nWe also observe that in the reference-provided setting, scores for some perturbations are much higher (e.g., deletion) while others are much lower (e.g., add out-of-domain) than in the reference-free setting. The lack of a reference point or a way to normalize these scores makes it impossible to compare them across settings or datasets." }, { "figure_ref": [], "heading": "a. Reference Free Prompt:", "publication_ref": [], "table_ref": [], "text": "Imagine you are a human annotator now. You will evaluate the quality of generated plain lanugage summary written for a scientific literature abstract. Please follow these steps: 1. Carefully read the scientific literature abstract, and be aware of the information it contains. 2. Read the proposed generated plain langauge summary. 3. Compared to the scientific abstract, rate the summary on four dimensions: informativeness, simplification, coherence, and faithfulness. Assign a score for each aspect and provide an overall score. You should rate on a scale from 0 (worst) to 100 (best). 4. You do not need to explain the reason. Only provide the scores." }, { "figure_ref": [], "heading": "Definitions are as follows:", "publication_ref": [], "table_ref": [], "text": "-Informativeness: measures the extent to which a plain language summary encapsulates essential elements such as methodologies, primary findings, and conclusions from the original scientific text. An informative summary efficiently conveys the central message of the source material, avoiding the exclusion of crucial details or the introduction of hallucinations (i.e., information present in the summary but absent in the scientific text), both of which could impair reader comprehension.\n-Simplification: encompasses the rendering of information into a form that non-expert audiences can readily interpret and understand. This criterion prioritizes the use of simple vocabulary, casual language, and concise sentences that minimize excessive jargon and technical terminology unfamiliar to a lay audience.\n-Coherence: pertains to the logical arrangement of a plain language summary. A coherent summary guarantees an unambiguous and steady progression of ideas, offering information in a well-ordered fashion that facilitates ease of comprehension for the reader. We conjecture that the original sentence order reflects optimal coherence. -Faithfulness: denotes the extent to which the plain language summary aligns factually with the source scientific text, in terms of its findings, methods, and claims. 
A faithful summary should not substitute information or introduce errors, misconceptions, and inaccuracies, which can misguide the reader or misrepresent the original author's intent. Faithfulness emphasizes the factual alignment of the summary with the source text, while informativeness gauges the completeness and efficiency of the summary in conveying key elements.\nThe scientific abstract and the generated plain language summary are given below: Scientific abstract: {} Generated plain language sumamry:{} b. Reference Provided Promt: Imagine you are a human annotator now. You will evaluate the quality of generated summary written for a scientific literature abstract. Please follow these steps: 1. Carefully read the scientific abstract and plain language summary written by human, and be aware of the information it contains. 2. Read the proposed genereated summary. 3. Compared to the scientific abstract and human-written plain language summry, rate the generated summary on four dimensions: informativeness, simplification, coherence, and faithfulness. Assign a score for each aspect and provide an overall score. You should rate on a scale from 0 (worst) to 100 (best). 4. You do not need to explain the reason. Only provide the scores." }, { "figure_ref": [], "heading": "Definitions are as follows:", "publication_ref": [], "table_ref": [], "text": "-Informativeness: measures the extent to which a plain language summary encapsulates essential elements such as methodologies, primary findings, and conclusions from the original scientific text. An informative summary efficiently conveys the central message of the source material, avoiding the exclusion of crucial details or the introduction of hallucinations (i.e., information present in the summary but absent in the scientific text), both of which could impair reader comprehension.\n-Simplification: encompasses the rendering of information into a form that non-expert audiences can readily interpret and understand. This criterion prioritizes the use of simple vocabulary, casual language, and concise sentences that minimize excessive jargon and technical terminology unfamiliar to a lay audience.\n-Coherence: pertains to the logical arrangement of a plain language summary. A coherent summary guarantees an unambiguous and steady progression of ideas, offering information in a well-ordered fashion that facilitates ease of comprehension for the reader. We conjecture that the original sentence order reflects optimal coherence. -Faithfulness: denotes the extent to which the plain language summary aligns factually with the source scientific text, in terms of its findings, methods, and claims. A faithful summary should not substitute information or introduce errors, misconceptions, and inaccuracies, which can misguide the reader or misrepresent the original author's intent. Faithfulness emphasizes the factual alignment of the summary with the source text, while informativeness gauges the completeness and efficiency of the summary in conveying key elements.\nThe scientific abstract, plain language summary, and generated summary are given below: Scientific abstract: {} Plain language summary: {} Generated summary: {} " }, { "figure_ref": [], "heading": "G Additional perturbation results for PLABA", "publication_ref": [ "b2", "b60" ], "table_ref": [], "text": "We present full perturbation results on PLABA (Attal et al., 2023) in Figure 11. The trends for many perturbations are in the same direction as in CELLS. 
While many metrics now show a desirable reversed trend to simplification (increasing), we point out that this is inconsistent performance relative to CELLS and is due to the high n-gram overlap between the hypothesis and targets in this case (we perturb by replacing source sentences with round-trip translated target sentences to form hypotheses, which only introduces minor lexical variation). Adding text, especially definitions, dramatically decreases many of these metrics due to the similar lengths of source and target texts in PLABA, again pointing to the n-gram and length sensitivities of most of these metrics.\nThe impact of simplification perturbations on lexical features in the PLABA dataset is shown in Figure 12. Most trends are similar to CELLS, though paragraph length increases with higher perturbation percentage. In PLABA's target construction scheme, the target simplified texts are slightly longer than the source abstracts.\nH POMME score for Llama-and Claude-simplified text\nIn addition to using GPT-3 (Brown et al., 2020) to produced simplified text for the simplification perturbation, we also test two other LLMs: Llama 2 (Touvron et al., 2023) and Claude. In Figure 13, we show that POMME score changes when perturbing using the simplified text generated by all three models. Similar score changes are observed for all three models, demonstrating that POMME is consistently responsive to text simplicity and not specifically to the characteristics of GPT-simplified text. (complex) or target text (simple) as reference for simplification perturbations on the CELLS dataset. A metric sensitive to text simplicity should move in opposing directions under these two settings. However, metrics decrease uniformly in both settings, suggesting that they are not sensitive to text simplicity." }, { "figure_ref": [ "fig_5" ], "heading": "I Reversing source and target texts for simplification perturbation", "publication_ref": [], "table_ref": [], "text": "To illustrate that existing metrics are not sensitive to text simplicity but rather to length and n-gram overlap, we present metric scores computed when swapping source and target for simplification perturbations (Figure 14). When target text is used as reference, we start with the oracle extractive hypothesis and increase perturbation percentage by swapping in simpler text, going from more complex to more simple text. When source text is used as reference, we reverse the original source and target, starting with simple text and swapping in the oracle extractive hypothesis, thereby moving from more simple to more complex text. A metric sensitive to text simplification should move in opposite directions in these two settings as perturbation percentage increases. However, these metric scores uniformly decrease under both settings, regardless of the reference, demonstrating that these metrics are not responsive to simplification but more so to text length and n-gram overlap. We do not report performance of BERTScore and QAEval under this setting due to the higher cost of computing these model based metrics." }, { "figure_ref": [], "heading": "A Criteria-Specific Perturbation Design", "publication_ref": [ "b68", "b20", "b56", "b19" ], "table_ref": [], "text": "A.1 Informativeness Delete sentences We simulate the omission of information by ranking sentences based on similarity to others (assuming greater similarity indicates more important content) (Zhong et al., 2020) and removing sentences starting from the most to least similar. 
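A rough sketch of this deletion perturbation is given below; the choice of sentence-transformers embeddings, the specific encoder checkpoint, and mean pairwise cosine similarity as the centrality score are illustrative assumptions rather than the exact implementation.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # checkpoint choice is an assumption

def delete_most_central(sentences, fraction):
    """Drop the most 'central' sentences first, keeping at least one sentence.

    Centrality is approximated by a sentence's mean cosine similarity to all
    other sentences; higher similarity is treated as more important content.
    """
    emb = encoder.encode(sentences, normalize_embeddings=True)
    sim = emb @ emb.T  # cosine similarities, since embeddings are unit-normalized
    centrality = (sim.sum(axis=1) - 1.0) / max(len(sentences) - 1, 1)
    n_delete = min(int(round(fraction * len(sentences))), len(sentences) - 1)
    to_drop = set(np.argsort(-centrality)[:n_delete])  # most similar sentences first
    return [s for i, s in enumerate(sentences) if i not in to_drop]
```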
Add sentences We simulate the inclusion of two forms of unrelated information by adding sentences from out-of-domain (i.e., unrelated dataset) and in-domain (i.e., within the same domain but on a different topic). Add definitions Background explanation is fundamental to PLS and involves adding external content like definitions or examples (Guo et al., 2022;Srikanth and Li, 2020). To simulate background explanations, we add definitions 7 of keywords identified by KeyBERT (Grootendorst, 2020)." }, { "figure_ref": [], "heading": "A.2 Simplification", "publication_ref": [ "b38" ], "table_ref": [], "text": "Replace sentences Taking advantage of the LLMs' ability to simplify text (Lu et al., 2023), we replace sentences in the original text with LLMsimplified versions. We use model to generate simplified summaries using the prompt \"explain the text in layman's terms to a primary school student.\" We use GPT-3 (text-davinci-003), Llama 2 (llama-2-13b-chat), and Claude (claude-instant-v1.0) 8 for text simplification. The maximum length of generation is set to 200." }, { "figure_ref": [], "heading": "A.3 Coherence Reorder sentences", "publication_ref": [ "b52" ], "table_ref": [], "text": "We simulate changes in text coherence by randomly shuffling the order of sentences, as suggested by Sai et al. (2021)." }, { "figure_ref": [], "heading": "A.4 Faithfulness Number swap", "publication_ref": [ "b64" ], "table_ref": [], "text": "We randomly add a number from 1 to 5 to the original numerical value in the text.\nVerb swap An appropriate metric should ignore verb synonyms but be sensitive to antonyms. We introduce two perturbations by substituting verbs with either synonyms or antonyms. Entity swap We replace entities using the KBIN method (Wright et al., 2022), which links entity spans to concepts in the Unified Medical Language System (UMLS) and replaces them with different entities while maximizing NLI contradiction and" } ]
While there has been significant development of models for Plain Language Summarization (PLS), evaluation remains a challenge. PLS lacks a dedicated assessment metric, and the suitability of text generation evaluation metrics is unclear due to the unique transformations involved (e.g., adding background explanations, removing specialized terminology). To address these concerns, our study presents a granular meta-evaluation testbed, APPLS, designed to evaluate metrics for PLS. Informed by previous work, we define a set of perturbations along four criteria that a PLS metric should capture: informativeness, simplification, coherence, and faithfulness. An analysis of metrics using our testbed reveals that current metrics fail to capture simplification consistently. In response, we introduce POMME, a new metric designed to assess text simplification in PLS; the metric is calculated as the normalized perplexity difference between an in-domain and an out-of-domain language model. We demonstrate POMME's correlation with fine-grained variations in simplification and validate its sensitivity across four text simplification datasets. This work contributes the first meta-evaluation testbed for PLS and a comprehensive evaluation of existing metrics. The APPLS testbed and POMME are available at https://github.com/LinguisticAnomalies/APPLS.
APPLS: Evaluating Evaluation Metrics for Plain Language Summarization
[ { "figure_caption": "Figure 4 :Figure 5 :45Figure4: BLEU scores of round-trip translation for English-German-English (en-de-en) and English-Russian-English (en-ru-en) in CELLS oracle extractive hypotheses.", "figure_data": "", "figure_id": "fig_0", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: An example human evaluation task for assessing GPT-simplified summary quality.", "figure_data": "", "figure_id": "fig_1", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Prompts used for LLM evaluation. (a): Reference-free; (b) Reference-provided.", "figure_data": "", "figure_id": "fig_2", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Prompt-based evaluation scores for four criteria -informativeness, simplification, coherence, and faithfulness -along with an overall score. (a): Reference free; (b) Reference provided. Notably, prompt-based scores exhibit a reverse correlation with simplification perturbation (i.e., scores diminish as text simplifies) and demonstrate insensitivity towards coherence and faithfulness perturbations, except in instances of sentence negation.", "figure_data": "", "figure_id": "fig_3", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :Figure 12 :Figure 13 :111213Figure 11: Average scores of existing metrics and newly developed POMME score for perturbed texts in the PLABA dataset. Scores are averaged in 10 bins by perturbation percentage. Markers denote perturbations associated with our four defined criteria.", "figure_data": "", "figure_id": "fig_4", "figure_label": "111213", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure14: Average scores of ROUGE, BLEU, METEOR, and SARI scores calculated using either the source text (complex) or target text (simple) as reference for simplification perturbations on the CELLS dataset. A metric sensitive to text simplicity should move in opposing directions under these two settings. However, metrics decrease uniformly in both settings, suggesting that they are not sensitive to text simplicity.", "figure_data": "", "figure_id": "fig_5", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Original text Worldwide, coronavirus 2 (SARS-CoV-2), a severe acute respiratory syndrome, has infected more than 59 million people and killed more than one of them. The first step is an accurate assessment of the population prevalence of past infections…(Kline et al., 2021) CoV-2), a severe acute respiratory syndrome, has infected more than 59 million people and killed more than one of them. The first step is an accurate assessment of the population prevalence of past infections… people and killed more than one of them. Coronaviruses are species in the genera of virus belonging to the subfamily Coronavirinae in the family Coronaviridae. Coronaviruses are enveloped viruses with a positive-sense RNA genome and with a nucleocapsid of helical symmetry.The genomic size of coronaviruses ranges from approximately 26 to 32 kilobases, extraordinarily large for an RNA virus. … CoV-2 is a virus that has infected over 59 million people globally and killed more than 1.39 million. Scientists are trying to learn more about the virus in order to design interventions to slow and stop its spread. 
One of the first steps is understanding how many people have been infected in the past, which requires accurate population prevalence studies… Example perturbations for criteria in APPLS. Original text comes from the CELLS(Guo et al., 2022).", "figure_data": "Notations: removals/ additions/ modificationsCriterionPerturbationSimulated real-Perturbed textworld situationInformativenessDelete sentences Salient information missing Worldwide, coronavirus 2 (SARS-Add out-of-domain sentences Out-of-domain hallucination In this paper we address the problem of aggregating the outputs of classi ers solving different nlp tasks. Worldwide, coronavirus 2 (SARS-CoV-2), a severe acute respiratory syndrome, has infected more than 59 million people and killed more than one of them… Add in-domain sentences In-domain hallucination Worldwide, coronavirus 2 (SARS-CoV-2), a severe acute respiratory syndrome, has infected more than 59 million people and killed more than one of them. This review synthesised the latest evidence on the reduction of antipsychotic doses for stable individuals with schizophrenia…Add definitionsBackgroundWorldwide, coronavirus 2 (SARS-CoV-2), a severe acute respiratory syndrome, has infected moreexplanation than 59 million Simplification Replace sentences Paraphrasing with simple terms SARS-Coherence Reorder Poor writing flow The first step is an accurate assessment of the population prevalence of past infections. Worldwide,sentencescoronavirus 2 (SARS-CoV-2), a severe acute respiratory syndrome, has infected more than 59 millionpeople and killed more than one of them…Number swapHuman errorsWorldwide, coronavirus 2 (SARS-CoV-2), a severe acute respiratory syndrome, has infected morethan 64 million people and killed more than one of them…FaithfulnessEntity swap Synonym verb swap Antonym verbHuman errors Human errors Human errorsWorldwide, canine adenovirus (CaV-2), a severe acute respiratory syndrome, has infected more than 59 million people and killed more than one of them… Worldwide, coronavirus 2 (SARS-CoV-2), a severe acute respiratory syndrome, has infected more than 59 million people and stamped out more than one of them… Worldwide, coronavirus 2 (SARS-CoV-2), a severe acute respiratory syndrome, infected more than 59swapmillion people and saved more than one of them…NegateHuman errorsWorldwide, coronavirus 2 (SARS-CoV-2), a severe acute respiratory syndrome, hasn't infected morethan 59 million people and killed more than one of them.", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "datasets.", "figure_data": "DatasetVersionWordSentenceCELLSAbstract (src.)283±132 11±6(n=6,311) PLS (tgt.)178±747±3Oracle Hypothesis 134±585±2GPT-simplified98±574±3PLABAAbstract (src.)240±9510±4(n=750)Adaptation (tgt.)244±9512±5", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Diagnostic datasets statistics (mean±std).", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Figure2: Average scores of existing metrics and our POMME score for perturbed texts. Scores are averaged in 10 bins by perturbation percentage. Markers denote the defined criteria associated with that perturbation. GPT-PPL is the only metric exhibiting sensitivity to the simplification perturbation (i.e., PPL decreases when simplification perturbation % increases, signifying simpler text). 
Median reported improvements in ACL'22 summarization and generation papers are ROUGE (+0.89), BLEU (+0.69), METEOR (+0.50), SARI (+1.71), BERTScore (+0.55), and PPL (-2.06).", "figure_data": "BioMedLM-PPLT5-PPLPOMMEDatasetsSource Target ∆ (↑) Source Target ∆ (↓) Source Target ∆ (↑)CELLS-0.360.360.720.52-0.52-1.04-0.880.881.76PLABA-0.79-0.140.650.290.310.02-1.08-0.450.63MSD3.303.300.0-1.89-1.94-0.055.195.240.05WikiSimple1.282.471.19-1.12-3.23-2.112.405.703.30", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "BioMedLM-PPL, T5-PPL and POMME scores for four simplification datasets, comparing source (complex text) and target (simple text). A higher POMME score indicates a higher degree of text simplification. The difference, denoted ∆, is calculated by subtracting the source score from the target score. Bold indicates statistical significance in the correct direction (i.e., target is simpler than source) with Bonferroni-Holm correction for multiple hypothesis testing(Holm, 1979). The CELLS dataset functions as the reference in all POMME computations.", "figure_data": "CELLSPLABAPerturb%PPLPOMMEPPL POMME20%-44.81-5.323.06-0.3840%-17.52-0.885.010.6960%-14.14-0.294.590.8380%-11.550.273.380.76100%-15.450.421.800.68", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Delta in PPL and POMME scores at various levels of perturbation. Bold indicates statistical significance in the correct direction with Bonferroni-Holm correction for multiple hypothesis testing", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "to compute POMME.", "figure_data": "Relative Change0.50 0.25 0.00 0.25 0.50Perturbed Percentage 0.0 0.2 0.4 0.6 0.8 1.0V. N. Adj. Adv. Num. Para. len Sent. len Specificity Conj. FamilarityFigure 3: Relative change of each lexical feature withrespect to the unperturbed state (0%). Different markersrepresent lexical feature categories.provement. For informativeness, ROUGE, BLEU,BERTScore, GPT-PPL, and QAEval are sensitiveto information deletion and irrelevant additions,but decrease with the addition of backgroundexplanations through keyword definitions. For co-herence, BERTScore and LENS excel in detectingperturbations, largely due to their ability to assessstructural and contextual sentence relationships.BERTScore, GPT-PPL, and QAEval generallyperform well for faithfulness-related perturbations,although GPT-PPL and BERTScore are somewhatsensitive to synonym verb swaps, an undesirabletrait. QAEval is best at being unresponsive tosynonym verb swaps. Number swaps, however, re-main undetected by all metrics. Results in Figure 2.Metrics effectively capture informativeness,coherence, and faithfulness, with room for im-", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Overall score derived from the prompt-based evaluation for two settings: reference-free and referenceprovided. Bolded values indicate statistical significance in the correct direction with Bonferroni-Holm correction for multiple hypothesis testing", "figure_data": "", "figure_id": "tab_11", "figure_label": "6", "figure_type": "table" } ]
Yue Guo; Tal August; Gondy Leroy; Trevor Cohen; Lucy Lu Wang
[ { "authors": "Fernando Alva-Manchego; Louis Martin; Carolina Scarton; Lucia Specia", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "EASSE: Easier automatic sentence simplification evaluation", "year": "2019" }, { "authors": "Ron Artstein; Massimo Poesio", "journal": "Computational linguistics", "ref_id": "b1", "title": "Inter-coder agreement for computational linguistics", "year": "2008" }, { "authors": "Kush Attal; Brian Ondov; Dina Demner-Fushman", "journal": "Scientific Data", "ref_id": "b2", "title": "A dataset for plain language adaptation of biomedical abstracts", "year": "2023" }, { "authors": "Tal August; Katharina Reinecke; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Generating scientific definitions with controllable complexity", "year": "2022" }, { "authors": "Tal August; Lucy Lu Wang; Jonathan Bragg; Marti A Hearst; Andrew Head; Kyle Lo", "journal": "", "ref_id": "b4", "title": "Paper plain: Making medical research papers approachable to healthcare consumers with natural language processing", "year": "2022" }, { "authors": "Satanjeev Banerjee; Alon Lavie", "journal": "", "ref_id": "b5", "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments", "year": "2005" }, { "authors": "L Isabel; Margaret G Beck; Gale M Mckeown; Jane A Sinatra; Loxterman", "journal": "Reading research quarterly", "ref_id": "b6", "title": "Revising social studies text from a text-processing perspective: Evidence of improved comprehensibility", "year": "1991" }, { "authors": "Steven Bird; Robert Dale; Bonnie J Dorr; Mark Thomas Bryan R Gibson; Min-Yen Joseph; Dongwon Kan; Brett Lee; Powley; Yee Dragomir R Radev; Fan Tan", "journal": "", "ref_id": "b7", "title": "The acl anthology reference corpus: A reference dataset for bibliographic research in computational linguistics", "year": "2008" }, { "authors": "Elliot Bolton; David Hall; Michihiro Yasunaga; Tony Lee; Chris Manning; Percy Liang", "journal": "", "ref_id": "b8", "title": "", "year": "" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; T J Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeff Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b9", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Yixin Cao; Ruihao Shui; Liangming Pan; Min-Yen Kan; Zhiyuan Liu; Tat-Seng Chua", "journal": "", "ref_id": "b10", "title": "Expertise style transfer: A new task towards better communication between experts and laymen", "year": "2020" }, { "authors": "Yanran Chen; Steffen Eger", "journal": "", "ref_id": "b11", "title": "Menli: Robust evaluation metrics from natural language inference", "year": "2022" }, { "authors": "Daniel Deutsch; Tania Bedrax-Weiss; Dan Roth", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b12", "title": "Towards question-answering as an automatic metric for evaluating the content quality of a summary", "year": "2021" }, { "authors": "Ashwin Devaraj; Iain Marshall; Byron C Wallace; Junyi Jessy Li", "journal": "", "ref_id": "b13", "title": "Paragraph-level simplification of medical 
texts", "year": "2021" }, { "authors": "Wojciech Alexander R Fabbri; Bryan Kryściński; Caiming Mc-Cann; Richard Xiong; Dragomir Socher; Radev", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b14", "title": "Summeval: Re-evaluating summarization evaluation", "year": "2021" }, { "authors": "Saadia Gabriel; Asli Celikyilmaz; Rahul Jha; Yejin Choi; Jianfeng Gao", "journal": "", "ref_id": "b15", "title": "Go figure: A meta evaluation of factuality in summarization", "year": "2020" }, { "authors": "Mingqi Gao; Jie Ruan; Renliang Sun; Xunjian Yin; Shiping Yang; Xiaojun Wan", "journal": "", "ref_id": "b16", "title": "Human-like summarization evaluation with chatgpt", "year": "2023" }, { "authors": "Tomas Goldsack; Zhihao Zhang; Chenghua Lin; Carolina Scarton", "journal": "Springer", "ref_id": "b17", "title": "Domain-driven and discourse-guided scientific summarisation", "year": "2023-04-02" }, { "authors": "Tanya Goyal; Junyi ; Jessy Li; Greg Durrett", "journal": "", "ref_id": "b18", "title": "News summarization and evaluation in the era of gpt-3", "year": "2022" }, { "authors": "Maarten Grootendorst", "journal": "", "ref_id": "b19", "title": "Keybert: Minimal keyword extraction with bert", "year": "2020" }, { "authors": "Yue Guo; Wei Qiu; Gondy Leroy; Sheng Wang; Trevor Cohen", "journal": "", "ref_id": "b20", "title": "Cells: A parallel corpus for biomedical lay language generation", "year": "2022" }, { "authors": "Yue Guo; Wei Qiu; Yizhong Wang; Trevor Cohen", "journal": "", "ref_id": "b21", "title": "Automated lay language summarization of biomedical scientific reviews", "year": "2021" }, { "authors": "Suchin Gururangan; Ana Marasović; Swabha Swayamdipta; Kyle Lo; Iz Beltagy; Doug Downey; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "year": "2020" }, { "authors": "Zellig Harris; Michael Gottfried; Thomas Ryckman; Anne Daladier; Paul Mattick", "journal": "Springer Science & Business Media", "ref_id": "b23", "title": "The form of information in science: analysis of an immunology sublanguage", "year": "2012" }, { "authors": "Tianxing He; Jingyu Zhang; Tianle Wang; Sachin Kumar; Kyunghyun Cho; James Glass; Yulia Tsvetkov", "journal": "", "ref_id": "b24", "title": "On the blind spots of model-based evaluation metrics for text generation", "year": "2022" }, { "authors": "Sture Holm", "journal": "Scandinavian journal of statistics", "ref_id": "b25", "title": "A simple sequentially rejective multiple test procedure", "year": "1979" }, { "authors": "Margaret Holmes-Rovner; Sue Stableford; Angela Fagerlin; John T Wei; Rodney L Dunn; Janet Ohene-Frempong; Karen Kelly-Blake; David R Rovner", "journal": "BMC Medical Informatics and Decision Making", "ref_id": "b26", "title": "Evidence-based patient choice: a prostate cancer decision aid in plain language", "year": "2005" }, { "authors": "Deepali Jain; Malaya Dutta Borah; Anupam Biswas", "journal": "Computer Science Review", "ref_id": "b27", "title": "Summarization of legal documents: Where are we now and the way forward", "year": "2021" }, { "authors": "Raghav Jain; Anubhav Jangra; Sriparna Saha; Adam Jatowt", "journal": "", "ref_id": "b28", "title": "A survey on medical document summarization", "year": "2022" }, { "authors": "Rixie Shankar Kanthara; Tiffany Ko Leong; Xiang Lin; Ahmed Masry; Megh Thakkar; Enamul Hoque; Shafiq Joty", "journal": "", "ref_id": "b29", "title": "Chart-to-text: A 
large-scale benchmark for chart summarization", "year": "2022" }, { "authors": "David Kauchak; Gondy Leroy; Alan Hogue", "journal": "Journal of the Association for Information Science and Technology", "ref_id": "b30", "title": "Measuring text difficulty using parse-tree frequency", "year": "2017" }, { "authors": "David Kauchak; Obay Mouradi; Christopher Pentoney; Gondy Leroy", "journal": "IEEE", "ref_id": "b31", "title": "Text simplification tools: Using machine learning to discover features that identify difficult text", "year": "2014" }, { "authors": "Wei-Jen Ko; Greg Durrett; Junyi Jessy Li", "journal": "", "ref_id": "b32", "title": "Domain agnostic real-valued specificity prediction", "year": "2019" }, { "authors": "Kalpesh Krishna; Erin Bransom; Bailey Kuehl; Mohit Iyyer; Pradeep Dasigi; Arman Cohan; Kyle Lo", "journal": "", "ref_id": "b33", "title": "Longeval: Guidelines for human evaluation of faithfulness in long-form summarization", "year": "2023" }, { "authors": "Lauren M Kuehne; Julian D Olden", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b34", "title": "Lay summaries needed to enhance science communication", "year": "2015" }, { "authors": "Gregoire Leroy; Emma L Carroll; Mike W Bruford; Andrew Dewoody; Allan Strand; Lisette Waits; Jinliang Wang", "journal": "Evolutionary Applications", "ref_id": "b35", "title": "Next-generation metrics for monitoring genetic erosion within populations of conservation concern", "year": "2018" }, { "authors": "Chin-Yew Lin", "journal": "", "ref_id": "b36", "title": "Rouge: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Alisa Liu; Maarten Sap; Ximing Lu; Swabha Swayamdipta; Chandra Bhagavatula; Noah A Smith; Yejin Choi", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "DExperts: Decoding-time controlled text generation with experts and anti-experts", "year": "2021" }, { "authors": "Junru Lu; Jiazheng Li; Byron C Wallace; Yulan He; Gabriele Pergola", "journal": "", "ref_id": "b38", "title": "Napss: Paragraph-level medical text simplification via narrative prompting and sentence-matching summarization", "year": "2023" }, { "authors": "Junyu Luo; Junxian Lin; Chi Lin; Cao Xiao; Xinning Gui; Fenglong Ma", "journal": "", "ref_id": "b39", "title": "Benchmarking automated clinical language simplification: Dataset, algorithm, and evaluation", "year": "2022" }, { "authors": "Zheheng Luo; Qianqian Xie; Sophia Ananiadou", "journal": "", "ref_id": "b40", "title": "Chatgpt as a factual inconsistency evaluator for abstractive text summarization", "year": "2023" }, { "authors": "Mounica Maddela; Yao Dou; David Heineman; Wei Xu", "journal": "", "ref_id": "b41", "title": "Lens: A learnable evaluation metric for text simplification", "year": "2022" }, { "authors": "Thomas Mccoy; Ellie Pavlick; Tal Linzen", "journal": "", "ref_id": "b42", "title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference", "year": "2019" }, { "authors": "Partha Mukherjee; Gondy Leroy; David Kauchak; Brianda Armenta Navarrete; Damian Y Diaz; Sonia Colina", "journal": "American Medical Informatics Association", "ref_id": "b43", "title": "The role of surface, semantic and grammatical features on simplification of spanish medical texts: A user study", "year": "2017" }, { "authors": "Brian Ondov; Kush Attal; Dina Demner-Fushman", "journal": "Journal of the American Medical Informatics Association", "ref_id": "b44", "title": "A survey of automated methods for 
biomedical text simplification", "year": "2022" }, { "authors": "Aitor Ormazabal; Mikel Artetxe; Gorka Labaka; Aitor Soroa; Eneko Agirre", "journal": "", "ref_id": "b45", "title": "Principled paraphrase generation with parallel corpora", "year": "2022" }, { "authors": "Artidoro Pagnoni; Vidhisha Balachandran; Yulia Tsvetkov", "journal": "", "ref_id": "b46", "title": "Understanding factuality in abstractive summarization with frank: A benchmark for factuality metrics", "year": "2021" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b47", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Nikhil Pattisapu; Nishant Prabhu; Smriti Bhati; Vasudeva Varma", "journal": "", "ref_id": "b48", "title": "Leveraging social media for medical text simplification", "year": "2020" }, { "authors": "Nicole Pitcher; Denise Mitchell; Carolyn Hughes", "journal": "", "ref_id": "b49", "title": "Template and guidance for writing a cochrane plain language summary", "year": "2022" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b50", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Marco Tulio; Ribeiro ; Carlos Guestrin; Sameer Singh", "journal": "", "ref_id": "b51", "title": "Are red roses red? evaluating consistency of question-answering models", "year": "2019" }, { "authors": "Tanay Ananya B Sai; Dixit; Yashpal Dev; Sreyas Sheth; Mitesh M Mohan; Khapra", "journal": "", "ref_id": "b52", "title": "Perturbation checklists for evaluating nlg evaluation metrics", "year": "2021" }, { "authors": "Akash Ananya B Sai; Mitesh M Kumar Mohankumar; Khapra", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b53", "title": "A survey of evaluation metrics used for nlg systems", "year": "2022" }, { "authors": "Marika Salo; Helena Haapio; Stefania Passera", "journal": "", "ref_id": "b54", "title": "Putting financial regulation to work: Using simplification and visualization for consumer-friendly information", "year": "2016" }, { "authors": "Reid Smith; Pamela Snow; Tanya Serry; Lorraine Hammond", "journal": "Reading Psychology", "ref_id": "b55", "title": "The role of background knowledge in reading comprehension: A critical review", "year": "2021" }, { "authors": "Neha Srikanth; Jessy Junyi; Li", "journal": "", "ref_id": "b56", "title": "Elaborative simplification: Content addition and explanation generation in text simplification", "year": "2020" }, { "authors": "Marlene Stoll; Martin Kerwer; Klaus Lieb; Anita Chasiotis", "journal": "Plos one", "ref_id": "b57", "title": "Plain language summaries: A systematic review of theory, guidelines and empirical research", "year": "2022" }, { "authors": "Saku Sugawara; Pontus Stenetorp; Kentaro Inui; Akiko Aizawa", "journal": "", "ref_id": "b58", "title": "Assessing the benchmarking capacity of machine reading comprehension datasets", "year": "2020" }, { "authors": "Elior Sulem; Omri Abend; Ari Rappoport", "journal": "", "ref_id": "b59", "title": "Bleu is not suitable for the evaluation of text simplification", "year": "2018" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale", "journal": "", "ref_id": "b60", "title": "Llama 2: Open 
foundation and fine-tuned chat models", "year": "2023" }, { "authors": "Sayantan Byron C Wallace; Frank Saha; Iain J Soboczenski; Marshall", "journal": "", "ref_id": "b61", "title": "Generating (factual?) narrative summaries of rcts: Experiments with neural multi-document summarization", "year": "2021" }, { "authors": "Yequan Wang; Jiawen Deng; Aixin Sun; Xuying Meng", "journal": "", "ref_id": "b62", "title": "Perplexity from plm is unreliable for evaluating text quality", "year": "2022" }, { "authors": "Kristian Woodsend; Mirella Lapata", "journal": "", "ref_id": "b63", "title": "Wikisimple: Automatic simplification of wikipedia articles", "year": "2011" }, { "authors": "Dustin Wright; David Wadden; Kyle Lo; Bailey Kuehl; Arman Cohan; Isabelle Augenstein; Lucy Lu; Wang ", "journal": "", "ref_id": "b64", "title": "Generating scientific claims for zero-shot scientific fact checking", "year": "2022" }, { "authors": "Wei Xu; Courtney Napoles; Ellie Pavlick; Quanze Chen; Chris Callison-Burch", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b65", "title": "Optimizing statistical machine translation for text simplification", "year": "2016" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b66", "title": "Bertscore: Evaluating text generation with bert", "year": "2019" }, { "authors": "Yingxiu Zhao; Zhiliang Tian; Huaxiu Yao; Yinhe Zheng; Dongkyu Lee; Yiping Song; Jian Sun; Nevin L Zhang", "journal": "", "ref_id": "b67", "title": "Improving meta-learning for lowresource text classification and generation via memory imitation", "year": "2022" }, { "authors": "Yang Zhong; Chao Jiang; Wei Xu; Junyi Jessy Li", "journal": "", "ref_id": "b68", "title": "Discourse level factors for sentence deletion in text simplification", "year": "2020" } ]
[ { "formula_coordinates": [ 6, 102.08, 203.09, 156.11, 40.88 ], "formula_id": "formula_0", "formula_text": "Z(x) = log(x) -µ ref σ ref POMME = Z (PPL id ) -Z (PPL ood )" } ]
2023-05-29
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b27", "b49", "b70", "b15", "b68", "b13", "b55", "b5", "b15", "b26", "b51" ], "table_ref": [], "text": "Generative modeling of 3D humans from real-world data has shown promise to represent and synthesize diverse human shapes, poses, and motions. Especially, the ability to create realistic humans in diverse clothing and accessories (e.g. backpacks, scarves, and hats) is indispensable for a myriad of applications including VR/AR, entertainment, and virtual try-on. The early work [4, 28,36,50,71] has demonstrated success in modeling undressed human bodies from real-world scans. Recently, the research community has been focused on the generative modeling of clothed humans [13,16,38], to better represent humans in everyday life.\nRecent advancements in shape representations such as Neural Fields [69] mitigate the need for pre-defining topology or template of clothing, enabling to build animatable clothed humans from raw 3D scans [14,56]. Along with its advantage in strong expressive power for avatar modeling, this approach also allows the models to learn faithful interactions between objects and humans. However, since raw 3D scans do not provide a clear separation of different components, existing approaches typically treat humans, clothing, and accessories as an entangled block of geometry [13]. In this paper, we argue that this leads to suboptimal expressiveness and composability of the generative avatars. Many applications require more intuitive control to add, replace, or modify objects while maintaining human identity. To make avatars explicitly compositable with objects, some approaches propose to leverage synthetic data [6,16,27]. However, the manual creation of 3D assets remains a challenge and is extremely difficult to scale. Moreover, the physical interaction of bodies, clothing, and accessories in synthetic data tends to be less faithful due to the domain gap.\nIn contrast to prior methods, our goal is to build a compositional generative model of objects and humans from real-world observations. The core challenge lies in the difficulty of learning the composition and decomposition of objects in contact from raw 3D scans. Capturing objects in isolation does not lead to faithful composition due to the lack of realistic deformations induced by physical contact. Thus, while it is essential to collect 3D scan data on objects and humans in contact, the joint scanning of humans with objects only provides an entangled block of 3D geometry as mentioned, and accurately segmenting different components requires non-trivial 3D annotation efforts.\nUpon these challenges, our contributions are: scalable data capture protocol, unsupervised decomposition of objects and humans, and generalizable neural object composition. Scalable Data Capture. Capturing multiple identities with various poses and objects requires prohibitively large time and storage. To overcome this issue, we propose to collect human-object interactions with diverse poses only from a single subject, referred to as the \"source human\". To enable the decomposition of objects, we also capture the same person without any objects, where the deviation between two sets defines \"objects\" in our setup. Examples are shown in Fig. 2. This capture protocol offers sufficient diversity in poses and object types within a reasonable capture time. Unsupervised Decomposition of Objects. 
To separate objects from the source human, we leverage the expressiveness of the generative human model based on implicit surface representation [13]. We train a human module without objects, and then jointly optimize the latent codes of the avatar and a generative model for objects to best explain the 3D scans of the person with objects. While the human module accounts for state differences in pose and clothing, the object-only module learns to synthesize the residual geometry as an object layer in an unsupervised manner. Notably, objects in our work are defined as residual geometry that cannot be explained by the trained human-only module. Neural Object Composition. While the unsupervised decomposition successfully separates objects from the source human, we observe that naively composing it to novel identities from other datasets [52,75] leads to undesired artifacts and misalignment in the contact regions. To address this, we propose a neural composition method by introducing another composition MLP that takes latent features from both human and object modules to make a final shape prediction. Due to the local nature of MLPs, our approach plausibly composes objects to novel identities without retraining as in Fig. 1.\nOur experiments show that our compositional generative model is superior to existing approaches without explicit disentanglement of objects and humans [13]. In addition, we show that our model can be used for fine-grained controls including object removal from 3D scans and multiple object compositions on a human, demonstrating the utility and expressiveness of our approach beyond our training data." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b27", "b49", "b70", "b0", "b1", "b36", "b14", "b38", "b41", "b48", "b53", "b54", "b69", "b13", "b16", "b23", "b39", "b40", "b55", "b61", "b28", "b29", "b30", "b19", "b20", "b32", "b17", "b52", "b59", "b11", "b10", "b14", "b38", "b43", "b44", "b48", "b57", "b15", "b36", "b70", "b4", "b34", "b62", "b76", "b8", "b44", "b46", "b65", "b66", "b72", "b73", "b41", "b44", "b66", "b72", "b8", "b44", "b71", "b16", "b39", "b6", "b45", "b60", "b5", "b75", "b53", "b54", "b69", "b18", "b15", "b26", "b67", "b63" ], "table_ref": [], "text": "3D Human Models. Representing plausible 3D human bodies while handling diverse variations in shapes and poses is a long-standing problem. Due to the challenge in modeling diverse shape variation, the early work [4, 28,36,50,71] mainly focuses on the undressed 3D human body by learning meshbased statistical models deformed from a template mesh. To model dressed 3D humans, the follow-up work [1,2,37] adds 3D offsets on top of the parametric undressed human body models to represent clothing. Yet, the topological constraints and the resolution of the template model restrict these methods from modeling arbitrary shapes of clothing with high-frequency details. Recently emerging deep implicit shape representation [15,39,42,49] provides a breakthrough in expressing 3D humans by leveraging neural networks for representing continuous 3D shape space, where its efficacy is demonstrated in reconstructing clothed humans with highfidelity from images [54,55,70]. There also has been an actively growing field to represent animatable 3D human avatars using 3D scans [13,14,17,24,40,41,56,62]. However, prior 3D human models have paid little attention to the joint modeling of humans and objects in close contact. 2D/3D Generative Models. 
Generative models intend to express the plausible variations over the latent space, which can be used to create diverse realistic samples. There have been extensive studies in 2D generative modeling to create realistic photos [29][30][31] via generative adversarial networks (GANs) [20,21], variational autoencoders (VAEs) [33], and more recently, diffusion models [18,23,53,60]. Generative 3D modeling has also been actively explored. By leveraging the availability of a large-scale 3D object scans [12], many approaches present generative models for 3D objects [11,15,39,44,45,49,58]. Relatively few approaches have been presented for generative 3D human modeling, due to the lack of available 3D datasets for humans [3,13,16,37,71]. We show that our scalable data capture protocol and compositional generative model enable the synthesis of 3D humans with diverse objects in novel poses. Compositional Models. Compositional generative models via neural networks have been explored to represent different components as independent models, representing a whole scene by compositing them together. These approaches pursue controlling or sampling one component without affecting the rest. The early approaches focus on building such models in 2D for creating realistic 2D images via generative models [5,35,63,77]. More recent approaches explore the compositional reasoning for 3D [9,34,45,47,66,67,73,74]. Most approaches in this direction aim at synthesizing realistic novel views by compositing NeRFs [42] for 3D objects and scenes [45,67,73] and for human faces [9,45,72]. However, these approaches do not consider mutual shape deformations between objects. Human bodies are also treated as a composition of multiple body parts. These approaches attain final composition output by either max-pooling the outputs of individual components [17,40] or by using another neural network [3,7,46,61]. While a recent work shows interactionaware 3D composition reasoning is possible for faces and eyeglasses with extensive annotations and data preprocessing [34], our approach supports diverse object categories without requiring any manual annotations. Garment Modeling. Due to the deformable nature of garments, capturing and modeling 3D clothing is challenging. Only a few 3D garment datasets have been presented [6,76], where laborious segmentation and post-processing are required to separate the garments from dummies or human bodies. While most methods reconstruct a clothed 3D human as a single chunk of geometry [54,55,70], there exist methods reconstructing the 3D clothing as a separate layer on top of parametric mesh model (e.g., SMPL) using segmentation [19] or synthetic 3D assets [16,27]. Virtual tryon has also been actively explored in graphics via physics simulation [68] and or synthetic data [64]. In contrast, our approach learns a generative clothing and accessory model from real-world observations in an unsupervised fashion." }, { "figure_ref": [ "fig_0" ], "heading": "Preliminaries", "publication_ref": [ "b42", "b31", "b13", "b10", "b24", "b13", "b21" ], "table_ref": [], "text": "Data Acquisition. To model humans and objects in contact, we capture two sets of datasets, S sh and S sh+o . S sh consists of 3D scans of a single identity, denoted as \"source human\" with various poses. S sh+o consists of 3D scans of the source human with a variety of objects or additional outwear as shown in Fig. 2. 
In this work, we choose coats, vests, backpacks, scarves, and hats to demonstrate the generality of our approach for outwear and everyday accessories. To sup- port the generative modeling of objects, we capture multiple objects in each category. In addition to S sh and S sh+o , we also use other 3D human dataset [75] to train another target generative human model for composition, denoted S th .\nWe collect 3D scans with a system with synchronized and calibrated 8 Azure Kinects (see supp. mat. for details). We apply KinectFusion [43] to fuse the depth maps, and then reconstruct watertight meshes with screened-poisson surface reconstruction [32]. We also detect 2D keypoints using OpenPose [10] and apply the multi-view extension of SMPLify [8] to obtain SMPL parameters [36] for each scan. Generative Articulated Models. We adopt the generative human model [13] which extends forward skinning with root finding [14] for cross-identity modeling. We briefly discuss the framework and highlight our key modifications. The key idea in gDNA [13] is to represent occupancy fields conditioned by identity-specific latent codes z in a canonical space, and transform them into a posed space using forward linear blend skinning (LBS). The occupancy field defined for the location x c of a person in the canonical space can be represented as follows:\no(x c ) = O(x c , G(z)),(1)\nwhere G(•) is a spatially varying feature generator taking the latent code. While the original work [13] uses 3D feature voxels for the output of G, we use a tri-plane feature representation [11], which achieves better performance with higher memory efficiency. The generated feature map is conditioned on the latent code z via adaptive instance normalization [25].\nTo query the occupancy fields in a posed space point x d , we transform the canonical coordinate x c as follows: where W i is the identity conditioned skinning network, which outputs LBS skinning weights for the i-th bone, and N is the warping network given SMPL shape parameters β ∈ R 10 . B i (β, θ) is the transformation of the i-th bone in SMPL model given SMPL pose parameters θ ∈ R 24×3 and β. To jointly learn the occupancy and deformation networks, we solve for x c in Eq. 2 given x d using iterative root finding [14]. We discard the surface normal prediction networks used in [13] in both canonical space and screen space. Instead of hallucinating details with fake normals, we propose to model detailed geometry by jointly representing shapes as SDF together with the occupancy fields. As we can directly supervise SDF on surface normals [22], we model detailed geometry as true surface. However, we empirically find that directly replacing the occupancy with SDF leads to unstable training. To mitigate instability, we propose a hybrid modeling of occupancy and SDF. We disable the backpropagation of gradients from SDF to the deformation networks so that it is only supervised by the occupancy head. See supp. mat. for details.\nx d = n b i=1 W i (N (x c , β), z) • B i (β, θ) • x c ,(2)" }, { "figure_ref": [ "fig_1" ], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Our goal is to build a compositional generative model that composes generative objects on target humans from raw 3D scans. To this end, we introduce a generative human module and a generative object module, followed by a composition module. Fig. 3 shows an overview of our pipeline." }, { "figure_ref": [], "heading": "Human Module. 
The human module M", "publication_ref": [], "table_ref": [], "text": "h = (G h , O h , F h , D h )\nrepresents the geometry of the human part and it is composed of a feature generator G h , a decoder O h , and deformation networks D h = (W h , N h ), where W h and N h are a skinning weight network and a warping network as in Eq. 2. As an output, M h produces an occupancy value o h , a feature vector f h , and a signed distance d h in the canonical space:\n(o h , f h ) = O h (x c , G h (z h )),(3)\nd h = F h (x c , G h (z h )). (4\n)\nf h is the intermediate latent feature before the last layer, and z h is a learnable latent code to vary the geometry of the human part. Note that the hybrid modeling of occupancy and SDF is applied only to the human module as our losses for unsupervised object decomposition require occupancy.\nObject \n(o o , f o ) = O o (x c , G o (z o )),(5)\nwhere z o is a learnable latent code to vary the geometry of the object part." }, { "figure_ref": [ "fig_3" ], "heading": "Neural Object Composition", "publication_ref": [ "b16", "b39" ], "table_ref": [], "text": "Since the outputs of the human module and the object module share the same canonical space and deformation networks, compositing the occupancy of the human and object modules in a closed-form [17,40] is possible. However, we observe that this leads to misalignment in the contact regions and floating artifacts. To address these issues, we introduce a neural composition module parameterized by MLPs.\nThe composition module M comp = (O comp , D comp ) is used to integrate humans and objects in the canonical space. We directly feed the feature vectors f h and f o from the human module and object module respectively as inputs. M comp outputs the final occupancy value o comp , after composition in the canonical space:\no comp = O comp (x c , f h , f o )(6)\nSimilar to the human module, the deformation networks D comp = (W comp , N comp ) provide the mapping from the canonical space to the posed space. The entire model is illustrated in Fig. 4." }, { "figure_ref": [], "heading": "Unsupervised Object Decomposition", "publication_ref": [], "table_ref": [], "text": "To decompose object layers from raw 3D scans in an unsupervised manner, our key idea is to represent objects as the residual of human geometry. To this end, we first train the human module M h using S sh , the dataset of source human without objects, along with the learnable shape code z sh for each scan. This allows the human module to account for slight shape variations of the source human by changing z sh . In the next step, using S sh+o , the dataset of source human with objects, we jointly train all modules together. In particular, we freeze the human module M h while optimizing z sh , M o , z o , and M comp . Intuitively, the pretrained human module tries to handle the geometry of the human part via optimization of z sh , while the object parts, which cannot be expressed by M h , are handled by M o and z o . Given the composed occupancy o comp in Eq. 6 and the predicted occupancy of the human module o h , the target occupancy of the object module can be computed as (1 -o h ) • o comp . We jointly optimize the neural composition module and the object module M o in an end-to-end manner using the loss functions discussed in Sec. 4.3." }, { "figure_ref": [], "heading": "Training", "publication_ref": [ "b48", "b21", "b58" ], "table_ref": [], "text": "Our system is trained using the datasets S th , S sh and S sh+o with their SMPL shape and pose parameters. 
Following the auto-decoding framework of [49], we jointly optimize the latent code z assigned for each scan along with the network weights during training. Every scan in each dataset is assigned its own latent code, denoted z th ∈ R L th for scans in S th , z sh ∈ R L sh for scans in S sh and z o ∈ R Lo for scans in S sh+o . For z o , we use one-hot encoding for each object category using the first 5 bits to enable random sampling from a specific category. Note that all latent codes are initialized with zero.\nTo allow the unsupervised decomposition of objects from the source human as discussed in Sec. 4.2, and to enable the creation of novel human identities with objects, we train two separate human modules M sh and M th . M sh is the instance of the human module for modeling shapes of the source human, and M th is another instance of the human module for generating novel target human shapes.\nTraining consists of three stages: We first train M th and z th with S th , to leverage the wide variation of shapes and poses of samples in S th for the multi-subject forward skinning module, D th . For later stages, D th is used to initialize other deformation networks with its warping network N th frozen, to let all samples share the same canonical space. Next, we train M sh and z sh with S sh . For the last stage, using all the samples, we train M o , M comp , z sh , and z o with the pre-trained M th , M sh and z th frozen. Note that z sh for the last stage are re-initialized as the mean of z sh after the second stage, denoted z sh . M comp models all training samples using the feature vector from either M th or M sh for the human part, and from M o for the object part. In the case of S th and S sh where scans are with no objects, we introduce a new latent code z emp as an alternative input to M o for no objects. Losses: For the first stage, we use losses following [13]. We use the binary cross entropy loss L th between the predicted occupancy of M th and the ground truth occupancy. Note that O d (•) and F d (•) denote the occupancy field and SDF in posed space, respectively. We also use guidance losses L bone , L joint and L warp to aid training. L bone encourages the occupancy of x bone to be one, where x bone are randomly selected points along the SMPL bones in canonical space. L joint encourages the skinning weights of SMPL joints to be 0.5 for connected two bones and 0 for all other bones. L warp encourages deformation network N to change body size consistently, by enforcing vertices of a fitted SMPL to warp to vertices of the mean SMPL shape, achieved by having shape parameter β as zero. Lastly, we use L reg th to regularize the latent code z th to be close to zero.\nL th = BCE((O d th (x c , G th (z th )), o gt ) (7) L bone = BCE((O th (x bone , G th (z th )), 1) (8) L joint = ∥W (x joint , z th ) -w gt ∥ (9) L warp = ∥N (v(β), β) -v(β 0 )∥ (10) L reg th = ∥z th ∥(11)\nFor training the SDF network, we use L1 loss L sdf between the predicted and the ground truth signed distance and L2 loss L nml between the gradients of SDF and the ground truth normals of points on the surface. 
We additionally use L igr for SDF to satisfy the Eikonal equation [22] and L bbox to prevent SDF values of off-surface points from being the zero-level surface as in [59].\nL sdf = |F d th (x c , G th (z th )) -d gt |(12)\nL nml = ∥∇F d th (x c , G th (z th )) -n gt ∥(13)\nL igr = (∥∇F th (x c , G th (z th ))∥ -1) 2 (14) L bbox = exp(-α • |F th (x c , G th (z th ))|), α ≫ 1 (15)\nFor the second stage, we use the binary cross entropy loss L sh between the predicted occupancy of M sh and the ground truth occupancy, and L reg sh to regularize the latent code z sh to be close to zero. Since we initialize D sh with pre-trained D th , additional guidance losses are not required.\nL sh = BCE((O d sh (x c , G sh (z sh )), o gt ) (16) L reg sh = ∥z sh ∥(17)\nFor the last stage, we use the binary cross entropy loss L comp between the predicted occupancy of M comp and the ground truth occupancy. We also use L o between the predicted occupancy of M o and the residual part of S sh+o where M h cannot explain. Moreover, we optimize z sh by using the binary cross entropy loss L f it between the output of M sh and the ground truth occupancy. Finally, we regularize z sh to be close to z sh and z o to be close to zero.\nL comp = BCE((O d comp (x c , f h , f o ), o gt )(18)\nL o = BCE((O o (x c , G o (z o )), (1 -o h ) • o comp ) (19\n)\nL f it = BCE((O d sh (x c , G sh (z sh )), o gt ) (20) L reg sh = ∥z sh -z sh ∥ (21) L reg o = ∥z o ∥(22)" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We evaluate our generative composition model across various scenarios. We first demonstrate the quality of the random 3D avatar creations from our model and the disentangled natures of human and object controls. Quantitative and qualitative comparisons against the previous SOTA [13] are performed, incorporating a user study via CloudResearch Connect. We also conduct ablation studies to validate our design choices." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "Our 3D Scans: As described in Sec. 3, we use our multi-Kinect system to capture the source human with and without objects, S sh (180 samples) and S sh+o (342 samples). For S sh+o , we consider 4 categories of objects: 5 backpacks (77 samples in total), 6 outwear (94 samples), 8 scarves (89 samples), and 6 hats (82 samples).\nWe run quantitative evaluation by focusing on backpacks as other objects such as outwear are already incorporated in S th . We use another set with 300 samples of the source human with backpacks only, denoted as S sh+bp . To build a testing set for FID computation in this quantitative evaluation, we further capture 343 samples of 3 different unseen identities who wear unseen backpacks. We denote this test dataset, S unseen+bp . THuman2.0 [75]: THuman2.01 provides high-quality 3D dataset for dressed humans. We use 526 samples for S th ." }, { "figure_ref": [ "fig_4", "fig_4", "fig_5" ], "heading": "Qualitative Evaluation", "publication_ref": [], "table_ref": [], "text": "We demonstrate the expressive power and controllability of our composition model via inferences in various scenarios by controlling latent codes for humans z h and object z o . Random Generation. The 3D avatars created by attaching specific object latent codes z o to random sampled human codes z h are shown in Fig. 5 (bottom). The outputs of the human module M h are also shown on the top of Fig. 5 for reference. 
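The sampling procedure behind these random generations can be sketched as follows, assuming trained modules; the Gaussian fit to the learned latent codes, the one-hot category bits of z_o, and the 256³ marching-cubes extraction follow the text (see Sec. A.3), while the query function and variable names are placeholders.

```python
import numpy as np
from skimage import measure

def fit_gaussian(codes):
    """Fit a diagonal Gaussian to the set of learned latent codes (n_scans, dim)."""
    return codes.mean(axis=0), codes.std(axis=0)

def sample_avatar(query_o_comp, z_h_codes, z_o_codes, category_id, n_cat=5, res=256):
    """query_o_comp is a placeholder callable wrapping the human/object/composition modules."""
    mu_h, std_h = fit_gaussian(z_h_codes)
    mu_o, std_o = fit_gaussian(z_o_codes)
    z_h = np.random.normal(mu_h, std_h)
    z_o = np.random.normal(mu_o, std_o)
    z_o[:n_cat] = 0.0
    z_o[category_id] = 1.0            # first 5 bits encode the object category one-hot

    # Evaluate the composed occupancy o_comp on a canonical-space grid.
    grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, res)] * 3, indexing="ij"), -1)
    occ = query_o_comp(grid.reshape(-1, 3), z_h, z_o).reshape(res, res, res)

    # Extract the 0.5 level set; reposing with the learned skinning fields follows afterwards.
    verts, faces, _, _ = measure.marching_cubes(occ, level=0.5)
    return verts, faces
```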
Our model enables the creation of diverse 3D avatars with controllable objects. Disentangled Controls over Human and Objects. To further test the disentangled nature of our composition model, we create 3D humans with objects by changing either human latent code or object latent code, as shown in Fig. 6. The examples on the top vary the human part by keeping the same object code that represents a scarf. On the bottom examples, we vary object codes for a fixed identity shown on the leftmost side. These results show the core advantage of our composition model in individual controls.\nInterpolation. Fig. 7 demonstrates smooth interpolation of each module without deteriorating the other module.\nComposition of Multiple Objects. Fig. 8 shows that our system allows the composition of multiple objects. To add multiple objects, we use the latent code of each object and get the occupancy and the feature vector of objects. Using the normalized occupancy of multiple objects as weights, we calculate the weighted sum of feature vectors. The aggregated feature is then fed to the composition module along with the human feature to get the final composition output. Note that our dataset has no such sample with multiple objects. Compared to our method, baselines suffer from generating outputs of diverse humans with complete objects." }, { "figure_ref": [ "fig_7" ], "heading": "Comparison with SOTA", "publication_ref": [ "b50", "b15", "b47", "b51" ], "table_ref": [], "text": "Since our method is the first generative model for compositing humans and objects, there is no direct competitor, and comparison with the previous non-compositional model such as gDNA is non-trivial. To make the assessment possible at our best, we consider a specific scenario where a user wants to create samples with a specific object category, being the backpack here. To provide such controllability on gDNA, we first extend the gDNA model with our dataset. Note that, in this evaluation, we use the same dataset S th , S sh , S sh+bp for training both our model and gDNA. Extending gDNA for Composition. We train gDNA model using the public code with our datasets. Both human-only outputs and the ones with a backpack can be sampled from the trained model. To intentionally generate outputs with a backpack, we search the latent codes associated with the training samples with backpacks and fit a gaussian from which we can perform a sampling. We denote this baseline method as 'gDNA (w/ object)'.\nThe second possible extension of gDNA is based on the arithmetic operation among gDNA's latent codes, which is widely used for GAN-based image manipulation [51]. We found that gDNA's original framework allows some level of composition by adding or subtracting the latent codes. Specifically, we choose a latent code z * sh for the source human without a backpack and another latent code z * sh+bp for the source human with the backpack. We simply take their subtraction z bp = z * sh+bp -z * sh , which can be considered as a residual for the backpack. We found that composition can be performed by adding this residual to another human's latent code, that is z bp + z th . We denote this baseline method as 'Arith. gDNA (w/ object)'. Qualitative Comparison with User Study. The visual comparison between ours and the extended gDNAs is shown in Fig. 9. In the first row, we show random samples generated from 'gDNA (w/ object)'. 
Since the human scans with the backpack are only of the source human's (other samples from S th do not have any backpack), the generated outputs lack shape variety for the human part, producing always the source human's identity. In the second row of Fig. 9, backpacks are added to novel identities; however, the method suffers from lack of details on both humans and objects. In contrast, the outputs of our method shown in the last row show strong generalization by creating diverse human identities with naturally attached detailed objects.\nTo further validate this comparison, we perform a user test (A/B test) on CloudResearch Connect. We render samples from three viewpoints (same views for all) and show ours with each baseline (A/B examples) in a random order to each subject. Each subject answers 5 questions per baseline by choosing more authentic 3D human samples. The data was collected from 50 subjects. The results are shown in the \"User Preference\" column in Tab. 1. As shown, our methods are preferred over extended gDNA baselines. Moreover, to confirm the diversity of identities in our method and 'gDNA (w/ object)', 50 subjects were shown the rendering of the source human and were asked to choose samples that don't resemble the source human. Samples of our method were chosen by 92.4%, indicating that 'gDNA (w/ object)' suffers to generate novel identities with a backpack. Quantiative Evaluation via FID. To evaluate the generation quality of our method, we compare Fréchet Inception Distance (FID) between the 2D normal renderings of the test dataset S unseen+bp and the generated outputs, following [13]. The result is shown in Tab. 1. 'gDNA (w/ object)' has a relatively better score than ours, due to the fact that it only samples 3D humans around S sh+bp , which are always close to the GT samples. A more fair comparison is between ours and 'Arith. gDNA (w/ object)', where both approaches try to attach the backpack to novel identities. Our method significantly outperforms this baseline. Performance on Fitting. We evaluate the expressiveness of our model by fitting it to unseen scans with objects. As a baseline, we consider gDNA [13] as it demonstrates bet- ter fitting results on 3D clothed human scans over other SOTA [16,48]. Besides the original gDNA trained with S th , we also consider gDNA trained with S th , S sh and S sh+bp ('gDNA (w/ object)') to enable fitting of the object part. We use scans with backpacks from Renderpeople2 [52] and captured dataset S unseen+bp for fitting comparison.\nAs shown in Tab. 2, our method reports better fitting accuracy than the baselines. Our method effectively fits the geometry of both humans and objects while baselines only reconstruct either the human part or the object part as shown in Fig. 10. Moreover, since our method separately models humans and objects, it enables the high-quality removal of objects after fitting." }, { "figure_ref": [ "fig_8" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "Neural Composition. Our system provides two ways of extracting the final composition output. One is by using o comp : neural composition, and the other is by using the maximum value between o h and o o of queried points: naive composition. We verify the necessity of using neural composition in order to generate high-quality outputs of humans with objects. Compared to naive composition, neural composition remarkably reduces the artifacts induced by the imperfect fitting of the source human, resulting in lower FID values (Tab. 
1). Qualitative comparison is presented in Fig. 11. " }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "We present a novel framework for learning a compositional generative model of humans and objects. Our compositional generative model provides separate control over the human part and the object part. To train our compositional model without manual annotation for the object geometries, we propose to leverage 3D scans of a single person with and without objects. Our results show that the learned generative model for the object part can be authentically transferred to novel human identities. Limitations and Future Work. While our approach is general and supports diverse objects, decomposing thin layers of clothing in an unsupervised manner remains a challenge due to the limited precision of 3D scans. Extending our approach to modeling from RGB images is also an exciting research direction for future work." }, { "figure_ref": [], "heading": "A. Implementation details A.1. Network Architectures", "publication_ref": [ "b23" ], "table_ref": [], "text": "Latent codes assigned to each scan, z th , z sh , and z o are 64-dimensional. For z o , we use its first 5 bits to encode the object category via one-hot encoding and optimize only the last 59 bits during training. The generator G of the human module and the object module generates the 256 × 256 × 64 feature image from a constant vector of size 256 × 16 × 16 via 4 layers of (bilinear upsampler with a scale factor of 2, 2D convolution of kernel size 3 and stride 1, adaIN for conditioning the generator with the latent code z, and leaky ReLU activations). The 256 × 256 × 64 output feature image is split into one 256 × 256 × 32 and two 256 × 128 × 32 to form a tri-plane feature map. Note that the feature map is 128dimensional along z-axis and 256-dimensional along other axes. The decoder for predicting the occupancy of the human module and the object module is a multi-layer perceptron having the intermediate neuron size of (256, 256, 256, 229, 1) with skip connection from the input features to the 4th layer and nonlinear activations of softplus with β = 100 except for the last layer that uses sigmoid. As an input, it takes the Cartesian coordinates in canonical space which are encoded using a positional encoding with 4 frequency components, and the 32-dimensional feature queried from the generated tri-plane. The decoder for predicting SDF of the human module has the same architecture as the decoder for predicting the occupancy, except that it has no activations for the last layer. The decoder for predicting the occupancy of the composition module has the same architecture as the decoders for predicting the occupancy of other modules. However, instead of taking in the feature from the generated tri-plane as an input, it takes in the intermediate latent feature vectors before the last layer of the decoders for predicting the occupancy of the human module and object module, which are 229-dimensional each.\nOur deformation networks D = (W, N ) follow the architecture of the deformer of gDNA [13]. The skinning network W is a multi-layer perceptron having the intermediate neuron size of (128,128,128,128,24) with nonlinear activations of softplus with β = 100, except for the last layer that uses softmax in order to get normalized skinning weights. As an input, it takes the Cartesian coordinates in canonical space and the latent code z ∈ R 64 of the training sample. 
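A minimal sketch of this skinning network and of the forward LBS warp of Eq. (2) is given below; the bone transforms B_i are assumed to be provided as homogeneous 4×4 matrices from the fitted SMPL model, and the warping network N is omitted for brevity.

```python
import torch
import torch.nn as nn

class SkinningNet(nn.Module):
    """W: (x_c, z) -> normalized skinning weights over the 24 SMPL bones."""
    def __init__(self, z_dim=64, n_bones=24, hidden=128):
        super().__init__()
        layers, d = [], 3 + z_dim
        for _ in range(4):
            layers += [nn.Linear(d, hidden), nn.Softplus(beta=100)]
            d = hidden
        layers += [nn.Linear(d, n_bones)]
        self.net = nn.Sequential(*layers)

    def forward(self, x_c, z):
        return torch.softmax(self.net(torch.cat([x_c, z], dim=-1)), dim=-1)

def forward_lbs(x_c, weights, bone_transforms):
    """x_d = sum_i w_i * (B_i @ x_c), with B_i as homogeneous 4x4 transforms.

    x_c: (N, 3) canonical points, weights: (N, 24), bone_transforms: (24, 4, 4).
    """
    x_h = torch.cat([x_c, torch.ones_like(x_c[:, :1])], dim=-1)      # (N, 4) homogeneous
    x_per_bone = torch.einsum("bij,nj->nbi", bone_transforms, x_h)   # (N, 24, 4)
    x_d = (weights.unsqueeze(-1) * x_per_bone).sum(dim=1)            # (N, 4)
    return x_d[:, :3]

# Usage with identity transforms as a stand-in for the SMPL bone transforms:
skin = SkinningNet()
x_c, z = torch.rand(100, 3), torch.zeros(100, 64)
x_d = forward_lbs(x_c, skin(x_c, z), torch.eye(4).expand(24, 4, 4))
```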
The warping network N is also a multi-layer perceptron having the intermediate neuron size of (128, 128, 128, 128, 3) with nonlinear activations of softplus. As an input, it takes the Cartesian coordinates in canonical space and the SMPL shape parameter β ∈ R 10 of the training sample. The input Cartesian coordinates are passed to the last layer for the network to learn residual displacements." }, { "figure_ref": [], "heading": "A.2. Training Procedure", "publication_ref": [ "b13" ], "table_ref": [], "text": "Our training consists of three stages. First, we train M th and z th with S th with losses following [13,14] and additional losses to train the SDF network. The total loss L M th is as follows:\nL M th = L th + λ bone L bone + λ joint L joint + λ warp L warp (23\n)\n+λ reg th L reg th + L sdf + L nml + L igr + L bbox ,\nwhere λ warp = 10 and λ reg th = 10 -3 . We set λ bone = 1 and λ joint = 10 only for the first epoch and 0 afterwards. For the second stage, we train M sh and z sh with S sh with the total loss L M th being,\nL M sh = L sh + λ reg sh L reg sh ,(24)\nwhere λ reg sh = 10 -3 . As described in the main paper, since we initialize D sh with the pre-trained D th , additional guidance losses as in the first stage are not required. Note that since it is not our primary objective to model the detailed surface of the source human, we don't utilize the hybrid modeling of occupancy and SDF for M sh .\nFor the last stage, we train M o , M comp , z sh , and z o with the pre-trained M th , M sh and z th frozen. As described in the main paper, z sh for the last stage are re-initialized as the mean of z sh after the second stage. The total loss L is as follows:\nL = L comp + L o + λ f it L f it (25) +λ reg sh L reg sh + λ reg o L reg o ,\nwhere λ f it = 0.2, λ reg sh = 50, and λ reg sh = 10 -3 .\nWe train each stage with the Adam optimizer with a learning rate of 0.001 without decay. All stages are trained for 300 epochs." }, { "figure_ref": [], "heading": "A.3. Inference", "publication_ref": [], "table_ref": [], "text": "We generate the composited canonical shapes of general people with objects by random sampling z th and z o from the Gaussian distribution fitted to each set of latent codes. We then extract meshes using o comp with a resolution of 256 3 . We finally repose the output mesh using the SMPL pose parameter with the learned skinning fields." }, { "figure_ref": [], "heading": "B. Data", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B.1. Acquisition", "publication_ref": [ "b56", "b42", "b31" ], "table_ref": [], "text": "We collect 3D scans of the source human with and without objects using a system with synchronized and calibrated 8 Azure Kinects. We capture data 5FPS with the resolution of 2048 × 1536 for the RGB cameras, and 1024 × 1024 for the depth cameras. 3. Quantitative evaluation of the significance of using the hybrid modeling of occupancy and SDF is presented.\nusing COLMAP [57] and adjust the optimized camera extrinsics to real-world scale based on the corresponding depth maps. We apply KinectFusion [43] with the code from the repository 3 to fuse the captured depth maps with the voxel resolution of 1.5mm. We reconstruct watertight meshes from the fused output using screened-poisson surface reconstruction [32] of depth 9. In order to obtain SMPL parameters for each captured scan, we use the multi-view extension of SMPLify [8] with the code from the repository 4 . 
For each scan, we render images from 18 viewpoints and detect 2D keypoints using OpenPose [10], and apply the multi-view extension of SMPLify to estimate SMPL parameters for each scan." }, { "figure_ref": [], "heading": "B.2. Data Statistics", "publication_ref": [], "table_ref": [], "text": "We use 180 samples for S sh and 342 samples for S sh+o . For S sh+o , we consider 4 categories of objects: 5 backpacks (77 samples in total), 6 outwear (94 samples), 8 scarves (89 samples), and 6 hats (82 samples). For running the quantitative evaluation focused on backpacks, we use another set with 300 samples of the source human with 5 backpacks, denoted as S sh+bp . To build a testing set for FID computation, we further capture 343 samples of 3 different unseen identities who wear unseen backpacks, denoted as S unseen+bp . We also use 526 samples of THuman2.0 [75] for S th ." }, { "figure_ref": [], "heading": "C. Discussion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_9" ], "heading": "C.1. Geometry Modeling with SDF", "publication_ref": [ "b13", "b25", "b64" ], "table_ref": [], "text": "As mentioned in the main paper, we model detailed geometry by jointly predicting SDF together with the occupancy fields. We find that directly replacing the occupancy with the SDF leads to failures in canonicalization. Among the set of correspondences resulting from multiple initials for the root finding algorithm, previous work that uses occupancy representation [13,14] determines the final correspondence by choosing the point with the highest estimated occupancy. However, in the case of the SDF representation, we empirically find out that choosing the point by only utilizing the estimated SDF leads to poor canonicalization. Moreover, using a single initial by linearly combining the skinning weights of the nearest neighbor on the fitted SMPL mesh and the inverse bone transformations as in [26,65] to incorrect canonicalization. Hence, we utilize a hybrid modeling of occupancy and SDF by leveraging the advantage of each representation. While directly supervising SDF on the surface normals, we select final correspondences and train the deformation networks using occupancy. For stable training, it is crucial to disable the backpropagation of gradients from the SDF head to the deformation networks and let only the occupancy head supervise them.\nWe verify the significance of predicting both occupancy and SDF over predicting only occupancy to generate outputs with higher frequency details. For each method, we reconstruct the ground truth data used for training with assigned latent codes. We compute the Chamfer distance and pointto-surface distance (P2S) between the ground truth and the reconstruction output. We also render 2D normal maps from fixed views and compute the L2 error (Normal). As demonstrated in Tab. 3, reconstruction outputs are improved when both occupancy and SDF are predicted. In Fig. 12 we show the qualitative comparison between samples generated via each method." }, { "figure_ref": [], "heading": "D. Quantitative Evaluation Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "D.1. FID Computation", "publication_ref": [], "table_ref": [], "text": "We compute FID score using the code from the repository 5 . For the test set, we render 2D normal maps in resolution 256 2 of 343 samples in S unseen+bp from 18 viewpoints, resulting in 6174 images. 
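The FID repository used here is not named; purely as an illustration of the protocol just described (comparing folders of 2D normal-map renderings), a sketch with a standard FID implementation from torchmetrics follows. The folder layout and function name are assumptions.

```python
from pathlib import Path
from torchmetrics.image.fid import FrechetInceptionDistance
from torchvision.io import read_image

def folder_fid(real_dir: str, fake_dir: str, device: str = "cuda") -> float:
    # Both folders are assumed to hold 256x256 normal-map renderings saved as PNG images.
    fid = FrechetInceptionDistance(feature=2048, normalize=False).to(device)
    for path, is_real in [(real_dir, True), (fake_dir, False)]:
        for img_file in sorted(Path(path).glob("*.png")):
            img = read_image(str(img_file))[:3]            # (3, H, W) uint8, drop alpha if present
            fid.update(img[None].to(device), real=is_real)  # accumulate Inception statistics
    return float(fid.compute())
```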
For each method, we generate 200 " }, { "figure_ref": [ "fig_10", "fig_11" ], "heading": "D.2. User Preference Study", "publication_ref": [], "table_ref": [], "text": "We perform two user preference studies (A/B tests) via CloudResearch Connect. The first study aims to validate the generation quality of our method over all baselines, and the second study aims to validate the generation diversity of our method over 'gDNA (w/ object)'.
For the first user study, we show a sample generated with our method along with another sample generated with one of the baseline methods, in random order. For each sample, we render 2D normal maps in resolution 256 2 from 3 viewpoints. We ask 50 subjects to answer 5 A/B pairs per baseline by choosing the preferred sample with a more authentic shape. An example of a question is presented in Fig. 13. For the second user study, we only compare our method with the baseline 'gDNA (w/ object)', with a different protocol. In this study, we similarly render the normal maps from 3 viewpoints for each method and additionally show an image of the source human along with the A/B pairs. Then, we request the observers to choose the sample that looks more different from the source human. The test is intended to see whether the methods can produce diverse human identities with objects, sufficiently different from the source human's appearance. An example of a question is presented in Fig. 14. Similar to the first study, we ask 50 subjects to answer 5 A/B pairs by choosing the sample that better satisfies the question." }, { "figure_ref": [], "heading": "D.3. Fitting Comparison", "publication_ref": [], "table_ref": [], "text": "For fitting our model to unseen scans with objects, we follow the fitting process of gDNA [13]. During fitting, we optimize the latent code for the human part, z h , the latent code for the object part, z o , and the SMPL shape parameter β, with the other networks frozen. We use M th for the human module. We initialize z h and z o each with 8 randomly sampled codes from the Gaussian distribution fitted to each set of latent codes. β is initialized with the SMPL shape parameter obtained during our data acquisition process. The loss used for fitting raw scans is as follows: $\mathcal{L}_{fitting} = \mathcal{L}_{comp} + \lambda_{reg_h} \mathcal{L}_{reg_h} + \lambda_{reg_o} \mathcal{L}_{reg_o}$ (26), with $\mathcal{L}_{comp} = \mathrm{BCE}(o_{comp}, o_{unseen})$ (27), $\mathcal{L}_{reg_h} = \lVert z_h \rVert$ (28), and $\mathcal{L}_{reg_o} = \lVert z_o \rVert$,
where λ reg h = 50 and λ reg o = 50. We optimize for 500 iterations using the Adam optimizer with a learning rate of 0.01 without any weight decay or learning rate decay. Of the 8 fitted outputs, the one with the minimum bi-directional Chamfer distance to the target scan is chosen as the final output." }, { "figure_ref": [], "heading": "E. Additional Qualitative Results", "publication_ref": [], "table_ref": [], "text": "Please refer to the supplementary video for additional qualitative results on individual control of the human and object modules, latent code interpolation, and composition of multiple objects." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements: This work of H. Joo and T. Kim was supported by SNU Creative-Pioneering Researchers Program and IITP grant funded by the Korean government (MSIT) [NO.2021-0-01343 and No.2022-0-00156]" } ]
Figure 1. From 3D scans of the "source human" in casual clothing (top left) and with additional outwear or objects (bottom left), our method automatically decomposes objects from the source human and builds a compositional generative model that enables 3D avatar creation of novel human identities with a variety of outwear and objects (right) in an unsupervised manner.
NCHO: Unsupervised Learning for Neural 3D Composition of Humans and Objects
[ { "figure_caption": "Figure 2 .2Figure 2. Examples of Our Datasets. Top row: sample scans of S sh containing the source human without objects. Bottom row: sample scans of S sh+o containing the source human with objects.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure3. Overview. From captured scans of the source human with and without objects, our method succesfully decomposes objects from humans without any supervision, allowing a generative model to learn the shapes of various objects. These objects are then added to novel identities via neural composition, resulting in the creation of diverse human avatars with controllable objects.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Module. The object module M o = (G o , O o ) is responsible for modeling the geometry of the object part. Since the object module and the human module share the same canonical space, the object module does not require separate deformation networks. M o returns an occupancy value o o , and a feature vector f o , which is the intermediate latent feature before the last layer, in the canonical space:", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Model. Given latent code z h , M h predicts the occupancy fields and SDFs for humans in canonical space. Similarly, with latent code zo, Mo predicts the occupancy fields for objects. The features f h and fo from each network are passed to Mcomp to predict the occupancy fields for final compositional outputs of humans and objects in the same canonical space.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Random Generation. Top row: randomly sampled outputs of the human module before composition. Bottom row: composition outputs of target humans on top with specific objects.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Disentangled Human and Object. Top row: composition outputs of the same object (a scarf), added to different human identities. Bottom row: composition outputs of different objects added to the single human identity shown in the leftmost column.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .Figure 8 .78Figure 7. Interpolation. Top row: human module interpolation. Bottom row: object module interpolation. Notice that interpolating one module doesn't deteriorate the geometry of the other.", "figure_data": "", "figure_id": "fig_6", "figure_label": "78", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. Fitting and Object Removal. Compared to baselines, our method successfully explains both human shapes and object shapes, enabling the natural removal of objects after fitting.", "figure_data": "", "figure_id": "fig_7", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. Composition Comparison. While naive composition suffers from severe artifacts, neural composition reduces these artifacts and produces high-quality outputs.", "figure_data": "", "figure_id": "fig_8", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 .12Figure 12. Qualitative Comparison on Introducing SDF Network in the Human Module. 
Top row: Generated outputs when trained with occupancy only. Bottom row: Generated outputs when trained with the hybrid modeling of occupancy and SDF. Additionally predicting the SDF improves the details of generated outputs.", "figure_data": "", "figure_id": "fig_9", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 .13Figure 13. Example Images of the First User Study. Subjects are asked to choose the sample with a more authentic shape between top and bottom.", "figure_data": "", "figure_id": "fig_10", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 .14Figure 14. Example Images of the Second User Study. Subjects are asked to choose the sample that does not resemble the shape of the source human shown on the left, between top and bottom.", "figure_data": "", "figure_id": "fig_11", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Lf itting = L comp + λ reg h L reg h + λ reg o L reg o (26) L comp = BCE(o comp , o unseen ) (27) L reg h = ∥z h ∥ (28) L reg o = ∥z o ∥,", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Quantitative evaluation of the importance of compositional modeling. User preference score reflects the frequency with which participants of our perceptual study favored each method over ours.", "figure_data": "MethodFIDUser PreferencegDNA (w/ object)41.7143.6%Arith. gDNA (w/ object)73.8113.6%Ours (Naive composition) 55.2922.4%Ours51.03 100% -(above)gDNA(w/ object)Arith. gDNA(w/ object)", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Fitting accuracy comparison with the SOTA method [13].", "figure_data": "MethodPred-to-Scan↓ Scan-to-Pred↓gDNA0.01620.0190gDNA(w/ object)0.02180.0112Ours0.01160.0099TargetgDNAgDNA (w/ object)OursOurs (Object Removed)Fitting", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "We perform image-based calibration", "figure_data": "MethodChamfer↓P2S↓Normal↓Occ0.01400.01690.0092Occ & SDF0.00980.01280.0074", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
Taeksoo Kim; Shunsuke Saito; Hanbyul Joo
[ { "authors": "Thiemo Alldieck; Marcus Magnor; Bharat Lal Bhatnagar; Christian Theobalt; Gerard Pons-Moll", "journal": "", "ref_id": "b0", "title": "Learning to reconstruct people in clothing from a single rgb camera", "year": "2019" }, { "authors": "Thiemo Alldieck; Marcus Magnor; Weipeng Xu; Christian Theobalt; Gerard Pons-Moll", "journal": "", "ref_id": "b1", "title": "Video based reconstruction of 3d people models", "year": "2018" }, { "authors": "Thiemo Alldieck; Hongyi Xu; Cristian Sminchisescu", "journal": "", "ref_id": "b2", "title": "imghum: Implicit generative models of 3d human shape and articulated pose", "year": "2021" }, { "authors": "Dragomir Anguelov; Praveen Srinivasan; Daphne Koller; Sebastian Thrun; Jim Rodgers; James Davis", "journal": "TOG", "ref_id": "b3", "title": "Scape: shape completion and animation of people", "year": "2005" }, { "authors": "Samaneh Azadi; Deepak Pathak; Sayna Ebrahimi; Trevor Darrell", "journal": "IJCV", "ref_id": "b4", "title": "Compositional gan: Learning image-conditional binary composition", "year": "2020" }, { "authors": "Bharat Lal Bhatnagar; Garvita Tiwari; Christian Theobalt; Gerard Pons-Moll", "journal": "", "ref_id": "b5", "title": "Multi-garment net: Learning to dress 3d people from images", "year": "2019" }, { "authors": "Sourav Biswas; Kangxue Yin; Maria Shugrina; Sanja Fidler; Sameh Khamis", "journal": "", "ref_id": "b6", "title": "Hierarchical neural implicit pose network for animation and motion retargeting", "year": "2021" }, { "authors": "Federica Bogo; Angjoo Kanazawa; Christoph Lassner; Peter Gehler; Javier Romero; Michael J Black", "journal": "", "ref_id": "b7", "title": "Keep it smpl: Automatic estimation of 3d human pose and shape from a single image", "year": "2016" }, { "authors": "B R Mallikarjun; Ayush Tewari; Xingang Pan; Mohamed Elgharib; Christian Theobalt", "journal": "", "ref_id": "b8", "title": "gcorf: Generative compositional radiance fields", "year": "2022" }, { "authors": "Zhe Cao; Tomas Simon; Shih-En Wei; Yaser Sheikh", "journal": "", "ref_id": "b9", "title": "Realtime multi-person 2d pose estimation using part affinity fields", "year": "2017" }, { "authors": "Connor Z Eric R Chan; Matthew A Lin; Koki Chan; Boxiao Nagano; Shalini De Pan; Orazio Mello; Leonidas J Gallo; Jonathan Guibas; Sameh Tremblay; Khamis", "journal": "", "ref_id": "b10", "title": "Efficient geometry-aware 3d generative adversarial networks", "year": "2022" }, { "authors": "Thomas Angel X Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Su", "journal": "", "ref_id": "b11", "title": "Shapenet: An informationrich 3d model repository", "year": "2015" }, { "authors": "Tianjian Xu Chen; Jie Jiang; Jinlong Song; Michael J Yang; Andreas Black; Otmar Geiger; Hilliges", "journal": "", "ref_id": "b12", "title": "gdna: Towards generative detailed neural avatars", "year": "2008" }, { "authors": "Yufeng Xu Chen; Zheng; J Michael; Otmar Black; Andreas Hilliges; Geiger", "journal": "", "ref_id": "b13", "title": "Snarf: Differentiable forward skinning for animating non-rigid neural implicit shapes", "year": "2021" }, { "authors": "Zhiqin Chen; Hao Zhang", "journal": "", "ref_id": "b14", "title": "Learning implicit fields for generative shape modeling", "year": "2019" }, { "authors": "Enric Corona; Albert Pumarola; Guillem Alenya; Gerard Pons-Moll; Francesc Moreno-Noguer", "journal": "", "ref_id": "b15", "title": "Smplicit: Topology-aware generative model for clothed people", 
"year": "2021" }, { "authors": "Boyang Deng; John P Lewis; Timothy Jeruzalski; Gerard Pons-Moll; Geoffrey Hinton; Mohammad Norouzi", "journal": "", "ref_id": "b16", "title": "Nasa neural articulated shape approximation", "year": "2020" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "NeurIPS", "ref_id": "b17", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Yao Feng; Jinlong Yang; Marc Pollefeys; Michael J Black; Timo Bolkart", "journal": "", "ref_id": "b18", "title": "Capturing and animation of body and clothing from monocular video", "year": "2022" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "NeurIPS", "ref_id": "b19", "title": "Generative adversarial nets", "year": "2014" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "", "ref_id": "b20", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "Amos Gropp; Lior Yariv; Niv Haim; Matan Atzmon; Yaron Lipman", "journal": "", "ref_id": "b21", "title": "Implicit geometric regularization for learning shapes", "year": "2020" }, { "authors": "Jonathan Ho; Chitwan Saharia; William Chan; David J Fleet; Mohammad Norouzi; Tim Salimans", "journal": "JMLR", "ref_id": "b22", "title": "Cascaded diffusion models for high fidelity image generation", "year": "2022" }, { "authors": "Fangzhou Hong; Mingyuan Zhang; Liang Pan; Zhongang Cai; Lei Yang; Ziwei Liu", "journal": "TOG", "ref_id": "b23", "title": "Avatarclip: Zero-shot text-driven generation and animation of 3d avatars", "year": "2022" }, { "authors": "Xun Huang; Serge Belongie", "journal": "", "ref_id": "b24", "title": "Arbitrary style transfer in real-time with adaptive instance normalization", "year": "2017" }, { "authors": "Boyi Jiang; Yang Hong; Hujun Bao; Juyong Zhang", "journal": "", "ref_id": "b25", "title": "Selfrecon: Self reconstruction your digital avatar from monocular video", "year": "2022" }, { "authors": "Boyi Jiang; Juyong Zhang; Yang Hong; Jinhao Luo; Ligang Liu; Hujun Bao", "journal": "", "ref_id": "b26", "title": "Bcnet: Learning body and cloth shape from a single image", "year": "2020" }, { "authors": "Hanbyul Joo; Tomas Simon; Yaser Sheikh", "journal": "", "ref_id": "b27", "title": "Total capture: A 3d deformation model for tracking faces, hands, and bodies", "year": "2018" }, { "authors": "Tero Karras; Timo Aila; Samuli Laine; Jaakko Lehtinen", "journal": "ICLR", "ref_id": "b28", "title": "Progressive growing of gans for improved quality, stability, and variation", "year": "2017" }, { "authors": "Tero Karras; Samuli Laine; Timo Aila", "journal": "", "ref_id": "b29", "title": "A style-based generator architecture for generative adversarial networks", "year": "2019" }, { "authors": "Tero Karras; Samuli Laine; Miika Aittala; Janne Hellsten; Jaakko Lehtinen; Timo Aila", "journal": "", "ref_id": "b30", "title": "Analyzing and improving the image quality of stylegan", "year": "2020" }, { "authors": "Michael Kazhdan; Hugues Hoppe", "journal": "TOG", "ref_id": "b31", "title": "Screened poisson surface reconstruction", "year": "2013" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "ICLR", "ref_id": "b32", "title": "Auto-encoding variational bayes", "year": "2013" }, { "authors": "Junxuan Li; Shunsuke Saito; Tomas Simon; Stephen Lombardi; Hongdong Li; Jason Saragih", "journal": "", 
"ref_id": "b33", "title": "Megane: Morphable eyeglass and avatar network", "year": "2023" }, { "authors": "Chen-Hsuan Lin; Ersin Yumer; Oliver Wang; Eli Shechtman; Simon Lucey", "journal": "", "ref_id": "b34", "title": "St-gan: Spatial transformer generative adversarial networks for image compositing", "year": "2018" }, { "authors": "Matthew Loper; Naureen Mahmood; Javier Romero; Gerard Pons-Moll; Michael J Black", "journal": "SIGGRAPH Asia", "ref_id": "b35", "title": "SMPL: A skinned multiperson linear model", "year": "2015" }, { "authors": "Qianli Ma; Jinlong Yang; Anurag Ranjan; Sergi Pujades; Gerard Pons-Moll; Siyu Tang; Michael J Black", "journal": "", "ref_id": "b36", "title": "Learning to dress 3d people in generative clothing", "year": "2020" }, { "authors": "Qianli Ma; Jinlong Yang; Siyu Tang; Michael J Black", "journal": "", "ref_id": "b37", "title": "The power of points for modeling humans in clothing", "year": "2021" }, { "authors": "Lars Mescheder; Michael Oechsle; Michael Niemeyer; Sebastian Nowozin; Andreas Geiger", "journal": "", "ref_id": "b38", "title": "Occupancy networks: Learning 3d reconstruction in function space", "year": "2019" }, { "authors": "Marko Mihajlovic; Shunsuke Saito; Aayush Bansal; Michael Zollhoefer; Siyu Tang", "journal": "", "ref_id": "b39", "title": "Coap: Compositional articulated occupancy of people", "year": "2022" }, { "authors": "Marko Mihajlovic; Yan Zhang; Michael J Black; Siyu Tang", "journal": "", "ref_id": "b40", "title": "Leap: Learning articulated occupancy of people", "year": "2021" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "", "ref_id": "b41", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Shahram Richard A Newcombe; Otmar Izadi; David Hilliges; David Molyneaux; Andrew J Kim; Pushmeet Davison; Jamie Kohi; Steve Shotton; Andrew Hodges; Fitzgibbon", "journal": "", "ref_id": "b42", "title": "Kinectfusion: Real-time dense surface mapping and tracking", "year": "2011" }, { "authors": "Thu Nguyen-Phuoc; Chuan Li; Lucas Theis; Christian Richardt; Yong-Liang Yang", "journal": "", "ref_id": "b43", "title": "Hologan: Unsupervised learning of 3d representations from natural images", "year": "2019" }, { "authors": "Michael Niemeyer; Andreas Geiger", "journal": "", "ref_id": "b44", "title": "Giraffe: Representing scenes as compositional generative neural feature fields", "year": "2021" }, { "authors": "Atsuhiro Noguchi; Xiao Sun; Stephen Lin; Tatsuya Harada", "journal": "", "ref_id": "b45", "title": "Neural articulated radiance field", "year": "2021" }, { "authors": "Julian Ost; Fahim Mannan; Nils Thuerey; Julian Knodt; Felix Heide", "journal": "", "ref_id": "b46", "title": "Neural scene graphs for dynamic scenes", "year": "2021" }, { "authors": "Pablo Palafox; Aljaž Božič; Justus Thies; Matthias Nießner; Angela Dai", "journal": "", "ref_id": "b47", "title": "Npms: Neural parametric models for 3d deformable shapes", "year": "2021" }, { "authors": "Jeong Joon Park; Peter Florence; Julian Straub; Richard Newcombe; Steven Lovegrove", "journal": "", "ref_id": "b48", "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "year": "2019" }, { "authors": "Georgios Pavlakos; Vasileios Choutas; Nima Ghorbani; Timo Bolkart; A A Ahmed; Dimitrios Osman; Michael J Tzionas; Black", "journal": "", "ref_id": "b49", "title": "Expressive body capture: 3D hands, face, and body from 
a single image", "year": "2019" }, { "authors": "Alec Radford; Luke Metz; Soumith Chintala", "journal": "ICLR", "ref_id": "b50", "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "year": "2016" }, { "authors": " Renderpeople", "journal": "", "ref_id": "b51", "title": "", "year": "2018" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b52", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Shunsuke Saito; Zeng Huang; Ryota Natsume; Shigeo Morishima; Angjoo Kanazawa; Hao Li", "journal": "", "ref_id": "b53", "title": "Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization", "year": "2019" }, { "authors": "Shunsuke Saito; Tomas Simon; Jason Saragih; Hanbyul Joo", "journal": "", "ref_id": "b54", "title": "Pifuhd: Multi-level pixel-aligned implicit function for high-resolution 3d human digitization", "year": "2020" }, { "authors": "Shunsuke Saito; Jinlong Yang; Qianli Ma; Michael J Black", "journal": "", "ref_id": "b55", "title": "Scanimate: Weakly supervised learning of skinned clothed avatar networks", "year": "2021" }, { "authors": "L Johannes; Jan-Michael Schonberger; Frahm", "journal": "", "ref_id": "b56", "title": "Structurefrom-motion revisited", "year": "2016" }, { "authors": "Katja Schwarz; Yiyi Liao; Michael Niemeyer; Andreas Geiger", "journal": "NeurIPS", "ref_id": "b57", "title": "Graf: Generative radiance fields for 3d-aware image synthesis", "year": "2020" }, { "authors": "Julien Vincent Sitzmann; Alexander Martel; David Bergman; Gordon Lindell; Wetzstein", "journal": "NeurIPS", "ref_id": "b58", "title": "Implicit neural representations with periodic activation functions", "year": "2020" }, { "authors": "Jascha Sohl-Dickstein; Eric A Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "", "ref_id": "b59", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Shih-Yang Su; Frank Yu; Michael Zollhöfer; Helge Rhodin", "journal": "NeurIPS", "ref_id": "b60", "title": "A-nerf: Articulated neural radiance fields for learning human shape, appearance, and pose", "year": "2021" }, { "authors": "Garvita Tiwari; Nikolaos Sarafianos; Tony Tung; Gerard Pons-Moll", "journal": "", "ref_id": "b61", "title": "Neural-gif: Neural generalized implicit functions for animating people in clothing", "year": "2021" }, { "authors": "Yi-Hsuan Tsai; Xiaohui Shen; Zhe Lin; Kalyan Sunkavalli; Xin Lu; Ming-Hsuan Yang", "journal": "", "ref_id": "b62", "title": "Deep image harmonization", "year": "2017" }, { "authors": "Raquel Vidaurre; Igor Santesteban; Elena Garces; Dan Casas", "journal": "CGF", "ref_id": "b63", "title": "Fully convolutional graph neural networks for parametric virtual try-on", "year": "2020" }, { "authors": "Shaofei Wang; Katja Schwarz; Andreas Geiger; Siyu Tang", "journal": "", "ref_id": "b64", "title": "Arah: Animatable volume rendering of articulated human sdfs", "year": "2022" }, { "authors": "Ziyan Wang; Timur Bagautdinov; Stephen Lombardi; Tomas Simon; Jason Saragih; Jessica Hodgins; Michael Zollhofer", "journal": "", "ref_id": "b65", "title": "Learning compositional radiance fields of dynamic human heads", "year": "2021" }, { "authors": "Qianyi Wu; Xian Liu; Yuedong Chen; Kejie Li; Chuanxia Zheng; Jianfei Cai; Jianmin Zheng", "journal": "", "ref_id": "b66", "title": "Object-compositional neural implicit surfaces", "year": 
"2022" }, { "authors": "Donglai Xiang; Timur Bagautdinov; Tuur Stuyck; Fabian Prada; Javier Romero; Weipeng Xu; Shunsuke Saito; Jingfan Guo; Breannan Smith; Takaaki Shiratori", "journal": "TOG", "ref_id": "b67", "title": "Dressing avatars: Deep photorealistic appearance for physically simulated clothing", "year": "2022" }, { "authors": "Yiheng Xie; Towaki Takikawa; Shunsuke Saito; Or Litany; Shiqin Yan; Numair Khan; Federico Tombari; James Tompkin; Vincent Sitzmann; Srinath Sridhar", "journal": "CGF", "ref_id": "b68", "title": "Neural fields in visual computing and beyond", "year": "2022" }, { "authors": "Yuliang Xiu; Jinlong Yang; Dimitrios Tzionas; Michael J Black", "journal": "", "ref_id": "b69", "title": "Icon: implicit clothed humans obtained from normals", "year": "2022" }, { "authors": "Hongyi Xu; Eduard Gabriel Bazavan; Andrei Zanfir; Rahul William T Freeman; Sukthankar", "journal": "", "ref_id": "b70", "title": "Ghum & ghuml: Generative 3d human shape and articulated pose models", "year": "2020" }, { "authors": "Yang Xue; Yuheng Li; Krishna Kumar Singh; Yong Jae Lee", "journal": "", "ref_id": "b71", "title": "Giraffe hd: A high-resolution 3d-aware generative model", "year": "2022" }, { "authors": "Bangbang Yang; Yinda Zhang; Yinghao Xu; Yijin Li; Han Zhou; Hujun Bao; Guofeng Zhang; Zhaopeng Cui", "journal": "", "ref_id": "b72", "title": "Learning object-compositional neural radiance field for editable scene rendering", "year": "2021" }, { "authors": "Hong-Xing Yu; Leonidas J Guibas; Jiajun Wu", "journal": "", "ref_id": "b73", "title": "Unsupervised discovery of object radiance fields", "year": "2022" }, { "authors": "Tao Yu; Zerong Zheng; Kaiwen Guo; Pengpeng Liu; Qionghai Dai; Yebin Liu", "journal": "", "ref_id": "b74", "title": "Function4d: Real-time human volumetric capture from very sparse consumer rgbd sensors", "year": "2021" }, { "authors": "Heming Zhu; Yu Cao; Hang Jin; Weikai Chen; Dong Du; Zhangye Wang; Shuguang Cui; Xiaoguang Han", "journal": "", "ref_id": "b75", "title": "Deep fashion3d: A dataset and benchmark for 3d garment reconstruction from single images", "year": "2020" }, { "authors": "Jun-Yan Zhu; Philipp Krahenbuhl; Eli Shechtman; Alexei A Efros", "journal": "", "ref_id": "b76", "title": "Learning a discriminative model for the perception of realism in composite images", "year": "2015" } ]
[ { "formula_coordinates": [ 3, 382.2, 554.89, 163.58, 11.03 ], "formula_id": "formula_0", "formula_text": "o(x c ) = O(x c , G(z)),(1)" }, { "formula_coordinates": [ 3, 342.2, 680.86, 203.58, 30.5 ], "formula_id": "formula_1", "formula_text": "x d = n b i=1 W i (N (x c , β), z) • B i (β, θ) • x c ,(2)" }, { "formula_coordinates": [ 4, 48.95, 553.38, 237.42, 21.61 ], "formula_id": "formula_2", "formula_text": "h = (G h , O h , F h , D h )" }, { "formula_coordinates": [ 4, 111.78, 655.11, 175.25, 11.72 ], "formula_id": "formula_3", "formula_text": "(o h , f h ) = O h (x c , G h (z h )),(3)" }, { "formula_coordinates": [ 4, 122.61, 670.05, 160.54, 11.72 ], "formula_id": "formula_4", "formula_text": "d h = F h (x c , G h (z h )). (4" }, { "formula_coordinates": [ 4, 283.16, 672.44, 3.87, 8.64 ], "formula_id": "formula_5", "formula_text": ")" }, { "formula_coordinates": [ 4, 372.35, 353.87, 173.43, 11.72 ], "formula_id": "formula_6", "formula_text": "(o o , f o ) = O o (x c , G o (z o )),(5)" }, { "formula_coordinates": [ 4, 373.31, 593.11, 172.47, 11.72 ], "formula_id": "formula_7", "formula_text": "o comp = O comp (x c , f h , f o )(6)" }, { "formula_coordinates": [ 5, 342.34, 484.86, 203.44, 71.5 ], "formula_id": "formula_8", "formula_text": "L th = BCE((O d th (x c , G th (z th )), o gt ) (7) L bone = BCE((O th (x bone , G th (z th )), 1) (8) L joint = ∥W (x joint , z th ) -w gt ∥ (9) L warp = ∥N (v(β), β) -v(β 0 )∥ (10) L reg th = ∥z th ∥(11)" }, { "formula_coordinates": [ 5, 358.58, 650.87, 187.2, 12.71 ], "formula_id": "formula_9", "formula_text": "L sdf = |F d th (x c , G th (z th )) -d gt |(12)" }, { "formula_coordinates": [ 5, 350.18, 667.39, 195.6, 12.71 ], "formula_id": "formula_10", "formula_text": "L nml = ∥∇F d th (x c , G th (z th )) -n gt ∥(13)" }, { "formula_coordinates": [ 5, 323.31, 683.57, 222.47, 26.69 ], "formula_id": "formula_11", "formula_text": "L igr = (∥∇F th (x c , G th (z th ))∥ -1) 2 (14) L bbox = exp(-α • |F th (x c , G th (z th ))|), α ≫ 1 (15)" }, { "formula_coordinates": [ 6, 88.87, 158.75, 198.16, 26.67 ], "formula_id": "formula_12", "formula_text": "L sh = BCE((O d sh (x c , G sh (z sh )), o gt ) (16) L reg sh = ∥z sh ∥(17)" }, { "formula_coordinates": [ 6, 87.37, 307.83, 199.66, 12.69 ], "formula_id": "formula_13", "formula_text": "L comp = BCE((O d comp (x c , f h , f o ), o gt )(18)" }, { "formula_coordinates": [ 6, 63.24, 324, 219.64, 11.72 ], "formula_id": "formula_14", "formula_text": "L o = BCE((O o (x c , G o (z o )), (1 -o h ) • o comp ) (19" }, { "formula_coordinates": [ 6, 282.88, 326.39, 4.15, 8.64 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 6, 87.83, 340.53, 199.2, 41.61 ], "formula_id": "formula_16", "formula_text": "L f it = BCE((O d sh (x c , G sh (z sh )), o gt ) (20) L reg sh = ∥z sh -z sh ∥ (21) L reg o = ∥z o ∥(22)" }, { "formula_coordinates": [ 12, 310.02, 151.28, 233.43, 20.91 ], "formula_id": "formula_17", "formula_text": "L M th = L th + λ bone L bone + λ joint L joint + λ warp L warp (23" }, { "formula_coordinates": [ 12, 541.63, 163.55, 4.15, 8.64 ], "formula_id": "formula_18", "formula_text": ")" }, { "formula_coordinates": [ 12, 327.7, 179.18, 198.58, 9.65 ], "formula_id": "formula_19", "formula_text": "+λ reg th L reg th + L sdf + L nml + L igr + L bbox ," }, { "formula_coordinates": [ 12, 363.91, 259.55, 181.87, 10.32 ], "formula_id": "formula_20", "formula_text": "L M sh = L sh + λ reg sh L reg sh ,(24)" }, { "formula_coordinates": [ 12, 360.92, 423.62, 184.86, 24.59 ], "formula_id": 
"formula_21", "formula_text": "L = L comp + L o + λ f it L f it (25) +λ reg sh L reg sh + λ reg o L reg o ," } ]
2023-09-30
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b11", "b25", "b0", "b48", "b6", "b8", "b35", "b22", "b13", "b31", "b28", "b19", "b40", "b5", "b28", "b19", "b40", "b5", "b31", "b20", "b31", "b20", "b17", "b8", "b28", "b40", "b19", "b5", "b7", "b42", "b31", "b20", "b10", "b16", "b3", "b24", "b23", "b41", "b4" ], "table_ref": [], "text": "Compositional visual recognition is a fundamental characteristic of human intelligence (Lake et al., 2017) but it is challenging for modern deep learning systems. For example, humans can easily recognize unseen sliced tomatoes after seeing sliced potatoes and red tomatoes. Such a compositional zero-shot learning (CZSL) capability is valuable in that, novel visual concepts from a huge combinatorial semantic space could be recognized without \"seeing\" any of their training data. For example, C-GQA (Naeem et al., 2021) dataset contains 413 states and 674 objects. This implies a total of at least 278K compositional classes in an open world while only 2% of them are accessible in training. Therefore, CZSL can significantly reduce the need for large-scale training data.\nTraditional vision-based methods either directly learn the visual feature of compositions, or try to first decompose the visual data into representations of simple primitives, i.e., states and objects, and then learn to re-compose the compositions (Misra et al., 2017;Atzmon et al., 2020;Zou et al., 2020;Huynh & Elhamifar, 2020;Karthik et al., 2022;Tokmakov et al., 2019;Naeem et al., 2021;Zhang et al., 2022b;Mancini et al., 2021;Li et al., 2022). Thanks to the recent large pre-trained vision-language models (VLM) such as CLIP (Radford et al., 2021), recent state-of-the-art CZSL methods have been developed (Nayak et al., 2023;Lu et al., 2023;Xu et al., 2022;Huang et al., 2023). For instance, CSP (Nayak et al., 2023) inherits the hard prompt template of the CLIP, i.e., a photo of [state][object] where only the embeddings of the state-object pairs are trained. The following methods (Lu et al., 2023;Xu et al., 2022;Huang et al., 2023) use soft prompt introduced in CoOp (Zhou et al., 2022b), where the embeddings of the prompt template are jointly optimized, leading to a better CZSL performance. The impressive performance of CLIP-based CZSL methods benefits from the sufficiently good feature alignment between the image and text modalities, and the prompting techniques for adapting the aligned features to recognizing compositional classes. Despite the success of existing CLIP-based methods, we find several key considerations to prompt the pre-trained CLIP for better CZSL modeling. First, the diversity and informativeness of prompts are both important to distinguish between compositional classes. CZSL can be treated as zeroshot learning on fine-grained categories, which requires a fine-grained context to prompt the CLIP model (Radford et al., 2021;Lu et al., 2022). However, to contextualize a class with fine granularity, the hard prompt in Radford et al. (2021) suffers from the heuristic design of prompt templates, and a single prompt for each class lacks diversity to capture the intra-class variance of visual data (Fig. 1a). Though the ProDA (Lu et al., 2022) proposes to learn a collection of prompts that formulate classspecific distribution to address the diversity, the lack of language informativeness in their prompts limits their performance on fine-grained compositional categories. 
Second, the entanglement between visual primitives, e.g.red and tomatoes in Fig. 1b, incurs difficulty in learning decomposable visual representations that are useful for compositional generalization (Liu et al., 2022;Karthik et al., 2022), while such a capability is missing in (Nayak et al., 2023;Xu et al., 2022). Though the more recent work (Lu et al., 2023;Huang et al., 2023) learn to decompose the primitives and considers the re-composed compositional predictions, their language-only decomposition and probability-level mixup potentially limit the generalizability in the open-world.\nIn this paper, we propose a novel CLIP-based method for the CZSL task by prompting the languageinformed distributions (PLID) over both the compositional and primitive categories. To learn the diverse and informative textual class representations, the PLID leverages off-the-shelf large language models (LLM) to build the class-specific distributions and to enhance the class embeddings. Furthermore, we propose a visual language primitive decomposition (VLPD) module to decompose the image data into simple primitives. Eventually, the compositional classification is enhanced by our stochastic logit mixup (SLM), which takes the merits of both the compositional and primitive recognitions. The proposed PLID shows state-of-the-art performance on CZSL benchmarks such as MIT-States (Isola et al., 2015), UT-Zappos (Yu &Grauman, 2014), andC-GQA (Naeem et al., 2021).\nNote that our method is orthogonal to the existing hard prompt (Radford et al., 2021), soft prompt tuning (Zhou et al., 2022b), and prompt distribution learning (Lu et al., 2022;Kwon et al., 2023;Liu et al., 2023;Derakhshani et al., 2023). We advocate prompting the distribution of informative LLM-based class descriptions. From a classification perspective, this is grounded on the classificationby-description (Menon & Vondrick, 2023;Maniparambil et al., 2023;Yan et al., 2023;He et al., 2023), that LLM-generated text enables more informative class representations. Compared to the deterministic soft/hard prompt aforementioned, our distribution modeling could capture the intraclass diversity for better zero-shot generalization. Compared to the existing prompt distribution learning approaches, the class context is more linguistically interpretable and provides fine-grained descriptive information about the class. Our method is also parameter-efficient without the need to optimize a large collection of prompts. Specific to the CZSL task, the enhanced class embeddings by LLM descriptions enable visual language primitive decomposition and decision fusion in both compositional and primitive space, which eventually benefits the generalization to the unseen.\nIn summary, the contributions are as follows. a) We develop a PLID method that advocates prompting the language-informed distribution for compositional zero-shot learning, which is orthogonal to existing soft/hard and distributional prompt learning. b) We propose primitive decomposition and stochastic logit mixup to fuse the classification decision from compositional and primitive predictions. c) We empirically show that PLID could achieve superior performance to prior arts in both the closed-world and open-world settings on MIT-States, UT-Zappos, and C-GQA datasets." 
}, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b31", "b34", "b9", "b20", "b10", "b16", "b3", "b27", "b14", "b18", "b25", "b0", "b6", "b48", "b8", "b17", "b21", "b43", "b12", "b19", "b40", "b5" ], "table_ref": [], "text": "Prompt Learning in VLM Vision-Language Models (VLM) such as the CLIP (Radford et al., 2021) pre-trained on web-scale datasets recently gained substantial attention for their strong zeroshot recognition capability on various downstream tasks. Such a capability is typically achieved by performing prompt engineering to adapt pre-trained VLMs. Early prompting technique such as the hard prompt in CLIP uses the heuristic template \"a photo of [CLS]\" as the textual input. Recently, the soft prompt tuning method in CoOp (Zhou et al., 2022b), CoCoOp (Zhou et al., 2022a), and ResPT (Razdaibiedina et al., 2023) that uses learnable embedding as the textual context of class names significantly improved the model adaptation performance. This technique is further utilized in MaPLe (Khattak et al., 2023) that enables multi-modal prompt learning for both image and text. However, the prompts of these methods are deterministic and lack the diversity to capture the appearance variety in fine-grained visual data, so they are prone to overfitting the training data. To handle this issue, ProDA (Lu et al., 2022) explicitly introduces a collection of soft prompts to construct the class-specific Gaussian distribution, which results in better zero-shot performance and inspires the recent success of PPL (Kwon et al., 2023) in the dense prediction task. Similarly, the PBPrompt (Liu et al., 2023) uses neural networks to predict the class-specific prompt distribution and utilizes optimal transport to align the stochastically sampled soft prompts and image patch tokens. The recent work (Derakhshani et al., 2023) assumes the latent embedding of prompt input follows a Gaussian prior and adopts variational inference to learn the latent distribution. In this paper, in order to take the merits of the informativeness of hard prompt and the diversity of distributional modeling, we adopt the soft prompt to adapt the distributions supported by LLM-generated class descriptions.\nCompositional Zero-Shot Learning (CZSL) For a long period, the CZSL task has been studied from a vision-based perspective in literature. They either directly learn the compositional visual features or disentangle the visual features into simple primitives, i.e., states and objects. For example, (Nagarajan & Grauman, 2018;Li et al., 2020;Naeem et al., 2021) performs a direct classification by projecting the compositional visual features into a common feature space, and (Lu et al., 2016;Misra et al., 2017;Atzmon et al., 2020;Huynh & Elhamifar, 2020;Zou et al., 2020;Karthik et al., 2022;Liu et al., 2022) decompose the visual feature into simple primitives so that the compositional recognition can be achieved by learning to recompose from the primitives. Though the recent largescale pre-trained CLIP model shows impressive zero-shot capability, it is found to struggle to work well for compositional reasoning (Ma et al., 2023;Yuksekgonul et al., 2023;Lewis et al., 2022). Thanks to the recent prompt learning (Zhou et al., 2022b), the CZSL task has been dominated by CLIP-based approaches (Nayak et al., 2023;Lu et al., 2023;Xu et al., 2022;Huang et al., 2023). 
The common idea is to prompt the frozen CLIP model to separately learn the textual embeddings of simple primitives, which empirically show strong compositionality for zero-shot generalization. However, these methods tend to overfit due to the lack of prompt diversity or language informativeness. In this paper, based on the frozen CLIP, we leverage LLMs to enhance the compositionality of text embeddings and propose to decompose both the image and text modalities for better compositional recognition in an open world." }, { "figure_ref": [ "fig_0" ], "heading": "PRELIMINARIES", "publication_ref": [ "b31" ], "table_ref": [], "text": "CZSL Task Formulation The CZSL task aims to recognize images of a compositional category y ∈ C, where the semantic space C is a Cartesian product between the state space S = {s 1 , . . . , s |S| } and the object space O = {o 1 , . . . , o |O| }, i.e., C = S × O. For example, as shown in Fig. 1, a model trained on images of red apples and sliced tomatoes needs to additionally recognize an image of a sliced apple. In training, only a set of seen compositions is available. In closed-world testing, the model needs to recognize images from both the seen compositions in C (s) and the unseen compositions in C (u) that are assumed to be feasible, where the cardinality |C (s) ∪ C (u) | ≪ |C| since most of the compositions in C are practically not feasible. In open-world testing, the model needs to recognize images given any composition in C.
VLMs for CZSL Large pre-trained VLMs such as CLIP (Radford et al., 2021) have recently been utilized by CSP (Nayak et al., 2023) for the CZSL task. The core idea of CSP is to represent the text embeddings of states in S and objects in O as learnable parameters and contextualize them with the hard prompt template "a photo of [s][o]" as the input of the CLIP text encoder, where [s] ∈ S and [o] ∈ O. Given an image x, by using the cosine similarity (cos) as the logit, the class probability of the composition y is defined as p θ (y|x) = softmax(cos(v, t y )), where θ are the |S| + |O| learnable parameters, and v and t y are the image feature and class text embedding, respectively.
In training, the prediction p θ (ŷ|x) is supervised by a multi-class cross-entropy loss. In CZSL testing, a test image is recognized by finding the compositional class c ∈ C which has the maximum cos(v, t c ). The CSP method is simple, parameter-efficient, and largely outperforms traditional approaches. However, due to the lack of diversity and informativeness in prompting, the zero-shot capability of CLIP is not fully exploited by CSP for the CZSL task." }, { "figure_ref": [ "fig_1" ], "heading": "PROPOSED METHOD", "publication_ref": [], "table_ref": [], "text": "Overview Fig. 2 shows an overview of the PLID. The basic idea is to use LLMs to generate sentence-level descriptions for each compositional class, and learn to prompt the class-wise text distributions (supported by the descriptions) to be aligned with image data. Besides, we introduce visual language primitive decomposition (VLPD) and stochastic logit mixup (SLM) to enable recognition at both compositional and primitive levels.
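As a reference point for the following subsections, here is a minimal sketch of the frozen-CLIP compositional scoring that CSP-style methods (and our method) build on; the function name and the fixed prompt template are illustrative assumptions, not the exact implementation.

```python
import torch
import clip  # OpenAI CLIP; any frozen VLM with image/text encoders fits this sketch

@torch.no_grad()
def compositional_logits(model, image, pairs, device="cuda"):
    """Score a preprocessed image against every (state, object) pair via cosine similarity."""
    prompts = [f"a photo of {s} {o}" for s, o in pairs]
    t = model.encode_text(clip.tokenize(prompts).to(device))   # (|pairs|, d) class embeddings
    v = model.encode_image(image.to(device))                   # (1, d) image feature
    t = t / t.norm(dim=-1, keepdim=True)
    v = v / v.norm(dim=-1, keepdim=True)
    return v @ t.T  # (1, |pairs|) cosine logits; softmax over pairs gives p(y|x)
```

Soft-prompt methods keep this scoring interface and replace the fixed template with learnable context vectors.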
In testing, an image is recognized by fusing the decisions from the directly predicted and the recomposed compositions." }, { "figure_ref": [ "fig_7", "fig_7", "fig_7" ], "heading": "PROMPTING LANGUAGE-INFORMED DISTRIBUTION Motivation", "publication_ref": [ "b31", "b20", "b10", "b16", "b3", "b24", "b24", "b19", "b5", "b40", "b19", "b37", "b20", "b20", "b20", "b48", "b8", "b17", "b39", "b36", "b19" ], "table_ref": [ "tab_4" ], "text": "To adapt the large pre-trained CLIP (Radford et al., 2021) to downstream tasks, recent distributional prompt learning (Lu et al., 2022;Kwon et al., 2023;Liu et al., 2023;Derakhshani et al., 2023) shows the importance of context diversity by distribution modeling for strong generalization. Motivated by the inherent fine-granularity of compositional recognition in the CZSL task, we argue that not only the context diversity but also the context informativeness by language modeling, are both important factors to adapt CLIP to the zero-shot learning task. The insight behind this is that the sentence-level descriptions could contextualize compositional classes in a more fine-grained manner than the prior arts. Therefore, we propose to address the two factors by learning to Prompt the Language-Informed Distributions (PLID) for the CZSL task.\nCompositional Class Description To generate diverse and informative text descriptions for each compositional class, we adopt a similar way as (Menon & Vondrick, 2023) by prompting an LLM that shows instruction-following capability. An example below shows the format of the LLM instruction. m is a linguistically complete sentence. Different to (Menon & Vondrick, 2023) that aims to interpret the zero-shot recognition by attribute phrases from LLMs, we utilize the LLM-based sentence-level descriptions in the CZSL task for two benefits: 1) provide diverse and informative textual context for modeling the class distributions that capture the intra-class variance, and 2) enhance the class embedding with fine-grained descriptive information.\nLanguage-Informed Distribution (LID) For both the image and text modalities, we use the frozen CLIP model and learnable feature enhancement modules to represent the visual and language features, which are also adopted in existing CZSL literature (Lu et al., 2023;Huang et al., 2023).\nSpecifically, for the text modality, each composition y is tokenized and embedded by CLIP embedding layer and further prompted by concatenating with learnable context vectors, i.e.,\n\"[p 1 ] . . . [p L ][s][o]\",\nwhere p 1:L is initialized by \"a photo of\" and shared with all classes. Followed by the frozen CLIP text encoder E T , the embedding of class y is\nq y = E T ([p 1 ] . . . [p L ][s][o]) where q y ∈ R d .\nFollowing the CZSL literature (Xu et al., 2022;Lu et al., 2023), here the soft prompt p 1:L and primitive embeddings [s][o] are learnable while E T is frozen in training.\nTo simultaneously address the lack of diversity and informativeness of the soft prompts, we propose to formulate the class-specific distributions supported by the texts S (y) and learn to prompt these distributions. Specifically, we encode S (y) by the frozen CLIP text encoder: D (y) = E T (S (y) ), where D (y) ∈ R M ×d . Then, we use D (y) to enhance q y by t y = Ψ TFE (q y , D (y) ) where Ψ TFE is the text feature enhancement (TFE) implemented by cross attention (Vaswani et al., 2017). Similarly, given an image x, to mitigate the loss of fine-grained cues, we augment it with N views to be X = {x (1) , . . . , x (N ) }. 
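A minimal sketch of such a cross-attention enhancement module follows; it can serve for both TFE (the prompted class embedding attending to the M description embeddings) and VFE (the global image feature attending to the N augmented views). The class name and residual/normalization choices are our assumptions; the cross attention and the 0.5 dropout follow the text.

```python
import torch
import torch.nn as nn

class CrossAttnEnhancer(nn.Module):
    """Sketch of Ψ_TFE / Ψ_VFE: enhance a query embedding with a set of support embeddings."""
    def __init__(self, dim=768, num_heads=8, dropout=0.5):  # 768 = CLIP ViT-L/14 embedding dim
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, dropout=dropout, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query, support):
        # query:   (B, 1, d)  e.g. prompted class embedding q_y, or the CLIP image feature
        # support: (B, M, d)  e.g. description embeddings D^(y), or the N augmented-view features
        attended, _ = self.attn(query, support, support)
        return self.norm(query + attended).squeeze(1)  # residual + norm -> enhanced t_y (or v)
```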
Followed by the frozen CLIP visual encoder E V , the feature of x is enhanced by v = Ψ VFE (E V (x), E V (X)), where Ψ VFE is the visual feature enhancement (VFE) module. We treat the enhanced text feature t y of class y as the class mean and t y + D (y) as the distribution support points (DSP) that follow the Gaussian N (t y , Σ y ). The motivation of t y + D (y) is to enable the flexibility of the DSP to traverse around in the d-dimensional space during training, since t y is trainable while D (y) is pre-trained. For all |C (s) | (denoted as C) seen compositional classes, we build joint Gaussian distributions N (µ 1:C , Σ 1:C ) similar to ProDA (Lu et al., 2022), where the means µ 1:C ∈ R C×d are given by t y over the C classes, and the covariance Σ 1:C ∈ R d×C×C is defined across the C classes for each feature dimension from the DSP.
Remark: Compared to ProDA (Lu et al., 2022), which learns a collection of non-informative prompts, our DSP are language-informed by D (y), which provides more fine-grained descriptive information to help recognition and decomposition. Besides, our method is more parameter-efficient than ProDA since we only have a single soft prompt to learn. This is especially important for the CZSL task, where there is a huge number of compositional classes. Lastly, we highlight the benefit of performing the intra- and inter-class covariance optimization induced by the learning objective of distribution modeling, which will be introduced below.
Learning Objective Given the visual feature v ∈ R d of image x and the text embeddings t 1:C from the class-wise joint distributions N (µ 1:C , Σ 1:C ), according to (Lu et al., 2022), minimizing the cross-entropy loss is equivalent to minimizing the upper bound of the negative log-likelihood (NLL):
$\mathrm{NLL}(x, y) = -\log \mathbb{E}_{t_{1:C}}\, p(y \mid v, t_{1:C}) \;\leq\; -\log \frac{\exp(h_y/\tau)}{\sum_{k=1}^{C} \exp\big((h_k + h^{(m)}_{k,y})/\tau\big)} \;:=\; \mathcal{L}_y(x, y), \quad (1)$
where the compositional logit h y = cos(v, t y ) and h (m) k,y is the pairwise margin between classes k and y from the ProDA bound (Lu et al., 2022).
Motivation Considering the fundamental challenge in the CZSL task that the visual primitives are inherently entangled in an image, an unseen composition in testing can hardly be identified if its object (or state) embedding is overfitted to the visual data of seen compositions. To this end, it is better to inherit the benefits of the decompose-recompose paradigm (Zou et al., 2020;Karthik et al., 2022;Liu et al., 2022) by decomposing visual features into simple primitives, i.e., states and objects, from which the recomposed decision can be leveraged for zero-shot recognition. Thanks to the compositionality of CLIP (Wolff et al., 2023;Trager et al., 2023), this can be achieved by the visual-language primitive decomposition (VLPD). See Fig. 4; we explain it below. Based on VLPD, we propose the stochastic logit mixup to fuse the directly learned compositions and the recomposed ones.
VLPD Specifically, we use two parallel neural networks f s and f o to decompose v into the state visual feature f s (v) and the object visual feature f o (v), respectively, under the supervision of text features. To get the supervision, we group t y over the subset Y o , in which all compositions share the same given object o (see vertical ellipses in Fig. 4), and group t y over the subset Y s , in which all compositions share the same given state s (see horizontal ellipses in Fig. 4).
Thus, given a state s and an object o, the predicted object logit h s and state logit h o are computed by\nh s = cos   f s (v), 1 |Y s | y∈Ys t y   , h o = cos   f o (v), 1 |Y o | y∈Yo t y   .\n(2)\nNote that we use f s and f o to decompose visual features v, which is different from DFSP (Lu et al., 2023) that only decomposes the compositional logits. In experiments, we show the superiority of performing both visual and language decomposition in Table 5.\nFollowing the spirit of distribution modeling, we also introduce the distributions over state and object categories, where the corresponding DSP, denoted as D (s) and D (o) , are obtained by grouping D (y) over Y s and Y o , respectively. This leads to the following upper-bounded cross-entropy losses: With the individual f s and f o , it is safe to have p(y|v) = p(s|v) • p(o|v) that induces p(y|v) ∝ exp((h s + h o )/τ ). Therefore, the recomposed logit matrix H (rc) ∈ R |S|×|O| is a Cartesian sum between h (s) ∈ R |S| and h (o) ∈ R |O| , i.e., H (rc) = h (s) ⊕ h (o)⊤ , where h (s) contains all state logits and h (o) contains all object logits. See the red and blue squares in Fig. ( 4), respectively.\nL s (x, s) = -log exp(h s /τ ) |S| k=1 exp((h k + h (m) k,s )/τ ) , L o (x, o) = -log exp(h o /τ ) |O| k=1 exp((h k + h (m) k,o )/τ ) ,(3) where h" }, { "figure_ref": [], "heading": "Stochastic Logit Mixup Given the recomposed logit h (rc) y", "publication_ref": [ "b2", "b5", "b1" ], "table_ref": [], "text": "∈ H (rc) and the directly learned compositional logit h y , we propose a stochastic logit mixup (SLM) method for decision fusion by sampling a coefficient λ from a Beta prior distribution:\nhy = (1 -λ)h y + λh (rc) y , λ ∼ Beta(a, b),(4)\nwhere (a, b) are hyperparameters indicating the prior preference for each decision. In training, we replace the h y and h k of Eq. ( 1) with the mixed logit hy and hk , respectively. In testing, we use the expectation of the Beta distribution which is a/(a + b).\nThe insights behind the SLM are that the Beta distribution indicates a prior to h y or h (rc)\ny . It provides the flexibility of which compositional decision to trust in, and the stochasticity of the coefficient λ inherently introduces a regularization effect in training (Carratino et al., 2022). Moreover, compared to softmax probability mixup (Huang et al., 2023), our logit mixup avoids the limitation of softmax normalization over a huge number of compositional classes, that rich information of class relationship is lost after softmax normalization according to (Bang et al., 2022). Such class relationships are even more important in the CZSL problem as indicated in (Naeem et al., 2021). " }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets and Evaluation", "publication_ref": [ "b7", "b42", "b30", "b28", "b19", "b28", "b29" ], "table_ref": [], "text": "We perform experiments on three CZSL datasets, i.e., MIT-States (Isola et al., 2015), UT-Zappos (Yu & Grauman, 2014), and C-GQA (Naeem et al., 2021), following the standard splitting protocols in CZSL literature (Purushwalkam et al., 2019;Nayak et al., 2023;Lu et al., 2023). See dataset details in the Appendix E. We report the metrics in both closed-world (CW) and open-world (OW) settings, including the best seen accuracy (S), the best unseen accuracy (U), the best harmonic mean (H) between the seen and unseen accuracy, and the area under the curve (AUC) of unseen versus seen accuracy. 
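For reference, a minimal sketch of how the reported quantities (best seen S, best unseen U, best harmonic mean H, and the AUC of unseen versus seen accuracy) can be computed from a seen/unseen trade-off curve is given below. The assumption that this curve is traced by sweeping a calibration bias added to the seen-class scores follows common CZSL practice and is not spelled out in the text above.

```python
# Sketch: S, U, H, and AUC from per-bias seen/unseen accuracies (assumed protocol).
import numpy as np

def czsl_metrics(seen_acc, unseen_acc):
    """seen_acc, unseen_acc: accuracies measured at each calibration-bias value."""
    seen_acc, unseen_acc = np.asarray(seen_acc), np.asarray(unseen_acc)
    best_seen = seen_acc.max()                                   # S
    best_unseen = unseen_acc.max()                               # U
    hm = 2 * seen_acc * unseen_acc / np.clip(seen_acc + unseen_acc, 1e-8, None)
    best_hm = hm.max()                                           # H
    order = np.argsort(seen_acc)                                 # unseen-vs-seen curve
    auc = np.trapz(unseen_acc[order], seen_acc[order])           # area under that curve
    return best_seen, best_unseen, best_hm, auc
```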
For OW evaluation, following the CSP (Nayak et al., 2023), we adopt the feasibility calibration by GloVe (Pennington et al., 2014) to filter out infeasible compositions." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b19" ], "table_ref": [], "text": "We implement PLID based on the CSP codebase in PyTorch. The CLIP architecture ViT-L/14 is used by default. Unless otherwise mentioned, we generate M = 64 texts, augment an image with N = 8 views, and adopt Beta(1, 9) as the prior. The dropout rates of TFE and VFE are set to 0.5. We use a single NVIDIA 6000Ada GPU for training and testing. Following (Lu et al., 2023), we use the Adam optimizer with a base learning rate of 5e-5 and decay it stepwise by a factor of 0.5 every 5 training epochs for a total of 20 epochs. Other details are in Appendix E." }, { "figure_ref": [ "fig_10", "fig_12", "fig_14", "fig_13" ], "heading": "MAIN RESULTS", "publication_ref": [ "b20" ], "table_ref": [ "tab_0", "tab_1", "tab_0", "tab_2", "tab_3", "tab_3", "tab_4", "tab_0", "tab_0", "tab_4", "tab_4", "tab_0", "tab_0", "tab_4", "tab_4" ], "text": "The results are reported in Table 1. We compare with the CZSL baselines that are developed on the same frozen CLIP model. The table shows that under both the closed-world and open-world test settings, our proposed PLID method achieves the best performance on most metrics across the three datasets. Note that ProDA (Lu et al., 2022) also formulates class-wise Gaussian distributions to address the intra-class diversity, but it only outperforms CLIP and CoOp across the metrics. This indicates the importance of both diversity and informativeness for the CZSL task. On the UT-Zappos dataset, PLID outperforms DFSP in terms of S, H, and AUC by 0.6%, 5.2%, and 2.7%, respectively, while it is inferior to DFSP on the best unseen metric. The potential reason is that DFSP fuses the text features into the image features, which better preserves the generalizability of CLIP for the small downstream UT-Zappos dataset. Note that the HPL method uses prompt learning and recognition at both the compositional and primitive levels, but it performs only slightly better than CSP and far worse than our method, indicating that traditional prompt learning helps but is not enough to adapt the CLIP model to the CZSL task. Ablation Study In Table 2, we show the contribution of the major components of the PLID model. It is clear that all components are beneficial. Here we highlight some important observations: (1) Our LID method significantly improves the performance compared to the baseline (a) and is much better than ProDA (20.43% vs. 16.1% AUC cw ) when referring to Table 1. This implies that modeling the distribution in the manner of ProDA alone is not sufficient; language informativeness is critical for the CZSL task. (2) Rows (c)(d)(e) show that TFE, VFE, and OPT-1.3B each bring further performance gains. (3) Rows (f)(g) show that VLPD benefits more in the open-world setting, while the SLM contributes more in the closed-world setting. Effect of LLM In Table 3, we analyze the choice of LLMs by comparing PLID using the pre-trained T5 (Raffel et al., 2020a) and OPT (Zhang et al., 2022a). The performance varies across the CZSL datasets. Since the quality of the texts generated by OPT is much better than that of T5 (see examples in Appendix B), the results imply that the higher text quality on the large C-GQA dataset leads to better CZSL performance.
Besides, on the UT-Zappos dataset, the better OPT does not show better closed-world performance. The reason could be that UT-Zappos is too small and its commercial shoe images do not exhibit diverse visual backgrounds.
Effect of LID In Table 4, we further investigate at which semantic level the language-informed distribution (LID) should be applied. Denote the Gaussian distributions on states, objects, and compositions as N s , N o , and N y , respectively. The results in Table 4 clearly show the superiority of applying LID at all three semantic levels. This indicates the generality of language-informed distributions for many potential zero-shot or open-vocabulary recognition problems.
Design Choice of VLPD In Table 5, we validate the design choices of VLPD, comparing the model without primitive decomposition, the model that decomposes only the text into primitives, and our decomposition of both visual and language primitives (VLPD). The results show the clear advantage of our VLPD design. Note that DFSP also performs primitive decomposition, but only on the text modality. Our better performance thus indicates the need for decomposition on both the visual and language sides.
Hyperparameters In Fig. 5, we quantitatively show the impact of the number of generated text descriptions M and the number of augmented image views N . It shows that the best performance is achieved when M = 64 and N = 8. We note that more augmented image views slightly decrease the performance, which could be attributed to overfitting on the seen compositions.
In Fig. 6, we show the impact of the Beta prior parameters (a, b). We set them to (1, 1) for random sampling, (1, 9) for preference toward the composition, (9, 1) for preference toward the re-composition, and (5, 5) for equal preference, respectively. It reveals that trusting more of the directly learned composition with Beta(1, 9) achieves the best results.
Qualitative Analysis We use tSNE to visualize the generated text embeddings D and the learned DSP from our PLID model in Fig. 7, where the same set of 10 compositional classes is randomly selected from the MIT-States dataset. It shows that by learning the distribution of each composition from the LLM-generated texts with Eq. (1), Eq. (3), and the TFE module, the compositional class embeddings are distributed more compactly within each class (small intra-class variance) and better separated across classes (large inter-class distance). In Appendix F, we show primitive-level tSNE embedding visualizations that reveal the same observation.
In Fig. 8, we show some success and failure cases of our PLID model. For example, the heavy water case indicates an incorrect label, while PLID correctly predicts it as huge wave. This shows the robustness of PLID against noisy labels. The last two failure cases reveal that PLID can still make mistakes on the state prediction (cooked pasta) and the object prediction (engraved floor), which indicates there is still a long way to go for the CZSL problem."
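To recap the decision fusion analyzed above before moving on, the sketch below shows the recomposed logits as a Cartesian sum of state and object logits and the stochastic logit mixup of Eq. (4), with the Beta expectation a/(a+b) used at test time. Tensor shapes and function names are illustrative assumptions.

```python
# Sketch of the recomposed logits and the stochastic logit mixup (SLM).
import torch
from torch.distributions import Beta

def recompose(h_state, h_object):
    """H^(rc)[s, o] = h_state[s] + h_object[o]; shapes (|S|,) and (|O|,) -> (|S|, |O|)."""
    return h_state.unsqueeze(1) + h_object.unsqueeze(0)

def mix_logits(h_comp, h_recomp, a=1.0, b=9.0, training=True):
    """Eq. (4): mix direct and recomposed compositional logits (same class ordering)."""
    if training:
        lam = Beta(a, b).sample()            # stochastic preference between the two decisions
    else:
        lam = torch.tensor(a / (a + b))      # Beta expectation at test time
    return (1.0 - lam) * h_comp + lam * h_recomp
```

With the reported best prior Beta(1, 9), the expectation 0.1 means the directly learned compositional logit dominates at inference, matching the observation in Fig. 6.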
}, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a novel CLIP-based compositional zero-shot learning (CZSL) method named PLID. It leverages the generated text description of each class from large language models to formulate the class-specific Gaussian distributions. By softly prompting these language-informed distributions, PLID could achieve diversified and informative class embeddings for fine-grained compositional classes. Besides, we decompose the visual embeddings of image data into simple primitives that contain the basic states and objects, from which the re-composed predictions are derived to calibrate the prediction by our proposed stochastic logit mixup strategy. Experimental results show the superiority of the PLID method to prior arts on all common CZSL datasets." }, { "figure_ref": [], "heading": "A BROADER IMPACT AND LIMITATIONS", "publication_ref": [], "table_ref": [], "text": "Broader Impact This work can be broadly extended to more downstream multi-modality applications, such as general zero-shot learning, text-image retrieval, text-to-image generation, etc., when the class composition is not especially taken into consideration. Besides, the central idea of LLM-grounded modality alignment is not limited to text and image, but any modality that could reveal the semantic categories in practice is promising to explore in the future. The potential negative societal impact is that, the developers should be cautious by carefully examining the societal biases indicated by the generated textual class descriptions, even though the large language models we used are publicly accessible.\nLimitations One limitation is that the primitive decomposition could be difficult to learn when the states are non-visual concepts like smelly, hot, etc., even by the pre-trained CLIP model. Another limitation is that the generated descriptions by LLMs are not grounded to the image such that some distraction from generated descriptions could be introduced." }, { "figure_ref": [], "heading": "B GENERATING COMPOSITIONAL CLASS DESCRIPTIONS", "publication_ref": [ "b4", "b15" ], "table_ref": [], "text": "In this work, we choose T5 and OPT models as the LLMs for compositional class description generation. For the T5 model, we follow the same setting as (He et al., 2023) that uses the T5-base model for word-to-sentence generation. The T5-base model was pre-trained on the Colossal Clean Crawled Corpus dataset (Raffel et al., 2020b) and finetuned on the CommonGen dataset (Lin et al., 2020). Take the painted ceiling as an example, the results from T5-base model are:\nwhere the Keywords is followed by the words of the state, object, and the word randomly picked from the set {photo, image, picture}. Using the same example painted ceiling as T5-base model, the generated sentences are: arXiv preprint -The painting of the ceiling features an intricate pattern of intricate gold-colored paint and is framed by a white background. -The ceiling has been painted with the pictures of these three characters, all arranged together. -In the picture, the ceiling is covered in bright, colorful paintwork that has been painted on by hands painted white. The colors have been selected carefully. -In the picture, the ceiling features painted decoration. The decoration resembles the surface of the sea, and has been painted in shades of blue. 
-The photograph captures both the bright colors of the painting atop the ceiling and the subtle shades of light reflecting off of it. -The large picture shows a large pattern painted onto the ceiling.\nThe blue line shows paint dripping down. -The wall behind the picture shows three different painted ceilings, in bright contrasting colors. A vibrant sky and blue skies are depicted against the dark brick wall. -The ceiling of the room depicted in the painting could very well be painted in a few hours. The details of each object are clearly defined in its placement and position. -Another photo of the same scene, this time featuring a ceiling painted in a stunning, white color. -A painted ceiling is shown, painted according to a specific design. this is a typical design that can also include decorative or functional elements. -... It is clear that the generated class descriptions are much more diverse and informative than those of the OPT model." }, { "figure_ref": [], "heading": "C COVARIANCE SHARING", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "For the CZSL task, the spatial complexity of computing the covariance matrix Σ 1:C is O(|C (s) | 2 d) which could be too heavy to compute if the number of the compositions is too large. For example, the C-GQA dataset contains 278K seen compositions which result in around 6 × 10 13 floating elements of Σ 1:C for 768-dim text features. To handle this issue, we instead implement the Σ 1:C by sharing the covariance across attributes given the same object. This implies that the model is encouraged to learn the object-level distributions. Specifically, similar to the VLPD module of the main paper, we compute the mean µ 1:|O| and covariance Σ 1:|O| over the objects by grouping t y and D (y) with object labels:\nt o = 1 |Y o | y∈Yo t y , D (o) = 1 |Y o | y∈Yo D (y) ,(5)\nwhere Y o is the subset of compositions in Y that contains the same object as y. Then, all the pairwise margins H\n(m) o ∈ R |O|×|O| in object space can be mapped back to H (m) ∈ R C×C in a compositional space by sharing it with all compositions in Y o . This could significantly reduce the computation load of the covariance while compromising the accuracy of distribution modeling.\nSince the distribution modeling for both our PLID and ProDA is not applicable to the C-GQA dataset, we use the MIT States dataset to show the negative impact of sharing the covariance (see Table 6). It shows that the covariance sharing can significantly save the GPU memory (17.6 vs 32.5 GB), while still performing much better than ProDA." }, { "figure_ref": [], "heading": "D PRIMITIVE-LEVEL GAUSSIAN MODELING", "publication_ref": [], "table_ref": [], "text": "To formulate the Gaussian distributions over the state classes and the object classes, we group the text embeddings of composition descriptions D by Eq. ( 5 \nk,o = f o (v) ⊤ A k,o f o (v),(6)\nwhere the index k ranges within [1, |S|] for computing the state classification loss L s , and ranges within [1, |O|] for computing the object classification loss L o , respectively." }, { "figure_ref": [], "heading": "E MORE IMPLEMENTATION DETAILS", "publication_ref": [ "b7", "b42", "b30", "b28", "b19" ], "table_ref": [], "text": "Datasets We perform experiments on three CZSL datasets, i.e., MIT-States (Isola et al., 2015), UT-Zappos (Yu & Grauman, 2014), and C-GQA (Naeem et al., 2021). MIT-States consists of 115 states and 245 objects, with 53,753 images in total. 
Following (Purushwalkam et al., 2019;Nayak et al., 2023;Lu et al., 2023), it is split into 1,262 seen and 300/400 unseen compositions for training and validation/testing, respectively." }, { "figure_ref": [ "fig_15" ], "heading": "F MORE RESULTS", "publication_ref": [], "table_ref": [], "text": "Primitive-level Visualization In addition to the tSNE visualization of Gaussian distributions over the composition-level classes, we provide visualizations of the primitive-level classes in Fig. 9. These figures show that our model learns better text distributions over the state classes and the object classes than those of the pre-trained LLMs." } ]
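As a code-level illustration of the covariance-sharing workaround described in Appendix C (Eq. 5) above, the sketch below builds object-level means and support points and one cross-class covariance per feature dimension over objects, which is then shared by all compositions containing that object. The tensor shapes and the exact covariance estimator are assumptions for illustration, not the released implementation.

```python
# Sketch of covariance sharing (Appendix C, Eq. 5): object-level means and DSP, with one
# |O| x |O| covariance per feature dimension shared across compositions of the same object.
import torch

def object_level_distribution(t, D, pairs, num_objects):
    """
    t:     (C, d)    enhanced composition embeddings t_y
    D:     (C, M, d) description embeddings D^(y) per composition
    pairs: list of C (state_idx, object_idx) tuples
    returns: t_o (|O|, d) and Sigma_o (d, |O|, |O|)
    """
    C, M, d = D.shape
    obj_ids = torch.tensor([o for _, o in pairs])
    # Eq. (5): average t_y and D^(y) over the subset Y_o of compositions sharing object o
    # (objects absent from the seen split would need special handling in practice)
    t_o = torch.stack([t[obj_ids == o].mean(0) for o in range(num_objects)])
    D_o = torch.stack([D[obj_ids == o].mean(0) for o in range(num_objects)])   # (|O|, M, d)
    dsp = t_o.unsqueeze(1) + D_o                     # support points t_o + D^(o)
    centered = dsp - dsp.mean(dim=1, keepdim=True)   # center over the M support points
    # per-dimension covariance across object classes: (d, |O|, |O|)
    Sigma_o = torch.einsum('kmj,ymj->jky', centered, centered) / (M - 1)
    return t_o, Sigma_o
```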
Compositional zero-shot learning (CZSL) task aims to recognize unseen compositional visual concepts, e.g., sliced tomatoes, where the model is learned only from the seen compositions, e.g., sliced potatoes and red tomatoes. Thanks to the prompt tuning on large pre-trained visual language models such as CLIP, recent literature shows impressively better CZSL performance than traditional vision-based methods. However, the key aspects that impact the generalization to unseen compositions, including the diversity and informativeness of class context, and the entanglement between visual primitives, i.e., state and object, are not properly addressed in existing CLIP-based CZSL literature. In this paper, we propose a model by prompting the language-informed distribution, aka., PLID, for the CZSL task. Specifically, the PLID leverages pre-trained large language models (LLM) to 1) formulate the language-informed class distributions which are diverse and informative, and 2) enhance the compositionality of the class embedding. Moreover, a visual-language primitive decomposition (VLPD) module and a stochastic logit mixup (SLM) strategy are proposed to dynamically fuse the decisions from the compositional and the primitive logit space. Orthogonal to the existing literature of soft, hard, or distributional prompts, our method advocates prompting the LLM-supported class distribution that leads to a better zero-shot generalization. Experimental results on MIT-States, UT-Zappos, and C-GQA datasets show the superior performance of the PLID to the prior arts.
PROMPTING LANGUAGE-INFORMED DISTRIBUTION FOR COMPOSITIONAL ZERO-SHOT LEARNING
[ { "figure_caption": "Figure 1 :1Figure 1: Challenges of compositional recognition. (a) images of the same compositional class appear differently due to diverse visual backgrounds or foregrounds. (b) red tomatoes and sliced tomatoes are visually correlated because 1) both are tomatoes object, and 2) the object tomatoes is inherently entangled with the state red, resulting in the need of primitive decomposition.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Overview of PLID. The CZSL task is formulated to align the feature of image x with the learnable text features of compositional class y = (s, o) based on frozen CLIP (ET and EV ). We propose the languageinformed distributions (LID) which are constructed by the LLM-generated class descriptions and the soft prompts p1:L for each state-object pair (s, o). The features of the image and text are enhanced by text and visual feature enhancement (TFE and VEF). Furthermore, we propose the visual language primitive decomposition (VLPD) module to recompose the compositional logits, which are further fused with the compositional logit between ty and v by our stochastic logit mix-up (SLM). With the compositional and primitive recognition, our model is jointly trained by loss functions Ly(x, y), Ls(x, s), and Lo(x, o).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Keywords: sliced, potato, picture Output: The picture features a beautifully arranged plate of thinly sliced potatoes. ### See the Appendix B for more details. For each composition y = (s, o), we generate M descriptions denoted as S (y) = {S", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "where Ψ VFE is the visual feature enhancement (VFE) by cross attention.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Hybrid prompting for intraand inter-class covariance optimization.", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "= v ⊤ A k,y v/(2τ ) and A ∈ R d×C×C is given by A k,y = Σ kk + Σ yy -Σ ky -Σ yk . The covariance A k,y indicates the correlation between the k-th out of C classes and the target class y on each of d feature dimensions. The insight of minimizing L y (x, y) is illustrated in Fig.3, which encourages minimizing intra-class variance by Σ yy and Σ kk , and maximizing inter-class separability indicated by Σ ky and Σ yk . In Appendix C, we discuss our workaround by covariance sharing when C is too large to compute A.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: VLPD for recomposing.", "figure_data": "", "figure_id": "fig_7", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "are determined the same way as h (m) k,y in Eq. (1). See details in Appendix D.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(a): the baseline that uses mean pooling of text embeddings from T5-generated sentences. (b): add distribution modeling. (c): change the mean pooling to the cross-attention. (d): augment images followed by cross-attention aggregation. (e): change T5-base LLM to the OPT-1.3B. (f): add VLPD followed by the fixed logit fusion. 
(g): change the fusion to a stochastic manner, which reaches to our full PLID.", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Impact of M and N . We set N = 8 for the Fig. 5a, while we set M = 64 for the Fig. 5b.", "figure_data": "", "figure_id": "fig_10", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Impact of (a, b). Here (1, 1) implies random sampling while (5, 5) implies equally trusted.", "figure_data": "", "figure_id": "fig_12", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Qualitative results. We show the success and failure cases of prediction on the MIT-States test set.", "figure_data": "", "figure_id": "fig_13", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: tSNE visualization of the text embeddings.", "figure_data": "", "figure_id": "fig_14", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure9: tSNE visualization of the primitive-level text embeddings (states: Fig.9a and 9b, objects: Fig.9c and 9d). This figure clearly shows that, compared to the raw embeddings by pre-trained LLMs, our method achieves better distributions over both the state and object classes.", "figure_data": "", "figure_id": "fig_15", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "CZSL results of Closed-and Open-World settings on three datasets. Baseline results are from published literature, where the PCVL was not evaluated on the C-GQA dataset such that we use \"-\" instead.", "figure_data": "MethodMIT-StatesUT-ZapposC-GQASU H AUCSU H AUCSU H AUCCLIP (Radford et al., 2021) 30.2 46.0 26.1 11.015.8 49.1 15.6 5.07.5 25.0 8.6 1.4CoOp (Zhou et al., 2022b) 34.4 47.6 29.8 13.552.1 49.3 34.6 18.820.5 26.8 17.1 4.4ProDA 1 (Lu et al., 2022)37.4 51.7 32.7 16.163.7 60.7 47.6 32.7----ClosedCSP (Nayak et al., 2023)46.6 49.9 36.3 19.464.2 66.2 46.6 33.028.8 26.8 20.5 6.2PCVL (Xu et al., 2022)48.5 47.2 35.3 18.364.4 64.0 46.1 32.2----HPL (Wang et al., 2023)47.5 50.6 37.3 20.263.0 68.8 48.2 35.030.8 28.4 22.4 7.2DFSP (Lu et al., 2023)46.9 52.0 37.3 20.666.7 71.7 47.2 36.038.2 32.0 27.1 10.5PLID49.7 52.4 39.0 22.167.3 68.8 52.4 38.738.8 33.0 27.9 11.0CLIP (Radford et al., 2021) 30.1 14.3 12.8 3.015.7 20.6 11.2 2.27.5 4.6 4.0 0.3CoOp (Zhou et al., 2022b) 34.6 9.3 12.3 2.852.1 31.5 28.9 13.221.0 4.6 5.5 0.7ProDA 1 (Lu et al., 2022)37.5 18.3 17.3 5.163.9 34.6 34.3 18.4----OpenCSP (Nayak et al., 2023)46.3 15.7 17.4 5.764.1 44.1 38.9 22.728.7 5.2 6.9 1.2PCVL (Xu et al., 2022)48.5 16.0 17.7 6.164.6 44.0 37.1 21.6----HPL (Wang et al., 2023)46.4 18.9 19.8 6.963.4 48.1 40.2 24.630.1 5.8 7.5 1.4DFSP (Lu et al., 2023)47.5 18.5 19.3 6.866.8 60.0 44.0 30.338.3 7.2 10.4 2.4PLID49.1 18.7 20.4 7.367.6 55.5 46.6 30.839.1 7.5 10.6 2.5", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation study.", "figure_data": "LID TFE VFE OPT VLPD SLMHcwAUCcwHowAUCow(a)35.4118.5617.375.56(b)✓37.0620.4318.656.50(c)✓✓37.7621.0719.056.62(d)✓✓✓37.8721.0919.706.95(e)✓✓✓✓38.8021.6719.617.01(f)✓✓✓✓✓38.4221.6920.247.31(g)✓✓✓✓✓✓38.9722.1220.417.34", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Effect of LLMs on three CZSL datasets.", "figure_data": "LLMMIT-StatesUT-ZapposC-GQAHcw AUCcw How AUCowHcw AUCcw How AUCowHcw AUCcw How AUCowT538.41 21.53 20.46 7.3454.76 40.18 44.18 28.4726.94 10.65 9.77 2.35OPT 
38.97 22.12 20.41 7.3452.38 38.67 46.61 30.8427.87 11.04 10.55 2.545.2 MODEL ANALYSISNs No Ny Hcw AUCcw How AUCow38.44 21.67 19.53 6.99✓ ✓38.30 21.62 19.49 6.95✓ 38.49 21.90 19.93 7.20✓ ✓ ✓ 38.97 22.12 20.41 7.34", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Effect of LID on classes of states (Ns), objects (No), and compositions (Ny).", "figure_data": "", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Effect of VLPD. The three rows indicate no decomposition, decompose textonly, and decompose both (full VLPD).", "figure_data": "text image Hcw AUCcw How AUCow37.94 20.98 19.67 6.98✓38.40 21.31 19.99 7.13✓✓38.97 22.12 20.41 7.34", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "), resulting in the distribution support points Effect of covariance sharing on MIT-States dataset. All methods use the same batch size of 64 for a fair comparison of GPU memory. (DSP) t o + D (o) and t s + D (s) for a given object class o and state class s, respectively. The DSPs are assumed to follow the state distribution N (t s , Σ s ) or the object distribution N (t o , Σ o ), where the covariances Σ s and Σ o are determined by D (s) and D (o) , respectively. Eventually, given the decomposed state visual features f s (v) and object visual features f o (v), the logit margin terms are defined as h (m) k,s = f s (v) ⊤ A k,s f s (v), and h", "figure_data": "VariantsMem.(GB)H cwAUC cwH owAUC owProDA (Lu et al., 2022)32.532.7116.1117.305.11PLID (w. ShareCov)17.638.50 (-0.47%) 21.69 (-0.43%) 19.81 (-0.60%) 7.04 (-0.30%)PLID (full)22.238.9722.1220.417.34(m)", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": ", it is split into 1,262 seen and 300/400 unseen compositions for training and validation/testing, respectively. UT-Zappos contains 16 states and 12 objects for 50,025 images in total, and it is split into 83 seen and 15/18 unseen compositions for training and validation/testing. C-GQA contains 453 states and 870 objects for 39,298 images, and it is split into 5,592 seen and 1,040/923 unseen compositions for training and validation/testing, respectively, resulting in 7,555 and 278,362 target compositions in closed-and open-world settings.Implementation Our model is implemented on top of the CSP(Nayak et al., 2023) codebase, which extends the CLIP model for compositional zero-shot learning. To tokenize the generated long sentences of each compositional class, we set the context length to the default value of 77 in the original CLIP model. For the soft prompt embeddings, we set the context length of text encoder to 8 for all datasets. We use the dropout rate of 0.3 for the learnable state and object embeddings. In training, we follow the DFSP(Lu et al., 2023) that uses the performance of the validation set for model selection. The rest hyperparameters of our final model on each dataset are listed in Table7. Hyperparameters of model implementation.", "figure_data": "HyperparametersMiT-States UT-Zappos C-GQAmax epochs502520base learning rate0.000050.00010.00001weight decay0.000020.000010.00001number of text descriptions643264number of image views888attention dropout0.50.10.1weights of primitive loss0.10.010.01", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" } ]
Wentao Bao; Lichang Chen; Heng Huang; Yu Kong
[ { "authors": "Yuval Atzmon; Felix Kreuk; Uri Shalit; Gal Chechik", "journal": "", "ref_id": "b0", "title": "A causal view of compositional zero-shot recognition", "year": "2020" }, { "authors": "Duhyeon Bang; Kyungjune Baek; Jiwoo Kim; Yunho Jeon; Jin-Hwa Kim; Jiwon Kim; Jongwuk Lee; Hyunjung Shim", "journal": "", "ref_id": "b1", "title": "Logit mixing training for more reliable and accurate prediction", "year": "2022" }, { "authors": "Luigi Carratino; Moustapha Ciss; é ; Rodolphe Jenatton; Jean-Philippe Vert", "journal": "JMLR", "ref_id": "b2", "title": "On mixup regularization", "year": "2022" }, { "authors": "Mohammad Mahdi Derakhshani; Enrique Sanchez; Adrian Bulat; Guilherme Turrisi Victor; Da Costa; G M Cees; Georgios Snoek; Brais Tzimiropoulos; Martinez", "journal": "", "ref_id": "b3", "title": "Bayesian prompt learning for image-language model generalization", "year": "2023" }, { "authors": "Ruifei He; Shuyang Sun; Xin Yu; Chuhui Xue; Wenqing Zhang; Philip Torr; Song Bai; Xiaojuan Qi", "journal": "ICLR", "ref_id": "b4", "title": "Is synthetic data from generative models ready for image recognition?", "year": "2023" }, { "authors": "Siteng Huang; Biao Gong; Yutong Feng; Yiliang Lv; Donglin Wang", "journal": "", "ref_id": "b5", "title": "Troika: Multi-path cross-modal traction for compositional zero-shot learning", "year": "2023" }, { "authors": "Dat Huynh; Ehsan Elhamifar", "journal": "", "ref_id": "b6", "title": "Compositional zero-shot learning via fine-grained dense feature composition", "year": "2020" }, { "authors": "Phillip Isola; Joseph J Lim; Edward H Adelson", "journal": "", "ref_id": "b7", "title": "Discovering states and transformations in image collections", "year": "2015" }, { "authors": "Shyamgopal Karthik; Massimiliano Mancini; Zeynep Akata", "journal": "", "ref_id": "b8", "title": "Kg-sp: Knowledge guided simple primitives for open world compositional zero-shot learning", "year": "2022" }, { "authors": "Muhammad Uzair Khattak; Hanoona Rasheed; Muhammad Maaz; Salman Khan; Fahad Shahbaz Khan", "journal": "", "ref_id": "b9", "title": "Maple: Multi-modal prompt learning", "year": "2023" }, { "authors": "Hyeongjun Kwon; Taeyong Song; Somi Jeong; Jin Kim; Jinhyun Jang; Kwanghoon Sohn", "journal": "", "ref_id": "b10", "title": "Probabilistic prompt learning for dense prediction", "year": "2023" }, { "authors": " Brenden M Lake; Joshua B Tomer D Ullman; Samuel J Tenenbaum; Gershman", "journal": "Behavioral and brain sciences", "ref_id": "b11", "title": "Building machines that learn and think like people", "year": "2017" }, { "authors": "Martha Lewis; Qinan Yu; Jack Merullo; Ellie Pavlick", "journal": "", "ref_id": "b12", "title": "Does clip bind concepts? 
probing compositionality in large image models", "year": "2022" }, { "authors": "Xiangyu Li; Xu Yang; Kun Wei; Cheng Deng; Muli Yang", "journal": "", "ref_id": "b13", "title": "Siamese contrastive embedding network for compositional zero-shot learning", "year": "2022" }, { "authors": "Yong-Lu Li; Yue Xu; Xiaohan Mao; Cewu Lu", "journal": "", "ref_id": "b14", "title": "Symmetry and group in attribute-object compositions", "year": "2020" }, { "authors": "Wangchunshu Bill Yuchen Lin; Ming Zhou; Pei Shen; Chandra Zhou; Yejin Bhagavatula; Xiang Choi; Ren", "journal": "", "ref_id": "b15", "title": "CommonGen: A constrained text generation challenge for generative commonsense reasoning", "year": "2020" }, { "authors": "Xinyang Liu; Dongsheng Wang; Miaoge Li; Zhibin Duan; Yishi Xu; Bo Chen; Mingyuan Zhou", "journal": "", "ref_id": "b16", "title": "Patch-token aligned bayesian prompt learning for vision-language models", "year": "2023" }, { "authors": "Zhe Liu; Yun Li; Lina Yao; Xiaojun Chang; Wei Fang; Xiaojun Wu; Yi Yang", "journal": "", "ref_id": "b17", "title": "Simple primitives with feasibility-and contextuality-dependence for open-world compositional zero-shot learning", "year": "2022" }, { "authors": "Cewu Lu; Ranjay Krishna; Michael Bernstein; Li Fei-Fei", "journal": "", "ref_id": "b18", "title": "Visual relationship detection with language priors", "year": "2016" }, { "authors": "Xiaocheng Lu; Ziming Liu; Song Guo; Jingcai Guo", "journal": "", "ref_id": "b19", "title": "Decomposed soft prompt guided fusion enhancing for compositional zero-shot learning", "year": "2023" }, { "authors": "Yuning Lu; Jianzhuang Liu; Yonggang Zhang; Yajing Liu; Xinmei Tian", "journal": "", "ref_id": "b20", "title": "Prompt distribution learning", "year": "2022" }, { "authors": "Zixian Ma; Jerry Hong; Mustafa Omer Gul; Mona Gandhi; Irena Gao; Ranjay Krishna", "journal": "", "ref_id": "b21", "title": "Crepe: Can vision-language foundation models reason compositionally?", "year": "2023" }, { "authors": "Massimiliano Mancini; Muhammad Ferjad Naeem; Yongqin Xian; Zeynep Akata", "journal": "", "ref_id": "b22", "title": "Open world compositional zero-shot learning", "year": "2021" }, { "authors": "Mayug Maniparambil; Chris Vorster; Derek Molloy; Noel Murphy; Kevin Mcguinness; E O' Noel; Connor", "journal": "", "ref_id": "b23", "title": "Enhancing clip with gpt-4: Harnessing visual descriptions as prompts", "year": "2023" }, { "authors": "Sachit Menon; Carl Vondrick", "journal": "", "ref_id": "b24", "title": "Visual classification via description from large language models", "year": "2023" }, { "authors": "Ishan Misra; Abhinav Gupta; Martial Hebert", "journal": "", "ref_id": "b25", "title": "From red wine to red tomato: Composition with context", "year": "2017" }, { "authors": "Muhammad Ferjad Naeem; Yongqin Xian; Federico Tombari; Zeynep Akata", "journal": "", "ref_id": "b26", "title": "Learning graph embeddings for compositional zero-shot learning", "year": "2021" }, { "authors": "Tushar Nagarajan; Kristen Grauman", "journal": "", "ref_id": "b27", "title": "Attributes as operators: factorizing unseen attribute-object compositions", "year": "2018" }, { "authors": "Peilin Nihal V Nayak; Stephen H Yu; Bach", "journal": "", "ref_id": "b28", "title": "Learning to compose soft prompts for compositional zero-shot learning", "year": "2023" }, { "authors": "Jeffrey Pennington; Richard Socher; Christopher D Manning", "journal": "", "ref_id": "b29", "title": "Glove: Global vectors for word representation", "year": "2014" }, { 
"authors": "Senthil Purushwalkam; Maximilian Nickel; Abhinav Gupta; Marc'aurelio Ranzato", "journal": "", "ref_id": "b30", "title": "Task-driven modular networks for zero-shot compositional learning", "year": "2019" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b31", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "JMLR", "ref_id": "b32", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "JMLR", "ref_id": "b33", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Anastasia Razdaibiedina; Yuning Mao; Rui Hou; Madian Khabsa; Mike Lewis; Jimmy Ba; Amjad Almahairi", "journal": "", "ref_id": "b34", "title": "Residual prompt tuning: Improving prompt tuning with residual reparameterization", "year": "2023" }, { "authors": "Pavel Tokmakov; Yu-Xiong Wang; Martial Hebert", "journal": "", "ref_id": "b35", "title": "Learning compositional representations for few-shot recognition", "year": "2019" }, { "authors": "Matthew Trager; Pramuditha Perera; Luca Zancato; Alessandro Achille; Parminder Bhatia; Bing Xiang; Stefano Soatto", "journal": "", "ref_id": "b36", "title": "Linear spaces of meanings: the compositional language of vlms", "year": "2023" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "NeurIPS", "ref_id": "b37", "title": "Attention is all you need", "year": "2017" }, { "authors": "Henan Wang; Muli Yang; Kun Wei; Cheng Deng", "journal": "", "ref_id": "b38", "title": "Hierarchical prompt learning for compositional zero-shot recognition", "year": "2023" }, { "authors": "Max Wolff; Wieland Brendel; Stuart Wolff", "journal": "", "ref_id": "b39", "title": "The independent compositional subspace hypothesis for the structure of clip's last layer", "year": "2023" }, { "authors": "Guangyue Xu; Parisa Kordjamshidi; Joyce Chai", "journal": "", "ref_id": "b40", "title": "Prompting large pre-trained vision-language models for compositional concept learning", "year": "2022" }, { "authors": "An Yan; Yu Wang; Yiwu Zhong; Chengyu Dong; Zexue He; Yujie Lu; William Wang; Jingbo Shang; Julian Mcauley", "journal": "", "ref_id": "b41", "title": "Learning concise and descriptive attributes for visual recognition", "year": "2023" }, { "authors": "Aron Yu; Kristen Grauman", "journal": "", "ref_id": "b42", "title": "Fine-grained visual comparisons with local learning", "year": "2014" }, { "authors": "Mert Yuksekgonul; Federico Bianchi; Pratyusha Kalluri; Dan Jurafsky; James Zou", "journal": "ICLR", "ref_id": "b43", "title": "When and why vision-language models behave like bags-of-words, and what to do about it?", "year": "2023" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin", "journal": "", "ref_id": "b44", "title": "Opt: Open pre-trained transformer language models", "year": "2022" }, { "authors": 
"Tian Zhang; Kongming Liang; Ruoyi Du; Xian Sun; Zhanyu Ma; Jun Guo", "journal": "", "ref_id": "b45", "title": "Learning invariant visual representations for compositional zero-shot learning", "year": "2022" }, { "authors": "Kaiyang Zhou; Jingkang Yang; Chen Change Loy; Ziwei Liu", "journal": "", "ref_id": "b46", "title": "Conditional prompt learning for vision-language models", "year": "2022" }, { "authors": "Kaiyang Zhou; Jingkang Yang; Chen Change Loy; Ziwei Liu", "journal": "IJCV", "ref_id": "b47", "title": "Learning to prompt for visionlanguage models", "year": "2022" }, { "authors": "Yixiong Zou; Shanghang Zhang; Ke Chen; Yonghong Tian; Yaowei Wang; José Mf Moura", "journal": "ACM MM", "ref_id": "b48", "title": "Compositional few-shot recognition with primitive discovery and enhancing", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 110.22, 82.78, 386.49, 109.89 ], "formula_id": "formula_0", "formula_text": "[1]The photo shows a [s][o]. [2]A [s][o] is pictured. … [M]A [o] in the photo appears [s]. LLMs TFE [s] [o] 𝐩 ! 𝐩 \" 𝐩 # ⋯ Prompted Composition Class text encoder (frozen) image encoder (frozen) ⋯ 𝑥 (!) VFE 𝒒 ! 𝐃 (&) VLPD 𝒗 𝒕 ! 𝑥 (\") 𝑥 (') ℰ ! ℰ \" language-informed distributions image input: 𝒙 SLM \"bicolor cat\" ⋯ ⋯ ℒ ( (𝒙, 𝑠) ℒ ) (𝒙, 𝑜) ℒ & (𝒙," }, { "formula_coordinates": [ 5, 423.63, 215.56, 81.62, 9.65 ], "formula_id": "formula_1", "formula_text": "\"[p 1 ] . . . [p L ][s][o]\"," }, { "formula_coordinates": [ 5, 318.72, 235.9, 187.02, 11.22 ], "formula_id": "formula_2", "formula_text": "q y = E T ([p 1 ] . . . [p L ][s][o]) where q y ∈ R d ." }, { "formula_coordinates": [ 5, 119.89, 357.5, 104.32, 9.84 ], "formula_id": "formula_3", "formula_text": "v = Ψ VFE (E V (x), E V (X))" }, { "formula_coordinates": [ 5, 123.39, 630.32, 381.28, 27.21 ], "formula_id": "formula_4", "formula_text": "NLL(x, y) = -log E t 1:C p(y|v, t 1:C ) ≤ -log exp(h y /τ ) C k=1 exp((h k + h (m) k,y )/τ ) := L y (x, y),(1)" }, { "formula_coordinates": [ 6, 155.42, 324.24, 301.16, 33.68 ], "formula_id": "formula_5", "formula_text": "h s = cos   f s (v), 1 |Y s | y∈Ys t y   , h o = cos   f o (v), 1 |Y o | y∈Yo t y   ." }, { "formula_coordinates": [ 6, 107.64, 439.16, 397.03, 50.33 ], "formula_id": "formula_6", "formula_text": "L s (x, s) = -log exp(h s /τ ) |S| k=1 exp((h k + h (m) k,s )/τ ) , L o (x, o) = -log exp(h o /τ ) |O| k=1 exp((h k + h (m) k,o )/τ ) ,(3) where h" }, { "formula_coordinates": [ 6, 217.32, 596.04, 287.34, 13.25 ], "formula_id": "formula_7", "formula_text": "hy = (1 -λ)h y + λh (rc) y , λ ∼ Beta(a, b),(4)" }, { "formula_coordinates": [ 14, 213.57, 541.02, 291.1, 26.8 ], "formula_id": "formula_8", "formula_text": "t o = 1 |Y o | y∈Yo t y , D (o) = 1 |Y o | y∈Yo D (y) ,(5)" }, { "formula_coordinates": [ 15, 327.81, 270.87, 176.86, 13.23 ], "formula_id": "formula_9", "formula_text": "k,o = f o (v) ⊤ A k,o f o (v),(6)" } ]
10.1162/tacl_a_00366
2023-05-23
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b15", "b23", "b22", "b21", "b32", "b31", "b9", "b36", "b25", "b2" ], "table_ref": [], "text": "Opinions and sentiments are essential to human communication, beliefs, and behaviors (Liu, 2012). Although sentiment analysis is often performed at the sentence or document level, it is insufficient to capture the fine-grained opinion and sentiment information (Poria et al., 2020). To this end, aspectbased sentiment analysis (ABSA) is the study of how opinions express sentiment towards specific aspect targets in a text (Pontiki et al., 2014). Aspect sentiment triplet extraction (ASTE) (Peng et al., 2020) is a subtask of ABSA which unifies previous subtasks (Yin et al., 2016;Yang and Cardie, 2012;Klinger and Cimiano, 2013) to extract the opinion term, aspect target, and the expressed sentiment.\nAlthough ASTE has become a more established subtask with many existing methods (Zhang et al., 2022), current methods are limited to the in-domain setting for two domains. In practice, it is beneficial for models to generalize well to new domains as domain-specific labeled data is often scarce (Wang and Pan, 2018). Hence, this motivates us to pose two main research questions: (1) Can existing ASTE methods generalize across multiple domains? (2) How can ASTE methods adapt to new domains in the presence of domain-specific unlabeled data? To answer these research questions, we propose a domain-expanded ASTE benchmark to address the in-domain, out-of-domain, and cross-domain settings across a more diverse set of domains. We support the new benchmark by annotating more than 4,000 data samples for two new domains based on hotel and cosmetics product reviews. Therefore, we can combine the new domains with the two existing domains to construct a domain-expanded dataset with four domains as shown in Figure 1.\nTo investigate the domain generalization of existing ASTE methods, we evaluate five methods on our dataset for the in-domain and out-of-domain settings. Our analysis reveals a significant gap between in-domain and out-of-domain performance, which provides an opportunity for future domain adaptation methods to bridge the gap (Chen and Qian, 2021). On the other hand, we find that generative methods have more potential for cross-domain transfer due to their strong out-of-domain performance.\nIn summary, our main contributions include: (1) As ASTE methods are limited to the in-domain setting for two domains, we propose a domainexpanded benchmark to cover the in-domain, outof-domain and cross-domain settings. (2) We annotate more than 4000 samples for two new domains based on hotel and cosmetics product reviews to support the new benchmark. (3) Our analysis for the existing models on the new benchmark reveals insights for the generalization of language models for aspect-based sentiment analysis tasks." }, { "figure_ref": [], "heading": "Domain-Expanded ASTE Benchmark", "publication_ref": [], "table_ref": [], "text": "To expand the domain scope of the existing ASTE datasets, we annotate samples for two new domains. We further propose a modified evaluation method for ASTE to compare generative models more fairly. To investigate domain generalization for ASTE, we provide an empirical comparison of in-domain and out-of-domain performance for five existing methods." 
}, { "figure_ref": [], "heading": "Task Formulation", "publication_ref": [], "table_ref": [], "text": "Given an input sentence x containing n words, ASTE aims to predict a set of sentiment triplets where each triplet (t, o, p) corresponds to the aspect target, opinion, and sentiment polarity, respectively. Each aspect target t and opinion o are text spans in the sentence. The sentiment polarity belongs to the label set of {POS, NEG, NEU}, which corresponds to positive, negative, and neutral sentiment, respectively." }, { "figure_ref": [], "heading": "Data Collection", "publication_ref": [ "b21", "b0", "b8", "b19", "b29" ], "table_ref": [], "text": "We construct a dataset with four domains by leveraging two domains from existing datasets (Peng et al., 2020) the Hotel and Cosmetics domains from TripAdvisor Reviews (Angelidis et al., 2021) and Amazon Reviews (He and McAuley, 2016;McAuley et al., 2015) respectively. We collect 8000 samples from each domain corpus and use the spaCy tool to tokenize the review texts and label their partof-speech tags. To denoise the raw samples, we remove reviews that do not contain any nouns or adjectives. We also leverage the existing Laptop and Restaurant domains from ASTE-Data-V2 (Xu et al., 2020). Within the Laptop and Restaurant domains, we remove duplicate samples and retain the existing triplet annotations." }, { "figure_ref": [], "heading": "Data Annotation", "publication_ref": [ "b21", "b29", "b1", "b26", "b1" ], "table_ref": [], "text": "For annotation, we follow the same data format as existing datasets (Peng et al., 2020;Xu et al., 2020). Specifically, annotators are provided with each tokenized review sentence as input. They are required to annotate all valid sentiment triplets in the text according to the task formulation in Section 2.1. We include the detailed annotation guideline in the appendix. To ensure the quality of data annotation, we conduct quality checking for each batch of annotated data. Specifically, for each annotation batch, 10% of the samples are randomly selected for manual checking. If more than 10% of the selected samples contain errors, we provide detailed feedback and request annotators to amend the batch. We engage two independent annotators to label the data and engage a third annotator to resolve any annotation disagreements. Following previous works in data annotation for ABSA (Barnes et al., 2018), we measure the inter-annotator agreement using the AvgAgr metric (Wiebe et al., 2005):\nAvgAgr(a, b) = 1 2 |a ∩ b| |a| + |a ∩ b| |b| (1)\nwhere a and b are the set of annotations by the first and second annotators, respectively. Intuitively, the agreement value is the average of precision and recall between the two annotators. Hence, the Table 4: Evaluation results for out-of-domain ASTE. We repeat each experiment with 5 random seeds to report the average F 1 scores. We also report the average precision (P ), recall (R), and F 1 scores across all domain-pairs. perfect agreement is 1 while no agreement is 0. We report the inter-annotator agreement for the Hotel and Cosmetics domain in Table 2. We observe that the agreement scores are high and comparable to previous ABSA datasets ( Barnes et al., 2018). We report the statistics 1 of the domain-expanded dataset such as the number of reviews, sentiment triplets, and unique aspect targets in Table 1." 
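As an illustration of the agreement measure in Eq. (1) above, the short sketch below treats each annotator's output as a set of (target span, opinion span, polarity) triplets; the exact span representation used here is an assumption.

```python
# Sketch: AvgAgr inter-annotator agreement over two sets of annotated triplets.
def avg_agr(a, b):
    """
    a, b: sets of hashable triplets, e.g. ((1,), (5, 6), "NEG"),
    from the first and second annotator respectively.
    """
    if not a or not b:
        return 0.0
    overlap = len(a & b)
    return 0.5 * (overlap / len(a) + overlap / len(b))

# Example: perfect agreement gives 1.0, disjoint annotations give 0.0.
ann1 = {((1,), (3,), "POS"), ((1,), (5, 6), "NEG")}
ann2 = {((1,), (3,), "POS")}
print(avg_agr(ann1, ann2))  # 0.75 = (1/2 + 1/1) / 2
```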
}, { "figure_ref": [ "fig_0" ], "heading": "Domain Generalization of Existing Models", "publication_ref": [ "b6", "b28", "b27", "b28", "b17", "b11", "b3", "b20" ], "table_ref": [ "tab_2" ], "text": "To investigate how existing ASTE methods perform across multiple domains, we compare the in-domain and out-of-domain performance of five existing models on our domain-expanded dataset. To compare the performance of generative methods fairly with other methods, we evaluate all models using a standardized metric that only considers unique triplet predictions. Notably, the GAS model (Zhang et al., 2021b) is affected by duplicate triplets in 3% of predictions due to the repetition problem in language models (Fu et al., 2021). In practice, it is possible to achieve a recall score greater than 100% by generating a correct triplet multiple times, thus skewing the evaluation results. Unfortunately, several ASTE methods (Zhang et al., 2021a,b;Xu et al., 2021) do not consider duplicate triplets in the evaluation. Hence, we only consider unique triplets in model predictions. 1 We include more detailed analysis in Appendix A.2.\nFor the in-domain results, we train and evaluate methods on the same domain. For the out-ofdomain results, we train the methods on a source domain and directly evaluate them on a different target domain. For the out-of-domain setting and cross-domain setting, the model can access the source-domain train set and dev set. However, in the out-of-domain setting, the model cannot access the target domain unlabeled data. To compare diverse techniques, we include models2 based on tagging (Wu et al., 2020), span enumeration (Xu et al., 2021), machine reading comprehension (Liu et al., 2022), and sequence-to-sequence learning (Zhang et al., 2021a,b). Although the generative models were originally implemented with T5 (Raffel et al., 2020), we also report results3 for BART (Lewis et al., 2020) which has a similar parameter count to BERT (Devlin et al., 2019) for the other methods. Note that we use the original training hyperparameters for the respective models.\nTo investigate the domain generalization of existing methods, we compare the in-domain performance in Table 3 and out-of-domain performance in Table 4. We find that there is a large gap of 15.6 F 1 on average between in-domain and out-of- Table 5: Evaluation results for cross-domain ASTE. We repeat each experiment with 5 random seeds to report the average F 1 scores. We also report the average precision (P ), recall (R), and F 1 scores across all domain-pairs.\ndomain results, which indicates that current methods have a large area of improvement regarding domain generalization. On the other hand, this also suggests a significant opportunity for domain adaptation methods to bridge the source-target domain gap when unlabeled domain-specific data is available. While we observe that models with higher in-domain performance generally also demonstrate higher out-of-domain performance, this is not guaranteed. Specifically, we find that discriminative methods (i.e. GTS, Span-ASTE, RoBMRC) have a larger domain gap on average (16.8 F 1 ) compared to generative methods (i.e. Paraphrase, GAS) which enjoy a smaller domain gap on average (14.7 F 1 ). The stronger domain generalization of generative methods may be due to the natural output format that exploits label semantics (Paolini et al., 2021). 
For instance, it can be easier to predict the triplet (sushi, fresh, positive) in Figure 1, as \"sushi\" is a food item which is semantically related to \"fresh\". Furthermore, generative methods can solely fine-tune pretrained language model (PLM) parameters, whereas discriminative methods use both pretrained PLM parameters that are fine-tuned and task-specific parameters that are trained from scratch. Hence, it can be easier for generative models to learn in low-resource settings.\n3 Cross-Domain Experiments" }, { "figure_ref": [], "heading": "Experimental Setting", "publication_ref": [], "table_ref": [], "text": "For all pretrained language models, we use the base version, namely BERT-base-uncased, BARTbase and T5-base. For the self-training framework, we use the corresponding original hyperparameters during source domain training for the GAS and Paraphrase models." }, { "figure_ref": [], "heading": "Cross-Domain Baseline Methods", "publication_ref": [ "b7", "b14", "b28", "b3" ], "table_ref": [], "text": "As there are no prior methods for cross-domain ASTE to our knowledge, we implement baselines based on supervised methods for ASTE and domain adaptation methods from other ABSA subtasks.\nFeature Adaptation Baseline To investigate the effectiveness of ABSA domain adaptation methods, we integrate UDA4 (Gong et al., 2020) with an existing ASTE model. Concretely, UDA is a domain adaptation method that learns domain-invariant features on unlabeled data through auxiliary tasks such as part-of-speech and dependency relation prediction. As UDA is designed for End2End ABSA (Li et al., 2019) which is framed as a sequence-labeling task, it is not directly applicable to ASTE. However, we integrate it with the existing Span-ASTE (Xu et al., 2021) model by using the trained UDA model parameters as initialization weights. This is consistent with the original framework of UDA which first trains a model to learn domain-invariant features and then uses the model parameters to initialize a sequence-labeling model. Currently, UDA is only compatible methods that use masked language models such as BERT (Devlin et al., 2019)." }, { "figure_ref": [], "heading": "Self-Training Baseline", "publication_ref": [ "b5", "b18", "b10", "b37", "b28" ], "table_ref": [], "text": "Recently, self-training (French et al., 2018) has been a promising method of domain adaptation for several tasks (Luo et al., 2022;Kulshreshtha et al., 2021;Zhu and Hauff, 2022) and is generally compatible with most models. Hence, we implement self-training with existing ASTE methods and apply it to the cross-domain setting. Concretely, we use self-training with the Span-ASTE (Xu et al., 2021), GAS (Zhang et al., 2021b) and Paraphrase (Zhang et al., 2021a) models. We implement a three-stage process that trains the model on the source domain labeled data, predicts pseudo-labels on the target domain unlabeled data, and finally trains on the source domain labeled data and pseudo-labeled target domain data." }, { "figure_ref": [], "heading": "Cross-Domain ASTE Results", "publication_ref": [], "table_ref": [], "text": "We report the evaluation results 5 for cross-domain ASTE in Table 5. For reference, we also include the out-of-domain results of the original models for Span-ASTE, Paraphrase, and GAS. Although feature-based domain adaptation (UDA) has shown promise in other ABSA subtasks, we observe only a slight improvement over the original Span-ASTE. 
This suggests that the sequence-labeling auxiliary objectives of UDA result in less useful representations for ASTE. On the other hand, we find that self-training is a strong baseline method that consistently improves the performance of all models in our experiments. However, self-training alone is naive as it may generate many false positive and false negative triplets during pseudo-labeling due to a large number of possible sentiment triplets." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b16", "b31", "b4", "b21" ], "table_ref": [], "text": "Aspect-Based Sentiment Analysis Early works on ABSA focused on extracting different sentiment elements, such as ATE (Liu et al., 2015), OTE (Yang and Cardie, 2012), ASC (Dong et al., 2014). In recent years, compound ABSA tasks have been introduced to jointly address multiple subtasks, including (Peng et al., 2020) and ASQP (Zhang et al., 2021a). In this work, we focus on ASTE which has not been addressed in the out-of-domain and cross-domain settings." }, { "figure_ref": [], "heading": "Domain Adaptation", "publication_ref": [ "b12", "b25", "b33", "b13", "b30", "b20" ], "table_ref": [], "text": "Existing methods for crossdomain ABSA can be broadly categorized as rulebased, feature-based, or self-training approaches. Rule-based approaches leverage syntax-based rules or topic lexicons (Li et al., 2012), while featurebased approaches aim to bridge the gap between domains through domain-independent structural information (Wang and Pan, 2018). On the other hand, self-training approaches use language models to generate synthetic data or pseudo-labels for the target domain (Yu et al., 2021;Li et al., 2022). 5 For clarity, we denote BART-based models with the subscript \"B\" and T5-based models with the subscript \"T\".\nGenerative Models for ABSA Recent works have demonstrated strong results with sequence-tosequence models (Zhang et al., 2021b,a). Notably, it is possible to unify multiple subtasks in an end-toend manner by designing a suitable generation template (Yan et al., 2021). Furthermore, generative methods can generalize better to low-resource settings by leveraging label semantics (Paolini et al., 2021) " }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "As the existing aspect sentiment triplet extraction (ASTE) benchmarks are limited to the in-domain setting for two domains, we propose a domainexpanded benchmark to cover the in-domain, outof-domain and cross-domain settings. To support the new benchmark, we annotate data samples for two new domains based on hotel and cosmetic reviews. We compare the in-domain and out-ofdomain performance for five existing methods and show that while there is a significant performance gap for out-of-domain inference, generative methods have a strong potential for domain generalization." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Annotation Guide", "publication_ref": [], "table_ref": [], "text": "This section illustrates the guideline for human annotators. This task is a fine-grained sentiment analysis task where opinion terms, their aspect targets, and their expressed sentiments should be extracted together. Each sample contains one or multiple sentences which have been tokenized and labeled with indices. The annotation steps are as follows:\n1. Read and understand the text sample and find out opinion terms as well as aspect target terms. 
Note that these terms should be explicit and the target term should not be a pronoun. If there is no opinion term or aspect target term, the sample is marked as \"Invalid\".\n2. If the sample contains opinion terms and aspect target terms, check whether there are aspect-opinion pairs. If not, the sample should also be marked as \"Invalid\".\n3. Determine the expressed sentiment of these pairs and record the spans of aspect-opinion pairs and their expressed sentiment in a 3tuple format. Note that each sentence can have multiple triplets.\nFor example, given a review \"The room was huge but terribly furnished\". We can find two aspect-opinion pairs (room, huge) with positive sentiment and (room, terribly furnished) with negative sentiment. The triplets of this text sample should be recorded in this format: ([1], [3], \"POS\"), ([1], [5,6], \"'NEG'), where the index of the first token is 0.\nThere are several special cases that may make annotators hard to determine. We give a uniform guide here:\n• Articles such as \"the\", \"a\", and \"an\" should not be included in target terms.\n• Separate conjoined terms. For example, \"The bedroom and washroom are big and clean\". \"Bedroom and washroom\" should be recorded as two separate terms \"bedroom\" and \"washroom\". Opinion terms \"big\" and \"clean\" should also be separated.\n• It might be hard to determine whether some adverbs should be included in opinion terms. We should include these adverbs if they have a large influence on the sentiment polarity of the opinion term. For example, \"This room is too big.\" The opinion term should be \"too big\" instead of \"big\", since \"too\" makes the opinion term express an obvious negative sentiment." }, { "figure_ref": [], "heading": "A.2 More Details of Datasets", "publication_ref": [], "table_ref": [], "text": "Table A.2 shows more details of our domainexpanded ASTE dataset. We can observe that our annotated hotel and cosmetics domains contain a larger average sample length and their label distribution is more balanced than previous restaurant and laptop domains." }, { "figure_ref": [], "heading": "A.3 Dataset Examples", "publication_ref": [], "table_ref": [], "text": "Table 7 presents five examples for each domain.\nThe standard of triplet formulation is the same across four domains and aspect target terms are domain-specific, indicating that our domainexpanded dataset can be well used as a crossdomain ASTE benchmark." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "* * Yew Ken is a student under the Joint PhD Program between Alibaba and SUTD." 
}, { "figure_ref": [], "heading": "Domain Example Triplets", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Restaurant", "publication_ref": [], "table_ref": [], "text": "The service is awful .\n(service, awful, negative) The chicken dinner was real good .\n(chicken dinner, good, positive) The food is reliable and the price is moderate .\n(food, reliable, positive), (price, moderate, neutral) Staffs are not that friendly , but the taste covers all .\n(staffs, not that friendly, negative), (taste, covers all, positive) Prices are in line .\n(prices, in line, neutral)" }, { "figure_ref": [], "heading": "Laptop", "publication_ref": [], "table_ref": [], "text": "The keyboard feels good and I type just fine on it .\n(keyboard, good, positive) The battery gets so HOT it is scary .\n(battery, HOT, negative), (battery, scary, negative) It 's great for streaming video and other entertainment uses .\n(streaming video, great, positive), (entertainment uses, great, positive) This mouse is terrific .\n(mouse, terrific, positive) Of course my warranty runs out next month .\n(warranty, runs out, neutral)" }, { "figure_ref": [], "heading": "Hotel", "publication_ref": [], "table_ref": [], "text": "The smell was only slightly less prominent in our corner suite at the end of the hallway . (smell, prominent, neutral) Also , the garbage trucks that frequent the ally are loud .\n(garbage trucks, loud, negative) In the morning you can enjoy a free breakfast with many choices .\n(breakfast, enjoy, positive), (breakfast, free, positive) The price was reasonable compared to the other options in the area .\n(price, reasonable, positive) My fiancé opened the window shades and we had a huge brick wall for a view .\n(brick wall, huge, neutral)" }, { "figure_ref": [], "heading": "Cosmetics", "publication_ref": [], "table_ref": [], "text": "It use to be one of the best products in the market .\n(products, best, positive) This is a very heavy cover -up that feels heavy on your face .\n(cover-up, heavy, neutral) Flimsy is really not a great thing when it 's 20 bucks .\n(Flimsy, not a great thing, negative) I ordered the blonde color , but it really is a little dark .\n(color, blonde, neutral), (color, dark, neutral) I love Essie but the formula on this one is awful .\n(Essie, love, positive), (formula, awful, negative) " } ]
Aspect Sentiment Triplet Extraction (ASTE) is a subtask of Aspect-Based Sentiment Analysis (ABSA) that considers each opinion term, their expressed sentiment, and the corresponding aspect targets. However, existing methods are limited to the in-domain setting with two domains. Hence, we propose a domain-expanded benchmark to address the in-domain, out-ofdomain and cross-domain settings. We support the new benchmark by annotating more than 4000 data samples for two new domains based on hotel and cosmetics reviews. Our analysis of five existing methods shows that while there is a significant gap between in-domain and out-ofdomain performance, generative methods have a strong potential for domain generalization. Our datasets, code implementation and models are available at https://github.com/DAMO-NLP-SG/domain-expanded-aste.
Domain-Expanded ASTE: Rethinking Generalization in Aspect Sentiment Triplet Extraction
[ { "figure_caption": "Figure 1 :1Figure 1: ASTE data samples for the Hotel, Laptop, Cosmetics, and Restaurant domains, respectively.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Evaluation results for in-domain ASTE. We repeat each experiment with 5 random seeds to report the average precision (P ), recall (R), and F 1 scores. We also report the average scores across all domains.GTS(Wu et al., 2020) 35.05 52.75 49.41 34.01 32.68 40.98 38.08 24.31 32.77 55.73 49.86 49.94 46.28 37.86 41.65 Span-ASTE (Xu et al., 2021) 41.62 55.55 51.23 37.34 33.48 42.52 43.55 31.00 34.30 57.31 54.36 51.44 49.55 40.52 44.58 RoBMRC (Liu et al., 2022) 36.17 58.17 52.67 37.77 35.57 41.26 41.81 26.97 32.12 60.47 51.10 55.73 53.53 38.46 44.76 Paraphrase B (Zhang et al., 2021a) 41.31 52.83 48.87 38.14 33.53 43.38 41.51 28.31 33.15 56.13 55.54 52.87 47.96 40.38 43.84 GAS B (Zhang et al., 2021b) 41.33 53.53 50.68 38.16 33.97 43.96 42.07 29.15 34.16 55.93 54.92 52.73 48.26 40.89 44.27 Paraphrase T (Zhang et al., 2021a) 43.99 56.49 50.81 41.71 39.09 48.02 43.85 28.45 34.68 59.74 59.15 56.14 50.12 44.07 46.90 GAS T (Zhang et al., 2021b) 46.18 59.10 52.71 40.77 37.88 48.25 46.10 29.81 34.97 59.57 60.47 56.54 51.04 44.88 47.76", "figure_data": "MethodHotelLaptopCosmeticsRestaurantAverageL→H C→H R→H H→L C→L R→L H→C L→C R→C H→R L→R C→R P.R.F 1", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Span-ASTE(Xu et al., 2021) 41.62 55.55 51.23 37.34 33.48 42.52 43.55 31.00 34.30 57.31 54.36 51.44 49.55 40.52 44.58 with UDA(Gong et al., 2020) 42.22 54.82 51.19 37.49 33.21 43.74 44.85 29.73 34.09 57.97 54.38 51.16 49.56 40.71 44.70 with Self-Training 43.63 56.44 50.85 36.51 32.51 43.55 43.13 30.61 34.47 57.72 55.30 52.12 48.65 41.55 44.82 ", "figure_data": "MethodHotelLaptopCosmeticsRestaurantAverageL→H C→H R→H H→L C→L R→L H→C L→C R→C H→R L→R C→R P.R.F 1Paraphrase B (Zhang et al., 2021a) 41.31 52.83 48.87 38.14 33.53 43.38 41.51 28.31 33.15 56.13 55.54 52.87 47.96 40.38 43.84with Self-Training43.51 53.75 49.66 39.91 34.82 44.81 42.48 29.88 33.97 57.65 57.50 55.12 49.03 42.10 45.30GAS B (Zhang et al., 2021b)41.33 53.53 50.68 38.16 33.97 43.96 42.07 29.15 34.16 55.93 54.92 52.73 48.26 40.89 44.27with Self-Training44.30 55.74 51.02 39.23 35.04 46.70 42.28 30.27 35.29 57.13 57.56 54.84 49.08 42.99 45.83Paraphrase T (Zhang et al., 2021a) 43.99 56.49 50.81 41.71 39.09 48.02 43.85 28.45 34.68 59.74 59.15 56.14 50.12 44.07 46.90with Self-Training45.36 57.76 52.24 42.12 40.61 48.85 43.37 28.72 35.55 61.03 60.66 58.07 50.89 45.26 47.91GAS T (Zhang et al., 2021b)46.18 59.10 52.71 40.77 37.88 48.25 46.10 29.81 34.97 59.57 60.47 56.54 51.04 44.88 47.76with Self-Training48.21 59.95 53.07 42.46 38.96 49.99 46.98 29.88 37.17 60.99 61.85 57.41 51.56 46.62 48.97", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "More details of our domain-expanded ASTE dataset. We report the average length of samples and the percentage of positive (POS%), neutral (NEU%) and negative (NEG%) triplets respectively.", "figure_data": "", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
Ken Yew; Chia; Hui Declare; Wei Chendeclare; Guizhen Handeclare; Chen; Sharifah Mahani Aljunied; Soujanya Poriadeclare Lidong
[ { "authors": "Stefanos Angelidis; Reinald Kim Amplayo; Yoshihiko Suhara; Xiaolan Wang; Mirella Lapata", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b0", "title": "Extractive opinion summarization in quantized transformer spaces", "year": "2021" }, { "authors": "Jeremy Barnes; Toni Badia; Patrik Lambert", "journal": "European Language Resources Association (ELRA", "ref_id": "b1", "title": "MultiBooked: A corpus of Basque and Catalan hotel reviews annotated for aspect-level sentiment classification", "year": "2018" }, { "authors": "Zhuang Chen; Tieyun Qian", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Bridge-based active domain adaptation for aspect term extraction", "year": "2021" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Li Dong; Furu Wei; Chuanqi Tan; Duyu Tang; Ming Zhou; Ke Xu", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Adaptive recursive neural network for target-dependent Twitter sentiment classification", "year": "2014" }, { "authors": "Geoff French; Michal Mackiewicz; Mark Fisher", "journal": "", "ref_id": "b5", "title": "Self-ensembling for visual domain adaptation", "year": "2018" }, { "authors": "Zihao Fu; Wai Lam; Anthony ; Man-Cho So; Bei Shi", "journal": "", "ref_id": "b6", "title": "A theoretical analysis of the repetition problem in text generation", "year": "2021" }, { "authors": "Chenggong Gong; Jianfei Yu; Rui Xia", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Unified feature and instance based domain adaptation for aspect-based sentiment analysis", "year": "2020" }, { "authors": "Ruining He; Julian Mcauley", "journal": "", "ref_id": "b8", "title": "Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering", "year": "2016" }, { "authors": "Roman Klinger; Philipp Cimiano", "journal": "", "ref_id": "b9", "title": "Joint and pipeline probabilistic models for fine-grained sentiment analysis: Extracting aspects, subjective phrases and their relations", "year": "2013" }, { "authors": "Devang Kulshreshtha; Robert Belfer; Iulian Vlad Serban; Siva Reddy", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Back-training excels selftraining at unsupervised domain adaptation of question generation and passage retrieval", "year": "2021" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Fangtao Li; Sinno Jialin Pan; Ou Jin; Qiang Yang; Xiaoyan Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Cross-domain co-extraction of sentiment and topic lexicons", "year": "2012" }, { "authors": "Junjie Li; Jianfei Yu; Rui Xia", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Generative cross-domain data augmentation for aspect and opinion co-extraction", "year": "2022" }, { "authors": "Zheng Li; Xin Li; Ying Wei; Lidong Bing; Yu Zhang; Qiang 
Yang", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Transferable end-to-end aspect-based sentiment analysis with selective adversarial learning", "year": "2019" }, { "authors": "Bing Liu", "journal": "Synthesis lectures on human language technologies", "ref_id": "b15", "title": "Sentiment analysis and opinion mining", "year": "2012" }, { "authors": "Pengfei Liu; Shafiq Joty; Helen Meng", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Finegrained opinion mining with recurrent neural networks and word embeddings", "year": "2015" }, { "authors": "Shu Liu; Kaiwen Li; Zuhe Li", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "A robustly optimized BMRC for aspect sentiment triplet extraction", "year": "2022" }, { "authors": "Hongyin Luo; Shang-Wen; Mingye Li; Seunghak Gao; James Yu; Glass", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Cooperative self-training of machine reading comprehension", "year": "2022" }, { "authors": "Julian Mcauley; Christopher Targett; Qinfeng Shi; Anton Van Den; Hengel", "journal": "", "ref_id": "b19", "title": "Image-based recommendations on styles and substitutes", "year": "2015" }, { "authors": "Giovanni Paolini; Ben Athiwaratkun; Jason Krone; Jie Ma; Alessandro Achille; Rishita Anubhai; Cicero Nogueira Dos Santos; Bing Xiang; Stefano Soatto", "journal": "", "ref_id": "b20", "title": "Structured prediction as translation between augmented natural languages", "year": "2021" }, { "authors": "Haiyun Peng; Lu Xu; Lidong Bing; Fei Huang; Wei Lu; Luo Si", "journal": "", "ref_id": "b21", "title": "Knowing what, how and why: A near complete solution for aspect-based sentiment analysis", "year": "2020" }, { "authors": "Maria Pontiki; Dimitris Galanis; John Pavlopoulos; Harris Papageorgiou; Ion Androutsopoulos; Suresh Manandhar", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "SemEval-2014 task 4: Aspect based sentiment analysis", "year": "2014" }, { "authors": "Soujanya Poria; Devamanyu Hazarika; Navonil Majumder; Rada Mihalcea", "journal": "", "ref_id": "b23", "title": "Beneath the tip of the iceberg: Current challenges and new directions in sentiment analysis research", "year": "2020" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b24", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Wenya Wang; Sinno Jialin Pan", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Recursive neural structural correspondence network for crossdomain aspect and opinion co-extraction", "year": "2018" }, { "authors": "Janyce Wiebe; Theresa Wilson; Claire Cardie", "journal": "Language Resources and Evaluation", "ref_id": "b26", "title": "Annotating expressions of opinions and emotions in language", "year": "2005" }, { "authors": "Zhen Wu; Chengcan Ying; Fei Zhao; Zhifang Fan; Xinyu Dai; Rui Xia", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Grid tagging scheme for aspect-oriented fine-grained opinion extraction", "year": "2020" }, { "authors": "Lu Xu; Yew ; Ken Chia; Lidong Bing", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Learning span-level interactions for aspect sentiment 
triplet extraction", "year": "2021" }, { "authors": "Lu Xu; Hao Li; Wei Lu; Lidong Bing", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Position-aware tagging for aspect sentiment triplet extraction", "year": "2020" }, { "authors": "Hang Yan; Junqi Dai; Tuo Ji; Xipeng Qiu; Zheng Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "A unified generative framework for aspect-based sentiment analysis", "year": "2021" }, { "authors": "Bishan Yang; Claire Cardie", "journal": "", "ref_id": "b31", "title": "Extracting opinion expressions with semi-Markov conditional random fields", "year": "2012" }, { "authors": "Yichun Yin; Furu Wei; Li Dong; Kaimeng Xu; Ming Zhang; Ming Zhou", "journal": "IJCAI/AAAI Press", "ref_id": "b32", "title": "Unsupervised word and dependency path embeddings for aspect term extraction", "year": "2016-07" }, { "authors": "Jianfei Yu; Chenggong Gong; Rui Xia", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Crossdomain review generation for aspect-based sentiment analysis", "year": "2021" }, { "authors": "Wenxuan Zhang; Yang Deng; Xin Li; Yifei Yuan; Lidong Bing; Wai Lam; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Aspect sentiment quad prediction as paraphrase generation", "year": "2021" }, { "authors": "Wenxuan Zhang; Xin Li; Yang Deng; Lidong Bing; Wai Lam", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Towards generative aspect-based sentiment analysis", "year": "2021" }, { "authors": "Wenxuan Zhang; Xin Li; Yang Deng; Lidong Bing; Wai Lam", "journal": "", "ref_id": "b36", "title": "A survey on aspect-based sentiment analysis: Tasks, methods, and challenges", "year": "2022" }, { "authors": "Peide Zhu; Claudia Hauff", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Unsupervised domain adaptation for question generation with Do-mainData selection and self-training", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 322.28, 684.76, 202.86, 24.43 ], "formula_id": "formula_0", "formula_text": "AvgAgr(a, b) = 1 2 |a ∩ b| |a| + |a ∩ b| |b| (1)" } ]
10.18653/v1/P17-1171
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b31", "b26", "b17", "b17", "b15", "b12", "b22", "b18", "b8", "b14" ], "table_ref": [], "text": "Contrast consistency (Gardner et al., 2020) is a crucial aspect for neural models in NLP. Models are expected to identify perturbations in the text input and decide whether such a semantic shift leads to a different label. To evaluate this consistency, contrast sets have been introduced in various tasks such as sentiment analysis (Wu et al., 2021), natural language inference (Ross et al., 2022), and reading comprehension (Longpre et al., 2021) by minimally modifying the original input (\"Pet Sematary\")\n(\"Australian one-cent coin\") (\"Pet Sematary 2\") (\"Pet Sematary\")\n(\"Australian one-cent coin\")\n𝑞 3 𝑝 3\nFigure 1: Above: Trained on question q 1 but not a contrast one q 2 , DPR generated an overly similar embedding of q 2 with q 1 's and thus falsely retrieved p 1 . We aim to identify q 2 as a distinct question and retrieve p 2 instead. Below: The performance of DPR-based OpenQA models on the standard NQ question set and our contrast set of minimally edited questions (MEQs).\nto reverse the original label. However, to our best knowledge, there is no study on the contrast consistency in open-domain question answering (OpenQA). In OpenQA, even a slight modification of a word or two can alter the meaning of the question, which leads to a completely different answer.\nTo maintain contrast consistency, models are expected to predict the corresponding answer when such semantic shift occurs.\nStudying contrast consistency in OpenQA poses unique challenges. Firstly, collecting appropriate contrast sets is difficult. While contrast sets have been developed for reading comprehension (Longpre et al., 2021;Li et al., 2022), they typically replaced an entity (e.g., Barack Obama was born in Hawaii) in given context with another entity (e.g., Barack Obama was born in New York), leading to a different answer to the given question (e.g., Where was Barack Obama born?). Constructing such contrast sets does not necessitate the factuality of the perturbed context, as the answer depends solely on the context rather than world knowledge. However, in the absence of evidence context, the perturbed questions in OpenQA must be factually answerable in accordance with world knowledge, which is beyond what rule-based methods can do. Secondly, achieving contrast consistency is challenging for OpenQA models which usually follow the \"retrieve-then-read\" pipeline (Lewis et al., 2020). In addition to the challenge of predicting answers from a contrast context as in reading comprehension, models also face the challenge of mapping the perturbed question with its corresponding evidence passage in a large corpus. The latter requires the retriever to distinguish the minimal semantic difference between embeddings of the perturbed question and the original question, which is ignored in typical retriever training.\nTo fill this gap in OpenQA, we propose to create contrast sets using Minimally Edited Questions (MEQs). Given a question q and its answer a, an MEQ q ′ is defined as a question that possesses high lexical and semantic similarity with q, while having a distinct answer a ′ (a ′ ̸ = a). For example, in Figure 1, changing \"Pet Sematary 2\" to \"Pet Sematary\" generates an MEQ that resembles the original question but has a distinct answer (\"Coweta County, Georgia\"→\"Maine\"). 
We use the training set of an existing benchmark as the original questions because neural OpenQA models exhibit high performance on them. Thus, we are able to evaluate the models' ability of distinguishing MEQs by measuring their performance on the MEQ contrast set. Specifically, we collect MEQs for training questions in the Natural Questions (NQ) benchmark (Kwiatkowski et al., 2019) from two sources, namely (1) InstructGPT-based question generation (Ouyang et al., 2022) then crowdsource annotation and (2) the AmbigQA dataset (Min et al., 2020).\nWe find that the state-of-the-art OpenQA models which employ the dense passage retriever (DPR) (Karpukhin et al., 2020) struggle on our MEQ contrast sets. As shown in Figure 1, DPRretrieved passages lead to 63% downstream QA accuracy on training set and 43% on standard test set. However, the accuracy drops to 20%~25% on our MEQ contrast sets. The problem lies in the contrastive training process of DPR. The model is trained to optimize question embeddings to be closer to their positive passage embeddings2 than negative passage embeddings. This paradigm does not provide explicit signals for understanding the relationships between questions, which causes the generated question embeddings to be insensitive to minimal discrepancies. As a result, the model generates overly similar embeddings for the MEQ and the original question, leading to incorrect passage retrieval for the MEQ. In fact, the overlap between the retrieved passages of the original question and those of its MEQ is as high as ~70%, which reflects DPR's limited ability in distinguishing the questions. To overcome such limitations, it is necessary to complement DPR training with signals on inter-question relationships. Besides building the mapping between questions and passages, DPR needs to know which questions are the same and which are different.\nIn this pioneering study, we propose a simple and effective method based on a query-side contrastive loss to improve the performance of DPR on MEQs. Specifically, in order to learn inter-question relationships, DPR is trained to distinguish between paraphrase questions and semantically different questions. To achieve this, we obtain synthetic MEQs for training questions from the machine-created QA corpus, PAQ (Lewis et al., 2021), as augmented data. Experiments demonstrate that learning the query-side contrastive loss on the augmented MEQs improves the performance of DPR on contrast sets, without sacrificing its performance on standard open-domain questions in the NQ test set." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Open-Domain Question Answering", "publication_ref": [ "b34", "b1", "b25", "b8", "b2", "b11", "b4", "b10", "b3" ], "table_ref": [], "text": "OpenQA is a task that aims to answer user questions without any specified context, thereby testing the ability of QA systems to retrieve, comprehend, and utilize world knowledge (Zhu et al., 2021). The state-of-the-art approach in OpenQA is a twostage pipeline, consisting of evidence retrieval and answer prediction (Chen et al., 2017).\nIn the evidence retrieval stage, a retriever model finds evidence passages from a large corpus (e.g., Wikipedia) based on their relevance to the question. Traditional retrievers like BM25 (Robertson and Zaragoza, 2009) perform lexical matching to measure such relevance scores. 
Recently, DPR (Karpukhin et al., 2020) revolutionized the field by employing dual BERT (Devlin et al., 2019) encoders to compute embeddings for the question and the passage, respectively. It searches evidence passages based on the inner product of question and passage embeddings. Despite subsequent approaches have sought to improve the architecture of the retriever by using fine-grained question-passage interactions (Khattab and Zaharia, 2020) or enhancing global embedding training (Gao and Callan, 2021), DPR remains the most widely-used model due to its simplicity and efficiency. However, the capability of DPR in distinguishing contrastive information has not been thoroughly studied. In this work, we use MEQs as contrast sets and show that DPR has limited contrast consistency when solving MEQs.\nIn the answer prediction stage, a reader model encodes and fuses the representations of all passages, then predicts an answer by extracting a span (Kedia et al., 2022), generating a free-form sequence (Izacard and Grave, 2021), or using a hybrid approach (Fajcik et al., 2021). While answer prediction is also challenging on MEQs, our approach mainly focuses on the retrieval part which is the bottleneck of solving the MEQs in OpenQA." }, { "figure_ref": [], "heading": "Contrast Sets", "publication_ref": [ "b5", "b5", "b9", "b31", "b26", "b17", "b32", "b15", "b24", "b23" ], "table_ref": [], "text": "NLP Benchmark datasets are typically comprised of i.i.d. examples that are randomly divided into training and test sets. Conversely, contrast sets refer to data created from small yet label-changing modifications to the existing examples (Gardner et al., 2020). Such characteristics make contrast sets an ideal testbed for evaluating contrast consistency. For example, Gardner et al. (2020) and Kaushik et al. (2020) employed humans to modify linguistic patterns on tasks like syntactic parsing, relation extraction, and claim verification. On sentiment analysis and language inference tasks, controlled text modification models could automatically generate contrast sets (Wu et al., 2021;Ross et al., 2022). In reading comprehension, rulebased algorithms created contrast sets by replacing the answer with another entity (Longpre et al., 2021;Ye et al., 2021;Li et al., 2022). In videoto-text matching, a pre-trained T5 model was used to find replacements for verbs and entities in the original caption (Park et al., 2022).\nNevertheless, building contrast sets to evaluate contrast consistency in OpenQA has not been explored yet, where data collection must guarantee the factuality of MEQs. The most relevant work is (Paranjape et al., 2022) which automatically generated perturbed questions for data augmentation on QA datasets. However, we focus on collecting challenging MEQs to evaluate model consistency instead of data augmentation. Moreover, their generated questions did not meet the requirements of MEQs. The limited accuracy of the question generation model would lead to lots of noise instead of perfect factuality. Also, their method did not ensure the minimality of edits. Therefore, their generated data cannot be used as challenging contrast sets to evaluate contrast consistency in OpenQA.\n3 Task: Contrast Consistency on MEQs" }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "In this work, we study minimally edited questions (MEQ) as challenging contrast sets in OpenQA. 
Suppose we have two questions q and q ′ with answers a and a ′ respectively, where q is the original question in the training set and q ′ is an MEQ of q. In this study, the minimality of edits is measured in two aspects: lexical distance d ℓ (q, q ′ ) and semantic distance d s (q, q ′ ). That is to say, q ′ needs to satisfy d ℓ (q, q ′ ) ≤ ϵ ℓ , d s (q, q ′ ) ≤ ϵ s and a ′ ̸ = a, where ϵ ℓ and ϵ s are distance thresholds." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "To evaluate DPR on MEQ contrast sets, we consider metrics on both ranking and retrieval evaluation. Besides, we run end-to-end QA experiments using the passages retrieved by DPR.\nRanking evaluation measures the model's ability to differentiate a positive passage from negative passages, by ranking a set of candidate passages based on the relevance score to the question. We collect 50 candidates for each question, including a positive passage, 30 hard negative passages and 19 random negative passages. Hard negatives are the top-ranked passages in BM25 retrieval that do not contain the answer. We report Mean Rank (MR) and Mean Reciprocal Rank (MRR) of the positive passage.\nRetrieval evaluation tests the model's ability to retrieve passages relevant to answering the ques-tion from a large corpus. Our retrieval corpus contains ~21M passages from Wikipedia. We calculate Recall@k, the number of passages containing the answer in top-k retrieved passages. End-to-end QA evaluation checks whether the retrieved passages contain useful information for predicting the correct answer. The retrieved passages are fed into a Fusion-in-Decoder (FiD) reader (Izacard and Grave, 2021) trained on NQ. We calculate Exact Match between model predictions and answers.\n4 Data: MEQ Contrast Sets" }, { "figure_ref": [], "heading": "Dataset Construction", "publication_ref": [ "b22", "b18" ], "table_ref": [], "text": "Based on the above evaluation metrics, we collect two MEQ contrast sets to evaluate models' contrast consistency. The first set, referred to as MEQ-GPT, is generated using InstructGPT (Ouyang et al., 2022) then manually filtered and annotated with answers by crowdsource workers. The second set, named MEQ-AmbigQA, is sourced from the AmbigQA dataset (Min et al., 2020). The construction of our contrast sets consists of four phases: question collection, MEQ filtering, answer annotation, and evidence passage annotation." }, { "figure_ref": [], "heading": "Collection of Candidate MEQs", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "MEQ-InstructGPT", "publication_ref": [ "b0" ], "table_ref": [], "text": "Generating answerable MEQs is very difficult for crowdsource workers who are not domain experts. It is hard for them to determine which modifications to the original question result in an answerable MEQ without extensive Internet searches. However, recent GPT-3 models have demonstrated their ability to possess vast amount of knowledge through massive pre-training (Brown et al., 2020). Therefore, we first utilize the InstructGPT model (textdavinci-002) to generate a set of MEQ candidates, and leave the answer annotation task to crowdsource workers. The input to InstructGPT is of the form\n[I, x 1 , • • • , x t , q, a],\nwhere I is the instruction \"Generate a similar question that has a different answer\". 
x 1 , • • • , x t are in-context demonstrations that are manually created, where each x i is a tuple [q i , a i , q ′ i , a ′ i ] (q ′ i is the MEQ of q i ). The original question q and answer a are appended to the input, prompting InstructGPT to generate a new question q ′ and its answer a ′ to complete the sequence. For each input q, we sample 10 completions from InstructGPT to generate a set of candidate MEQs." }, { "figure_ref": [], "heading": "MEQ-AmbigQA", "publication_ref": [], "table_ref": [], "text": "The AmbigQA dataset initially targeted a subset of NQ consisting of ambiguous questions. The dataset was introduced to decompose each ambiguous question into multiple disambiguated questions, each of which is a slight modification of the original question. For each NQ question covered in AmbigQA, its corresponding disambiguated questions are considered as its candidate MEQs and are delivered to the subsequent filtering phase ( §4.1.2). However, such questions are limited because we set strict criteria for MEQs, so we need more data generated by InstructGPT for solid evaluation." }, { "figure_ref": [], "heading": "MEQ Filtering", "publication_ref": [], "table_ref": [], "text": "To build challenging contrast sets, a series of criteria are applied to eliminate unqualified candidates and select MEQs based on the definition in §3.1.\n1. Quality control: We do not allow q and q ′ to differ in question words (e.g., how, what), or if the only word that q ′ adds to q falls into {first, last, new, next, original, not}. We have found that InstructGPT frequently adds these words to create MEQs, but they usually lead to unanswerable questions.\n2. Lexical distance: Word-level edit distance is used as d ℓ (q, q ′ ), and we remove q ′ if d ℓ (q, q ′ ) = 0 or d ℓ (q, q ′ ) > 3." }, { "figure_ref": [], "heading": "Semantic distance:", "publication_ref": [ "b16", "b29" ], "table_ref": [], "text": "The cosine similarity of semantic embeddings is used to measure d s (q, q ′ ). We remove q ′ if cos(h q , h q ′ ) < 0.95, which indicates a non-negligible semantic discrepancy. The semantic embedding h should be generated by a sentence embedding model. Here we use the question encoder of the unsupervised dense retrieval model Contriever (Izacard et al., 2021).\n4. Paraphrase filtering: q ′ is discarded if it is determined to be a paraphrase of q by a paraphrase detection model. Here we use a RoBERTa-large (Liu et al., 2019) fine-tuned on the Quora Question Pairs dataset (Wang et al., 2019) for paraphrase classification." }, { "figure_ref": [], "heading": "Answer Difference", "publication_ref": [ "b30" ], "table_ref": [], "text": "q ′ is discarded if a ′ = a. For AmbigQA questions, since they are originally human-annotated, we ask human volunteers to check whether a ′ and a are aliases of the same entity. For GPT-generated questions, the inspection of answer difference is included in the answer annotation process, which we will elaborate in §4.1.3.\n(Table 1 notes: Semantic similarity is computed by Contriever (Izacard et al., 2021). For NQ-train and NQ-test, edit distance and semantic similarity are computed between random question pairs. For MEQ contrast sets, they are computed between the original question and its MEQ.)\nAmong GPT-generated questions, for a certain original question q, there may be multiple MEQ candidates that pass the above filtering. In such cases, the question that is generated most frequently across the 10 samples is selected as the most confident MEQ by InstructGPT.
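The filtering criteria above can be read as a small pipeline. The sketch below is our own illustration, not the authors' released code; edit_distance, embed, cosine, is_paraphrase, same_question_word, and added_words are placeholder callables standing in for the word-level edit distance, the Contriever question encoder, and the RoBERTa paraphrase classifier described in the text.

```python
BANNED_ADDITIONS = {"first", "last", "new", "next", "original", "not"}

def keep_candidate(q, a, q_new, a_new, *, edit_distance, embed, cosine,
                   is_paraphrase, same_question_word, added_words):
    # 1. Quality control: same question word, and reject single risky added words.
    if not same_question_word(q, q_new):
        return False
    added = added_words(q, q_new)
    if len(added) == 1 and added[0].lower() in BANNED_ADDITIONS:
        return False
    # 2. Lexical distance: keep only 1-3 word-level edits.
    d = edit_distance(q, q_new)
    if d == 0 or d > 3:
        return False
    # 3. Semantic distance: embeddings must stay very close (cosine >= 0.95).
    if cosine(embed(q), embed(q_new)) < 0.95:
        return False
    # 4. Not a paraphrase, and 5. the answer must actually change.
    return (not is_paraphrase(q, q_new)) and a_new != a
```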
This selection strategy is similar to the self-consistency idea in Wang et al. (2022)." }, { "figure_ref": [], "heading": "Answer Annotation", "publication_ref": [ "b33" ], "table_ref": [], "text": "Due to the limited accuracy of InstructGPT in directly answering open-domain questions (Yu et al., 2023), we recruit crowdsource workers to annotate the answer of each candidate MEQ generated by InstructGPT. Before human annotation, we first check the answer generated by InstructGPT via Google Search. If Google Search returns a highlighted answer box which matches the InstructGPT-generated answer, we skip the subsequent human labeling step. For the remaining questions, we recruit human annotators from Surge AI for data labeling. We ask them the following questions:\nQ1. Is q ′ a good variation of q? Bad variations include being unanswerable or having the same answer as q, and are discarded from our dataset.\nQ2. If q ′ is deemed a good variation, find the answer a ′ using search engines. If necessary, the question may have multiple answers.\nQuality control To ensure answer correctness, each question is answered by two different annotators. If the annotators disagree on the answer or if either annotator determines the question is a bad variation, the question is discarded. Since the answers are free-form responses, we manually check whether the answers given by the two annotators are aliases of the same entity. If the response of the first annotator matches exactly with the answer provided by InstructGPT, we do not recruit a second annotator, to reduce costs." }, { "figure_ref": [], "heading": "Gold Evidence Passages", "publication_ref": [], "table_ref": [], "text": "As mentioned in §3.2, ranking evaluation on MEQs needs gold evidence passages as positive examples, so we collect them from Wikipedia for our contrast sets. For MEQ-AmbigQA, we utilize the semi-oracle evidence documents provided by the original authors, dividing them into 100-word passages. Then, we identify the first passage that contains the gold answer. For MEQ-GPT, our initial step involves finding candidate evidence passages that include the gold answer. This is achieved by retrieving Wiki passages with BM25 and selecting the top 3 passages that contain the answer. Next, we recruit human annotators from Surge AI to assess whether any of these passages provide sufficient evidence for answering the question. The highest-ranked passage that passed human annotation is chosen as the gold evidence passage. Finally, both contrast sets have a subset of questions paired with a corresponding gold evidence passage." }, { "figure_ref": [], "heading": "Dataset Analysis", "publication_ref": [], "table_ref": [], "text": "The full dataset is composed of 3,343 MEQs (2,293 from InstructGPT and 1,050 from AmbigQA). Each of these MEQs has its original question in the NQ training set. Among them, 1,229 (53.6%) InstructGPT questions and 625 (59.5%) AmbigQA questions are paired with a gold evidence passage from Wikipedia. We use this subset in ranking evaluation and the full set in retrieval and end-to-end QA evaluation." }, { "figure_ref": [], "heading": "Data statistics", "publication_ref": [], "table_ref": [], "text": "We summarize basic statistics of the MEQ contrast sets compared to the original NQ questions. As shown in Table 1, MEQ-GPT is similar to NQ regarding the average length of questions and answers. Questions in MEQ-AmbigQA are longer because the original AmbigQA annotators usually added conditions to disambiguate the original NQ questions. Besides, AmbigQA does not impose a limit on the answer length, while we limit each answer in MEQ-GPT to at most 5 words, consistent with NQ. The number of answers per question is lower in MEQ-GPT than in MEQ-AmbigQA, because most answers are obtained through strict text matching on candidate answers from two sources. In addition, we observe that MEQ-GPT has a smaller edit distance and higher semantic similarity between q and q ′ , making it hard for models to distinguish them." }, { "figure_ref": [ "fig_1" ], "heading": "Types of edits", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We review and categorize the different types of minimal edits that are used to create MEQs. Since MEQ-AmbigQA primarily consists of edits that add specifications to the original NQ question, we consider MEQ-GPT as a more natural representation of minimal edits. As shown in Table 2, the edits in MEQ-GPT involve nouns (28.0%), verbs (18.5%), adjectives (18.2%), numbers (14.2%), ordinals (9.2%), dates (6.6%), prepositions/conjunctions (2.9%) and others (2.4%). A word cloud of the edited words is given in Figure 2. We also observe that 22.5% of the total edits are antonym edits, where a word in the original question is replaced by its antonym. Our dataset of diverse MEQs provides a comprehensive evaluation of contrast consistency." }, { "figure_ref": [], "heading": "Challenges of MEQ Contrast Sets", "publication_ref": [], "table_ref": [], "text": "The collected MEQ contrast sets are challenging for the widely-used DPR-based OpenQA system, although these perturbed questions are only minimal edits to the well-learned training questions.\nAs shown in Figure 1, the model significantly underperforms on the contrast sets, where the passage ranking score of DPR decreases by 39% and 45% compared to NQ-train, and by 29% and 18% compared to NQ-test. This makes a substantial impact on the QA performance, with the accuracy being 69% and 60% lower on the two contrast sets compared to NQ-train, and 54% and 40% lower than NQ-test. The results show that the collected MEQs are much harder to solve than random test questions, which indicates our contrast sets can serve as testbeds for evaluating the contrast consistency of OpenQA." }, { "figure_ref": [], "heading": "Preliminaries: Dense Passage Retriever", "publication_ref": [], "table_ref": [], "text": "Figure 3: Above: the original contrastive training of DPR. Below: our improved DPR with the query-side contrastive loss, where q + and q - are obtained through data augmentation.\nDPR consists of a question encoder E Q and a passage encoder E P ; both encoders map the input sequence to a dense vector as its semantic representation. The relevance score s(q, p) between a question q and a passage p is defined as the dot product of their representations:\ns(q, p) = E Q (q) ⊺ E P (p)\nDPR is trained via a contrastive loss. Given a positive passage p + and n negative passages p - 1 , ..., p - n for a certain question, the model is trained to maximize the relevance score between q and p + , while minimizing the relevance score between q and each p - i . The loss function is:\nL_{QP} = -\log \frac{\exp(s(q, p^+))}{\exp(s(q, p^+)) + \sum_{i=1}^{n} \exp(s(q, p_i^-))}\nThe above training paradigm works well for retrieving passages for random test questions, but does not perform as effectively on MEQ contrast sets, as discussed in §1 and §4.3. The training loss L QP does not provide explicit signals for DPR to learn the relationships between questions.
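For reference, the question-passage objective above amounts to a softmax cross-entropy over the positive and negative passage scores. The snippet below is a minimal PyTorch-style sketch under our own naming, not the reference DPR implementation, and assumes the embeddings have already been computed.

```python
import torch
import torch.nn.functional as F

def dpr_qp_loss(q_emb, pos_emb, neg_embs):
    """L_QP for one question.

    q_emb:    (d,)   question embedding E_Q(q)
    pos_emb:  (d,)   positive passage embedding E_P(p+)
    neg_embs: (n, d) negative passage embeddings E_P(p_i-)
    """
    pos_score = (q_emb @ pos_emb).view(1)   # s(q, p+)
    neg_scores = neg_embs @ q_emb           # s(q, p_i-) for each negative
    logits = torch.cat([pos_score, neg_scores])
    # -log softmax probability assigned to the positive passage (index 0)
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```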
As a result, the question embeddings are insensitive to minimal discrepancies, which prevents the model from identifying the MEQ as a distinct question after seeing the original question in training. This causes DPR to generate an overly similar embedding for the MEQ, leading to a high overlap in the retrieved passages and low contrast consistency." }, { "figure_ref": [], "heading": "Proposed Method", "publication_ref": [], "table_ref": [], "text": "We propose to improve the contrast consistency of DPR by introducing a query-side contrastive loss to distinguish between paraphrase questions and MEQs which are positive and negative question examples for an original question, respectively. We devise a data augmentation approach to collect synthetic question examples to train this loss." }, { "figure_ref": [], "heading": "Data Augmentation", "publication_ref": [ "b19", "b28", "b14" ], "table_ref": [], "text": "For a training question q, its positive example q + is a synthetic paraphrase question which is slightly different from q and has the same answer; its negative question q -is a synthetic MEQ with a different answer.\nTo obtain q + , we leverage back translation provided by the nlpaug5 package. The original question q is translated to another language and then translated back to produce a new phrasing of q. We used translation models of 6 languages provided by Ng et al. (2019) and Tiedemann and Thottingal (2020). Questions that are identical to q (i.e., edit distance = 0) or classified as \"not paraphrase\" by the paraphrase detection model used in §4.1.2 are eliminated. The remaining questions constitute a candidate set of positive questions from which a random q + is sampled in each epoch.\nTo obtain q -, synthetic MEQs are retrieved from the machine-built QA corpus PAQ (Lewis et al., 2021). All questions in PAQ that are similar to q are retrieved by the question retriever in the work of PAQ. Then, the MEQ requirements specified in §4.1.2 are applied to filter the retrieved synthetic questions. The remaining questions con-stitute a candidate set of negative questions from which a random q -is sampled in each epoch.\nApart from learning the relationships among q, q + , and q -, the loss L QP can be augmented to learn the relevance between synthetic questions and their corresponding passages. Because q + is a paraphrase question mapping the passages of q, it does not have to be involved in L QP . To train on q -, its positive passage is the Wikipedia passage that was used to generate the question during the construction of PAQ; its negative passages are collected from the top-ranked passages retrieved by BM25 which do not contain the answer." }, { "figure_ref": [], "heading": "Model Training", "publication_ref": [ "b27" ], "table_ref": [], "text": "To provide more supervision signals and prevent overfitting, we randomly sample q + , q -, and p - for each training question q in each epoch. This means while the original training questions remain fixed, a different set of augmented questions is used. For explicit supervision on inter-question relationships, given q, DPR is trained to assign higher relevance scores to its paraphrase question (q + ) and lower relevance scores to its MEQ (q -). The relevance score of any pair of questions (q 1 , q 2 ) is calculated as the inner product of their embeddings: s(q 1 , q 2 ) = E Q (q 1 ) ⊺ E Q (q 2 ). 
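Before turning to the concrete loss forms, the augmentation described above can be summarized as assembling a (q, q+, q-) triple for each training question in every epoch. The helper names below (back_translations, paq_candidates, is_paraphrase, embed, cos_sim, edit_distance) are placeholders for the components mentioned in the text, not actual package APIs.

```python
import random

def sample_question_triple(q, answer, back_translations, paq_candidates, *,
                           is_paraphrase, embed, cos_sim, edit_distance):
    # Positive pool: back-translated rephrasings that differ from q but keep its meaning.
    positives = [p for p in back_translations
                 if edit_distance(q, p) > 0 and is_paraphrase(q, p)]
    # Negative pool: PAQ questions that satisfy the MEQ requirements of Section 4.1.2.
    negatives = [n for n in paq_candidates
                 if 1 <= edit_distance(q, n.text) <= 3
                 and cos_sim(embed(q), embed(n.text)) >= 0.95
                 and not is_paraphrase(q, n.text)
                 and n.answer != answer]
    if not positives or not negatives:
        return None  # fall back to the plain DPR example for this question
    # Re-sampled every epoch so the model sees varied paraphrases and MEQs.
    return q, random.choice(positives), random.choice(negatives)
```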
Specifically, we consider three forms of the query-side contrastive loss function in experiments:\n(1) InfoNCE Loss (van den Oord et al., 2018), which differentiates the positive question from a set of m negative questions. Besides the synthetic MEQ, which is considered a hard negative, the other questions in the same batch are included as random negatives. The loss function is:\nL_{QQ} = -\log \frac{\exp(s(q, q^+))}{\exp(s(q, q^+)) + \sum_{j=1}^{m} \exp(s(q, q_j^-))}\n(2) Dot Product Loss, which directly penalizes the relevance score between a sample question q and its augmented MEQ: L QQ = s(q, q -).\n(3) Triplet Loss (Schroff et al., 2015), which trains the model to assign a higher relevance score to q + than to q -, enforced by a margin α:\nL QQ = max(0, α - s(q, q + ) + s(q, q -))\nThe final training loss of our improved DPR is L = L QP + λL QQ , where the hyperparameter λ weights the trade-off between the loss terms." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In experiments, we compare our proposed training method against the original training setting of DPR. After training the models on the NQ training set, we test them on the standard NQ test set as well as the two MEQ contrast sets that we collected in this work." }, { "figure_ref": [], "heading": "Models", "publication_ref": [ "b8", "b20" ], "table_ref": [], "text": "We augment the training set with M = 33k synthetic MEQs and train DPR with both L QP and L QQ . We consider the following baselines:\n• Vanilla DPR. This is the original training setting of DPR, proposed by Karpukhin et al. (2020). The model is trained only with L QP on the standard NQ training set.\n• DPR with random augmented questions. This model is trained only with L QP , but we add M random synthetic questions from PAQ to the training set. This is to rule out the effect of simply adding more synthetic data.\n• DPR with augmented MEQs. This model uses the same set of M synthetic MEQs retrieved from PAQ as data augmentation, but is trained only with L QP . We use this variant to test if L QQ is necessary in model training.\nBesides, we test the performance of BM25 on retrieval as a reference. Recent research has shown that larger retrievers may exhibit better generalization (Ni et al., 2022). Therefore, in addition to the standard DPR which is built on BERT-Base, we use BERT-Large as the backbone model to see: (1) whether MEQ contrast sets are still challenging for larger models and (2) whether our training method is still effective for larger models. We name the smaller model and larger model DPR BASE and DPR LARGE , respectively. We use the same set of basic hyper-parameters for each DPR model: a learning rate of 10^-5, a batch size of 64 (32 for DPR LARGE ), 40 training epochs with 5% warmup steps. On ranking evaluation, our best setting uses the InfoNCE loss with λ = 0.5. On retrieval and QA evaluation, our best setting uses the dot product loss with λ = 0.03. Since we do not have a dev set for MEQs, we conduct ranking evaluation on MEQ contrast sets in a dev setting, where we select the highest score among all checkpoints. Then we use the checkpoint with the best ranking score to test its retrieval and QA performance. The scores on NQ-test are reported using the best checkpoint on NQ-dev."
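The three query-side loss variants above can be sketched in a few lines each. As before, this is an illustrative PyTorch-style sketch with our own function names; scores are inner products of question embeddings, as defined in the text.

```python
import torch
import torch.nn.functional as F

def lqq_infonce(q, q_pos, q_negs):
    # Differentiate the paraphrase q+ from the synthetic MEQ and in-batch negatives.
    logits = torch.cat([(q @ q_pos).view(1), q_negs @ q])
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

def lqq_dot_product(q, q_neg):
    # Directly penalize the relevance between q and its synthetic MEQ.
    return q @ q_neg

def lqq_triplet(q, q_pos, q_neg, alpha=1.0):
    # Enforce s(q, q+) > s(q, q-) by a margin alpha.
    return torch.clamp(alpha - q @ q_pos + q @ q_neg, min=0.0)

def total_loss(l_qp, l_qq, lam):
    # L = L_QP + lambda * L_QQ
    return l_qp + lam * l_qq
```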
}, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_4", "tab_7", "tab_4", "tab_6", "tab_5", "tab_4", "tab_6", "tab_4", "tab_6", "tab_7", "tab_4", "tab_6", "tab_7" ], "text": "Experimental results on three datasets (NQ-test, MEQ-AmbigQA, MEQ-GPT) are presented from Table 3 to Table 6. We have the following findings:\n(1) Our proposed method improves DPR's ability to distinguish MEQs. As shown in Tables 3 and5, on passage ranking and passage retrieval, the DPR trained with query-side contrastive loss outperforms the vanilla DPR on both contrast sets, showing improved contrast consistency on MEQs. This improvement is consistent across models of different sizes. For example, on MEQ-GPT, our model improves the vanilla DPR by 8% and 10% on ranking MRR for base and large versions respectively. On the choice of L QQ , Table 4 demonstrates that all three loss functions improve performance over baselines, while the optimal setting may require tuning on the specific dataset.\n(2) The query-side contrastive loss contributes the most to the improved contrast consistency.\nAlthough synthetic MEQs themselves bring more training signals, the model cannot consistently outperforms the vanilla DPR without L QQ . Actually, its performance is sometimes even lower than DPR. In contrast, after including the query-side contrastive loss, we observe consistent improvements across all datasets, as shown in Tables 3 and5. For example, on MEQ-AmbigQA, simply adding synthetic MEQs into the training set gives 12% lower recall@1 than the vanilla DPR, while training with L QQ outperforms the naive augmentation method by 18%.\n(3) The improvement does not simply come from the increased number of training data.\nThere is no significant difference on the performance between DPR augmented with random synthetic questions (\"Random\" in \"Augmentaiton\" column) and the original DPR (\"None\" in the column) in Tables 3,5, and 6. The average improvement of inserting random synthetic questions on all metrics is only 0.2% for DPR BASE and 1.6% for DPR LARGE , which indicates simply adding more synthetic data is not an effective solution.\n(4) Improved retrieval performance leads to higher end-to-end QA accuracy. As shown in Table 6, our improved DPR provides more relevant information for answer prediction on MEQs. Even using only 1 retrieved passage, our improved DPR-Large outperforms its vanilla version by 12% and 11% on two contrast sets respectively.\n(5) Our method does not sacrifice performance on standard test questions. After jointly trained with the query-side contrastive loss and augmented with synthetic MEQs, our model still NQ MEQ-AmbigQA MEQ-GPT Model Augmentation R@1 R@5 R@20 R@1 R@5 R@20 R@1 R@5 R@20 maintains its competitive performance on the standard NQ test set. Specifically, It outperforms all baselines in ranking evaluation (see Table 3), while performing on par with the best baseline in retrieval and QA scores (see Tables 5 and6). Summary: The results are consistent across ranking, retrieval, and end-to-end QA experiments, which demonstrates the solidity of the above findings. Nevertheless, the performance of DPR still has a long way to improve, and such a gap is observed in both base and large versions of the model. Notably, DPR models perform significantly worse on MEQ contrast sets than the standard test set, even though it is trained under a development setting. 
This suggests that further research is still necessary to improve the contrast consistency of retrieval models on MEQs." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "Passage overlap", "publication_ref": [], "table_ref": [], "text": "One of the indications that DPR lacks the ability to distinguish the original question and its MEQ is the high overlap between the passages retrieved for each. Figure 4 illustrates that both synthetic data augmentation and the query-side contrastive loss can reduce passage overlap. The synthetic MEQ augmentation helps to train the question embeddings of MEQs closer to their positive passages. Moreover, the queryside contrastive loss explicitly trains the model to distinguish the original question and its MEQ apart. Nevertheless, a lower passage overlap does not always indicate better performance. For instance, our model with the dot product loss does not have the lowest passage overlap, but performs the best in retrieval evaluation." }, { "figure_ref": [ "fig_3" ], "heading": "Identification of inter-question relationships", "publication_ref": [], "table_ref": [], "text": "To further analyze model behavior after the queryside contrastive training, we test the models' ability to distinguish inter-question relationships. A model is considered successful in identifying the MEQ if the generated embedding of the original question is closer to its paraphrase question rather than its MEQ. The paraphrase questions are separately generated using InstructGPT to avoid conflict with those used in data augmentation. As shown in Figure 5, training with the query-side contrastive loss leads to an improved ability to distinguish between paraphrase questions and different questions, which indicates our models are better at identifying inter-question relationships. The model trained with InfoNCE loss has the highest success rate in identifying inter-question relationships, because it received more training signals from a positive example and a set of negative examples than those with other types of loss." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this study, we addressed the gap in research on contrast consistency in OpenQA by collecting MEQs as challenging contrast sets to the popular NQ benchmark. Our findings reveal that DPR lacks contrast consistency on our contrast sets. To address this limitation, we introduced a query-side contrastive loss with the aid of data augmentation, which improved its ability to recognize interquestion relationships. Overall, our findings and data can pave the way for further exploring the role of contrast consistency in developing robust and effective OpenQA systems." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by NSF IIS-2119531, IIS-2137396, IIS-2142827, CCF-1901059, and ONR N00014-22-1-2507. Wenhao Yu is also supported in part by Bloomberg Data Science Ph.D Fellowship." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Data and code are available at https://github" } ]
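The two analysis measures used above, top-k passage overlap and MEQ identification via question-question similarity, are straightforward to compute. The sketch below is our own illustration and assumes the retrieval results and an embedding/similarity function are already available.

```python
def passage_overlap_at_k(retrieved_orig, retrieved_meq, k=5):
    # Fraction of shared passage ids among the top-k results of q and its MEQ.
    return len(set(retrieved_orig[:k]) & set(retrieved_meq[:k])) / k

def identifies_meq(q, paraphrase, meq, *, embed, similarity):
    # Success if the original question is scored closer to its paraphrase than to
    # its MEQ (e.g., by inner product or cosine similarity of question embeddings).
    eq = embed(q)
    return similarity(eq, embed(paraphrase)) > similarity(eq, embed(meq))
```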
Contrast consistency, the ability of a model to make consistently correct predictions in the presence of perturbations, is an essential aspect in NLP. While studied in tasks such as sentiment analysis and reading comprehension, it remains unexplored in opendomain question answering (OpenQA) due to the difficulty of collecting perturbed questions that satisfy factuality requirements. In this work, we collect minimally edited questions as challenging contrast sets to evaluate OpenQA models. Our collection approach combines both human annotation and large language model generation. We find that the widely used dense passage retriever (DPR) performs poorly on our contrast sets, despite fitting the training set well and performing competitively on standard test sets. To address this issue, we introduce a simple and effective query-side contrastive loss with the aid of data augmentation to improve DPR training. Our experiments on the contrast sets demonstrate that DPR's contrast consistency is improved without sacrificing its accuracy on the standard test sets.
Exploring Contrast Consistency of Open-Domain Question Answering Systems on Minimally Edited Questions
[ { "figure_caption": "Original DPR: Not contrast consistent (b) Our improved DPR: Contrast consistent! (\"Pet Sematary 2\")", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Word cloud of the edited words. Words in green and red are the deleted and added words, respectively. Larger font sizes indicate higher frequencies.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Overlap in top-5 retrieved passages between the original training question and its MEQ.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The ratio of successful MEQ identifications of different models on contrast sets, with paraphrase questions as distractors.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Dataset statistics. Question lengths, answer lengths, and edit distances are all measured in words.", "figure_data": "StatisticsNQ Train Test AmbigQA GPT MEQSize79,168 3,6101,0502,293With Gold Passage 58,880 1,7666251,229Question Length9.17 9.2210.739.69Answer Length2.16 2.222.621.96#Answers1.22 1.791.471.18Edit Distance9.10 9.162.391.18Semantic Similarity 30.12 29.8796.4797.96", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "MEQ-GPT is similar to NQ regarding the average length of questions and answers. Questions in MEQ-Who wrote the music for the national anthem? A: John Stafford Smith Q: Who wrote the lyrics for the national anthem? A: Francis Scott Key", "figure_data": "Edit TypeProportion Antonym EditsExampleNouns Q: Verbs 641 (28.0%) 151 (24%) 425 (18.5%) Q: When did Australia stop using one cent coins? A: 1992 176 (41%) Q: When did Australia start using one cent coins? A: 1966Adjectives 418 (18.2%)146 (35%)Q: How many islands are in Andaman and Nicobar? A: 572 Q: How many inhabited islands are in Andaman and Nicobar? A: 37Numbers326 (14.2%)-Q: Where did season 2 of Jersey Shore take place? A: Miami Beach, Florida Q: Where did season 3 of Jersey Shore take place? A: Seaside Heights, New JerseyOrdinals211 (9.2%)30 (14%)Q: Highest scoring NBA players of all time in one game? A: Wilt Chamberlain Q: Second highest scoring NBA players of all time in one game? A: Kobe BryantDates152 (6.6%)-Q: Who ruled the Holy Roman Empire in 1509? A: Maximilian I Q: Who ruled the Holy Roman Empire in 1519? A: Charles VPrepositions Conjunctions66 (2.9%)14 (21%)Q:", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Different MEQ edit types in MEQ-GPT with their proportions of antonym edits and examples. The remaining 2.4% of the instances are of miscellaneous types. The first line in each example is the original question and the second line is the MEQ. Words in green and red are the deleted and added words, respectively.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Negative Passage : It was Williams' first Grand Slam final at Wimbledon since 2009, her first Grand Slam...", "figure_data": "Positive Passage : … Garbiñe MuguruzaQuestion: Who won the women's𝒑 +won her second Grand Slam singles title, defeating Venus Williams …finals at Wimbledon 2017?𝒑 -Positive Question: Winner ofPositive Passage: 2017 Wimbledonthe Wimbledon women's finals in 2017? 
(back translation)𝒒 +Championships -Garbiñe Muguruza won her second Grand Slam singlesQuestion: Who won the women's𝒑 +title, defeating Venus Williams in the final game, 7-5, 6-0. …finals at Wimbledon 2017?𝒒Negative Passage: It was Williams'Negative Question: Who won the 2017? (retrieved from PAQ) women's doubles at Wimbledon𝒒 -𝒑 -against a player other than her sister. first Grand Slam final at Wimbledon since 2009, her first Grand Slam finalEmbedding from question encoderEmbedding from passage encoderMinimize embedding distanceMaximize embedding distance5 Method: Training DPR with Query-Side Contrastive Loss", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Ranking evaluation results. MR and MRR stand for mean rank and mean reciprocal rank, respectively. A lower MR or higher MRR indicates better performance. BM25 is not listed because sampling hard negatives from top-ranked passages in BM25 retrieval lowers the ranking performance of BM25 in return.", "figure_data": "ModelAugmentationMR↓NQ MRR↑MEQ-AmbigQA MR↓ MRR↑MEQ-GPT MR↓ MRR↑DPR BASENone2.360.7845.090.5635.440.507DPR BASERandom2.360.7815.090.5575.250.524DPR BASEMEQs2.340.7835.090.5435.100.529DPR BASEMEQs + L QQ2.250.7914.850.5694.880.547DPR LARGENone2.310.7804.840.5695.460.515DPR LARGERandom2.200.7974.980.5545.180.533DPR LARGEMEQs2.170.7974.790.5615.000.544DPR LARGEMEQs + L QQ2.140.8044.590.5924.610.565ModelL QQAmbigQA MR↓ MRR↑ MR↓ MRR↑ GPTDPR BASEInfoNCE4.85 0.569 4.88 0.547DPR BASE Dot Product 4.79 0.574 4.98 0.539DPR BASETriplet4.80 0.568 4.91 0.542DPR LARGE InfoNCE4.76 0.572 4.63 0.570DPR LARGE Dot Product 4.59 0.592 4.61 0.565DPR LARGETriplet4.61 0.582 4.59 0.573", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ranking evaluation with different L QQ functions on two MEQ contrast sets. All loss functions outperform the baselines in Table3.", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Retrieval evaluation results. R@k stands for Recall@k.", "figure_data": "BM25None23.245.364.516.834.248.821.142.761.7DPR BASENone46.670.081.228.550.065.631.557.373.2DPR BASERandom48.271.281.627.549.265.831.858.073.8DPR BASEMEQs46.469.981.325.246.262.731.555.972.3DPR BASEMEQs + L QQ48.170.881.929.552.366.432.858.774.4DPR LARGENone46.067.680.326.849.264.029.254.970.7DPR LARGERandom49.070.981.526.248.464.131.356.772.3DPR LARGEMEQs48.070.581.427.747.261.531.257.071.8DPR LARGEMEQs + L QQ51.071.281.630.152.365.432.558.473.1ModelAugmentation1PNQ 5P20PMEQ-AmbigQA 1P 5P 20P1PMEQ-GPT 5P20PBM25None16.428.437.310.915.218.113.320.525.8DPR BASENone32.643.249.114.019.721.917.625.829.3DPR BASERandom33.744.849.414.719.122.416.825.429.5DPR BASEMEQs32.043.448.713.519.323.117.125.529.5DPR BASEMEQs + L QQ34.444.749.216.621.822.819.526.731.1DPR LARGENone31.442.247.914.319.221.416.124.629.1DPR LARGERandom33.744.649.313.420.421.517.325.529.4DPR LARGEMEQs33.044.748.715.719.321.717.425.029.1DPR LARGEMEQs + L QQ33.744.649.316.122.123.019.427.631.6", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "End-to-end QA results (Exact Match). 1P, 5P and 20P are the number of passages read by the FiD reader.", "figure_data": "", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
Zhihan Zhang; Wenhao Yu; Zheng Ning; Mingxuan Ju; Meng Jiang
[ { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Danqi Chen; Adam Fisch; Jason Weston; Antoine Bordes", "journal": "", "ref_id": "b1", "title": "Reading wikipedia to answer open-domain questions", "year": "2017" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "NAACL-HLT", "ref_id": "b2", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Martin Fajcik; Martin Docekal; Karel Ondrej; Pavel Smrz", "journal": "", "ref_id": "b3", "title": "R2-D2: A modular baseline for open-domain question answering", "year": "2021" }, { "authors": "Luyu Gao; Jamie Callan", "journal": "", "ref_id": "b4", "title": "Condenser: a pre-training architecture for dense retrieval", "year": "2021" }, { "authors": "Matt Gardner; Yoav Artzi; Victoria Basmova; Jonathan Berant; Ben Bogin; Sihao Chen; Pradeep Dasigi; Dheeru Dua; Yanai Elazar; Ananth Gottumukkala; Nitish Gupta; Hannaneh Hajishirzi; Gabriel Ilharco; Daniel Khashabi; Kevin Lin; Jiangming Liu; Nelson F Liu; Phoebe Mulcaire; Qiang Ning; Sameer Singh; Noah A Smith; Sanjay Subramanian; Reut Tsarfaty; Eric Wallace; Ally Zhang; Ben Zhou", "journal": "", "ref_id": "b5", "title": "Evaluating models' local decision boundaries via contrast sets", "year": "2020" }, { "authors": "Gautier Izacard; Mathilde Caron; Lucas Hosseini; Sebastian Riedel; Piotr Bojanowski; Armand Joulin; Edouard Grave", "journal": "", "ref_id": "b6", "title": "Towards unsupervised dense information retrieval with contrastive learning", "year": "2021" }, { "authors": "Gautier Izacard; Edouard Grave", "journal": "", "ref_id": "b7", "title": "Leveraging passage retrieval with generative models for open domain question answering", "year": "2021" }, { "authors": "Vladimir Karpukhin; Barlas Oguz; Sewon Min; S H Patrick; Ledell Lewis; Sergey Wu; Danqi Edunov; Wen-Tau Chen; Yih", "journal": "", "ref_id": "b8", "title": "Dense passage retrieval for open-domain question answering", "year": "2020" }, { "authors": "Divyansh Kaushik; Eduard H Hovy; Zachary Chase Lipton", "journal": "", "ref_id": "b9", "title": "Learning the difference that makes A difference with counterfactually-augmented data", "year": "2020" }, { "authors": "Akhil Kedia; Mohd Abbas Zaidi; Haejun Lee", "journal": "", "ref_id": "b10", "title": "Fie: Building a global probability space by leveraging early fusion in encoder for opendomain question answering", "year": "2022" }, { "authors": "Omar Khattab; Matei Zaharia", "journal": "", "ref_id": "b11", "title": "Colbert: Efficient and effective passage search via contextualized late interaction over BERT", "year": "2020" }, { "authors": "Tom Kwiatkowski; Jennimaria Palomaki; Olivia Redfield; Michael Collins; Ankur P Parikh; Chris Alberti; Danielle Epstein; Illia Polosukhin; Jacob Devlin; Kenton Lee; Kristina Toutanova; Llion Jones; Matthew Kelcey; Ming-Wei Chang; Andrew M Dai; Jakob Uszkoreit; Quoc Le; Slav Petrov", "journal": 
"Transactions of the Association for Computational Linguistics", "ref_id": "b12", "title": "Natural questions: a benchmark for question answering research", "year": "2019" }, { "authors": "S H Patrick; Ethan Lewis; Aleksandra Perez; Fabio Piktus; Vladimir Petroni; Naman Karpukhin; Heinrich Goyal; Mike Küttler; Wentau Lewis; Tim Yih; Sebastian Rocktäschel; Douwe Riedel; Kiela", "journal": "", "ref_id": "b13", "title": "Retrieval-augmented generation for knowledge-intensive NLP tasks", "year": "2020" }, { "authors": "S H Patrick; Yuxiang Lewis; Linqing Wu; Pasquale Liu; Heinrich Minervini; Aleksandra Küttler; Pontus Piktus; Sebastian Stenetorp; Riedel", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b14", "title": "PAQ: 65 million probablyasked questions and what you can do with them", "year": "2021" }, { "authors": "Daliang Li; Ankit Singh Rawat; Manzil Zaheer; Xin Wang; Michal Lukasik; Andreas Veit; Felix X Yu; Sanjiv Kumar", "journal": "", "ref_id": "b15", "title": "Large language models with controllable working memory", "year": "2022" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b16", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Shayne Longpre; Kartik Perisetla; Anthony Chen; Nikhil Ramesh; Chris Dubois; Sameer Singh", "journal": "", "ref_id": "b17", "title": "Entity-based knowledge conflicts in question answering", "year": "2021" }, { "authors": "Sewon Min; Julian Michael; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "", "ref_id": "b18", "title": "Ambigqa: Answering ambiguous open-domain questions", "year": "2020" }, { "authors": "Nathan Ng; Kyra Yee; Alexei Baevski; Myle Ott; Michael Auli; Sergey Edunov", "journal": "", "ref_id": "b19", "title": "Facebook fair's WMT19 news translation task submission", "year": "2019" }, { "authors": "Jianmo Ni; Chen Qu; Jing Lu; Zhuyun Dai; Gustavo Hernández Abrego; Ji Ma; Vincent Y Zhao; Yi Luan; Keith B Hall; Ming-Wei Chang; Yinfei Yang", "journal": "", "ref_id": "b20", "title": "Large dual encoders are generalizable retrievers", "year": "2022" }, { "authors": "Aäron Van Den Oord; Yazhe Li; Oriol Vinyals", "journal": "", "ref_id": "b21", "title": "Representation learning with contrastive predictive coding", "year": "2018" }, { "authors": "Long Ouyang; Jeff Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray; John Schulman; Jacob Hilton; Fraser Kelton; Luke Miller; Maddie Simens; Amanda Askell; Peter Welinder; Paul F Christiano; Jan Leike; Ryan Lowe", "journal": "", "ref_id": "b22", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Bhargavi Paranjape; Matthew Lamm; Ian Tenney", "journal": "", "ref_id": "b23", "title": "Retrieval-guided counterfactual generation for QA", "year": "2022" }, { "authors": "Jae Sung Park; Sheng Shen; Ali Farhadi; Trevor Darrell; Yejin Choi; Anna Rohrbach", "journal": "", "ref_id": "b24", "title": "Exposing the limits of video-text models through contrast sets", "year": "2022" }, { "authors": "Stephen E Robertson; Hugo Zaragoza", "journal": "Foundations and Trends® in Information Retrieval", "ref_id": "b25", "title": "The probabilistic relevance framework: BM25 and beyond", "year": "2009" }, { "authors": "Alexis Ross; Tongshuang Wu; Hao Peng; Matthew E Peters; Matt 
Gardner", "journal": "", "ref_id": "b26", "title": "Tailor: Generating and perturbing text with semantic controls", "year": "2022" }, { "authors": "Florian Schroff; Dmitry Kalenichenko; James Philbin", "journal": "", "ref_id": "b27", "title": "Facenet: A unified embedding for face recognition and clustering", "year": "2015" }, { "authors": "Jörg Tiedemann; Santhosh Thottingal", "journal": "EAMT", "ref_id": "b28", "title": "OPUS-MT -building open translation services for the world", "year": "2020" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel R Bowman", "journal": "", "ref_id": "b29", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "year": "2019" }, { "authors": "Xuezhi Wang; Jason Wei; Dale Schuurmans; Quoc V Le; Ed H Chi; Denny Zhou", "journal": "", "ref_id": "b30", "title": "Self-consistency improves chain of thought reasoning in language models", "year": "2022" }, { "authors": "Tongshuang Wu; Marco Túlio Ribeiro; Jeffrey Heer; Daniel S Weld", "journal": "", "ref_id": "b31", "title": "Polyjuice: Generating counterfactuals for explaining, evaluating, and improving models", "year": "2021" }, { "authors": "Xi Ye; Rohan Nair; Greg Durrett", "journal": "", "ref_id": "b32", "title": "Connecting attributions and QA model behavior on realistic counterfactuals", "year": "2021" }, { "authors": "Wenhao Yu; Dan Iter; Shuohang Wang; Yichong Xu; Mingxuan Ju; Soumya Sanyal; Chenguang Zhu; Michael Zeng; Meng Jiang", "journal": "", "ref_id": "b33", "title": "Generate rather than retrieve: Large language models are strong context generators", "year": "2023" }, { "authors": "Fengbin Zhu; Wenqiang Lei; Chao Wang; Jianming Zheng; Soujanya Poria; Tat-Seng Chua", "journal": "", "ref_id": "b34", "title": "Retrieving and reading: A comprehensive survey on open-domain question answering", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 97.16, 620.02, 86.83, 10.63 ], "formula_id": "formula_0", "formula_text": "[I, x 1 , • • • , x t , q, a]," }, { "formula_coordinates": [ 4, 413.65, 685.82, 111.89, 11.76 ], "formula_id": "formula_1", "formula_text": ": q ′ is discarded if a ′ = a." }, { "formula_coordinates": [ 7, 127.79, 392.61, 106.7, 12.65 ], "formula_id": "formula_2", "formula_text": "s(q, p) = E Q (q) ⊺ E P (p)" }, { "formula_coordinates": [ 7, 72, 502.86, 220.98, 29.79 ], "formula_id": "formula_3", "formula_text": "L QP = -log exp(s(q, p + )) exp(s(q, p + )) + n i=1 exp(s(q, p - i ))" }, { "formula_coordinates": [ 8, 72, 554.14, 58.32, 10.69 ], "formula_id": "formula_4", "formula_text": "L QQ = -log" } ]
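The formulas above give the DPR similarity s(q, p) = E_Q(q)^T E_P(p) and the passage-side InfoNCE loss L_QP; the query-side term L_QQ is cut off in this record. The sketch below is therefore a hedged reconstruction: it assumes L_QQ takes the analogous InfoNCE form over question embeddings (the paraphrased question as the positive, the MEQ as a negative), which is the InfoNCE variant compared against dot-product and triplet losses in the tables. The helper names and the weight alpha are illustrative, not the authors' implementation.

```python
# Sketch of the contrastive losses in PyTorch; the query-side form is an assumption
# (see the note above), since the L_QQ formula is truncated in this record.
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor, negatives: torch.Tensor) -> torch.Tensor:
    """anchor: (d,), positive: (d,), negatives: (n, d); returns -log softmax of the positive score."""
    pos_score = (anchor * positive).sum().unsqueeze(0)       # s(q, p+)
    neg_scores = negatives @ anchor                           # s(q, p-_i) for i = 1..n
    logits = torch.cat([pos_score, neg_scores]).unsqueeze(0)  # shape (1, 1 + n)
    target = torch.zeros(1, dtype=torch.long)                 # the positive sits at index 0
    return F.cross_entropy(logits, target)

def combined_loss(q, pos_passage, neg_passages, pos_question, neg_question, alpha=1.0):
    """L_QP plus the assumed query-side L_QQ; alpha is a hypothetical mixing weight."""
    l_qp = info_nce(q, pos_passage, neg_passages)
    l_qq = info_nce(q, pos_question, neg_question.unsqueeze(0))
    return l_qp + alpha * l_qq
```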
10.48550/arXiv.2302.04023
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b63", "b8", "b56", "b44", "b23", "b1", "b14", "b5" ], "table_ref": [], "text": "The rapidly evolving field of natural language processing (NLP) witnesses the rise of large language models (LLMs), such as GPT-3(Brown et al., 2020), LaMDA (Thoppilan et al., 2022), PaLM (Chowdhery et al., 2022), etc., which have revolutionized various downstream tasks with incontext learning (ICL) (Brown et al., 2020) and chain-of-thought (COT) prompting (Wei et al., 2022c). Excitingly, just by providing appropriate instructions (Sanh et al., 2022;Ouyang et al., 2022) or chain-of-thought prompts (Wei et al., 2022c), LLMs can achieve amazing performance on the zero-shot and few-shot scenarios of unseen tasks, even without updating parameters. Currently, one of the most well-known LLMs is ChatGPT (Ope-nAI, 2023b) powered by GPT-3.5 and GPT-4 (Ope-nAI, 2023a), which exhibits powerful dialogue capabilities. Since ChatGPT is a closed model, and OpenAI does not provide its training details, there are numerous aspects that need to be explored by researchers. For example, Jiao et al. (2023) evaluate the machine translation capability of ChatGPT, (Bang et al., 2023) assess the reasoning capability of ChatGPT. Hence, it is time to ask the question: Is information extraction task solved by ChatGPT and to what extent?\nInformation extraction (IE), as the fundamental natural language understanding task, aims to identify structured information of interest from unstructured plain text. Its results directly affect the subsequent downstream tasks, such as questionanswering (Fei et al., 2022;Cao et al., 2022) and knowledge graph construction (Wang et al., 2022a). Therefore, exploring the ability of ChatGPT to recognize target information can more directly reflect ChatGPT's performance on understanding task instructions to generate responses.\nIn this paper, we evaluate ChatGPT's capabilities on IE tasks in terms of four perspectives, including Performance, Evaluation Criteria, Robustness and Error Types.\n• Using few-shot ICL prompts generally leads to significant improvements, but still obviously lags behind SOTA results.\n• The chain-of-thought prompting cannot guarantee further gains compared to few-shot ICL prompts.\nEvaluation Criteria Through the manual checking, we find that ChatGPT tends to identify longer spans than the annotated ones, i.e., the recognized spans usually contain qualifiers such as crowns, quantifiers, adjectives, time, place, etc. Thus, the previous span hard-matching strategy is not suitable for the evaluation of LLMs like ChatGPT that generate human-like responses. We propose a softmatching strategy to solve this problem and display evaluation results more accurately.\nRobustness We conduct comparisons and analysis on four dimensions: Invalid Output, Irrelevant Context, Frequency of Target Types and The Order of Entities. We find that:\n• ChatGPT rarely outputs invalid responses in most cases.\n• Irrelevant context and frequency of target types have a significant impact on ChatGPT's performance.\n• ChatGPT is not sensitive to the order of entities, and cannot accurately understand the subject-object relationships of entities.\nError Types We summarize 7 types of errors on IE tasks by manually checking ChatGPT's responses, including Missing spans, Unmentioned spans, Unannotated spans, Incorrect span offsets, Undefined types, Incorrect types and Other. 
We find that \"unannotated spans\" is the most dominant error type, accounting for nearly 1/3 of errors. This raises concerns about the quality of the annotated data. Maybe using ChatGPT to assist in annotating data is a better solution." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b64", "b10", "b29", "b24", "b20", "b63", "b59", "b8", "b58", "b25", "b51", "b52", "b44", "b9", "b0", "b4", "b32", "b70", "b11", "b79", "b40", "b88", "b17", "b82", "b31", "b46", "b33", "b21", "b7", "b78", "b15", "b80" ], "table_ref": [], "text": "Large Language Models Based on the highly parallelizable Transformer architecture (Vaswani et al., 2017), pre-trained language models (PLMs) such as BERT (Devlin et al., 2019), BART (Lewis et al., 2020), etc., have shown powerful capabilities to solve a wide variety of NLP tasks. Some researchers find that scaling PLMs by increasing model size or data size often leads to more powerful capabilities, as long as the scaling law is followed (Kaplan et al., 2020;Hoffmann et al., 2022). Thus, numerous large-size models have been proposed, such as GPT-3 (Brown et al., 2020), LaMDA (Thoppilan et al., 2022), MT-NLG (Smith et al., 2022), PaLM (Chowdhery et al., 2022) and GPT-4 (Ope-nAI, 2023a), which typically have more than 100 billion parameters. The NLP community refers to these large-size PLMs as large language models (LLMs). Unlike small-sized PLMs, LLMs usually exhibit amazing emergent abilities (Wei et al., 2022b;Schaeffer et al., 2023) that enable them to achieve good performance in zero-shot and fewshot scenarios of unseen tasks, as long as the appropriate instructions (Wei et al., 2022a;Kojima et al., 2022;Wang et al., 2022b) or chain-of-though prompts (Wei et al., 2022c) are provided.\nChatGPT One of the best-known examples of LLMs is OpenAI's GPT (Generative Pre-Training Transformer) series, including GPT-1 (Radford et al., 2018), GPT-2 (Radford et al., 2019), GPT-3 (Brown et al., 2020), GPT-4 (OpenAI, 2023a), etc. A key milestone in the development process is InstructGPT (Ouyang et al., 2022), a framework for instruction fine-tuning based on reinforcement learning from human feedback (RLHF) (Christiano et al., 2017). The framework allows a large language model to be adapted to a large number of NLP tasks simultaneously, and leverages human feedbacks to align the model output with human preferences in order to generate responses more consistent with human expectations. As the successor of InstructGPT, ChatGPT has exploded the field of artificial intelligence (AI), and attracted an unprecedented wave of enthusiasm. It can interact with humans through multiple turns of dialogue, understand user intent, accomplish instructions and return human-like responses. Shocked by Chat-GPT's performance, some papers already consider GPT-4 as an early version of artificial general intelligence (AGI) (Altman, 2023;Bubeck et al., 2023).\nInformation Extraction As a popular and fundamental task, information extraction (IE) aims to extract structured knowledge of interest from unstructured plain text. The output mainly includes entities, relations between entities, event arguments, opinions, human sentiments, etc. 
Due to the different target information, IE mainly involves 4 tasks, including named entity recognition (NER) (Li et al., 2020;Wang et al., 2021;Ding et al., 2021;Yang et al., 2023), relation extraction (RE) (Nan et al., 2020;Zhao et al., 2021;Han et al., 2022;Zhan et al., 2022;Li et al., 2022;Peng et al., 2022;Wang et al., 2023b), event extraction (EE) (Lin et al., 2020;Lee et al., 2021a;Hsu et al., 2022) and aspectbased sentiment analysis (ABSA) (Chen and Qian, 2020;Yan et al., 2021;Feng et al., 2021;Zhang et al., 2022c,b;Yu et al., 2023). Since the result of IE directly affects the performance of subsequent higher-level applications, the importance of IE cannot be overstated. This paper intends to evaluate the performance of ChatGPT on IE, in detail." }, { "figure_ref": [], "heading": "Evaluation of ChatGPT Since", "publication_ref": [ "b18", "b26", "b62", "b22", "b50", "b38", "b60", "b16", "b23", "b61", "b50", "b89", "b2", "b1", "b76", "b30", "b30" ], "table_ref": [], "text": "ChatGPT is a closed model and OpenAI does not provide its training details, researchers are exploring its concerns and capabilities. The concerns involve ethical risks (Haque et al., 2022;Krügel et al., 2023), patient privacy (Tang et al., 2023), fabricated misinformation (Jeblick et al., 2022;Chen et al., 2023), education integrity (Malinka et al., 2023) and legal challenges (Sun, 2023). For its capabilities, researchers evaluate the performance of ChatGPT on different tasks, including stance detection (Zhang et al., 2022a), question-answering (Guo et al., 2023), machine translation (Jiao et al., 2023), sentiment analysis (Susnjak, 2023) and other general NLP tasks (Qin et al., 2023;Zhong et al., 2023;Bian et al., 2023;Bang et al., 2023). In addition, for the information extraction task, Wei et al. (2023) propose a two-stage framework, ChatIE, to use ChatGPT for zero-shot information extraction, and evaluate its performance in detail. Li et al. (2023) measure the performance, explainability, calibration and faithfulness of ChatGPT on IE tasks. As a concurrent work, this paper measures the performance of Chat-GPT on multiple datasets of 14 IE subtasks, explores the impact of in-context learning (ICL) and chain-of-thought (COT) prompts on performance, evaluates robustness by scenario, and analyzes error types. Our perspective is significantly different from Li et al. (2023), and we evaluate more IE sub-tasks on more benchmarks.\n3 ChatGPT for Information Extraction" }, { "figure_ref": [], "heading": "Tasks", "publication_ref": [], "table_ref": [], "text": "In this paper, we consider 4 well-representative IE tasks, including Named Entity Recognition (NER), Relation Extraction (RE), Event Extraction (EE) and Aspect-based Sentiment Analysis (ABSA).\nSince each task contains several subtasks or scenarios, we conduct evaluations and analysis on the following 14 sub-tasks:\n• Flat Entity Recognition (NER-Flat): Recognizing all entities within the text. Each entity is identified as a separate entity, without any hierarchical relationship between them.\n• Nested Entity Recognition (NER-Nested): Recognizing all entities within the text. 
Each entity can be nested inside other entities, i.e., an entity may contain other sub-entities.\n• Relation Classification (RE-RC): Determining the relationship between a pair of given entities within the text.\n• Relational Triplet Extraction (RE-Triplet):\nIdentifying entities and their relationships simultaneously.\n• Event Detection (EE-Trigger): Identifying the word or phrase that indicates the occurrence of an event, and categorizing its corresponding event type.\n• Event Argument Extraction (EE-Argument):\nRecognizing the entities that are involved in the given event, and classifying their corresponding roles.\n• Trigger-Argument joint Extraction (EE-Joint): Identifying event trigger, event type and all arguments with their roles simultaneously.\n• Aspect Extraction (ABSA-AE): Extracting all the aspect terms from a review.\n• Opinion Extraction (ABSA-OE): Extracting all the opinion terms from a review.\n• Aspect-level Sentiment Classification (ABSA-ALSC): Predicting the sentiment polarities for every given aspect terms in a review.\n• Aspect-oriented Opinion Extraction (ABSA-AOE): Extracting the paired opinion terms for every given aspect terms in a review.\n• Aspect Extraction and Sentiment Classification (ABSA-AESC): Extracting the aspect terms as well as the corresponding sentiment polarities simultaneously.\n• Pair Extraction (ABSA-Pair): Extracting the aspect terms as well as the corresponding opinion terms simultaneously.\n• Triplet Extraction (ABSA-Triplet): Extracting all aspects terms with their corresponding opinion terms and sentiment polarity simultaneously." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b54", "b11", "b12", "b41", "b53", "b81", "b87", "b19", "b66", "b33", "b57", "b68", "b13", "b45", "b77", "b49", "b48", "b47" ], "table_ref": [], "text": "We select at least four datasets for each IE task, with a total of 17 datasets as follows2 :\n• For NER task, the datasets include CoNLL03 (Sang and Meulder, 2003), FewNERD (Ding et al., 2021), ACE04 (Doddington et al., 2004), ACE05-Ent(Walker et al., 2006) and GE-NIA (Ohta et al., 2002).\n• For RE task, the datasets include CoNLL04 (Roth and Yih, 2004), NYT-multi (Zeng et al., 2018), TACRED (Zhang et al., 2017), Se-mEval 2010 (Hendrickx et al., 2010).\n• For EE task, the datasets include ACE05-Evt (Walker et al., 2006), ACE05+ (Lin et al., 2020), CASIE (Satyapanich et al., 2020) and Commodity News EE (Lee et al., 2021b).\n• For ABSA task, the datasets include D 17 (Wang et al., 2017), D 19 (Fan et al., 2019), D 20a (Peng et al., 2020) and D 20b (Xu et al., 2020), which are all originated from the Se-mEval Challenges (Pontiki et al., 2014(Pontiki et al., , 2015(Pontiki et al., , 2016))." }, { "figure_ref": [], "heading": "Prompts", "publication_ref": [], "table_ref": [], "text": "The prompts designed in this paper all consists of five main elements: the task instruction, candidate target labels, output format description, demonstration examples and the input text. The task instruction describes the specific IE sub-task, candidate target labels are the types of target information, such as entity types, relation types, etc. The output format description specifies the format of outputs to facilitate the distinguishing of target information. The demonstration examples exist under the few-shot In-context Learning setting, which can also provide the chain-of-thought explanation. The input text is a sentence of review from which target information is to be extracted. 
An example of prompts for NER task is shown in Figure 1.\nFor the demonstration examples, we randomly select them from the training set of each dataset in Section 3.2. To obtain the chain-of-thought prompts, we construct them manually with the help of ChatGPT to generate explanations." }, { "figure_ref": [], "heading": "Setup", "publication_ref": [ "b23", "b76" ], "table_ref": [], "text": "To conduct a thorough evaluation of ChatGPT's capabilities, for each IE sub-task, we first measure the performance of zero-shot scenario. Then, we investigate the impact of few-shot in-context learning (ICL) and few-shot chain-of-thought (COT) prompting on the performance. Specifically, we manually design 5 different zero-shot prompts for each sub-task since ChatGPT is sensitive to different prompts, and report the mean and standard deviation values to alleviate the randomness of prompts. To construct few-shot ICL prompts, we select the zero-shot prompt with best performance and add the randomly selected samples from the corresponding training set. For few-shot COT prompts, we add the chain-of-thought explanations to the few-shot ICL prompts, where the chain-ofthought explanations are manually constructed with the help of ChatGPT. To eliminate the randomness of selected samples, we select five different groups and also report the means and standard deviations.\nWe use the official API to generate all outputs from ChatGPT. To prevent the influence of dialogue history, we generate the response separately for each testing sample. Unlike other work where only 30-50 samples are selected for evaluation (Jiao et al., 2023;Wei et al., 2023), we use the entire test set of most dataset in Section 3.2 for evaluation. Too few samples will lead to low coverage and high randomness of results, too many samples are limited by the rate and expense of accessing OpenAI's API. Since most of datasets we use have a test set with less than 3000 samples, we limit the number of samples to a maximum of 3000 by random sampling.\nBesides, we compare ChatGPT with the stateof-the-art result for each sub-task. For the metric, we use F1 metric for all sub-tasks. See the Appendix A.1 for the F1 calculation criteria of all sub-tasks." }, { "figure_ref": [], "heading": "An example of NER task prompts", "publication_ref": [], "table_ref": [], "text": "Given the list of entity types [\"Organization\", \"Person\", \"Location\", \"Miscellaneous\"], read the given sentence and find out all words/phrases that indicate the above types of named entities. Answer in the format [\"entity_type\", \"entity_name\"] without any explanation. If no entity exists, then just answer \"[]\".\nSentence: \"Japan began the defense of their Asian Cup title with a lucky 2-1 win against Syria in a Group C championship match on Friday.\" Answer: [\"Location\", \"Japan\"], [\"Miscellaneous\", \"Asian Cup\"], [\"Location\", \"Syria\"] ... (More examples are omitted here.) Sentence: \"In Home Health said it previously recorded a reserve equal to 16 percent of all revenue related to the community liaison costs.\" Answer:\nExpected Output:\n[\"Organization\", \"In Home Health\"] Figure 1: An example of prompts for NER task on CoNLL03 dataset. See the Appendix C for more prompts." }, { "figure_ref": [], "heading": "The Performance", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "In this section, we report the performance of Chat-GPT on 14 different sub-tasks, as shown in Table 1." 
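The five prompt elements described above (task instruction, candidate target labels, output format description, demonstration examples, input text) can be assembled mechanically. The sketch below rebuilds the NER prompt of Figure 1 from those elements; it is an illustrative reconstruction rather than the authors' released prompting code, and the function name and argument layout are assumptions.

```python
# Assemble a few-shot ICL prompt for NER from the five elements named in the text.
from typing import List, Tuple

def build_ner_prompt(entity_types: List[str],
                     demonstrations: List[Tuple[str, str]],
                     input_text: str) -> str:
    labels = ", ".join(f'"{t}"' for t in entity_types)
    instruction = (f"Given the list of entity types [{labels}], read the given sentence "
                   "and find out all words/phrases that indicate the above types of named entities.")
    output_format = ('Answer in the format ["entity_type", "entity_name"] without any '
                     'explanation. If no entity exists, then just answer "[]".')
    demos = "\n".join(f'Sentence: "{s}"\nAnswer: {a}' for s, a in demonstrations)
    return "\n".join([instruction, output_format, demos, f'Sentence: "{input_text}"', "Answer:"])

# Rebuilding the Figure 1 example (one demonstration, then the test sentence):
prompt = build_ner_prompt(
    ["Organization", "Person", "Location", "Miscellaneous"],
    [("Japan began the defense of their Asian Cup title with a lucky 2-1 win against "
      "Syria in a Group C championship match on Friday.",
      '["Location", "Japan"], ["Miscellaneous", "Asian Cup"], ["Location", "Syria"]')],
    "In Home Health said it previously recorded a reserve equal to 16 percent of all "
    "revenue related to the community liaison costs.",
)
```

Dropping the demonstrations yields the zero-shot prompts; adding manually written explanations to each demonstration answer yields the few-shot COT prompts described in the Setup.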
}, { "figure_ref": [], "heading": "Performance Gap in Zero-shot Scenario", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "From the zero-shot result of Table 1, we can draw the following conclusions: 1) There is a significant performance gap between ChatGPT and SOTA methods. This seems obvious and reasonable, since all SOTA methods are trained on corresponding datasets. In other words, they are fully supervised models and are not zero/few-shot ones.\n2) The harder the task, the larger the performance gap. From the perspective of the four IE tasks of NER, RE, EE and ABSA, it can be seen that ABSA task perform significantly better than RE and EE tasks. Almost all sub-tasks of ABSA can reach more than 50% of SOTA, while all subtasks of RE and EE rarely exceed 30% of SOTA. One reason is that ABSA task involves only aspect terms and opinion terms, and is much simpler. While RE task and EE task involve many more target types, and is much harder. Take NYT-multi dataset for example, there are 24 relation types.\n3) The harder the scenario, the larger the performance gap. Each IE task has several scenarios. For NER task, the NER-Flat scenario is intuitively simpler than NER-Nested, and the performance of NER-Flat is significantly better than NER-Nested (47.0% vs. 26.7%). For other tasks, including RE, EE and ABSA, we can observe similar results. 4) On a few simple cases, ChatGPT can equal or exceed the performance of SOTA methods. We can find that ChatGPT is able to achieve comparable performance with SOTA methods on the ABSA-ALSC sub-task, and can even surpass SOTA result, reaching 117.7% of SOTA. The sub-task is a simple sentiment classification, and the candidate polarities include \"positive\", \"neutral\" and \"negative\"." }, { "figure_ref": [], "heading": "Mitigate the Gap", "publication_ref": [ "b65" ], "table_ref": [], "text": "The observed performance gap in the above subsection is not consistent with our actual experience with ChatGPT. To mitigate the gap, we add few randomly selected demonstration examples to construct few-shot ICL prompts and few-shot COT prompts. To eliminate the randomness of selected examples, we construct five groups of demonstration examples, and report the means and standard deviations in Figure 1.\nFor the few-shot ICL setting, it can be seen that using few-shot ICL prompts generally leads to significant improvements (about 3.0∼13.0 F1 value), but still obviously lags behind SOTA results. This seems to be inconsistent with the conclusion of Wadhwa et al. (2023) For the few-shot COT setting, we can find that the use of few-shot COT prompts cannot guarantee further gains compared to few-shot ICL prompts, sometimes it is worse than the performance of few-shot ICL prompts. The possible reasons are that, the quality of constructed chain-ofthought prompts is not good enough and ChatGPT is too sensitive for the few-shot COT prompts. To sum up, we conclude that ChatGPT struggles to achieve comparable performance compared to the corresponding SOTA methods in both zero-shot and few-shot scenarios, even if the chain-of-thought explanations are provided." }, { "figure_ref": [], "heading": "Rethink the Gap", "publication_ref": [ "b36", "b35", "b34" ], "table_ref": [ "tab_4", "tab_7", "tab_7" ], "text": "In this section, we rethink the performance gap from the perspective of evaluation criteria. 
Following the evaluation method of previous work (Lu et al., 2022;Lou et al., 2023;Liu et al., 2023), we strictly match the start and end indices of the predicted target text span (e.g., entity spans, opinion spans). This method may not be suitable for the evaluation of LLMs like ChatGPT that generate human-like responses. We manually check ChatGPT's responses, and find that ChatGPT tends to identify longer spans than the annotated ones, to get closer to humans. All sub-tasks in Section 3.1 involve four types of span: entities, event triggers, aspect terms and opinion terms. For each type of span, we select several typical annotated spans and their corresponding predicted spans, and shown them in Table 2. It can be seen that the annotated spans usually do not contain qualifiers such as quantifiers, articles, adjectives, time, place, etc. While the spans predicted by ChatGPT usually contain these qualifier parts, which are also correct target information. For example, \"University of Michigan\" and \"The University of Michigan\" indicate the same target information, although the offsets are different. Therefore, to incorporate this case, we propose a soft-matching approach to obtain more accurate evaluation results, as shown in Algorithm 1. Where GetSimilarity(•) indicates a method to calculate the similarity, here we use the python package difflib to calculate the edit distance as the similarity value. Note that the Line 6 in the algorithm ensures that only the offsets of spans are different.\nWe compare the evaluation results between the default hard-matching strategy and the softmatching strategy for related sub-tasks, and show them in Table 3. For space reasons, we only report results of one dataset for each sub-tasks. We set the threshold γ to 0.5. See the Appendix A.2 for the threshold value's details. From the Table 3, it can be seen that the soft-matching strategy delivers consistent and significant performance gains, with up to 14.53 F1 value. Interestingly, the improvement on simple sub-tasks is much more noticeable, i.e., ABSA task has a higher overall performance gains than EE task. Further, although the soft-matching strategy brings significant gains, it does not reach a comparable level with SOTA methods. This is still consistent with the conclusions of Section 4.\n6 Robustness Analysis" }, { "figure_ref": [], "heading": "Invalid Output", "publication_ref": [], "table_ref": [], "text": "Since ChatGPT is a generative model, the output responses may be irrelevant information that does not meet the task requirements. In this subsection, we investigate how many invalid responses Chat-GPT returns for different IE tasks. Here invalid responses refer to the response with incorrect format or unexpected content that is not generated as required by task-specific prompts. For each sub- " }, { "figure_ref": [], "heading": "Irrelevant Context", "publication_ref": [], "table_ref": [ "tab_7", "tab_10" ], "text": "Since ChatGPT is extremely sensitive to different prompts, we investigate the impact of irrelevant contexts on ChatGPT's performance on all IE subtasks. The specific implementation is to modify the \"input text\" part of zero-shot prompts, by randomly inserting a piece of irrelevant text before and after the input text. The irrelevant text does not contain the target information spans to be extracted. We also select the same dataset for each sub-task as Table 3, report the mean values of 5 different zero-shot prompts and performance changes, shown in Table 5. 
It can be seen that the performance of most sub-tasks decreases significantly, up to 48.0%, when adding irrelevant context randomly.\nThe ABSA-ALSC and RE-RC sub-tasks have less performance drop, due to the fact that they perform classification based on the given aspect term or entity pair and are less affected by irrelevant context. We can conclude that ChatGPT is very sensitive to the irrelevant context, which can significantly degrade performance on IE tasks. " }, { "figure_ref": [], "heading": "Frequency of Target Types", "publication_ref": [], "table_ref": [], "text": "The real-world data usually exhibits a long-tailed distribution, i.e., the frequency of target types varies greatly, causing the models to perform much worse on uncommon/tail types than on common/head ones. Here target types include entity types, relation types, event types, etc. In this subsection, we investigate the impact of the \"frequency of target types\" on ChatGPT's performance on all IE sub-tasks. We select one dataset for each sub-task with the phenomenon of frequency differences, report the mean values of five different prompts on the head types and tail types under the zero-shot setting. See the Appendix A.3 for details on how to distinguish head types and tail ones for each subtask. The results are shown in Table 6. It can be seen that the performance of tail types is significantly worse than head types, only up to 75.9% performance of head types. On some sub-tasks, such as RE-RC and RE-Triplet, the performance of tail types is even lower than 15% of head types' performance. We can conclude that ChatGPT also suffers from the long-tail problem. " }, { "figure_ref": [], "heading": "Other", "publication_ref": [], "table_ref": [ "tab_12" ], "text": "In this subsection, we explore whether ChatGPT can distinguish the order of two entities in the RE-RC sub-task, i.e., which entity is the subject and which entity is the object. Since most relation types are not symmetric, the order of two entities is very critical. For example, the sentence \"Steven Paul Jobs was born in San Francisco on February 24, 1955.\" expresses the relational triplet <Steven Paul Jobs, born_in, San Francisco>, not the triplet <San Francisco, born_in, Steven Paul Jobs>. For each instance of the asymmetric relation types, we swap the order of entities and check the change in prediction results. After exchanging the order, the prediction result should be changed to \"NA\", which indicates no relationship exists between entities.\nThe results are shown in Table 7. All values are the mean using five different zero-shot prompts. We can see that after swapping the order, most of predicted results (over 70%) remain the same as before the swap. Therefore, it can be concluded that For RE-RC sub-task, ChatGPT is not sensitive to the order of entities, and cannot accurately understand the subject-object relationship of entities.\nError " }, { "figure_ref": [], "heading": "Analysis of Error Types", "publication_ref": [], "table_ref": [ "tab_14", "tab_4" ], "text": "In this section, we analyze ChatGPT's errors on all IE sub-tasks. Here we use \"span\" to denote the target information to be extracted, and \"types\" to indicate the types of target information such as entity types, relation type, event types, sentiment polarity, etc. 
Through the manual checking, we find that the errors mainly include:\n\n• Missing spans: Missing one or more annotated target spans.\n\n• Unmentioned spans: Answering spans that do not exist within the given input text.\n\n• Unannotated spans: Answering spans that are not annotated in the test set.\n\n• Incorrect span offsets: The offsets of the answered spans are incorrect.\n\n• Undefined types: Answering types beyond the pre-defined types when the corresponding span is correct.\n\n• Incorrect types: The answered span is correct, and the corresponding type comes from the set of pre-defined types, but does not match the annotated type.\n\n• Other: Other errors apart from the above, such as incorrect output format or answering unexpected information.\n\nSince these error types apply to all sub-tasks in Section 3.1, for convenience we take NER-Flat as an example and statistically analyze each of the above error types under the zero-shot setting on the CoNLL03 dataset. The results are shown in Table 8 and Figure 2. It can be seen that \"Unannotated spans\", \"Incorrect types\" and \"Missing spans\" are the three main types of errors, accounting for more than 70%. In particular, unannotated spans account for nearly 1/3 of all errors, which also raises concerns about the quality of the annotated data." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we assess the capabilities of ChatGPT from four perspectives including Performance, Evaluation Criteria, Robustness and Error Types. The details and conclusions are as follows:\n\nPerformance We first evaluate ChatGPT's performance on 17 datasets with 14 IE sub-tasks under the zero-shot, few-shot and chain-of-thought scenarios, and find a huge performance gap between ChatGPT and SOTA results.\n\nEvaluation Criteria We rethink the performance gap and find that the span hard-matching strategy is not suitable for the evaluation of ChatGPT, because ChatGPT generates human-like responses. We propose a soft-matching strategy for evaluation to more accurately reflect ChatGPT's performance.\n\nRobustness We analyze the robustness of ChatGPT on 14 IE sub-tasks from four perspectives, including invalid output, irrelevant context, frequency of target types and error types. We draw the following conclusions: 1) ChatGPT rarely outputs invalid responses; 2) Irrelevant context and long-tail target types greatly affect ChatGPT's performance; 3) ChatGPT cannot understand the subject-object relationships in the RE task well.\n\nError Types Through the manual checking, we analyze the errors of ChatGPT and summarize 7 types of errors, including Missing spans, Unmentioned spans, Unannotated spans, Incorrect span offsets, Undefined types, Incorrect types and Other. We find that \"Unannotated spans\" is the most dominant error type. This raises concerns about the quality of previous annotated data, and indicates the possibility of annotating data with ChatGPT."
}, { "figure_ref": [], "heading": "A Details of Other Settings", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Metric and Evaluation Criteria", "publication_ref": [], "table_ref": [], "text": "We use Micro-F1 as the primary metric to evaluate ChatGPT's performance on different IE sub-tasks:\n• Flat-NER, Nested-NER: A predicted entity is correct if its offsets and type match a reference entity.\n• RE-RC: A predicted relation is correct if its relation type matches the reference type.\n• RE-Triplet: A predicted relational triplet is correct if its relation type is correct and the subject and object entity spans are all correct. We only report the F1 value of relational triplets.\n• EE-Trigger: A predicted event trigger is correct if its span and event type all match the reference trigger.\n• EE-Argument: For a given event type, a predicted argument is correct if its span and role type all match the reference argument mention of this event.\n• EE-Joint: A predicted argument is correct if its span, role type and event type all match the reference argument mention. We only report the F1 value of event arguments.\n• ABSA-AE, ASBA-OE: An aspect/opinion is correct if its span matches the reference aspect/opinion mention.\n• ABSA-ALSC: A sentiment polarity is correct if it matches the reference polarity of given aspect term.\n• ABSA-AOE: An opinion is correct if its span matches the reference opinion of given aspect term.\n• ABSA-AESC: An aspect-sentiment pair is correct if its aspect span and corresponding sentiment polarity are all correct.\n• ABSA-Pair: An aspect-opinion pair is correct if its aspect span and opinion span all match the reference pair.\n• ABSA-Triplet: A triplet is correct if its aspect span, opinion span and corresponding sentiment polarity are all correct. " }, { "figure_ref": [], "heading": "A.2 Additional Notes on Soft-Matching Strategy", "publication_ref": [], "table_ref": [], "text": "The similarity calculated by our soft-matching strategy takes the value from 0 and 1. For the threshold γ, we set it to 0.5 by default, since this process can be seen as the binary classification problem. We assume that When the predicted span and the annotated span are only different in offset, the predicted span is reasonable and meaningful if the similarity value is higher than 0.5." }, { "figure_ref": [], "heading": "A.3 Additional Notes on Head/Tail Target Types", "publication_ref": [], "table_ref": [], "text": "The head types are those with more than K training instances in the training set, while tail types are those with less than K training instances. Take the entity type \"Person\" as an example, if the number of \"Person\" entities in the training set is more than the threshold K, then \"Person\" is a head type, and vice versa, it is a tail type. The values of K, corresponding to the datasets used in Section 6.3, are shown in the Table 9. Since the ABSA task involves only two types of entities, i.e., aspect term and opinion term, there is no long-tail types." }, { "figure_ref": [], "heading": "B Results on More Datasets", "publication_ref": [], "table_ref": [], "text": "In this section, we report the results on more datasets for all 14 sub-tasks in Section 3.1. Under the zero-shot, few-shot and chain-of-thought scenarios, we design five different prompts and shown the corresponding maximum, minimum, mean and standard deviation, respectively." 
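Complementing Algorithm 1 (Section 5) and the threshold note in A.2, the following is a minimal sketch of the soft-matching check. It uses Python's difflib as the paper states; the exact call, SequenceMatcher(...).ratio(), is an assumption standing in for GetSimilarity, chosen because it returns a similarity between 0 and 1 as described in A.2.

```python
# Soft-matching check following Algorithm 1: a predicted span counts as correct if
# the most similar annotated span differs from it only in offsets (one span contains
# the other) and the similarity exceeds gamma (0.5 by default).
from difflib import SequenceMatcher
from typing import List

def get_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()   # value in [0, 1]

def soft_match(annotated_spans: List[str], predicted_span: str, gamma: float = 0.5) -> bool:
    if not annotated_spans:
        return False
    scores = [get_similarity(t, predicted_span) for t in annotated_spans]
    best_score = max(scores)
    best_span = annotated_spans[scores.index(best_score)]
    if best_span in predicted_span or predicted_span in best_span:
        return best_score > gamma
    return False

# e.g. soft_match(["University of Michigan"], "The University of Michigan") -> True,
# whereas hard matching on start/end offsets would count this prediction as wrong.
```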
}, { "figure_ref": [], "heading": "C Input Examples", "publication_ref": [], "table_ref": [], "text": "For demonstration, we show the zero-shot prompts, few-shot ICL prompts and few-shot COT prompts of NER task on the CoNLL03 dataset. Prompts for other datasets/tasks are similar. The zero-shot prompt with the best performance is selected to construct the corresponding few-shot ICL/COT prompts. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Zero-shot Prompts of NER task on the CoNLL03 dataset prompt 1:\nConsidering 4 types of named entities including \"Organization\", \"Person\", \"Location\" and \"Miscellaneous\", recognize all named entities in the given sentence.\nAnswer in the format [\"entity_type\", \"entity_name\"] without any explanation. If no entity exists, then just answer \"[]\".\nSentence:\n\"In Home Health said it previously recorded a reserve equal to 16 percent of all revenue related to the community liaison costs.\"\nAnswer:\nprompt 2:\nGiven the list of entity types [\"Organization\", \"Person\", \"Location\", \"Miscellaneous\"], read the given sentence and find out all words/phrases that indicate the above types of named entities.\nAnswer in the format [\"entity_type\", \"entity_name\"] without any explanation. If no entity exists, then just answer \"[]\".\nSentence:\n\"In Home Health said it previously recorded a reserve equal to 16 percent of all revenue related to the community liaison costs.\"\nAnswer:\nprompt 3:\nRead the given sentence carefully, identify all named entities of type \"Organization\", \"Person\", \"Location\" or \"Miscellaneous\".\nAnswer in the format [\"entity_type\", \"entity_name\"] without any explanation. If no entity exists, then just answer \"[]\".\nSentence:\n\"In Home Health said it previously recorded a reserve equal to 16 percent of all revenue related to the community liaison costs.\"\nAnswer:\nprompt 4:\nAnalyze the given sentence and extract all word spans that refer to specific named entities of type \"Organization\", \"Person\", \"Location\" or \"Miscellaneous\".\nAnswer in the format [\"entity_type\", \"entity_name\"] without any explanation. If no entity exists, then just answer \"[]\".\nSentence:\n\"In Home Health said it previously recorded a reserve equal to 16 percent of all revenue related to the community liaison costs.\"\nAnswer:\nprompt 5:\nWhat named entities are mentioned in the given sentence? Only return named entities of type \"Organization\", \"Person\", \"Location\" or \"Miscellaneous\".\nAnswer in the format [\"entity_type\", \"entity_name\"] without any explanation. If no entity exists, then just answer \"[]\".\nSentence:\n\"In Home Health said it previously recorded a reserve equal to 16 percent of all revenue related to the community liaison costs.\"\nAnswer:\nExpected Output:\n[\"Organization\", \"In Home Health\"]\nFew-shot ICL Prompts of NER task on the CoNLL03 dataset prompt:\nConsidering 4 types of named entities including \"Organization\", \"Person\", \"Location\" and \"Miscellaneous\", recognize all named entities in the given sentence.\nAnswer in the format [\"entity_type\", \"entity_name\"] without any explanation. If no entity exists, then just answer \"[]\".\nSentence:\n\"The arrangement calls for investors to make additional payments to fund Equitas but also provides them with 3.2 billion stg in compensation to help reduce their prior outstanding liabilities.\"\nAnswer:\n[\"Organization\", \"Equitas\"] Sentence:\n\"Results from the U.S. 
Open Tennis Championships at the National Tennis Centre on Saturday (prefix number denotes seeding):\"\nAnswer:\n[\"miscellaneous\", \"U.S. Open Tennis Championships\"], [\"location\", \"National Tennis \"In Home Health said it previously recorded a reserve equal to 16 percent of all revenue related to the community liaison costs.\"\nAnswer:\nExpected Output:\n[\"Organization\", \"In Home Health\"] Few-shot COT Prompts of NER task on the CoNLL03 dataset prompt:\nConsidering 4 types of named entities including \"Organization\", \"Person\", \"Location\" and \"Miscellaneous\", recognize all named entities in the given sentence.\nAnswer in the format [\"entity_type\", \"entity_name\"] without any explanation. If no entity exists, then just answer \"[]\".\nSentence:\n\"The arrangement calls for investors to make additional payments to fund Equitas but also provides them with 3.2 billion stg in compensation to help reduce their prior outstanding liabilities.\"\nAnswer:\n\"Equitas\" is a company or organization that requires additional funding, which corresponds to the \"organization\" in the given entity types. So, answer: Expected Output:\n\"In Home Health\" is a community organization, which can be labeled as \"organization\" in the given entity types. So, answer: [\"Organization\", \"In Home Health\"]" } ]
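Scoring the prompts above requires turning ChatGPT's replies in the requested ["entity_type", "entity_name"] format back into structured pairs. The parser below is an illustrative sketch, not the evaluation code released with the paper; malformed replies fall through to an empty list, which is what the invalid-output analysis in the Robustness section counts.

```python
# Parse a ChatGPT NER reply such as '["Location", "Japan"], ["Location", "Syria"]'
# into (entity_type, entity_name) pairs; '[]' or malformed text yields an empty list.
import re
from typing import List, Tuple

PAIR_PATTERN = re.compile(r'\[\s*"([^"]+)"\s*,\s*"([^"]+)"\s*\]')

def parse_ner_reply(reply: str) -> List[Tuple[str, str]]:
    return [(etype.strip(), name.strip()) for etype, name in PAIR_PATTERN.findall(reply)]

# parse_ner_reply('["Organization", "In Home Health"]')
#   -> [("Organization", "In Home Health")]
```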
ChatGPT has stimulated the research boom in the field of large language models. In this paper, we assess the capabilities of ChatGPT from four perspectives including Performance, Evaluation Criteria, Robustness and Error Types. Specifically, we first evaluate ChatGPT's performance on 17 datasets with 14 IE sub-tasks under the zero-shot, few-shot and chain-ofthought scenarios, and find a huge performance gap between ChatGPT and SOTA results. Next, we rethink this gap and propose a soft-matching strategy for evaluation to more accurately reflect ChatGPT's performance. Then, we analyze the robustness of ChatGPT on 14 IE subtasks, and find that: 1) ChatGPT rarely outputs invalid responses; 2) Irrelevant context and long-tail target types greatly affect ChatGPT's performance; 3) ChatGPT cannot understand well the subject-object relationships in RE task. Finally, we analyze the errors of ChatGPT, and find that "unannotated spans" is the most dominant error type. This raises concerns about the quality of annotated data, and indicates the possibility of annotating data with ChatGPT. The data and code are released at Github site 1 . * Corresponding authors. 1 https://github.com/RidongHan/ Evaluation-of-ChatGPT-on-Information-Extraction Performance We evaluate the performance of ChatGPT on 17 datasets with 14 IE sub-tasks under 3 settings: zero-shot prompts, few-shot ICL prompts and few-shot COT prompts. The results indicate the following conclusions: • There is a significant performance gap between ChatGPT and SOTA methods. • The harder the task, the larger the gap. • ChatGPT can equal or exceed SOTA methods on a few simple cases.
Is Information Extraction Solved by ChatGPT? An Analysis of Performance, Evaluation Criteria, Robustness and Errors
[ { "figure_caption": "Figure 2 :2Figure 2: Percentage of error types for NER-Flat subtask on the CoNLL03 dataset.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "that ChatGPT can achieve performance equivalent to SOTA methods by providing some demonstration examples. One reason may be that Wadhwa et al. (2023) provide more demonstration examples, i.e., almost 20 examples, while we only provide 5 demonstration examples. So, with a smaller number of demonstration examples, the few-shot ICL prompts cannot radically eliminate the performance gap.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The performances of ChatGPT on different datasets over multiple standard IE tasks. \"Ratio@SOTA\" indicates the percentage value of ChatGPT performance vs. SOTA in zero-shot scenario. For RE-Triplet sub-task, we only report the F1 value of relational triplets. For EE-Joint sub-task, we only report the F1 value of event arguments but not the F1 value of event triggers.", "figure_data": "", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The selected annotated spans and their corresponding predicted spans.", "figure_data": "", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Algorithm 1 Soft-Matching Strategy Input: the sentence s, the list of annotated spans L A in sentence s, a predicted span p, the similarity threshold γ. Output: Return T rue if just the offsets of two spans are different and the similarity is greater than γ, otherwise return F alse.", "figure_data": "Begin:0. Similarity ← [ ]1. for t in L A :2. score ← GetSimilarity (t, p)3. Similarity.append (score)4. score, max_index ← max (Similarity)5. t ← L A [max_index]6. if p contains t or t contains p :7.if score > γ :8.return T rue.9. return F alse.End.", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of results between default hardmatching strategy (Hard) and soft-matching strategy (Soft). ∆ indicates the performance change caused by the soft-matching strategy.task, we report the ratio of invalid responses under the zero-shot setting in Table4. For convenience, we select one dataset for each sub-task as Table3.From the results, it can be that In most cases, Chat-GPT rarely outputs invalid responses. However, on the RE-Triplet sub-task, invalid responses account for up to 25.3%. One reason may be that this sub-task is much more different.", "figure_data": "TaskDataset#Sent. Avg. #Invalid. Ratio (%)ABSA-AED 17 -14lap80038.44.8%ABSA-OED 17 -14lap80015.82.0%ABSA-ALSC D 17 -14lap8000.00.0%ABSA-AOED 19 -14lap34321.86.4%ABSA-AESC D 20a -14lap3390.40.1%ABSA-PairD 20a -14lap3396.01.8%ABSA-Triplet D 20b -14lap3286.82.1%NER-FlatCoNLL033453396.011.5%NER-NestedACE05-Ent1060807.5%RE-RCSemEval2010 271713.80.5%RE-TripletCoNLL0428872.825.3%EE-TriggerACE05-Evt83228.03.4%EE-Argument ACE05-Evt67618.22.7%EE-JointACE05-Evt83238.44.6%", "figure_id": "tab_7", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The ratio of invalid responses for each IE subtask under the zero-shot setting. \"#Sent.\" is the number of test sentences. \"Avg. #Invalid.\" indicates the average number of test sentences with invalid responses under the 5 different zero-shot prompts. \"Ratio (%)\" denotes the percentage of \"Avg. 
#Invalid.\" and \"#Sent.\".", "figure_data": "", "figure_id": "tab_8", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_10", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Statistics of prediction changes after swapping the order of entities. \"#Ent.Pair\" denotes the number of entity pairs expressing asymmetric relationships.", "figure_data": "", "figure_id": "tab_12", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Statistical analysis of various error types for NER-Flat sub-task on the CoNLL03 dataset. \"#Error.\" indicates the occurrence number of corresponding error type, while \"Ratio (%)\" denotes the corresponding percentage.", "figure_data": "", "figure_id": "tab_14", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Results on D 17 .", "figure_data": "Dataset TaskMatch TypemaxZero-shot min meanstdmax5-shot ICL min meanstdmax5-shot COT min meanstdAEhard soft47.114 38.183 43.032 3.704 50.357 46.939 48.194 1.21 53.259 46.232 51.047 2.744 55.593 52.469 54.34 1.071 64.064 60.632 62.793 1.177 56.258 53.465 54.504 0.93714lapOE ALSChard soft hard soft56.352 37.131 48.449 7.186 60.115 56.686 57.893 1.189 54.985 47.444 50.78 64.169 54.611 60.074 3.164 65.238 60.756 63.171 1.608 61.711 58.68 60.413 1.095 2.851 75.535 73.547 74.557 0.667 78.746 74.465 76.758 1.618 76.758 72.783 75.351 1.455 75.535 73.547 74.557 0.667 78.746 74.465 76.758 1.618 76.758 72.783 75.351 1.455AESChard soft36.767 31.599 34.689 1.946 39.223 36.364 37.164 1.041 42.509 36.34 41.411 36.364 39.79 1.867 41.781 39.273 40.346 0.963 45.993 41.203 43.706 2.029 39.336 2.181AEhard soft64.834 45.107 55.652 8.371 73.928 65.361 70.988 3.137 73.738 71.253 72.406 1.065 71.271 50.77 63.569 8.021 79.075 72.316 76.724 2.377 80.154 77.559 79.219 0.99214resOE ALSChard soft hard soft66.447 50.0 74.773 69.546 71.999 2.043 78.279 75.588 77.278 0.946 74.956 70.634 73.132 1.494 59.477 6.381 73.175 68.694 71.605 1.716 64.425 51.66 58.744 4.366 81.393 81.041 81.164 0.153 83.157 80.776 81.852 0.883 83.774 76.367 79.771 2.475 81.393 81.041 81.164 0.153 83.157 80.776 81.852 0.883 83.774 76.367 79.771 2.475AESChard soft56.943 49.228 54.082 2.894 60.439 53.879 58.171 2.352 64.524 56.271 60.306 3.52 61.549 55.269 59.464 2.287 64.086 58.527 62.116 2.014 68.197 61.168 64.724 2.907AEhard soft46.564 31.111 40.334 5.983 55.556 51.457 53.495 1.636 60.42 51.114 38.318 45.865 5.701 59.338 54.645 56.94 1.852 65.105 60.858 62.774 1.427 57.909 59.269 0.93215resOE ALSChard soft hard soft51.316 40.982 46.396 3.894 56.098 52.926 53.963 1.13 57.632 54.888 56.413 0.917 60.976 58.893 59.885 0.677 60.763 54.667 57.781 2.278 49.753 45.333 47.105 1.564 88.909 87.246 88.133 0.541 88.54 84.104 86.47 1.414 79.482 71.349 75.231 3.265 88.909 87.246 88.133 0.541 88.54 84.104 86.47 1.414 79.482 71.349 75.231 3.265AESChard soft41.888 34.464 39.363 2.617 48.419 44.259 46.384 1.569 54.917 49.218 52.025 1.944 44.851 38.449 43.094 2.366 50.374 46.711 48.59 1.432 58.548 51.494 54.851 2.442Dataset Task Match TypemaxZero-shot min meanstdmax5-shot ICL min meanstdmax5-shot COT min meanstd14lapAOEhard soft68.815 38.07 78.906 60.232 72.131 7.092 57.601 11.029 70.609 59.351 64.834 4.041 62.651 48.347 55.432 5.68 76.574 71.561 74.563 1.948 72.511 62.715 67.86 4.08314resAOEhard soft74.694 54.563 67.667 7.631 82.843 70.194 78.065 4.79574.265 67.422 71.598 2.524 72.682 63.487 66.423 3.436 79.779 78.84 79.472 0.33 80.758 76.338 78.068 1.62115resAOEhard soft73.451 57.002 67.032 
6.606 82.569 73.643 78.65 3.84171.889 65.691 70.299 2.38 78.894 74.851 76.869 1.427 77.989 66.667 71.889 3.94 63.756 56.306 59.601 3.05516resAOEhard soft80.397 62.235 73.226 6.763 87.354 77.649 82.722 4.17381.028 73.252 78.234 2.656 70.385 68.491 69.439 0.785 85.106 80.07 83.321 1.915 81.621 79.256 80.665 0.785", "figure_id": "tab_17", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Results on D 19 .", "figure_data": "Dataset TaskMatch TypemaxZero-shot min meanstdmax5-shot ICL min meanstdmax5-shot COT min meanstdAEhard soft54.621 43.439 49.862 4.233 61.116 55.473 59.008 2.31859.538 58.188 59.03 65.765 62.776 64.798 1.135 67.91 0.498 58.193 52.984 56.211 1.755 59.267 64.985 3.035OEhard soft61.765 37.755 50.87 72.376 57.143 65.484 6.14 9.2669.605 62.909 65.814 2.758 58.573 51.631 54.641 3.057 76.584 73.638 74.749 0.981 67.818 60.653 65.456 2.5ALSChard soft85.885 84.928 85.359 0.317 85.885 84.928 85.359 0.31787.081 85.646 86.364 0.478 83.971 79.904 81.962 1.548 87.081 85.646 86.364 0.478 83.971 79.904 81.962 1.54814lapAOEhard soft69.315 38.056 58.403 11.378 71.6 80.851 61.427 73.408 7.109 79.8865.969 68.429 2.486 64.179 49.731 56.798 5.407 75.299 77.44 1.596 73.805 65.447 69.576 3.519AESChard soft47.733 43.697 45.476 1.567 55.089 51.464 53.247 1.16950.903 48.063 49.502 1.124 55.828 44.222 48.869 3.799 55.705 52.016 54.489 1.439 60.327 48.0 54.139 3.926Pairhard soft39.835 22.184 31.755 6.056 48.724 25.823 39.534 7.82545.461 38.633 41.593 2.651 41.838 29.537 35.753 4.898 52.326 47.737 49.618 1.86 52.994 41.459 48.039 4.276Triplethard soft32.216 26.892 30.777 1.998 43.463 38.646 41.306 1.76738.58 45.902 43.049 44.723 0.994 46.27 32.831 36.106 2.154 36.882 26.557 31.456 3.658 37.676 42.21 3.107AEhard soft63.771 42.799 54.208 8.187 69.003 51.948 61.472 6.97973.322 67.864 70.023 1.946 70.751 66.078 68.417 1.817 76.539 72.074 73.88 1.651 76.139 71.815 74.539 1.653OEhard soft73.262 55.537 66.468 7.305 81.68 75.125 78.904 2.48876.684 70.382 74.972 2.322 68.464 58.258 63.434 3.735 83.199 80.786 82.092 0.898 78.5 74.185 76.733 1.497ALSChard soft92.424 91.047 91.68 92.424 91.047 91.680.474 0.47491.736 90.771 91.322 0.338 92.011 86.088 88.953 1.935 91.736 90.771 91.322 0.338 92.011 86.088 88.953 1.93514resAOEhard soft75.777 55.302 68.861 7.656 84.78 71.494 79.616 4.83775.039 70.551 72.975 1.585 73.721 64.893 67.699 3.348 83.082 80.919 81.922 0.75 82.209 78.377 80.055 1.426AESChard soft62.197 53.79 66.529 59.462 64.375 2.555 59.084 3.06369.595 63.325 65.983 2.081 68.936 62.421 64.818 2.813 72.523 66.803 69.314 1.85 72.34 66.667 69.443 1.933Pairhard soft57.198 42.027 50.05 67.02 49.478 59.364 5.781 4.89459.843 57.857 58.878 0.825 53.237 47.585 49.822 2.15 68.736 66.071 67.717 0.908 64.748 62.319 63.539 0.99Triplethard soft45.198 34.731 39.849 3.684 59.603 50.215 54.33 3.65255.382 52.461 54.17 64.272 61.391 63.261.096 54.006 43.779 47.472 3.63 1.031 62.117 58.885 59.989 1.271AEhard soft57.703 36.735 48.722 8.217 63.224 46.939 55.493 7.20967.486 65.277 66.017 0.78 71.834 69.818 70.657 0.684 73.478 68.958 71.654 1.861 69.63 63.958 67.181 2.348OEhard soft71.825 56.244 65.479 6.003 79.167 73.299 76.647 2.00175.794 69.703 73.111 2.21 81.349 77.864 79.206 1.169 77.4 67.860.263 63.199 2.856 71.176 74.11 2.246ALSChard soft92.804 91.315 91.96 92.804 91.315 91.960.6 0.691.811 89.082 90.124 0.934 89.826 80.645 84.764 3.207 91.811 89.082 90.124 0.934 89.826 80.645 84.764 3.20715resAOEhard soft73.2 82.446 74.214 78.715 3.661 56.476 66.653 6.43272.008 65.462 69.686 2.23 80.749 76.308 78.174 1.486 
78.689 67.545 72.662 3.755 64.789 56.766 60.304 3.217AESChard soft57.794 46.236 53.911 4.189 60.711 51.196 58.245 3.60665.571 61.023 63.659 1.744 68.36 67.892 63.442 66.51 1.625 72.247 65.51 61.388 66.065 2.533 69.395 2.406Pairhard soft51.022 39.437 44.413 4.115 60.267 45.246 52.511 5.19956.087 51.146 53.755 1.894 53.42 64.706 62.434 63.391 0.881 65.804 61.738 63.063 1.45 45.754 49.617 3.024Triplethard soft42.677 33.468 38.393 3.764 54.898 45.269 50.19 3.99650.775 43.423 47.633 2.476 50.467 44.2 58.197 54.955 56.859 1.15 60.363 57.246.857 2.717 58.6 1.265AEhard soft58.655 40.642 51.813 6.684 62.447 49.02 57.217 5.09767.17 70.943 67.015 68.914 1.457 72.691 71.102 71.925 0.533 63.195 65.044 1.419 68.273 66.211 67.467 0.751OEhard soft77.349 59.877 70.018 6.343 83.877 75.72 80.438 2.71579.803 73.088 77.56 85.06 81.586 83.828 1.257 80.079 77.21 2.736 70.565 65.465 67.592 1.732 78.178 1.036ALSChard soft94.802 94.307 94.505 0.185 94.802 94.307 94.505 0.18595.05 95.0592.079 93.465 0.973 92.822 86.881 90.248 1.965 92.079 93.465 0.973 92.822 86.881 90.248 1.96516resAOEhard soft80.655 61.822 73.228 7.091 87.613 77.44 82.811 4.34278.585 68.482 75.112 4.094 71.23 86.21 78.743 82.969 2.949 81.933 80.219 81.403 0.655 68.615 69.732 0.886AESChard soft59.043 48.718 55.4 62.234 54.53 60.053 2.925 3.74364.571 62.238 63.105 0.867 68.008 64.17 66.476 64.826 65.81 0.582 70.551 68.219 69.122 0.878 65.932 1.369Pairhard soft56.367 43.898 50.195 4.551 63.837 51.449 57.496 4.49261.565 55.537 58.878 2.019 53.464 50.045 51.644 1.403 69.388 63.844 66.826 1.874 64.044 60.705 62.666 1.216Triplethard soft49.724 39.17 61.142 51.625 56.743 3.684 45.254 3.99856.973 50.452 53.899 2.53 63.776 60.805 62.525 1.191 62.956 59.646 60.591 1.198 51.095 47.717 49.882 1.153", "figure_id": "tab_18", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Results on D 20a .", "figure_data": "Dataset TaskMatch TypemaxZero-shot min meanstdmax5-shot ICL min meanstdmax5-shot COT min meanstdAEhard soft60.377 48.582 54.592 4.938 67.545 57.212 64.242 3.95165.695 61.736 63.733 1.272 63.671 59.684 61.85 71.527 67.846 69.672 1.443 72.815 67.787 70.856 1.803 1.609OEhard soft61.877 36.614 50.557 9.549 71.839 56.085 64.872 6.20868.288 62.726 65.711 1.891 58.405 52.935 55.335 2.46 75.135 73.594 74.134 0.631 68.161 61.891 65.745 2.11ALSChard soft86.393 85.745 86.004 0.212 86.393 85.745 86.004 0.21287.905 85.745 86.911 0.718 84.017 79.05 87.905 85.745 86.911 0.718 84.017 79.0581.857 1.858 81.857 1.85814lapAOEhard soft69.683 39.07 78.598 60.093 72.361 7.055 58.432 10.911 69.356 66.094 67.565 1.081 63.24 77.513 75.536 76.693 0.707 72.397 62.151 67.977 4.05 49.402 56.229 5.434AESChard soft51.986 48.848 50.289 1.371 60.645 56.792 58.328 1.3556.095 53.84 60.529 58.615 59.387 0.656 61.463 54.432 58.257 2.45 54.693 0.811 56.0 49.713 52.099 2.394Pairhard soft44.032 23.48 53.548 27.534 41.942 8.742 33.857 6.90947.791 42.56 54.218 51.812 52.993 0.883 54.133 44.579 49.564 3.669 45.539 1.951 43.436 32.09 37.258 4.442Triplethard soft34.948 29.809 33.167 1.875 45.439 41.342 43.61 1.48641.811 35.507 39.011 2.139 37.78 47.698 44.305 46.039 1.126 46.065 42.403 43.993 1.411 30.035 33.178 2.72AEhard soft68.372 46.499 57.775 9.134 75.349 51.961 65.898 8.70775.192 71.347 73.46 81.739 77.331 79.479 1.685 81.762 77.91 1.388 75.144 71.805 73.356 1.464 80.461 1.513OEhard soft72.9 81.247 75.688 78.777 2.143 55.699 66.289 7.15678.881 72.884 75.274 2.3 84.147 80.478 82.28 1.261 79.144 73.904 76.743 1.746 69.813 57.785 63.818 4.206ALSChard soft93.042 92.335 92.642 0.243 
93.042 92.335 92.642 0.24393.16 93.1691.745 92.288 0.504 92.335 83.608 87.288 2.905 91.745 92.288 0.504 92.335 83.608 87.288 2.90514resAOEhard soft73.787 53.661 67.078 7.633 82.649 69.809 77.926 4.76777.724 70.113 73.288 2.523 72.136 62.907 66.009 3.329 83.414 80.098 81.05 1.241 80.392 76.358 78.169 1.43AESChard soft66.245 58.921 63.635 2.682 71.7 65.56 69.847 2.20471.662 65.666 69.057 1.928 72.851 64.005 67.584 3.368 75.884 69.486 73.344 2.147 76.722 69.56 72.635 2.669Pairhard soft57.796 41.913 50.327 5.121 68.861 49.602 60.183 6.29462.45 71.602 68.975 70.025 0.893 65.564 62.169 64.336 1.154 55.741 60.57 2.567 53.864 46.326 49.678 2.726Triplethard soft47.151 35.998 41.495 3.736 62.445 51.684 56.54 3.78558.026 48.666 54.899 3.63 65.406 61.872 64.203 1.401 64.709 59.853 62.538 1.596 55.105 45.375 48.897 3.695AEhard soft59.797 40.357 50.861 7.951 65.878 48.31 57.819 7.36271.733 67.733 69.812 1.384 73.961 70.42 76.182 72.356 74.181 1.301 78.118 74.514 76.378 1.357 71.822 1.248OEhard soft71.976 56.137 65.332 6.128 79.253 73.441 76.65 1.95673.161 68.893 71.371 1.964 66.733 60.983 63.218 2.398 78.932 77.481 78.404 0.536 77.03 70.902 74.348 2.166ALSChard soft92.824 91.667 92.222 0.429 92.824 91.667 92.222 0.42990.278 89.352 89.815 0.414 90.509 78.935 83.333 4.094 90.278 89.352 89.815 0.414 90.509 78.935 83.333 4.09415resAOEhard soft72.915 57.6 82.407 73.935 78.772 3.645 66.723 6.19972.467 63.501 68.393 2.882 63.934 55.644 59.367 3.221 80.688 73.964 76.246 2.452 77.543 66.363 71.802 3.728AESChard soft59.416 49.183 55.966 3.702 62.599 54.514 60.44 3.05166.036 62.601 64.651 1.276 70.488 62.708 66.863 2.666 69.442 66.009 67.574 1.155 74.522 66.667 70.456 2.881Pairhard soft52.623 40.545 45.171 4.208 62.081 46.848 53.667 5.17655.206 50.415 53.461 1.63 64.169 62.49 63.541 0.575 65.884 61.818 63.462 1.387 53.971 46.316 49.755 3.056Triplethard soft43.299 33.819 38.891 3.663 56.232 46.259 51.315 3.85649.336 45.904 47.877 1.345 50.0 59.712 56.41 57.428 1.199 61.382 57.253 59.081 1.734 43.714 46.554 2.761AEhard soft61.809 45.118 55.107 6.447 66.179 53.704 60.808 5.1369.075 66.837 67.928 0.891 71.225 69.75 73.656 71.31 72.349 0.967 75.973 75.261 75.539 0.259 70.703 0.51OEhard soft77.282 58.847 69.915 6.591 83.883 75.834 80.313 2.66878.689 77.277 77.934 0.499 70.312 64.897 67.226 1.832 83.896 82.297 83.044 0.656 80.078 76.303 78.1 1.47ALSChard soft95.344 94.9 95.344 94.995.033 0.178 95.033 0.17894.678 93.792 94.368 0.332 92.683 82.262 88.692 3.581 94.678 93.792 94.368 0.332 92.683 82.262 88.692 3.58116resAOEhard soft80.0 87.591 77.014 82.747 4.354 61.297 72.659 6.94978.128 73.674 76.371 1.693 69.736 68.142 68.842 0.545 85.769 80.186 82.925 1.907 81.766 79.158 80.902 0.921AESChard soft62.553 53.125 59.134 3.32 65.934 59.046 64.029 2.59469.336 65.707 67.434 1.442 71.013 68.204 69.348 1.146 72.975 68.586 70.703 1.65 74.022 72.205 73.057 0.695Pairhard soft57.835 45.667 52.217 4.645 66.559 53.167 60.24 4.95162.42 69.859 67.485 68.972 1.049 68.343 64.249 66.212 1.428 59.663 61.188 1.014 55.178 51.794 53.609 1.451Triplethard soft52.448 41.824 47.672 3.793 64.51 54.905 59.605 3.66958.596 54.545 56.551 1.33 66.397 64.274 65.213 0.913 66.147 62.278 63.778 1.303 53.169 49.11 51.843 1.434", "figure_id": "tab_19", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Results on D 20b . 
.718 60.095 3.814 72.495 68.684 70.528 1.435 76.128 73.636 74.639 1.079 soft 66.947 58.203 62.12 3.713 74.557 70.861 72.42 1.409 78.635 75.593 77.171 1.137 FewNERD hard 34.279 26.986 31.558 2.437 37.599 35.556 36.866 0.706 47.239 45.503 46.551 0.635 soft 37.737 29.951 35.087 2.683 42.18 39.604 41.047 0.851 52.67 50.495 51.839 0.765", "figure_data": "DatasetMatch TypemaxZero-shot min meanstdmax5-shot ICL min meanstdmax5-shot COT min meanstdCoNLL03 65.138 55ACE04 hard hard 29.55 21.601 27.8 soft 40.343 30.148 38.103 3.986 48.179 39.743 44.688 2.778 52.228 43.611 47.673 2.807 3.101 40.977 34.197 38.524 2.511 43.193 37.464 40.568 1.827ACE05-Enthard soft24.77 35.566 29.086 33.969 2.455 43.53 19.582 23.382 1.916 38.138 33.448 36.165 1.78 40.456 42.161 1.314 44.234 39.438 41.55 34.964 32.835 33.978 0.691 1.768GENIAhard soft39.433 35.473 38.09 46.126 41.732 44.551.645 50.766 47.286 48.818 1.314 51.972 49.618 50.892 1.003 1.631 56.697 53.156 54.688 1.327 57.373 55.168 56.471 1.012", "figure_id": "tab_20", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "Results of NER task on different datasets.", "figure_data": "DatasetmaxZero-shot min meanstdmax5-shot ICL min meanstd5-shot COT max min mean stdCoNLL0465.816 53.902 59.206 3.848 62.757 50.679 55.317 4.561 ----NYT-multi38.738 24.584 30.959 5.509 30.005 22.364 26.876 2.744 ----TACRED21.578 16.977 19.467 1.498 34.777 25.722 27.844 3.478 ----Re-TACRED 27.877 15.723 21.412 4.174 45.682 26.776 34.003 6.548 ----SemEval2010 42.3236.424 39.273 2.203 43.344 35.297 39.437 2.554 ----CPR24.003 20.338 22.028 1.269 29.706 24.653 26.812 2.152 ----PGR58.216 44.068 54.471 5.252 55.762 54.4854.927 0.457 ----DocRED29.328 20.882 23.533.081 34.678 30.082 32.205 1.716 ----Re-DocRED32.937 19.214 25.975.565 31.852 27.699 28.887 1.525 ----DWIE23.898 14.606 20.003 3.237 35.158 21.841 26.719 4.76----CDR51.195 45.872 48.566 1.762 54.669 48.949 51.751 1.911 ----GDA55.398 49.804 53.055 2.307 56.8349.526 53.798 2.458 ----", "figure_id": "tab_21", "figure_label": "14", "figure_type": "table" }, { "figure_caption": "Results of RE-RC task on different datasets.", "figure_data": "DatasetMatch TypemaxZero-shot min meanstdmax5-shot ICL min meanstdmax5-shot COT min meanstdCoNLL04hard soft23.044 13.388 17.84 30.444 20.765 24.751 3.993 33.784 29.171 31.442 1.812 19.444 4.701 14.308 5.599 3.425 26.046 23.085 24.298 1.286 14.882 2.991 11.087 4.828NYT-multihard soft3.792 4.573.134 3.7943.476 4.2310.244 13.152 11.512 12.243 0.591 5.389 0.295 14.085 12.148 12.876 0.687 5.9810.693 2.325 0.77 2.6381.639 1.815TACREDhard soft2.78 3.3031.455 1.8462.337 2.8660.458 2.779 0.523 3.9361.888 2.4862.219 3.080.312 2.558 0.482 3.010.515 1.865 0.858 2.2120.728 0.761Re-TACREDhard soft2.91 3.8651.042 1.4512.304 2.9460.713 2.596 0.86 3.3271.936 2.7282.186 2.9850.233 1.526 0.216 2.0340.794 1.177 1.235 1.6840.291 0.282SemEval2010hard soft7.645 12.484 7.491 4.3255.821 9.6561.286 14.938 11.73 1.895 18.697 14.908 17.335 1.401 0.259 12.854 1.136 0.0870.0 0.00.035 0.1040.042 0.101CPRhard soft2.204 3.9881.338 2.5131.669 3.1140.331 5.908 0.506 9.2922.99 6.1964.266 7.2661.094 4.904 1.093 8.0142.663 3.78 5.327 7.0470.906 0.979PGRhard soft1.247 6.2370.0 1.4710.599 4.1630.463 9.283 1.815 14.346 11.465 12.43 3.822 5.5441.935 6.162 1.049 11.515 8.947 10.271 0.895 3.846 5.077 0.974DocREDhard soft4.548 5.5343.288 4.4943.869 5.0110.41 0.382 12.738 8.477 9.857 6.6587.49 9.8451.216 6.132 1.551 8.3953.912 4.661 4.647 6.0420.805 1.323Re-DocREDhard soft3.695 4.7381.896 2.5362.454 3.30.644 9.984 0.807 13.016 8.556 
6.497.982 10.276 1.516 5.402 1.167 3.9323.441 3.732 4.457 4.8740.176 0.327DWIEhard soft0.921 1.2040.081 0.0810.497 0.6640.346 15.508 1.683 0.433 17.989 2.6445.81 7.0445.136 3.164 5.753 4.3141.206 1.943 1.773 2.8520.725 0.862CDRhard soft13.09 17.302 10.609 15.357 2.468 25.269 17.801 21.203 2.921 20.431 7.591 11.456 4.73 7.025 11.109 2.2 19.491 12.252 15.034 2.706 16.013 4.831 7.966 4.111GDAhard soft0.713 1.7270.084 0.2520.337 0.850.225 14.098 7.318 0.523 27.049 13.309 19.379 4.548 16.994 12.69 14.764 1.585 10.15 2.234 9.123 7.758 8.374 0.484", "figure_id": "tab_22", "figure_label": "15", "figure_type": "table" }, { "figure_caption": "Results of RE-Triplet task on different datasets.", "figure_data": "", "figure_id": "tab_23", "figure_label": "16", "figure_type": "table" } ]
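Algorithm 1 (shown among the table captions above) gives the soft-matching strategy only as pseudocode. The following is a minimal Python sketch of one possible reading of it, not the authors' code: the GetSimilarity function is not specified in this excerpt, so difflib's SequenceMatcher ratio is used as a stand-in, and the unused sentence argument from the pseudocode is omitted.

```python
from difflib import SequenceMatcher

def get_similarity(a, b):
    # Stand-in for GetSimilarity in Algorithm 1; the exact measure is not shown in this excerpt.
    return SequenceMatcher(None, a, b).ratio()

def soft_match(annotated_spans, predicted_span, gamma):
    """Return True when the predicted span and its closest annotated span differ only in
    offsets (one contains the other) and their similarity exceeds the threshold gamma."""
    if not annotated_spans:
        return False
    # Find the annotated span most similar to the prediction.
    scores = [get_similarity(t, predicted_span) for t in annotated_spans]
    best_score = max(scores)
    best_span = annotated_spans[scores.index(best_score)]
    # Accept only containment-style mismatches above the threshold.
    if best_span in predicted_span or predicted_span in best_span:
        return best_score > gamma
    return False

# Example: a prediction that truncates the annotated span still counts as a soft match.
print(soft_match(["National Tennis Centre"], "National Tennis", gamma=0.5))  # True
```

With this relaxation, boundary-only disagreements between ChatGPT's output and the gold spans are no longer counted as errors, which is the effect reported in the soft-matching comparison table above.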
Ridong Han; Tao Peng; Chaohao Yang; Benyou Wang; Lu Liu; Xiang Wan
[ { "authors": "Sam Altman", "journal": "Ope-nAI Blog", "ref_id": "b0", "title": "Planning for agi and beyond", "year": "2023" }, { "authors": "Yejin Bang; Samuel Cahyawijaya; Nayeon Lee; Wenliang Dai; Dan Su; Bryan Wilie; Holy Lovenia; Ziwei Ji; Tiezheng Yu; Willy Chung; Quyet V Do; Yan Xu; Pascale Fung", "journal": "", "ref_id": "b1", "title": "A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity", "year": "2023" }, { "authors": "Xianpei Ning Bian; Le Han; Hongyu Sun; Yaojie Lin; Ben Lu; He", "journal": "", "ref_id": "b2", "title": "Chatgpt is a knowledgeable but inexperienced solver: An investigation of commonsense problem in large language models", "year": "2023" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "NeurIPS", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Sébastien Bubeck; Varun Chandrasekaran; Ronen Eldan; Johannes Gehrke; Eric Horvitz; Ece Kamar; Peter Lee; Yin Tat Lee; Yuanzhi Li; Scott M Lundberg; Harsha Nori; Hamid Palangi; Marco Túlio Ribeiro; Yi Zhang", "journal": "", "ref_id": "b4", "title": "Sparks of artificial general intelligence: Early experiments with GPT-4", "year": "2023" }, { "authors": "Shulin Cao; Jiaxin Shi; Zijun Yao; Xin Lv; Jifan Yu; Lei Hou; Juanzi Li; Zhiyuan Liu; Jinghui Xiao", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Program transfer for answering complex questions over knowledge bases", "year": "2022" }, { "authors": "Shan Chen; Benjamin H Kann; Michael B Foote; Jwl Hugo; Aerts; K Guergana; Raymond H Savova; Danielle S Mak; Bitterman", "journal": "medRxiv", "ref_id": "b6", "title": "The utility of chatgpt for cancer treatment information", "year": "2023" }, { "authors": "Zhuang Chen; Tieyun Qian", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Relation-aware collaborative learning for unified aspect-based sentiment analysis", "year": "2020" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh; Kensen Shi; Sasha Tsvyashchenko; Joshua Maynez; Abhishek Rao; Parker Barnes; Yi Tay; Noam Shazeer; Emily Vinodkumar Prabhakaran; Nan Reif; Ben Du; Reiner Hutchinson; James Pope; Jacob Bradbury; Michael Austin; Guy Isard; Pengcheng Gur-Ari; Toju Yin; Anselm Duke; Sanjay Levskaya; Sunipa Ghemawat; Henryk Dev; Xavier Michalewski; Vedant Garcia; Kevin Misra; Liam Robinson; Denny Fedus; Daphne Zhou; David Ippolito; Hyeontaek Luan; Barret Lim; Alexander Zoph; Ryan Spiridonov; David Sepassi; Shivani Dohan; Mark Agrawal; Andrew M Omernick; Thanumalayan Dai; Marie Sankaranarayana Pillai; Aitor Pellat; Erica Lewkowycz; Rewon Moreira; Oleksandr Child; Katherine Polozov; Zongwei Lee; Xuezhi Zhou; Brennan Wang; Mark Saeta; Orhan Diaz; Michele Firat; Jason Catasta; Kathy Wei; Douglas Meier-Hellstern; Jeff Eck; Slav Dean; Noah Petrov; Fiedel", "journal": "", "ref_id": "b8", "title": "Palm: Scaling language 
modeling with pathways", "year": "2022" }, { "authors": "Paul F Christiano; Jan Leike; Tom B Brown; Miljan Martic; Shane Legg; Dario Amodei", "journal": "", "ref_id": "b9", "title": "Deep reinforcement learning from human preferences", "year": "2017" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Ning Ding; Guangwei Xu; Yulin Chen; Xiaobin Wang; Xu Han; Pengjun Xie; Haitao Zheng; Zhiyuan Liu", "journal": "", "ref_id": "b11", "title": "Few-nerd: A few-shot named entity recognition dataset", "year": "2021" }, { "authors": "George R Doddington; Alexis Mitchell; Mark A Przybocki; Lance A Ramshaw; Stephanie M Strassel; Ralph M Weischedel", "journal": "European Language Resources Association", "ref_id": "b12", "title": "The automatic content extraction (ACE) program -tasks, data, and evaluation", "year": "2004-05-26" }, { "authors": "Zhifang Fan; Zhen Wu; Xin-Yu Dai; Shujian Huang; Jiajun Chen", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Target-oriented opinion words extraction with target-fused neural sequence labeling", "year": "2019-06-02" }, { "authors": "Zichu Fei; Qi Zhang; Tao Gui; Di Liang; Sirui Wang; Wei Wu; Xuanjing Huang", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "CQG: A simple and effective controlled generation framework for multi-hop question generation", "year": "2022" }, { "authors": "Yuhao Feng; Yanghui Rao; Yuyao Tang; Ninghua Wang; He Liu", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Target-specified sequence labeling with multi-head self-attention for target-oriented opinion words extraction", "year": "2021" }, { "authors": "Biyang Guo; Xin Zhang; Ziyuan Wang; Minqi Jiang; Jinran Nie; Yuxuan Ding; Jianwei Yue; Yupeng Wu", "journal": "", "ref_id": "b16", "title": "How close is chatgpt to human experts? 
comparison corpus, evaluation, and detection", "year": "2023" }, { "authors": "Ridong Han; Tao Peng; Jiayu Han; Hai Cui; Lu Liu", "journal": "Neural Networks", "ref_id": "b17", "title": "Distantly supervised relation extraction via recursive hierarchy-interactive attention and entityorder perception", "year": "2022" }, { "authors": "Mubin Ul Haque; Isuru Dharmadasa; Zarrin Tasnim Sworna; Roshan Namal Rajapakse; Hussain Ahmad", "journal": "", "ref_id": "b18", "title": "Exploring sentiments of chatgpt early adopters using twitter data", "year": "2022" }, { "authors": "Iris Hendrickx; Nam Su; Zornitsa Kim; Preslav Kozareva; Nakov; Ó Diarmuid; Sebastian Séaghdha; Marco Padó; Lorenza Pennacchiotti; Stan Romano; Szpakowicz", "journal": "The Association for Computer Linguistics", "ref_id": "b19", "title": "Semeval-2010 task 8: Multiway classification of semantic relations between pairs of nominals", "year": "2010-07-15" }, { "authors": "Jordan Hoffmann; Sebastian Borgeaud; Arthur Mensch; Elena Buchatskaya; Trevor Cai; Eliza Rutherford; Diego De Las; Lisa Anne Casas; Johannes Hendricks; Aidan Welbl; Tom Clark; Eric Hennigan; Katie Noland; George Millican; Bogdan Van Den Driessche; Aurelia Damoc; Simon Guy; Karen Osindero; Erich Simonyan; Jack W Elsen; Oriol Rae; Laurent Vinyals; Sifre", "journal": "", "ref_id": "b20", "title": "Training compute-optimal large language models", "year": "2022" }, { "authors": "I-Hung Hsu; Kuan-Hao Huang; Elizabeth Boschee; Scott Miller; Prem Natarajan; Kai-Wei Chang; Nanyun Peng", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "DEGREE: A data-efficient generation-based event extraction model", "year": "2022" }, { "authors": "Katharina Jeblick; Balthasar Schachtner; Jakob Dexl; Andreas Mittermeier; Anna Theresa Stüber; Johanna Topalis; Tobias Weber; Philipp Wesp; O Bastian; Jens Sabel; Michael Ricke; Ingrisch", "journal": "", "ref_id": "b22", "title": "Chatgpt makes medicine easy to swallow: An exploratory case study on simplified radiology reports", "year": "2022" }, { "authors": "Wenxiang Jiao; Wenxuan Wang; Jen-Tse Huang; Xing Wang; Zhaopeng Tu", "journal": "", "ref_id": "b23", "title": "Is chatgpt A good translator? 
A preliminary study", "year": "2023" }, { "authors": "Jared Kaplan; Sam Mccandlish; Tom Henighan; Tom B Brown; Benjamin Chess; Rewon Child; Scott Gray; Alec Radford; Jeffrey Wu; Dario Amodei", "journal": "", "ref_id": "b24", "title": "Scaling laws for neural language models", "year": "2020" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "", "ref_id": "b25", "title": "Large language models are zero-shot reasoners", "year": "2022" }, { "authors": "Sebastian Krügel; Andreas Ostermaier; Matthias Uhl", "journal": "", "ref_id": "b26", "title": "The moral authority of chatgpt", "year": "2023" }, { "authors": "Meisin Lee; Lay-Ki Soon; Eu-Gene Siew", "journal": "", "ref_id": "b27", "title": "Effective use of graph convolution network and contextual sub-tree for commodity news event extraction", "year": "2021" }, { "authors": "Meisin Lee; Lay-Ki Soon; Eu-Gene; Ly Siew; Sugianto Fie", "journal": "", "ref_id": "b28", "title": "An annotated commodity news corpus for event extraction", "year": "2021" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Bo Li; Gexiang Fang; Yang Yang; Quansen Wang; Wei Ye; Wen Zhao; Shikun Zhang", "journal": "", "ref_id": "b30", "title": "Evaluating chatgpt's information extraction capabilities: An assessment of performance, explainability, calibration, and faithfulness", "year": "2023" }, { "authors": "Bo Li; Wei Ye; Jinglei Zhang; Shikun Zhang", "journal": "", "ref_id": "b31", "title": "Reviewing labels: Label graph network with top-k prediction set for relation extraction", "year": "2022" }, { "authors": "Xiaoya Li; Jingrong Feng; Yuxian Meng; Qinghong Han; Fei Wu; Jiwei Li", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "A unified MRC framework for named entity recognition", "year": "2020-07-05" }, { "authors": "Ying Lin; Heng Ji; Fei Huang; Lingfei Wu", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "A joint neural model for information extraction with global features", "year": "2020" }, { "authors": "Chengyuan Liu; Fubang Zhao; Yangyang Kang; Jingyuan Zhang; Xiang Zhou; Changlong Sun; Fei Wu; Kun Kuang", "journal": "", "ref_id": "b34", "title": "Rexuie: A recursive method with explicit schema instructor for universal information extraction", "year": "2023" }, { "authors": "Jie Lou; Yaojie Lu; Dai Dai; Wei Jia; Hongyu Lin; Xianpei Han; Le Sun; Hua Wu", "journal": "AAAI", "ref_id": "b35", "title": "Universal information extraction as unified semantic matching", "year": "2023" }, { "authors": "Yaojie Lu; Qing Liu; Dai Dai; Xinyan Xiao; Hongyu Lin; Xianpei Han; Le Sun; Hua Wu", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "Unified structure generation for universal information extraction", "year": "2022-05-22" }, { "authors": "Haoran Lv; Junyi Liu; Henan Wang; Yaoming Wang; Jixiang Luo; Yaxiao Liu", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Efficient hybrid generation framework for aspect-based sentiment analysis", "year": "2023" }, { "authors": "Kamil Malinka; Martin Peresíni; Anton Firc; Ondrej Hujnak; Filip Janus", "journal": "", "ref_id": 
"b38", "title": "On the educational impact of chatgpt: Is artificial intelligence ready to obtain a university degree?", "year": "2023" }, { "authors": "Yue Mao; Yi Shen; Chao Yu; Longjun Cai", "journal": "AAAI Press", "ref_id": "b39", "title": "A joint training dual-mrc framework for aspect based sentiment analysis", "year": "2021" }, { "authors": "Guoshun Nan; Zhijiang Guo; Ivan Sekulic; Wei Lu", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Reasoning with latent structure refinement for document-level relation extraction", "year": "2020" }, { "authors": "Tomoko Ohta; Yuka Tateisi; Jin-Dong Kim; Hideki Mima; Junichi Tsujii", "journal": "Citeseer", "ref_id": "b41", "title": "The genia corpus: An annotated research abstract corpus in molecular biology domain", "year": "2002" }, { "authors": " Openai", "journal": "", "ref_id": "b42", "title": "", "year": "2023" }, { "authors": " Openai", "journal": "OpenAI Blog", "ref_id": "b43", "title": "Introducing chatgpt", "year": "2023" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray; John Schulman; Jacob Hilton; Fraser Kelton; Luke Miller; Maddie Simens; Amanda Askell; Peter Welinder; Paul F Christiano; Jan Leike; Ryan Lowe", "journal": "", "ref_id": "b44", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Haiyun Peng; Lu Xu; Lidong Bing; Fei Huang; Wei Lu; Luo Si", "journal": "AAAI Press", "ref_id": "b45", "title": "Knowing what, how and why: A near complete solution for aspect-based sentiment analysis", "year": "2020-02-07" }, { "authors": "Tao Peng; Ridong Han; Hai Cui; Lin Yue; Jiayu Han; Lu Liu", "journal": "Knowl. 
Based Syst", "ref_id": "b46", "title": "Distantly supervised relation extraction using global hierarchy embeddings and local probability constraints", "year": "2022" }, { "authors": "Maria Pontiki; Dimitris Galanis; Haris Papageorgiou; Ion Androutsopoulos; Suresh Manandhar; Mohammad Al-Smadi; Mahmoud Al-Ayyoub; Yanyan Zhao; Bing Qin; Orphée De Clercq; Véronique Hoste; Marianna Apidianaki; Xavier Tannier; Natalia V Loukachevitch; V Evgeniy; Núria Kotelnikov; Salud Bel; Jiménez María; Gülsen Zafra; Eryigit", "journal": "The Association for Computer Linguistics", "ref_id": "b47", "title": "Semeval-2016 task 5: Aspect based sentiment analysis", "year": "2016-06-16" }, { "authors": "Maria Pontiki; Dimitris Galanis; Haris Papageorgiou; Suresh Manandhar; Ion Androutsopoulos", "journal": "The Association for Computer Linguistics", "ref_id": "b48", "title": "Semeval-2015 task 12: Aspect based sentiment analysis", "year": "2015-06-04" }, { "authors": "Maria Pontiki; Dimitris Galanis; John Pavlopoulos; Harris Papageorgiou; Ion Androutsopoulos; Suresh Manandhar", "journal": "The Association for Computer Linguistics", "ref_id": "b49", "title": "Semeval-2014 task 4: Aspect based sentiment analysis", "year": "2014-08-23" }, { "authors": "Chengwei Qin; Aston Zhang; Zhuosheng Zhang; Jiaao Chen; Michihiro Yasunaga; Diyi Yang", "journal": "", "ref_id": "b50", "title": "Is chatgpt a general-purpose natural language processing task solver?", "year": "2023" }, { "authors": "Alec Radford; Karthik Narasimhan; Tim Salimans; Ilya Sutskever", "journal": "", "ref_id": "b51", "title": "Improving language understanding by generative pre-training", "year": "2018" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b52", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Dan Roth; Wen-Tau Yih", "journal": "ACL", "ref_id": "b53", "title": "A linear programming formulation for global inference in natural language tasks", "year": "2004" }, { "authors": "Erik F Tjong; Kim Sang; Fien De; Meulder ", "journal": "", "ref_id": "b54", "title": "Introduction to the conll-2003 shared task: Languageindependent named entity recognition", "year": "2003-05-31" }, { "authors": " ", "journal": "", "ref_id": "b55", "title": "", "year": "" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen H Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Arun Raja; Manan Dey; M Saiful Bari; Canwen Xu; Urmish Thakker; Shanya Sharma Sharma; Eliza Szczechla; Taewoon Kim; Gunjan Chhablani; V Nihal; Debajyoti Nayak; Jonathan Datta; Mike Chang; Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Févry; Alan Fries; Ryan Teehan; Le Teven; Stella Scao; Leo Biderman; Thomas Gao; Alexander M Wolf; Rush", "journal": "ICLR. 
OpenReview", "ref_id": "b56", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2022" }, { "authors": "Taneeya Satyapanich; Francis Ferraro; Tim Finin", "journal": "AAAI Press", "ref_id": "b57", "title": "CASIE: extracting cybersecurity event information from text", "year": "2020-02-07" }, { "authors": "Rylan Schaeffer; Brando Miranda; Sanmi Koyejo", "journal": "", "ref_id": "b58", "title": "Are emergent abilities of large language models a mirage?", "year": "2023" }, { "authors": "Shaden Smith; Mostofa Patwary; Brandon Norick; Patrick Legresley; Samyam Rajbhandari; Jared Casper; Zhun Liu; Shrimai Prabhumoye; George Zerveas; Vijay Korthikanti; Elton Zheng; Rewon Child; Reza Yazdani Aminabadi; Julie Bernauer; Xia Song; Mohammad Shoeybi; Yuxiong He; Michael Houston; Saurabh Tiwary; Bryan Catanzaro", "journal": "", "ref_id": "b59", "title": "Using deepspeed and megatron to train megatron-turing NLG 530b, A large-scale generative language model", "year": "2022" }, { "authors": "Zhongxiang Sun", "journal": "", "ref_id": "b60", "title": "A short survey of viewing large language models in legal aspect", "year": "2023" }, { "authors": "Teo Susnjak", "journal": "", "ref_id": "b61", "title": "Applying BERT and chatgpt for sentiment analysis of lyme disease in scientific literature", "year": "2023" }, { "authors": "Ruixiang Tang; Xiaotian Han; Xiaoqian Jiang; Xia Hu", "journal": "", "ref_id": "b62", "title": "Does synthetic data generation of llms help clinical text mining?", "year": "2023" }, { "authors": "Romal Thoppilan; Daniel De Freitas; Jamie Hall; Noam Shazeer; Apoorv Kulshreshtha; Heng-Tze; Alicia Cheng; Taylor Jin; Leslie Bos; Yu Baker; Yaguang Du; Hongrae Li; Huaixiu Lee; Amin Steven Zheng; Marcelo Ghafouri; Yanping Menegali; Maxim Huang; Dmitry Krikun; James Lepikhin; Dehao Qin; Yuanzhong Chen; Zhifeng Xu; Adam Chen; Maarten Roberts; Yanqi Bosma; Chung-Ching Zhou; Igor Chang; Will Krivokon; Marc Rusch; Kathleen S Pickett; Meredith Ringel Meier-Hellstern; Tulsee Morris; Renelito Delos Doshi; Toju Santos; Johnny Duke; Ben Soraker; Vinodkumar Zevenbergen; Mark Prabhakaran; Ben Diaz; Kristen Hutchinson; Alejandra Olson; Erin Molina; Josh Hoffman-John; Lora Lee; Ravi Aroyo; Alena Rajakumar; Matthew Butryna; Viktoriya Lamm; Joe Kuzmina; Aaron Fenton; Rachel Cohen; Ray Bernstein; Blaise Kurzweil; Claire Aguera-Arcas; Marian Cui; Ed H Croak; Quoc Chi; Le", "journal": "", "ref_id": "b63", "title": "Lamda: Language models for dialog applications", "year": "2022" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b64", "title": "Attention is all you need", "year": "2017-09" }, { "authors": "Somin Wadhwa; Silvio Amir; Byron C Wallace", "journal": "", "ref_id": "b65", "title": "Revisiting relation extraction in the era of large language models", "year": "2023" }, { "authors": "Julie Medero; Christopher Walker; Stephanie Strassel; Kazuaki Maeda", "journal": "", "ref_id": "b66", "title": "Ace 2005 multilingual training corpus", "year": "2006" }, { "authors": "Liang Wang; Wei Zhao; Zhuoyu Wei; Jingming Liu; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b67", "title": "Simkgc: Simple contrastive knowledge graph completion with pre-trained language models", "year": "2022" }, { "authors": "Wenya Wang; Sinno Jialin Pan; Daniel Dahlmeier; Xiaokui Xiao", "journal": "AAAI Press", "ref_id": "b68", "title": "Coupled multi-layer attentions 
for co-extraction of aspect and opinion terms", "year": "2017-02-04" }, { "authors": "Xiao Wang; Weikang Zhou; Can Zu; Han Xia; Tianze Chen; Yuansen Zhang; Rui Zheng; Junjie Ye; Qi Zhang; Tao Gui; Jihua Kang; Jingsheng Yang; Siyuan Li; Chunsai Du", "journal": "", "ref_id": "b69", "title": "Instructuie: Multi-task instruction tuning for unified information extraction", "year": "2023" }, { "authors": "Xinyu Wang; Yong Jiang; Nguyen Bach; Tao Wang; Zhongqiang Huang; Fei Huang; Kewei Tu", "journal": "Association for Computational Linguistics", "ref_id": "b70", "title": "Automated concatenation of embeddings for structured prediction", "year": "2021" }, { "authors": "Yizhong Wang; Swaroop Mishra; Pegah Alipoormolabashi; Yeganeh Kordi; Amirreza Mirzaei; Atharva Naik; Arjun Ashok; Arut Selvan Dhanasekaran; Anjana Arunkumar; David Stap; Eshaan Pathak; Giannis Karamanolakis; Gary Haizhi; Ishan Lai; Ishani Purohit; Jacob Mondal; Kirby Anderson; Krima Kuznia; Kuntal Doshi; Maitreya Kumar Pal; Mehrad Patel; Mihir Moradshahi; Mirali Parmar; Neeraj Purohit; Varshney; Rohitha Phani; Pulkit Kaza; Ravsehaj Verma; Rushang Singh Puri; Savan Karia; Doshi; Keyur Shailaja; Siddhartha Sampat; Sujan Mishra; A Reddy; Sumanta Patro; Tanay Dixit; Xudong Shen", "journal": "Association for Computational Linguistics", "ref_id": "b71", "title": "Super-naturalinstructions: Generalization via declarative instructions on 1600+ NLP tasks", "year": "2022-12-07" }, { "authors": "Zhen Wang; Hongyi Nie; Wei Zheng; Yaqing Wang; Xuelong Li", "journal": "IEEE Transactions on Cybernetics", "ref_id": "b72", "title": "A novel tensor learning model for joint relational triplet extraction", "year": "2023" }, { "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b73", "title": "a. 
Finetuned language models are zero-shot learners", "year": "2022-04-25" }, { "authors": "Jason Wei; Yi Tay; Rishi Bommasani; Colin Raffel; Barret Zoph; Sebastian Borgeaud; Dani Yogatama; Maarten Bosma; Denny Zhou; Donald Metzler; Ed H Chi; Tatsunori Hashimoto; Oriol Vinyals; Percy Liang; Jeff Dean; William Fedus", "journal": "", "ref_id": "b74", "title": "Emergent abilities of large language models", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Brian Ichter; Fei Xia; Ed H Chi; V Quoc; Denny Le; Zhou", "journal": "", "ref_id": "b75", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Xiang Wei; Xingyu Cui; Ning Cheng; Xiaobin Wang; Xin Zhang; Shen Huang; Pengjun Xie; Jinan Xu; Yufeng Chen; Meishan Zhang; Yong Jiang; Wenjuan Han", "journal": "", "ref_id": "b76", "title": "Zero-shot information extraction via chatting with chatgpt", "year": "2023" }, { "authors": "Lu Xu; Hao Li; Wei Lu; Lidong Bing", "journal": "Association for Computational Linguistics", "ref_id": "b77", "title": "Position-aware tagging for aspect sentiment triplet extraction", "year": "2020-11-16" }, { "authors": "Hang Yan; Junqi Dai; Tuo Ji; Xipeng Qiu; Zheng Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b78", "title": "A unified generative framework for aspect-based sentiment analysis", "year": "2021" }, { "authors": "Yawen Yang; Xuming Hu; Fukun Ma; Aiwei Liu; Lijie Wen; S Yu Philip", "journal": "IEEE", "ref_id": "b79", "title": "Gaussian prior reinforcement learning for nested named entity recognition", "year": "2023" }, { "authors": "Chengze Yu; Taiqiang Wu; Jiayi Li; Xingyu Bai; Yujiu Yang", "journal": "IEEE", "ref_id": "b80", "title": "Syngen: A syntactic plug-andplay module for generative aspect-based sentiment analysis", "year": "2023" }, { "authors": "Xiangrong Zeng; Daojian Zeng; Shizhu He; Kang Liu; Jun Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b81", "title": "Extracting relational facts by an end-to-end neural model with copy mechanism", "year": "2018-07-15" }, { "authors": "Yiming Zhan; Zhao Li; Xiuhao Zhao; Chao Zhang; Tong Chen", "journal": "IEEE", "ref_id": "b82", "title": "A simple overlapping relation extraction method based on dropout", "year": "2022" }, { "authors": "Bowen Zhang; Daijun Ding; Liwen Jing", "journal": "", "ref_id": "b83", "title": "How would stance detection techniques evolve after the launch of chatgpt?", "year": "2022" }, { "authors": "Wenxuan Zhang; Xin Li; Yang Deng; Lidong Bing; Wai Lam", "journal": "Association for Computational Linguistics", "ref_id": "b84", "title": "Towards generative aspect-based sentiment analysis", "year": "2021" }, { "authors": "Yice Zhang; Yifan Yang; Yihui Li; Bin Liang; Shiwei Chen; Yixue Dang; Min Yang; Ruifeng Xu", "journal": "Association for Computational Linguistics", "ref_id": "b85", "title": "Boundary-driven table-filling for aspect sentiment triplet extraction", "year": "2022" }, { "authors": "Yue Zhang; Tao Peng; Ridong Han; Jiayu Han; Lin Yue; Lu Liu; ; ", "journal": "Appl. 
Intell", "ref_id": "b86", "title": "Synchronously tracking entities and relations in a syntax-aware parallel architecture for aspect-opinion pair extraction", "year": "2022" }, { "authors": "Yuhao Zhang; Victor Zhong; Danqi Chen; Gabor Angeli; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b87", "title": "Position-aware attention and supervised data improve slot filling", "year": "2017-09-09" }, { "authors": "Kang Zhao; Hua Xu; Yue Cheng; Xiaoteng Li; Kai Gao", "journal": "Knowl. Based Syst", "ref_id": "b88", "title": "Representation iterative fusion based on heterogeneous graph neural network for joint entity and relation extraction", "year": "2021" }, { "authors": "Qihuang Zhong; Liang Ding; Juhua Liu; Bo Du; Dacheng Tao", "journal": "", "ref_id": "b89", "title": "Can chatgpt understand too? A comparative study on chatgpt and fine-tuned BERT", "year": "2023" } ]
[]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b34", "b3", "b25", "b26", "b9", "b15", "b5", "b28", "b6", "b10", "b22" ], "table_ref": [], "text": "Pretrained Transformer-based language models (Vaswani et al., 2017) have revolutionized the field of natural language processing (NLP). Specifically, pretrained Transformer models such as BERT (Devlin et al., 2018) (Bidirectional Encoder Representations from Transformers), GPT-2 (Radford et al., 2019) (Generative Pre-trained Transformer) and T5 (Raffel et al., 2020) (Text-To-Text Transfer Transformer), have set new benchmarks for various downstream NLP tasks. To adapt these models for downstream tasks, researchers finetune them on task-specific data. Finetuning modifies the representations generated by each layer. How do these modifications compare with respect to corresponding pretrained models?\nFurther, if we perturb the inputs supplied to these language models, does that lead to changes in layer representations and also prediction accuracy? How does this robustness vary across different NLP tasks for which these models have been finetuned? It is important to understand answers to these questions to ensure that we account for perturbations for which these models are not very robust.\nRecent investigations on adversarial text perturbations (Wang et al., 2021;Jin et al., 2020;Li et al., 2020;Garg and Ramakrishnan, 2020;Sanyal et al., 2022) have revealed that even strong language models are susceptible to adversarial examples, which can increase the risk of misclassification of input data, leading to incorrect results. However, existing studies on robustness have used only a few adversarial text perturbations and experimented with BERT on a few downstream NLP tasks.\nHence, it is important to study the representations generated by pre-trained and finetuned Transformer models and evaluate their robustness when exposed to text perturbations. Specifically, we aim to answer the following questions: (i) Is the effect of finetuning consistent across all models for various NLP tasks? (ii) To what extent are these models effective in handling input text perturbations? and (iii) Do these models exhibit varying levels of robustness to input text perturbations when finetuned arXiv:2305.14453v2 [cs.CL] 8 Nov 2023 for different NLP tasks?\nEarlier studies use representation similarity analysis (RSA) as the metric to quantify the similarity between the representations generated by different models (He et al., 2021). Recently, Kornblith et al. (2019) introduced a new similarity metric, Centered Kernel Alignment (CKA), to better measure the representations. In another recent study, Nanda et al. (2022) overcame the limitations of CKA by introducing 'Similarity Through Inverted Representations' (STIR) to analyze the shared invariances of models to meaningless perturbations. Hence, we use both CKA and STIR to examine the representation similarity across various layers of pretrained versus finetuned models. We measure robustness as a function of relative change in performance when perturbations are applied to inputs. We make the code and models publicly available 1 .\nOur key contributions are as follows: (1) Our analysis of finetuned models versus pretrained models shows that the last layers of the models are more affected than the initial layers when finetuning. (2) GPT-2 exhibits more robust representations than BERT and T5 across multiple types of input perturbation. 
(3) Although Transformer models exhibit good robustness, the models are most affected by dropping nouns or verbs and by changing characters. (4) We also observed that while there is some variation in the affected layers between models and tasks due to input perturbations, certain layers are consistently impacted across different models, indicating the importance of specific linguistic features and contextual information." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b15", "b9", "b31", "b29", "b18", "b7", "b19", "b21", "b0", "b13", "b12", "b10" ], "table_ref": [], "text": "The NLP community has shown increasing concern for the robustness of pre-trained language models when exposed to adversarial examples. To assess this, various studies have been conducted to examine the models' susceptibility to modifications in the input text. Some works investigated modifications including typos or word replacements, while Li et al. (2020); Jin et al. (2020); Sun et al. (2020) evaluated the models' capacity to adapt to different data distributions and linguistic phenomena, such as coreference or sarcasm. To address the concern for robustness, Wang et al. (2021) introduced a multi-task benchmark for the evaluation of language models. More broadly, Schiappa et al. (2022) perform robustness analysis of video-language models. Studies on model probing, such as (Tenney et al., 2019b; Liu et al., 2019; Tenney et al., 2019a; Hewitt and Manning, 2019), have analyzed the degree to which syntactic and semantic features are captured in the different layers of BERT-like models. Additionally, Zhou and Srikumar (2021); Merchant et al. (2020) performed a comprehensive analysis of how finetuning affects the representations in the BERT model using a combination of probing and analytical techniques.
In terms of similarity metrics, Voita et al. (2019) used a form of canonical correlation analysis (PW-CCA; (Morcos et al., 2018)) to examine the layer-wise evolution of representations in deep neural networks. On the other hand, Abnar et al. (2019) utilized Representation Similarity Analysis (RSA; (Laakso and Cottrell, 2000; Kriegeskorte et al., 2008)) to assess the models' representations. In recent work, Wu et al. (2020) applied Centered Kernel Alignment (CKA; (Kornblith et al., 2019)) to pre-trained Transformers like BERT and GPT-2, focusing mainly on cross-model comparisons.
Our study, however, specifically delves into comparing the representations generated by pre-trained and finetuned Transformer models, and analyzing their layer-wise similarity and shared invariances. Additionally, our work seeks to contribute to the ongoing efforts to better understand the strengths and limitations of Transformer models in handling input text perturbations." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Representational Similarity Analysis between pretrained and finetuned models", "publication_ref": [ "b10", "b22" ], "table_ref": [], "text": "We use the following two metrics for comparing pretrained and finetuned models: CKA and STIR.
CKA We use CKA (Kornblith et al., 2019) to compare the layer-wise hidden state representations of pre-trained and finetuned models for each dataset. CKA = 1 indicates perfect similarity while CKA = 0 indicates no similarity.
(Linear) CKA between input matrices X and Y is computed as CKA(X, Y) = HSIC(K, L) / √(HSIC(K, K) · HSIC(L, L)), where K and L are the similarity matrices computed as K = XX^T and L = YY^T, and HSIC(K, L), HSIC(K, K), and HSIC(L, L) are the Hilbert-Schmidt Independence Criterion (HSIC) values. Specifically, we extract the hidden states corresponding to each token within a sentence for every language model. These hidden states are then averaged to create a unified 768D representation for that sentence. For a 100-sentence input, we obtain a [13, 100, 768]-dimensional output (comprising 1 embedding layer and 12 layers) for each pre-trained as well as fine-tuned model. Then, we compute the CKA values at every layer (between the [100, 768] representations), yielding 13 layer-wise values for each combination of dataset and model. STIR To evaluate the shared invariance between pre-trained and finetuned models, we use STIR. First, we obtain hidden state representations for the pre-trained models on the test dataset for all GLUE tasks. Then, we sample half of the dataset (except for QQP, where 5000 examples were used) 20 times and obtain X′ for X, where X′ represents the examples with the smallest L2 norm with respect to these representations. Finally, we compute the CKA similarity between the hidden state representations generated by the finetuned model and the pretrained model, and report it as STIR(finetuned|pretrained).
To measure the invariance in the opposite direction, we follow the same procedure with the finetuned model. We obtain hidden state representations for the finetuned model, find the examples with the smallest L2 norm, and use these examples' hidden state representations of the pre-trained model to find STIR(pre-trained|finetuned).
As proposed in (Nanda et al., 2022), STIR values are computed as STIR(m_2 | m_1, X) = (1/k) Σ_{X′} CKA(m_2(X), m_2(X′)), where m_1 and m_2 are the models under comparison, X is the test dataset and X′ are similar examples obtained using the representation inversion method mentioned above. We fix k = 20." }, { "figure_ref": [], "heading": "Text perturbations", "publication_ref": [], "table_ref": [], "text": "We examine various types of text perturbations that encompass a wide range of variations that can occur in natural language text. They are defined as follows.
(1) Drop noun/verb perturbations involve dropping words based on their part-of-speech tag, specifically nouns or verbs. (2) Drop first/last perturbations alter the phrase based on its location. Specifically, the first/last word is dropped. (3) Swap text perturbations involve swapping one or more words from the original phrase. (4) Change char perturbations involve changing one or more characters in a word(s). (5) Add text perturbations involve appending irrelevant word(s) to the text. (6) Bias perturbations involve switching the gender of one or more words in a phrase." }, { "figure_ref": [], "heading": "Representations and Tasks for Robustness Evaluation", "publication_ref": [ "b36", "b30", "b4", "b1", "b8", "b27", "b2", "b14", "b24", "b16" ], "table_ref": [], "text": "We experiment with a Transformer encoder (BERT-base), a decoder (GPT-2) and an encoder-decoder model (T5-base) for classification tasks. For generative tasks, we use GPT-2 and T5-base.
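To make the linear CKA computation above concrete, the following is a minimal NumPy sketch (not the authors' code). It assumes two mean-pooled representation matrices of shape [100, 768] for a single layer, one from the pre-trained and one from the finetuned model; random placeholders stand in for real hidden states.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices of shape [n_examples, dim]."""
    # Center each feature so HSIC with linear kernels reduces to Frobenius norms.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic_xy = np.linalg.norm(Y.T @ X, ord="fro") ** 2   # HSIC(K, L) up to a constant
    hsic_xx = np.linalg.norm(X.T @ X, ord="fro") ** 2   # HSIC(K, K) up to the same constant
    hsic_yy = np.linalg.norm(Y.T @ Y, ord="fro") ** 2   # HSIC(L, L) up to the same constant
    return hsic_xy / np.sqrt(hsic_xx * hsic_yy)

# Placeholder mean-pooled sentence representations for one layer of two models.
rng = np.random.default_rng(0)
pretrained_layer = rng.normal(size=(100, 768))
finetuned_layer = rng.normal(size=(100, 768))
print(linear_cka(pretrained_layer, finetuned_layer))
```

Applying such a function separately to each of the 13 stacked [100, 768] representations yields the layer-wise CKA curves analyzed in the results.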
For obtaining model representations, we extract the hidden states for BERT and GPT-2 and encoder hidden states for T5 for each token and average them to obtain a single representation of 768 size.\nFor evaluation across classification tasks, we use General Language Understanding Evaluation (GLUE; (Wang et al., 2018)) benchmark, a collection of diverse NLP tasks. The tasks include:\n(1) Single-sentence tasks: The Corpus of Linguistic Acceptability (CoLA;(Warstadt et al., 2019)) and Stanford Sentiment Treebank (SST-2; (Socher et al., 2013)). ( 2) Similarity and paraphrase tasks: Microsoft Research Paraphrase Corpus (MRPC; (Dolan and Brockett, 2005)), Semantic Textual Similarity Benchmark (STS-B; (Cer et al., 2017)) and Quora Question Pairs (QQP; (Iyer et al., 2017)). ( 3) Inference tasks: Multi-Genre Natural Language Inference (MNLI; (Williams et al., 2017)) (with two splits: matched and mismatched), Question Natural Language Inference (QNLI; (Rajpurkar et al., 2016)), Recognizing Textual Entailment (RTE; (Dagan et al., 2005)), Winograd Natural Language Inference (WNLI; (Levesque et al., 2012)), and Abreviated eXperiments (AX).\nFor generative tasks, we evaluate text summarization, free-form text generation and question generation. For text summarization, we use the Extreme Summarization (XSum (Narayan et al., 2018)) dataset. XSum contains news articles accompanied by single-sentence summaries, providing a diverse range of topics and presenting challenges in extractive summarization. For free-form text generation, we utilized the CommonGen (Lin et al., 2019) dataset, which consists of sentence pairs serving as prompts for everyday scenarios. This evaluates the models' capability to generate coherent and contextually relevant descriptions. Regarding question generation, we leveraged the Stanford Question Answering Dataset (SQuAD (Rajpurkar et al., 2016)). It includes passages from various sources along with question-answer pairs. In this task, our models were tasked with generating questions based on input context and answer.\nWe use pre-trained BERT-base, GPT-2, and T5base models from HuggingFace v4.2.2 (Wolf et al., 2020). We finetune GPT-2 and T5 models on GLUE tasks mentioned above and obtain the finetuned models for BERT from HuggingFace. Further details on detailed task descriptions and metrics are in Appendix." }, { "figure_ref": [], "heading": "Evaluation Metrics for Robustness", "publication_ref": [ "b17" ], "table_ref": [], "text": "For classification tasks, we used metrics like Matthews Correlation Coefficient for CoLA, Pearson Correlation Coefficient for STS-B and Accuracy for other tasks. For generative tasks, we employed ROUGE (Lin, 2004). We report ROUGE-1, ROUGE-2, and ROUGE-L F-scores.\nLet m c and m p denote the values of a metric m of the model on the clean and perturbed test sets, respectively. Then, we define robustness as robustness = 1 -mc-mp mc . Typically, robustness score of a model ranges between 0 (not robust) and 1 (very robust). Score greater than 1 suggests that the model's performance improves with the perturbation applied." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "4.1 How does finetuning modify the layers representations for different models?\nFig. 1 shows the layer-wise CKA/STIR comparisons between pre-trained and finetuned BERT, GPT-2 and T5 models for the GLUE benchmark. We observe that the impact of finetuning varies across models. 
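For reference in the analyses that follow, the robustness score defined in Section 3.4 is a one-liner; it is shown here together with one plausible reading of the "Change char" perturbation. The paper does not pin down the exact character-replacement scheme, so the perturbation function below is an illustrative assumption, not the authors' implementation.

```python
import random

def robustness(clean_score: float, perturbed_score: float) -> float:
    """robustness = 1 - (m_c - m_p) / m_c; close to 1 means unaffected,
    values above 1 mean the perturbation actually helped."""
    return 1.0 - (clean_score - perturbed_score) / clean_score

def change_chars(text: str, prob: float = 0.10, seed: int = 0) -> str:
    """'Change char' perturbation: replace each alphabetic character with
    probability `prob` by a random lowercase letter."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    return "".join(rng.choice(alphabet) if c.isalpha() and rng.random() < prob else c
                   for c in text)

# e.g. accuracy 0.92 on a clean dev set vs 0.80 after perturbation
print(robustness(0.92, 0.80))   # ~0.87
```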
GPT-2's last layers were more affected than BERT's during the finetuning process, but GPT-2 had fewer affected layers, indicating higher semantic stability. CKA values for GPT-2 remained mostly higher than those for BERT, suggesting more general applicability of GPT-2's pretrained representation. T5 showed similar effects of finetuning as BERT but with more pronounced effects, as evidenced by the CKA values.\nAll the models on the majority of datasets showed a gradual decrease in similarity of representations in the initial layers, with a larger drop in similarity observed later on. BERT and GPT-2 had the highest CKA values for the RTE and WNLI datasets, indicating that these models were least affected by finetuning. In contrast to BERT and GPT-2, T5's CKA values increased from layer 11 to 12 on some datasets. This could be due to the encoder-decoder architecture of T5, where the decoder is introduced after the 12th layer, causing the model to converge towards a more generalizable input representation.\nAnother noteworthy observation was the similar trend in CKA and STIR values for some datasets, which was consistent across all three models. This suggests that the underlying data characteristics of these particular tasks are better captured by the pretrained representations of these models. This finding explains why transfer learning using these models is so successful, where these pre-trained models have been finetuned on other related tasks, resulting in improved performance due to the shared invariance of these representations. It was also observed that in some cases, CKA values were slightly higher compared to STIR values. This difference could be attributed to CKA overestimating the similarity between models.\nLayer-wise analysis: For this experiment, we constructed a logistic regression model at each layer for both pre-trained and finetuned language models (e.g., BERT). These models were trained using the hidden state representations of [CLS] token generated by the corresponding BERT models. We then used the trained logistic regression models to infer the test dataset using the hidden state representations of [CLS] token generated by both pre-trained and finetuned BERT models. We repeat the experiment for GPT-2 also using the last token hidden state representations. Figs. 6 and 7 (in Appendix) show the relationship between CKA and layer-wise accuracy for BERT, and GPT-2 models. Fig. 8 (in Appendix) shows CKA/STIR plots for T5. For almost every task across all three models, we observe that the CKA values drop from initial to later layers. Our experiments reveal a correlation between CKA and accuracy, with a decrease in CKA values indicating a greater difference in accuracy between the pre-trained and finetuned models.\nOverall, based on the above results, the following insights can be drawn. (1) Task and model sensitivity: Obviously performance across NLP tasks varies for each model. Each task has its own unique characteristics that finetuning captures. This clearly shows up in the results since the CKA/STIR values vary significantly across NLP tasks even for the same model. (2) Layer influence: The results also demonstrate that the impact of finetuning is not evenly distributed across all layers of various models. Later layers are seen to be more impacted than the lower layers. 
(3) Layer-wise finetuning analysis: The layer-wise analysis of models' performances sheds light on the inner workings of deep learning models and highlights the importance of understanding the effect of finetuning on different layers. This can inform the creation of more effective finetuning strategies and the selection of appropriate layers to finetune for a specific task." }, { "figure_ref": [ "fig_0" ], "heading": "How robust are the classification models to perturbations in input text?", "publication_ref": [], "table_ref": [], "text": "For robustness analysis, we apply various text perturbation methods as detailed in Section 3.2, including the \"Change char\" perturbation where we replaced characters with a probability of 0.10, and the \"Add text\" perturbation, where we added extra words equivalent to 10% of the existing words. Comparison of robustness of the three language models in Fig. 2 shows that GPT-2 is the most robust model, followed by T5 and then BERT. This suggests that GPT-2 may be better suited for tasks that require robustness to text variations, such as natural language understanding in dynamic environments where the input may be noisy or incomplete. Moreover, the results reveal that the performance of all three models was significantly affected by changing characters, followed by re- Figure 3: Most affected layers in BERT model when subjected to text perturbation techniques: blue → most affected layer, green → second most affected layer, and orange → third most affected layer. The initial and last layers of BERT are most sensitive to perturbations. Specifically, when text is added as a perturbation, it primarily affects the lower layers, suggesting that these layers are actively engaged in comprehending the additional context. On the other hand, the middle layers demonstrate relatively less sensitivity to these perturbations. moving nouns and verbs, highlighting the models' heavy reliance on parts-of-speech and special characters for predicting labels. Also, the Bias perturbation has the lowest impact on the models' robustness, indicating that the models are highly robust to gender bias in the datasets.\nMetric ROUGE-1 ROUGE-2 ROUGE-L Task GPT-2 T5 GPT-2 T5 GPT-" }, { "figure_ref": [], "heading": "Is the impact of input text perturbations", "publication_ref": [], "table_ref": [ "tab_3", "tab_4", "tab_4", "tab_4", "tab_6" ], "text": "on finetuned models task-dependent?\nFirst, we show the performance of the finetuned models without any input text perturbations in Table 1 and Table 2 for each classification and generation task resp. For classification tasks, BERT performs better than GPT-2 in general. T5 performs comparable to BERT; outperforming BERT in some tasks like MRPC, STS-B, MNLI-m, QNLI and RTE. For generation, T5 outperforms GPT-2 across all tasks.\nSingle-sentence tasks Table 3 illustrates the impact of text perturbations on various finetuned models for single-sentence tasks, CoLA and SST-2. Specifically the robustness scores are shown. In the CoLA dataset, where the objective is to predict the grammatical acceptability of a sentence, all models showed significant sensitivity to perturbations except for bias. GPT-2 exhibited the highest robustness, with relatively high scores across seven perturbations. T5 showed mixed results, outperforming BERT on some perturbations but not others. 
Semantic perturbations, such as dropping nouns or verbs and swapping text, had the most significant impact on performance of the models.\nInterestingly, all models performed similarly on the sentiment analysis, with robustness scores >0.92 for all except the \"Change char\" perturbation, indicating high robustness for this task. These findings suggest that the impact of text perturbations on finetuned models is task-dependent and may vary across different models and perturbations. Similarity and paraphrase tasks Table 3 shows that GPT-2 is significantly better than BERT and T5 in the MRPC similarity task. However, BERT still exhibited good performance, although not as good as T5. On the other hand, BERT outperformed T5 and GPT-2 in the STS-B task, which involves assigning similarity scores to pairs of sentences ranging from 0 to 5.\nFor QQP, all three models showed similar scores, implying their equal efficiency in recognizing paraphrases despite perturbations. Besides semantic perturbations, syntactic perturbations like dropping last word and adding text also affected the models.\nNatural Language Inference tasks Robustness analysis of the three Transformer models in Table 3 reveals that they exhibit similar characteristics for most inference tasks, with GPT-2 displaying better robustness in RTE. Moreover, for the same RTE task, GPT-2's robustness score exceeded 1 for some perturbations, signifying its better performance in the presence of perturbations. The study highlights the significance of taking into account the task and model in evaluating robustness.\nFurther, the Transformer models demonstrated high tolerance towards \"Dropping first word\" and \"Bias\" perturbations, suggesting that these perturba- tions have a minimal impact on the model's performance. The results of this study provide valuable insights to researchers and practitioners for making informed decisions about selecting appropriate models for various applications and domains.\nText Summarization Free-form Text Generation Question Generation ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L Perturbation GPT-2 T5 GPT-2 T5 GPT-2 T5 GPT-2 T5 GPT-2 T5 GPT-2 T5 GPT-2 T5 GPT-2 T5 GPT-\nNatural Language Generation Tasks Table 4 shows robustness scores for various generative tasks. For text summarization, GPT-2 exhibits higher robustness than T5 in terms of ROUGE-1 and ROUGE-2 metrics, but not for ROUGE-L.\nFor free-form generation, GPT-2 excels in perturbations involving word or parts-of-speech removal, demonstrating higher robustness compared to T5.\nOn the other hand, T5 showcases superior robustness in other types of perturbations. Interestingly, both models exhibit excellent performance when swapping words, highlighting their ability to generate varied and coherent text even when rearranging sentence structures. For question generation, GPT-2 outperforms T5 in most of cases. However, T5 exhibits higher robustness specifically in scenarios involving the 'Swap text' perturbation, highlighting its ability to generate coherent summaries even when the sentence structure is rearranged. Notably, the perturbations of dropping nouns, verbs, or changing characters continue to have the most significant impact on models' performance." }, { "figure_ref": [ "fig_3" ], "heading": "Is the impact of perturbations on finetuned models different across layers?", "publication_ref": [], "table_ref": [], "text": "We conducted a layer-wise analysis on BERT, GPT-2, and T5 models using the training and validation datasets. 
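A minimal sketch of such a per-layer probe — assuming scikit-learn and per-layer sentence features that have already been extracted as described in the next paragraph — could look like the following.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def layerwise_probe_accuracy(train_states, y_train, dev_states, y_dev):
    """train_states, dev_states: [num_layers, num_examples, hidden] arrays of
    per-layer sentence features (e.g. BERT's [CLS] vector at each layer).
    Returns one probing accuracy per layer."""
    accs = []
    for layer in range(train_states.shape[0]):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(train_states[layer], y_train)
        accs.append(clf.score(dev_states[layer], y_dev))
    return np.array(accs)

# the most affected layers are then the ones with the largest clean-vs-perturbed drop, e.g.:
# drop = layerwise_probe_accuracy(tr, y_tr, dev_clean, y) - layerwise_probe_accuracy(tr, y_tr, dev_pert, y)
# top3 = drop.argsort()[-3:][::-1]
```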
Specifically, we extracted the layerwise hidden states for BERT's [CLS] token, last token hidden states for GPT-2, and T5 decoder hidden states. Logistic regression models were then trained on the hidden states for training dataset, and employed to predict labels for the hidden state representations for the validation dataset. Subsequently, we assessed the impact of perturbations by comparing the performance of these models with the unperturbed dataset, and the top three affected layers were identified. We depict the top three affected layers for BERT, GPT-2 and T5 in Figs. 3,4 and 5 respectively for classification tasks.\nThe following conclusions can be drawn from the results. (1) Similarity across models: The layers most affected by text perturbations tend to be consistent across different models. This suggests that certain shared linguistic features and contextual information are crucial for these models across different architectures. (2) Variation across tasks: While there is some variation in the affected layers between models and tasks, a general trend can be observed. For the encoder model BERT, the later layers are more impacted, while the initial layers of the decoder model GPT-2 also show significant effects. T5 exhibits similar results to GPT-2, but some middle layers are also affected. (3) Influence of context: Perturbations involving changes in context, such as \"Swap text,\" tend to affect multiple layers across different models. This indicates that contextual information is distributed and integrated throughout the layers of these models, and altering the context can have a broader impact on the overall understanding and representation of the text. (4) For T5, we observe a notable increase in accuracy for the initial decoder layers, followed by a relatively constant performance. This suggests that the encoder has effectively learned the representations, requiring fewer decoder layers thereafter.\nOverall, these findings indicate that different layers of language models are sensitive to specific types of text perturbations, with some consistent patterns observed across architectures and tasks." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b22", "b20" ], "table_ref": [], "text": "Our research has provided a comprehensive analysis of the layer-wise similarity and shared invariance between pre-trained and finetuned Transformer models, and the impact of text perturbations on their performance. A key to our study is that we leverage STIR (Nanda et al., 2022), a recent approach that estimates how much of the invariances to specific perturbations learned by one source model are shared with a second target model (Merlin et al., 2023). Our findings suggest that model performance is task-sensitive, and the type of data it was trained on influences its performance on a particular task. Layer-wise analysis showed that some layers have a more significant impact on performance than others. Also, the robustness scores of BERT, GPT-2, and T5 under different perturbations demonstrate that these models exhibit varying degrees of robustness depending on the task and type of perturbation. GPT-2 is more resilient to perturbations than T5 and BERT. Overall, our study provides insights into the strengths and limitations of BERT, GPT-2, and T5, serving as a foundation for future research to develop more resilient NLP models." 
}, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "In this work, we focused on English tasks only, and hence experimented with the robustness of models trained for English only. In the future, we would like to experiment with and evaluate the robustness of multilingual models.

While we have experimented with popular ways of performing input perturbations, there could be several other ways of perturbing the input. Specifically, we looked at a class of perturbations which work in the character or word space. As future work, we would like to experiment with perturbations in the embedding space." }, { "figure_ref": [], "heading": "A Task Descriptions", "publication_ref": [ "b30" ], "table_ref": [], "text": "A.1 Single-Sentence Tasks

• CoLA The Corpus of Linguistic Acceptability (Warstadt et al., 2019), a task that evaluates a model's ability to distinguish between grammatical and ungrammatical sentences.

• SST-2 Stanford Sentiment Treebank (Socher et al., 2013), a sentiment analysis task where models must predict the sentiment label of a given sentence." }, { "figure_ref": [], "heading": "A.2 Similarity and Paraphrase Tasks", "publication_ref": [ "b4", "b1", "b8" ], "table_ref": [], "text": "• MRPC Microsoft Research Paraphrase Corpus (Dolan and Brockett, 2005), a binary classification task that requires models to determine whether two sentences are semantically equivalent or not.

• STS-B Semantic Textual Similarity Benchmark (Cer et al., 2017), a regression task that measures the semantic similarity between two given sentences.

• QQP Quora Question Pairs (Iyer et al., 2017), a binary classification task that involves determining whether two questions are semantically equivalent or not." }, { "figure_ref": [], "heading": "A.3 Inference Tasks", "publication_ref": [ "b27", "b2", "b14" ], "table_ref": [], "text": "• MNLI Multi-Genre Natural Language Inference (Williams et al., 2017), a natural language inference task to determine the relationship between a premise sentence and a hypothesis sentence by categorizing it as entailment, contradiction, or neutral.

• QNLI Question Natural Language Inference (Rajpurkar et al., 2016), a binary classification task where models must determine whether a given sentence is a valid inference from a given premise.

• RTE Recognizing Textual Entailment (Dagan et al., 2005), a binary classification task where models must determine whether a given premise implies a given hypothesis.

• WNLI Winograd Natural Language Inference (Levesque et al., 2012), a binary classification task that involves resolving pronoun-antecedent coreference.

• AX Abbreviated eXperiments, a diagnostic task that resembles MNLI." }, { "figure_ref": [], "heading": "B Calculating CKA", "publication_ref": [ "b10" ], "table_ref": [], "text": "The calculation for (Linear) CKA, as presented in (Kornblith et al., 2019), involves the following steps:

• Compute similarity matrices K = XX^T and L = YY^T, where X and Y are the input matrices.

• Compute centered versions K′ = HKH and L′ = HLH of the similarity matrices using the centering matrix H = I_n − (1/n)·11^T.

• Return CKA(X, Y) = HSIC(K, L) / √(HSIC(K, K) · HSIC(L, L)), where HSIC(K, L) = flatten(K′) · flatten(L′) / (n − 1)^2.

These steps are used to calculate the (Linear) CKA, which is a measure of the similarity between two sets of vectors. This technique is often used in machine learning to compare the representations learned by different models."
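Read literally, the three steps above reduce to a few lines of NumPy. The following is an illustrative transcription rather than the authors' code.

```python
import numpy as np

def linear_cka_from_grams(X, Y):
    """Linear CKA via the steps above: Gram matrices, double centering, HSIC ratio."""
    n = X.shape[0]
    K, L = X @ X.T, Y @ Y.T                      # similarity (Gram) matrices, shape [n, n]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix H = I_n - (1/n) 11^T
    Kc, Lc = H @ K @ H, H @ L @ H                # K', L'
    def hsic(A, B):                              # HSIC(K, L) = flatten(K') . flatten(L') / (n-1)^2
        return float(A.ravel() @ B.ravel()) / (n - 1) ** 2
    return hsic(Kc, Lc) / np.sqrt(hsic(Kc, Kc) * hsic(Lc, Lc))
```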
}, { "figure_ref": [], "heading": "C Calculating STIR", "publication_ref": [], "table_ref": [], "text": "For two models m_1 and m_2, and data point x, we generated x_s by solving the following optimization:

argmin_{x_s} ||m_1(x) − m_2(x_s)||_2

where ||·||_2 is the Euclidean norm and m_1(·) and m_2(·) are last-layer representations averaged over all the tokens of the respective models. This process generates x_s for every point x such that ||m_1(x) − m_2(x_s)||_2 is smallest. We sampled half the test dataset (except for QQP, where 5000 examples were considered) 20 times. We obtained the similar data point x_s for each dataset point x to obtain X′ and X respectively. Then we calculated the shared invariance of the models using STIR as:

STIR(m_2 | m_1, X, S_r) = (1/k) Σ_{X′} S_r(m_2(X), m_2(X′))

where S_r is Linear CKA." }, { "figure_ref": [], "heading": "D Robustness failures in BERT", "publication_ref": [], "table_ref": [], "text": "Table 5 shows the impact of various text perturbations on the predictions made by finetuned BERT-base models on the GLUE benchmark. The perturbations include changing the style of the text from active to passive, adding 10% extra text, changing characters with a probability of 10%, introducing bias by changing gender, swapping words, and dropping the first word or verbs from sentences.

The results of the analysis show that these perturbations can significantly alter the model's predictions. These findings highlight the importance of testing machine learning models with various perturbations to improve their robustness and ensure their predictions are not significantly influenced by small changes in the input text.

[Layer-wise CKA heatmap values for the CoLA and RTE panels of Figures 6 and 7 omitted.]" }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "The authors of this paper are committed to upholding the highest ethical standards in conducting their research. All data collection, usage and analysis were performed in accordance with the relevant ethical guidelines and regulations.
The authors declare that there are no conflicts of interest that may compromise the integrity of the research. Furthermore, the authors strive to ensure that their work contributes to the advancement of knowledge and makes a positive impact on society." }, { "figure_ref": [], "heading": "Table 5 examples", "publication_ref": [], "table_ref": [], "text": "[Layer-wise CKA/STIR heatmap values for the CoLA, MRPC, STS-B and WNLI panels of Figures 6-8 omitted.]

Sentence 1: United Nations vehicle was attacked in the Serbian province of Kosovo and at least one civilian policeman was killed, the United Nations said. Sentence 2: civilian policeman was killed. Label 1 (not_entailment).

Swap text (QNLI). Question: At what point does oxygen toxicity begin to happen? Sentence: Oxygen gas (O_2) can be toxic at elevated partial pressures, leading to convulsions and other health problems. Gold label 1 (not_entailment). Perturbed question: point what At does oxygen toxicity begin to happen? Perturbed sentence: Oxygen gas (O_2) can be toxic at elevated partial pressures, leading to j and other health problems. (the word convulsions is swapped with j). Prediction flips to 0 (entailment).

Drop text, no verbs (MRPC). Sentence 1: \"He may not have been there,\" the defence official said on Thursday. Sentence 2: \"He may not have been there,\" said a defence official, speaking on condition of anonymity. Gold label 1 (equivalent). Perturbed Sentence 1: \"He may not there,\" the defence official on Thursday. Perturbed Sentence 2: \"He may not there,\" a defence official on condition of anonymity. Prediction flips to 0 (not_equivalent).

Table 5: Examples of text-perturbations on GLUE benchmark. Upon testing the modifications on finetuned BERT-base models, it was observed that they alter the predictions." } ]
Transformer-based pretrained models like BERT, GPT-2 and T5 have been finetuned for a large number of natural language processing (NLP) tasks, and have been shown to be very effective. However, while finetuning, what changes across layers in these models with respect to pretrained checkpoints is under-studied. Further, how robust are these models to perturbations in input text? Does the robustness vary depending on the NLP task for which the models have been finetuned? While there exists some work on studying the robustness of BERT finetuned for a few NLP tasks, there is no rigorous study that compares this robustness across encoder only, decoder only and encoderdecoder models. In this paper, we characterize changes between pretrained and finetuned language model representations across layers using two metrics: CKA and STIR. Further, we study the robustness of three language models (BERT, GPT-2 and T5) with eight different text perturbations on classification tasks from the General Language Understanding Evaluation (GLUE) benchmark, and generation tasks like summarization, free-form generation and question generation. GPT-2 representations are more robust than BERT and T5 across multiple types of input perturbation. Although models exhibit good robustness broadly, dropping nouns, verbs or changing characters are the most impactful. Overall, this study provides valuable insights into perturbation-specific weaknesses of popular Transformer-based models, which should be kept in mind when passing inputs. We make the code and models publicly available 1 .
On Robustness of Finetuned Transformer-based NLP Models
[ { "figure_caption": "Figure 2 :2Figure 2: Comparison of robustness scores of BERT, GPT-2, and T5, based on GLUE scores.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Most affected layers in GPT-2 model when subjected to text perturbation techniques: blue → most affected layer, green → second most affected layer, and orange → third most affected layer.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Most affected layers in T5 model when subjected to text perturbation techniques: blue → most affected layer, green → second most affected layer, and orange → third most affected layer.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "STSFigure 6 :6Figure 6: CKA values between the hidden state representations of pre-trained and finetuned BERT-base, the layer-wise performances and CKA/STIR for GLUE tasks. Matthews Correlation Coefficient (MCC) is used as the performance metric for the AX and CoLA datasets. For STS-B, it is Pearson Correlation Coeffficient (PCC) and for the remaining tasks, Accuracy is used.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "STS0.91 0.92 0.93 0.94 0.95 0.94 0.94 0.94 0.91 0.88 0.8 0.29 0.85 0.91 0.93 0.93 0.94 0.94 0.94 0.94 0.93 0.9 0.88 0.8 0.29 0.81 0.89 0.9 0.91 0.91 0.92 0.91 0.92 0.91 0.88 0.86 0.79 0.31 0.54 0.56 0.56 0.56 0.55 0.55 0.55 0.57 0.61 0.63 0.64 0.61 0.15 STIR(Fine-tuned|Pre-trained) STIR(Pre-trained|Fine-tuned) CKA(Pre-trained, Fine-tuned) 0.46 0.89 0.99 0.99 0.99 0.99 0.99 0.98 0.98 0.95 0.89 0.23 0.72 0.48 0.91 0.98 0.99 0.99 0.98 0.97 0.97 0.97 0.94 0.88 0.23 0.72 0.5 0.91 0.97 0.99 0.98 0.97 0.96 0.96 0.96 0.94 0.87 0.23 0.73 0.55 0.93 0.93 0.97 0.95 0.94 0.92 0.92 0.93 0.91 0.85 0.23 0.74 0.56 0.93 0.93 0.96 0.94 0.94 0.92 0.92 0.93 0.91 0.85 0.23 0.37 0.48 0.31 0.09 0.11 0.1 0.09 0.09 0.09 0.1 0.11 0.12 0.02 STIR(Fine-tuned|Pre-trained) STIR(Pre-trained|Fine-tuned) CKA(Pre-trained, Fine-tuned) .91 0.91 0.82 0.74 0.76 0.73 0.73 0.73 0.69 0.64 0.42 0.24 0.94 0.98 0.98 0.73 0.75 0.76 0.76 0.8 0.81 0.77 0.72 0.48 0.27 0.92 0.96 0.97 0.8 0.83 0.84 0.84 0.86 0.85 0.82 0.75 0.5 0.27 0.65 0.46 0.46 0.92 0.72 0.76 0.65 0.56 0.48 0.45 0.36 0.24 0.11 0.54 0.44 0.46 0.87 0.84 0.84 0.77 0.7 0.61 0.57 0.46 0.3 0.12 0.57 0.43 0.44 0.89 0.79 0.81 0.72 0.64 0.54 0.51 0.4 0.26 0.11 0.66 0.49 0.48 0.92 0.75 0.78 0.68 0.59 0.51 0.48 0.39 0.25 0.12 0.72 0.56 0.56 0.94 0.78 0.81 0.72 0.65 0.57 0.54 0.45 0.29 0.14 0.73 0.68 0.68 0.91 0.87 0.88 0.83 0.78 0.72 0.68 0.58 0.38 0.18 0.75 0.75 0.76 0.89 0.88 0.89 0.86 0.83 0.78 0.74 0.64 0.42 0.21 0.75 0.83 0.84 0.8 0.87 0.87 0.86 0.87 0.84 0.8 0.71 0.47 0.24 0.75 0.83 0.84 0.77 0.83 0.83 0.83 0.84 0.82 0.79 0.71 0.48 0.25 0.55 0.64 0.65 0.42 0.49 0.49 0.52 0.56 0.58 0.55 0.52 0.34 0.17", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ".87 0.87 0.94 0.78 0.79 0.75 0.7 0.67 0.64 0.49 0.28 0.15 0.91 0.98 0.97 0.91 0.87 0.88 0.85 0.85 0.83 0.79 0.61 0.35 0.16 0.85 0.96 0.95 0.89 0.9 0.9 0.88 0.88 0.86 0.81 0.62 0.36 0.16 0.83 0.73 0.71 0.92 0.81 0.82 0.79 0.71 0.65 0.59 0.45 0.26 0.11 0.65 0.68 0.66 0.81 0.86 0.85 0.83 0.79 0.72 0.65 0.49 0.28 0.1 0.67 0.67 0.64 0.82 0.83 0.83 0.81 0.76 0.69 0.62 0.47 0.27 0.1 0.8 0.72 0.7 0.9 0.82 0.83 0.8 0.74 0.67 0.62 0.47 0.27 
0.11 0.82 0.8 0.78 0.91 0.86 0.86 0.84 0.8 0.74 0.69 0.53 0.3 0.12 0.74 0.84 0.82 0.85 0.88 0.88 0.87 0.86 0.82 0.76 0.58 0.34 0.13 0.75 0.88 0.86 0.84 0.89 0.89 0.88 0.87 0.84 0.79 0.61 0.35 0", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: CKA values between the hidden state representations of pre-trained and finetuned GPT-2, the layer-wise performances and CKA/STIR for GLUE tasks. Matthews Correlation Coefficient (MCC) is used as the performance metric for the AX and CoLA datasets. For STS-B, it is Pearson Correlation Coeffficient (PCC) and for the remaining tasks, Accuracy is used.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Performance", "figure_data": "Perturbation BERT GPT-2 T5 BERT GPT-2 T5 BERT GPT-2 T5 BERT GPT-2 T5 BERT GPT-2 T5Drop nouns0.180.100.240.920.93 0.93 0.940.96 0.94 0.560.48 0.57 0.890.92 0.89Drop verbs0.050.240.060.950.95 0.95 0.980.99 0.96 0.930.92 0.89 0.970.96 0.96Drop first0.480.750.540.980.97 0.98 1.000.99 1.00 0.980.93 0.94 0.990.98 0.99Drop last0.340.450.321.000.99 1.00 1.001.00 1.00 0.840.83 0.83 0.950.96 0.95Swap text0.130.160.060.980.98 0.97 0.991.01 0.98 0.980.96 0.95 0.970.97 0.96Add text0.850.920.860.990.99 0.99 0.931.00 0.96 0.990.99 0.98 0.991.00 0.99Change char 0.140.290.290.840.86 0.84 0.430.97 0.65 0.580.52 0.57 0.880.95 0.94Bias0.950.960.921.001.01 1.00 1.001.00 1.00 0.990.99 0.99 1.001.01 1.00MNLI-m (Accuracy) QNLI (Accuracy)RTE (Accuracy)WNLI (Accuracy)Perturbation BERT GPT-2 T5 BERT GPT-2 T5 BERT GPT-2 T5 BERT GPT-2 T5Drop nouns0.830.85 0.83 0.820.87 0.82 0.841.01 0.89 1.001.00 1.05Drop verbs0.890.90 0.90 0.960.94 0.94 0.981.01 0.96 1.001.01 1.03Drop first0.940.94 0.95 0.970.98 0.97 0.951.00 1.00 1.000.99 1.01Drop last0.890.90 0.89 0.970.98 0.97 0.971.01 0.98 1.000.99 1.00Swap text0.940.95 0.94 0.970.97 0.97 0.980.97 0.97 1.001.01 1.00Add text0.950.95 0.95 0.991.00 0.99 1.000.99 0.97 1.001.03 1.03Change char 0.670.66 0.68 0.770.75 0.77 0.820.99 0.83 1.001.03 1.01Bias0.991.00 1.00 1.001.00 1.00 1.001.02 1.00 1.001.02 1.00", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison of robustness scores on various GLUE tasks for finetuned Transformer models under different types of perturbations. Highest robustness values per row are highlighted in bold. 
Top three perturbations (per column) with significant impact on model performance are underlined.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "2 T5 Drop nouns 0.97 0.62 0.78 0.39 0.94 0.62 0.27 0.19 0.01 0.02 0.29 0.21 0.79 0.64 0.64 0.39 0.79 0.63 Drop verbs 0.99 0.92 0.85 0.83 0.97 0.91 0.99 0.99 0.97 0.97 0.99 0.99 0.89 0.89 0.86 0.78 0.90 0.88 Drop first 1.01 0.99 1.00 0.97 0.95 0.98 0.85 0.83 0.74 0.69 0.87 0.83 1.00 0.94 0.99 0.91 1.00 0.94 Drop last 1.00 1.00 1.00 1.00 0.95 1.00 0.85 0.83 0.72 0.70 0.86 0.83 1.00 1.00 0.99 1.00 1.00 1.00 Swap text 1.00 0.99 0.99 0.97 0.94 0.98 0.99 1.00 0.98 1.00 1.00 1.01 0.91 0.98 0.93 0.96 0.91 0.98 Add text 0.94 0.95 0.93 0.90 0.90 0.95 0.85 0.91 0.81 0.82 0.88 0.89 0.97 0.97 0.93 0.92 0.97 0.97 Change char 0.82 0.59 0.61 0.39 0.79 0.60 0.68 0.69 0.56 0.56 0.71 0.70 0.85 0.69 0.69 0.49 0.86 0.69 Bias 0.99 0.98 0.99 0.97 0.94 0.98 0.93 0.99 0.91 0.99 0.93 0.99 0.96 1.00 0.97 1.00 0.96 1.00", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Robustness scores for finetuned GPT-2 and T5 under different types of perturbations. Highest robustness values per row are shown in bold. Top three perturbations (per column) with most impact on ROUGE are underlined.", "figure_data": "", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, and Bo Li. 2021. Adversarial glue: A multitask benchmark for robustness evaluation of language models. arXiv preprint arXiv:2111.02840.", "figure_data": "Alex Warstadt, Amanpreet Singh, and Samuel R Bow-man. 2019. Neural network acceptability judgments.Transactions of the Association for ComputationalLinguistics, 7:625-641.Adina Williams, Nikita Nangia, and Samuel R Bow-man. 2017. A broad-coverage challenge corpus forsentence understanding through inference. arXivpreprint arXiv:1704.05426.Thomas Wolf, Lysandre Debut, Victor Sanh, JulienChaumond, Clement Delangue, Anthony Moi, Pier-ric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz,et al. 2020. Transformers: State-of-the-art naturallanguage processing. In Proceedings of the 2020 con-ference on empirical methods in natural languageprocessing: system demonstrations, pages 38-45.John M Wu, Yonatan Belinkov, Hassan Sajjad, NadirDurrani, Fahim Dalvi, and James Glass. 2020. Simi-larity analysis of contextual word representation mod-els. arXiv preprint arXiv:2005.01172.", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" } ]
Kalyan Pavan; Reddy Neerudu; Subba Reddy Oota; Mounika Marreddy; Venkateswara Rao Kagita; Manish Gupta
[ { "authors": "Samira Abnar; Lisa Beinborn; Rochelle Choenni; Willem Zuidema", "journal": "", "ref_id": "b0", "title": "Blackbox meets blackbox: Representational similarity and stability analysis of neural language models and brains", "year": "2019" }, { "authors": "Daniel Cer; Mona Diab; Eneko Agirre; Inigo Lopez-Gazpio; Lucia Specia", "journal": "", "ref_id": "b1", "title": "Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation", "year": "2017" }, { "authors": "Ido Dagan; Oren Glickman; Bernardo Magnini", "journal": "", "ref_id": "b2", "title": "The pascal recognizing textual entailment challenge", "year": "2005" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b3", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Bill Dolan; Chris Brockett", "journal": "IWP", "ref_id": "b4", "title": "Automatically constructing a corpus of sentential paraphrases", "year": "2005" }, { "authors": "Siddhant Garg; Goutham Ramakrishnan", "journal": "", "ref_id": "b5", "title": "Bae: Bert-based adversarial examples for text classification", "year": "2020" }, { "authors": "Ruidan He; Linlin Liu; Hai Ye; Qingyu Tan; Bosheng Ding; Liying Cheng; Jia-Wei Low; Lidong Bing; Luo Si", "journal": "", "ref_id": "b6", "title": "On the effectiveness of adapterbased tuning for pretrained language model adaptation", "year": "2021" }, { "authors": "John Hewitt; Christopher D Manning", "journal": "", "ref_id": "b7", "title": "A structural probe for finding syntax in word representations", "year": "2019" }, { "authors": "Shankar Iyer; Jeff Bilmes; Jordan Boyd-Graber", "journal": "", "ref_id": "b8", "title": "First quora dataset release: Question pairs", "year": "2017" }, { "authors": "Di Jin; Zhijing Jin; Joey Tianyi Zhou; Peter Szolovits", "journal": "", "ref_id": "b9", "title": "Is bert really robust? 
a strong baseline for natural language attack on text classification and entailment", "year": "2020" }, { "authors": "Simon Kornblith; Mohammad Norouzi; Honglak Lee; Geoffrey Hinton", "journal": "", "ref_id": "b10", "title": "Similarity of neural network representations revisited", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b11", "title": "", "year": "" }, { "authors": "Nikolaus Kriegeskorte; Marieke Mur; Peter A Bandettini", "journal": "Frontiers in systems neuroscience", "ref_id": "b12", "title": "Representational similarity analysisconnecting the branches of systems neuroscience", "year": "2008" }, { "authors": "Aarre Laakso; Garrison Cottrell", "journal": "Philosophical psychology", "ref_id": "b13", "title": "Content and cluster analysis: assessing representational similarity in neural systems", "year": "2000" }, { "authors": "Hector Levesque; Ernest Davis; Leora Morgenstern", "journal": "", "ref_id": "b14", "title": "The winograd schema challenge", "year": "2012" }, { "authors": "Linyang Li; Ruotian Ma; Qipeng Guo; Xiangyang Xue; Xipeng Qiu", "journal": "", "ref_id": "b15", "title": "Bert-attack: Adversarial attack against bert using bert", "year": "2020" }, { "authors": "Wangchunshu Bill Yuchen Lin; Ming Zhou; Pei Shen; Chandra Zhou; Yejin Bhagavatula; Xiang Choi; Ren", "journal": "", "ref_id": "b16", "title": "Commongen: A constrained text generation challenge for generative commonsense reasoning", "year": "2019" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b18", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Amil Merchant; Elahe Rahimtoroghi; Ellie Pavlick; Ian Tenney", "journal": "", "ref_id": "b19", "title": "What happens to bert embeddings during fine-tuning?", "year": "2020" }, { "authors": "Gabriele Merlin; Vedant Nanda; Ruchit Rawal; Mariya Toneva", "journal": "", "ref_id": "b20", "title": "What happens during finetuning of vision transformers: An invariance based investigation", "year": "2023" }, { "authors": "Ari Morcos; Maithra Raghu; Samy Bengio", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Insights on representational similarity in neural networks with canonical correlation", "year": "2018" }, { "authors": "Till Vedant Nanda; Camila Speicher; John P Kolling; Krishna Dickerson; Adrian Gummadi; Weller", "journal": "", "ref_id": "b22", "title": "Measuring representational robustness of neural networks through shared invariances", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b23", "title": "", "year": "" }, { "authors": "Shashi Narayan; Shay B Cohen; Mirella Lapata", "journal": "", "ref_id": "b24", "title": "Don't give me the details, just the summary! 
topic-aware convolutional neural networks for extreme summarization", "year": "2018" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b25", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b26", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Pranav Rajpurkar; Jian Zhang; Konstantin Lopyrev; Percy Liang", "journal": "", "ref_id": "b27", "title": "Squad: 100,000+ questions for machine comprehension of text", "year": "2016" }, { "authors": "Soumya Sanyal; Zeyi Liao; Xiang Ren", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Ro-bustLR: A diagnostic benchmark for evaluating logical robustness of deductive reasoners", "year": "2022" }, { "authors": "Madeline Chantry Schiappa; Shruti Vyas; Hamid Palangi; Vibhav Yogesh S Rawat; Vineet", "journal": "", "ref_id": "b29", "title": "Robustness analysis of video-language models against visual and language perturbations", "year": "2022" }, { "authors": "Richard Socher; Alex Perelygin; Jean Wu; Jason Chuang; Christopher D Manning; Andrew Y Ng; Christopher Potts", "journal": "", "ref_id": "b30", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "year": "2013" }, { "authors": "Lichao Sun; Kazuma Hashimoto; Wenpeng Yin; Akari Asai; Jia Li; Philip Yu; Caiming Xiong", "journal": "", "ref_id": "b31", "title": "Adv-bert: Bert is not robust on misspellings! generating nature adversarial samples on bert", "year": "2020" }, { "authors": "Ian Tenney; Dipanjan Das; Ellie Pavlick", "journal": "", "ref_id": "b32", "title": "Bert rediscovers the classical nlp pipeline", "year": "2019" }, { "authors": "Ian Tenney; Patrick Xia; Berlin Chen; Alex Wang; Adam Poliak; Thomas Mccoy; Najoung Kim; Benjamin Van Durme; Dipanjan Samuel R Bowman; Das", "journal": "", "ref_id": "b33", "title": "What do you learn from context? probing for sentence structure in contextualized word representations", "year": "2019" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b34", "title": "Attention is all you need", "year": "2017" }, { "authors": "Elena Voita; Rico Sennrich; Ivan Titov", "journal": "", "ref_id": "b35", "title": "The bottom-up evolution of representations in the transformer: A study with machine translation and language modeling objectives", "year": "2019" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel R Bowman", "journal": "", "ref_id": "b36", "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "year": "2018" } ]
[ { "formula_coordinates": [ 2, 390.05, 647.89, 89.74, 18.54 ], "formula_id": "formula_0", "formula_text": "HSIC(K,L) √ HSIC(K,K)HSIC(L,L)" }, { "formula_coordinates": [ 3, 72.06, 458.2, 217.07, 26.31 ], "formula_id": "formula_1", "formula_text": "2 |m 1 , X) = 1 k X ′ CKA(m 2 (X), m 2 (X ′ ))" }, { "formula_coordinates": [ 5, 331.14, 696.12, 164.86, 14.64 ], "formula_id": "formula_2", "formula_text": "Metric ROUGE-1 ROUGE-2 ROUGE-L Task GPT-2 T5 GPT-2 T5 GPT-" }, { "formula_coordinates": [ 8, 97.98, 72.3, 397.63, 22.73 ], "formula_id": "formula_3", "formula_text": "Text Summarization Free-form Text Generation Question Generation ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L Perturbation GPT-2 T5 GPT-2 T5 GPT-2 T5 GPT-2 T5 GPT-2 T5 GPT-2 T5 GPT-2 T5 GPT-2 T5 GPT-" }, { "formula_coordinates": [ 11, 319.16, 621.83, 202.64, 42.06 ], "formula_id": "formula_4", "formula_text": "H = I n -1 n 11 T . • Return CKA(X, Y ) = HSIC(K,L) √ HSIC(K,K)HSIC(L,L)" }, { "formula_coordinates": [ 12, 111.45, 133.34, 136.6, 11.62 ], "formula_id": "formula_5", "formula_text": "argmin xs ||m 1 (x) -m 2 (x s )|| 2" }, { "formula_coordinates": [ 12, 73.92, 301.14, 212.15, 29.79 ], "formula_id": "formula_6", "formula_text": "STIR(m 2 |m 1 , X, S r ) = 1 k X ′ S r (m 2 (X), m 2 (X ′ ))" } ]
10.18653/v1/2021.eacl-demos.36
2023-11-28
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b9", "b13", "b2", "b0", "b1", "b8", "b8", "b12", "b22", "b28", "b5", "b8" ], "table_ref": [], "text": "Comparative reasoning constitutes a fundamental cognitive ability that plays a crucial role in decisionmaking. It refers to comparing objects, concepts, or entities to draw conclusions or make informed decisions. For example, consumers often compare products on their features such as price, quality, and user reviews before placing an order. Policymakers weigh the advantages and disadvantages of different policy proposals to address pressing issues. Regarding textual documents, comparative reasoning is commonly needed in identifying differences between research studies, contrasting news articles from different sources, or synthesizing arguments of opposing viewpoints in a debate.\nRecent research has developed models for a few NLP tasks related to comparing texts, including identifying comparative sentences (Jindal and Liu, 2006), mining comparable entities (Li et al., 2011), identifying comparative aspects from a set of questions (Bondarenko et al., 2022;Beloucif et al., 2022), extracting comparative summaries (Bista et al., 2019), and summarizing different opinions (Iso et al., 2022). Yet, the data collection for these tasks relies on expensive and time-consuming manual annotation. As a result, low-resource scenarios are common when it comes to new comparative tasks (Iso et al., 2022). Moreover, the task-specific design of such models limits their general comparative reasoning abilities. Meanwhile, pre-trained language models (PLMs) such as BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) exhibit generalizability on several NLP tasks. However, existing pre-training methods such as masked language modeling and span in-filling fail to empower language models (LMs) with strong comparative reasoning abilities due to the lack of explicit training on comparisons.\nTo address these challenges, we propose a novel pre-training framework to enhance the comparative reasoning abilities of LMs. Specifically, it trains LMs to capture the comparison information between entities from paired documents. Our approach pilots around a scalable, labor-free data collection method that gathers documents as entity descriptions and a wealth of facts for entity comparison by combining structured (i.e., Wikidata) and unstructured data (i.e., news and Wikipedia). We represent these comparisons of facts as quintuples, which consist of a pair of entities and the corresponding values of their shared property. To empower LMs with comparative reasoning abilities on such data, given two comparable entities and their corresponding descriptive documents, we design three novel pre-training tasks including the generation of comparative answers, question-answer pairs, and summaries. Pre-training data of these tasks are obtained through automatic textualization of fac-\nThe show, with a book by the screenwriter Diablo Cody (\"Juno\") and staging by director Diane Paulus (\"Waitress\"), takes on the good work we are always asking new musicals to do: the work of singing about real things. tual quintuples, so as to prevent expensive manual annotation. Subsequently, the pre-training tasks are uniformly formatted with natural language prompts to perform multi-task pre-training on the LMs. 
To our best knowledge, this work is the first to pretrain LMs for comparative reasoning.\nTo comprehensively evaluate the comparative reasoning abilities of LMs, we introduce a new benchmark with a suite of comparative reasoning tasks. It contains: (1) comparative question answering (QA), sourced from subsets of HotpotQA and 2WikiQA datasets (Yang et al., 2018;Ho et al., 2020); (2) comparative question generation (QG), including HotpotQG and 2WikiQG which are converted from the QA datasets; (3) comparative summarization, including the Diffen dataset that we crawled and the existing CocoTrip dataset (Iso et al., 2022).\nWith this benchmark, we conduct extensive experiments with vanilla PLMs (i.e., BART and T5) and their counterparts trained by our proposed framework. Results demonstrate a notable improvement in the performance of these PLMs on different comparative reasoning scenarios, especially under low-resource settings. Specifically, under the fewshot setting, the BART model pre-trained with our framework outperforms the vanilla BART by an average of 6.17 points on all datasets. Under the zero-shot setting, the improvement becomes as high as 13.99 points on average. These results highlight the effectiveness of our pre-training framework which empowers LMs with impressive abilities of comparative reasoning even when zero or few examples are available. Besides, we analyze the effect of the pre-training data size on the model performance, and provide a case study to better understand the benefits of our pre-training.\nOur contributions are summarized as follows:\n• We propose a scalable method of collecting and designing training data for entity comparison, using both structured and unstructured data sources that are publicly accessible. • We present a novel framework for pre-training LMs to enhance their comparative reasoning abilities on multiple related objectives. • We provide the first benchmark for entity comparison over texts, serving as a foundation for future research in this topic.\n2 Related Work" }, { "figure_ref": [], "heading": "Comparative Reasoning", "publication_ref": [ "b9", "b13", "b0", "b3", "b0", "b11", "b8" ], "table_ref": [], "text": "The academic landscape of comparative reasoning tasks has seen a significant progression. Early research primarily focused on mining explicit comparative information from massive corpora, such as identifying comparative sentences (Jindal and Liu, 2006), extracting comparable entities (Li et al., 2011), and classifying components of comparison (Beloucif et al., 2022). Recent work focused more on text generation tasks such as generating arguments to answer comparative questions (Chekalina et al., 2021), generating comparable questions from news (Beloucif et al., 2022), and summarizing comparative opinions (Lerman and McDonald, 2009;Iso et al., 2022). The existing techniques were designed for specific tasks and could not generalize across all types of comparative reasoning tasks. Moreover, they suffered from the scarcity of labelled data in low-resource settings. Our approach aims to address these two challenges." 
}, { "figure_ref": [], "heading": "Language Model Pre-training", "publication_ref": [ "b26", "b29", "b10", "b15", "b7", "b7", "b27", "b21", "b6", "b6", "b18", "b25" ], "table_ref": [], "text": "It is worth exploring to use both structured and unstructured data in language model pre-training.\nEarly work proposed to fuse knowledge graphs and textual information by encoding entities or nodes as a part of the input (Zhang et al., 2019;Wang et al., 2021;Yu et al., 2022;Ke et al., 2021;Liu et al., 2021;Hu et al., 2022). For example, Hu et al. (2022) integrated graph-based knowledge augmented modules to bring structured knowledge into generative LMs. Another branch of work incorporated entity information (Xiong et al., 2020;Zhang et al., 2022) or relational information (Qin et al., 2021;Hu et al., 2021) without modifying the structure of the LM. While these pre-trained models delivered encouraging outcomes across a multitude of downstream tasks, they were not tailored for the needs of comparative reasoning. A novel design of pre-training objectives is necessary, which has not been inherently presented in these models. Regarding the collection of pre-training data, RGPT-QA (Hu et al., 2021) combined Wikidata and Wikipedia to generate synthetic QA pairs for pre-training. However, such a set of pre-training data only comprised statements of individual entities, so the trained model is not effective for multihop comparative questions. MQA-QG (Pan et al., 2021) and MuSiQue (Trivedi et al., 2022) generate synthetic data for unsupervised multi-hop QA, but they did not consider other comparative tasks." }, { "figure_ref": [], "heading": "Pre-training Framework", "publication_ref": [], "table_ref": [], "text": "To enhance the ability of LMs in comparative reasoning, we introduce a novel framework for pretraining LMs on a collected corpus of comparative entities. Specifically, LMs are given a pair of documents, each describing an entity, and are trained to generate target sequences which require comparison between the entities. We consider three types of target sequences: an answer to a comparative question, a question-answer pair that requires comparative reasoning, and a comparative summary of entities. Correspondingly, we design three text-totext pre-training tasks that require the LMs to simultaneously attend to both documents and extract information for pairwise comparison. This framework enables them to handle various downstream scenarios that require comparative reasoning.\nTo collect data for large-scale pre-training, we extract comparable entity pairs with their properties by combining structured and unstructured data. The extracted data are first formulated as quintuples (elaborated in §3.1 to show their comparative nature), which are later used for text-to-text pretraining." }, { "figure_ref": [], "heading": "Notations", "publication_ref": [], "table_ref": [], "text": "A Wikidata statement is denoted as (e, p, v). Here, e signifies the entity, which is the subject of the statement. p refers to the property, describing the aspect of the entity that the statement addresses. v represents the value, which is the object entity or specific value associated with the property. We define a quintuple as a pair of Wikidata statements of two comparable entities on a common property. Formally, a quintuple is represented as (e 1 , e 2 , p, v 1 , v 2 ), where p is a common property of e 1 and e 2 , and v 1 and v 2 are the corresponding values. 
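As a concrete rendering of this notation, a quintuple and its paired documents could be represented as below; the field values mirror the screenwriter/director example used later in this section and are only illustrative.

```python
from dataclasses import dataclass

@dataclass
class Quintuple:
    """(e1, e2, p, v1, v2): two comparable entities, a shared property, and their values."""
    e1: str
    e2: str
    p: str
    v1: str
    v2: str

@dataclass
class ComparativeExample:
    """A quintuple plus the two descriptive documents used as model input."""
    quintuple: Quintuple
    d1: str  # descriptive document for e1 (e.g. a Wikipedia segment)
    d2: str  # descriptive document for e2

# illustrative instance
q = Quintuple(e1="Diablo Cody", e2="Diane Paulus", p="work",
              v1="screenwriter", v2="director")
```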
Such quintuples enable the comparison on shared properties, reflecting the similarity or difference between the corresponding property values or tail entities.\nIn our framework, the input sequence constitutes two documents D 1 and D 2 on e 1 and e 2 respectively. The target sequences are textualized forms of the quintuple, such as question-answer pairs (denoted by (Q, A)), and summaries (denoted by S)." }, { "figure_ref": [], "heading": "Pre-training Data Preparation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Data Sources", "publication_ref": [], "table_ref": [], "text": "Structured data is a reliable source for obtaining entity information. We use Wikidata, a collaborative knowledge base that stores data in a structured format. Wikidata contains numerous statements that describe entities, where each statement includes a property of the entity and a value. Each entity and property is associated with a set of aliases.\nUnstructured data including news sources (i.e., Gigawords, CC-News) and encyclopedia (i.e., Wikipeda) offer an abundance of information for determining the comparability of entities and relevant properties. For example, a sentence in a piece of news from New York Times like \"The show, with a book by the screenwriter Diablo Cody ('Juno') and staging by director Diane Paulus ('Waitress'), takes on the good work ...,\" indicates that Diablo Cody and Diane Paulus can be compared on the property of work (values: screenwriter vs. director). Besides, Wikipedia contains a vast collection of articles pertaining to a large set of entities. A Wikidata entity uniquely corresponds to a Wikipedia article whose title matches the entity's surface form." }, { "figure_ref": [], "heading": "Quintuple Collection", "publication_ref": [], "table_ref": [], "text": "In this section, we elaborate the process of collecting quintuples by combining structured data (i.e., Wikidata) and unstructured data (i.e., news and Wikipedia). Intuitively, when a pair of statements concerning the same property of related entities cooccur in a textual context, there is a high probability that these statements are indeed comparable.\nTo extract this comparability information, we first sample a paragraph from the news or Wikipedia. Then, we link Wikidata statements to the sentences in the paragraph by identifying the mentions of entity e, property p, and value v using string matching. Specifically, a statement (e, p, v) is linked to a sentence if the aliases of e, p, and v all appear in the sentence. Next, we pair (e 1 , p 1 , v 1 ) and (e 2 , p 2 , v 2 ) if they satisfy the following criteria:\n1. e 1 and e 2 belong to the same category, e.g., both have the value human for property instance of. This ensures the entities are analogous to each other. 2. p 1 = p 2 . This follows the common practice that comparisons are usually made on a shared property between two entities. 3. The sentences linked to (e 1 , p 1 , v 1 ) and (e 2 , p 2 , v 2 ) co-occur in a from news or Wikipedia. Being mentioned together indicates implicit entity comparison. We denote such a statement pair as a quintuple (e 1 , e 2 , p, v 1 , v 2 ). By following the above criteria, such quintuples store necessary information for comparing entities e 1 and e 2 , which plays a critical role in our pre-training task design." 
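To make the pairing criteria above concrete, a minimal sketch of alias-based statement linking and quintuple construction is given below. It is an illustration only: the Statement container, the aliases dictionary, and the category_of lookup (e.g., mapping an entity to its instance of value) are our own assumptions for exposition, not the paper's released pipeline.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Statement:
    entity: str     # Wikidata subject entity, e.g. "John Lewis"
    prop: str       # property, e.g. "member of political party"
    value: str      # object entity or literal value, e.g. "Democratic Party"
    aliases: dict   # {"entity": [...], "prop": [...], "value": [...]} from Wikidata

def mentioned(sentence: str, aliases: list) -> bool:
    # Plain string matching against any alias, as described above.
    s = sentence.lower()
    return any(a.lower() in s for a in aliases)

def link_statements(sentences, statements):
    """A statement (e, p, v) is linked to a sentence iff aliases of e, p and v all appear."""
    return [
        (sent, st)
        for sent in sentences
        for st in statements
        if all(mentioned(sent, st.aliases[k]) for k in ("entity", "prop", "value"))
    ]

def collect_quintuples(paragraph_sentences, statements, category_of):
    """Pair statements linked within one paragraph into (e1, e2, p, v1, v2) quintuples."""
    links = link_statements(paragraph_sentences, statements)   # co-occurrence (criterion 3)
    quintuples = set()
    for (_, s1), (_, s2) in combinations(links, 2):
        if (s1.entity != s2.entity
                and category_of(s1.entity) == category_of(s2.entity)   # criterion 1
                and s1.prop == s2.prop):                                # criterion 2
            quintuples.add((s1.entity, s2.entity, s1.prop, s1.value, s2.value))
    return quintuples
```

Because both linked statements come from the sentences of a single sampled paragraph, the co-occurrence requirement is enforced implicitly by construction.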
}, { "figure_ref": [], "heading": "Quintuple Textualization", "publication_ref": [ "b23", "b16" ], "table_ref": [ "tab_0", "tab_2" ], "text": "In order to empower the LM with the ability of comparative reasoning in various language generation scenarios, we pre-train the LM in a text-to-text manner. To achieve this, the first step is to represent the comparative information inherent in the quintuples in a textual form.\nTo begin with, we extract descriptive documents D 1 and D 2 for each pair of entities e 1 and e 2 as contexts in our pre-training. First, we find Wikipedia articles of e 1 and e 2 by the links from Wikidata. To ensure the information within the quintuple can be inferred from the context, we filter the articles based on whether any sentence within the article can be linked to statements (e 1 , p, v 1 ) and (e 2 , p, v 2 ). We link the statements based on two heuristics: (1) Within an article pertaining to entity e, sentences are highly probable to discuss e as their subject; (2) If a sentence in a Wikipedia article of e mentions both e and v from a Wikidata statement (e, p, v) , then the sentence is likely to describe the fact of (e, p, v). Thus, we link the statements to sentences whenever (e, v) or (p, v) can be matched. To assess the linking quality, we randomly sampled 100 statement-sentence links and performed manual inspection. The linking accuracy exceeds 95%, indicating the Wikidata statements are effectively linked to the sentences. Finally, due to length limit of LMs, we split the original article into 10-sentence segments, and use the segment that contains the linked sentence as the descriptive document D 1 for e 1 (or D 2 for e 2 ). Next, we convert the comparison knowledge encapsulated within the quintuples into comparative texts, namely, QA pairs and summaries. To synthesize comparative QA pairs (Q, A), we design a diverse set of templates shown in Table 1. To generate synthetic comparative summaries S, we utilize an off-the-shelf data-to-text model (Ribeiro et al., 2021) fine-tuned on DART (Nan et al., 2021) dataset. This allows us to transform quintuples into concise declarative sentences. An example of a textualized quintuple is provided in Table 3.\n1 } [SEP] {D 2 } → Q; A Comparative Summary Generation Generate a comparative summary. Context: {D 1 } [SEP] {D 2 } → S Text Infilling {corrupted D 1 } [SEP] {corrupted D 2 } → {D 1 } [SEP] {D 2 }" }, { "figure_ref": [], "heading": "Pre-training Tasks and Objectives", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "In this section, we describe three comparative pretraining tasks used to train LMs. They are all text generation tasks, which align seamlessly with architectures of widely used language models such as BART and T5. We unify them with task-specific prompts in a multi-task setting, shown in Table 2." }, { "figure_ref": [], "heading": "Comparative Answer Generation", "publication_ref": [], "table_ref": [], "text": "To train the LM with the ability to answer comparative questions, given a comparative question, we concatenate it with the documents D 1 , D 2 as input, and then train the model to generate the corresponding answer. This task not only requires the model to find relevant contexts to the question in each single document, more importantly, it requires the interaction between both documents to make the comparison. We define the loss function as:\nL QA = - (Q i ,A i )∈T log P (A i |Q i , D 1 , D 2 )\nin which T is a set of QA pairs derived from the templates. 
and P (•) is the predicted probability." }, { "figure_ref": [], "heading": "Comparative QA Pairs Generation", "publication_ref": [], "table_ref": [], "text": "Given two documents, the model is required to generate comparative questions and answers. With this objective, the model learns to attend to both documents, identify the comparable properties of two entities and ask meaningful questions:\nL QAG = - (Q i ,A i )∈T log P (Q i , A i |D 1 , D 2 )" }, { "figure_ref": [], "heading": "Comparative Summary Generation", "publication_ref": [], "table_ref": [], "text": "Comparative summarization aims at generating summaries that highlight the similarities or differences between two entities given their descriptions. Given two documents, the model is tasked with generating short comparative summaries that represent the comparable statements:\nL SUM = - S∈S log P (S|D 1 , D 2 )\nwhere S is the set of summaries from quintuple textualization." }, { "figure_ref": [], "heading": "Prompt-based Multi-task Training", "publication_ref": [ "b22", "b24", "b12" ], "table_ref": [ "tab_1" ], "text": "Inspired by the prompt-based multi-task training methods utilized by previous text-to-text transformers (Raffel et al., 2020;Sanh et al., 2022), we jointly train the aforementioned pre-training tasks by unifying their input sequences with natural language prompts. The detailed format of source and target sequences are shown in Table 2. The model is jointly optimized for all tasks, which encourages the model to learn generalizable representations that are beneficial across tasks. To preserve its general language modeling ability, we employ the proposed pre-training tasks along with the text infilling (TI) task, where the model is required to reconstruct the documents corrupted with randomly masked spans, as described in Lewis et al. (2020). We denote the loss function for text infilling as as L TI . Hence, the overall objective is as follows:\nL = L QA + L QAG + L SUM + L TI .\nWe denote the proposed multi-task pre-training for comparison as +CMP. To analyze the effects of each pre-training task, we define single-task variants: +CMP QA for comparative answer generation, +CMP QAG for comparative QA pairs generation, and +CMP SUM for summary generation." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets and Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "To evaluate our proposed method, we consider downstream tasks involving comparative reasoning, including comparative question answering (QA), comparative question generation (QG) and comparative summarization. In this section, we introduce the downstream datasets and evaluation metrics." }, { "figure_ref": [], "heading": "Comparative Question Answering", "publication_ref": [ "b4", "b28", "b5" ], "table_ref": [], "text": "Comparative QA requires the comparison of two entities on their shared properties. Since our focus on comparison over documents instead of knowledge retrieval, we do not include distractor passages but directly use the gold evidence passages as the context for question answering. For evaluation, we calculate the exact match (EM) score between the predicted answer and the ground-truth answer, after necessary normalization (Chen et al., 2017). Besides, unigram F-1 scores are also calculated as a complementary metric.\nHotpotQA and 2WikiQA. 
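As a concrete illustration of how the multi-task objective described above can be wired up, the sketch below combines the four losses in a single sequence-to-sequence step. This is only a sketch under assumptions: BART-base and the batch-of-one formatting are stand-ins for exposition, the prompt strings follow Table 2, and the corrupted documents for text infilling are assumed to come from a separate span-masking routine.

```python
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def seq2seq_loss(source: str, target: str):
    """Token-level cross-entropy of the target given the prompted source."""
    enc = tokenizer(source, truncation=True, max_length=1024, return_tensors="pt")
    labels = tokenizer(target, truncation=True, max_length=128, return_tensors="pt").input_ids
    return model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=labels).loss

def multitask_loss(d1, d2, question, answer, summary, corrupted_d1, corrupted_d2):
    """L = L_QA + L_QAG + L_SUM + L_TI for one textualized quintuple (prompts follow Table 2)."""
    ctx = f"{d1} [SEP] {d2}"
    l_qa = seq2seq_loss(
        f"Answer the comparative question. Question: {question} Context: {ctx}", answer)
    l_qag = seq2seq_loss(
        f"Generate a comparative question-answer pair. Context: {ctx}", f"{question} ; {answer}")
    l_sum = seq2seq_loss(f"Generate a comparative summary. Context: {ctx}", summary)
    l_ti = seq2seq_loss(f"{corrupted_d1} [SEP] {corrupted_d2}", ctx)  # text infilling
    return l_qa + l_qag + l_sum + l_ti  # backward() and an optimizer step would follow
```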
HotpotQA (Yang et al., 2018) and 2WikiMultihopQA (2WikiQA) (Ho et al., 2020) are factual question answering datasets collected from English Wikipedia. These datasets require multi-hop reasoning on different entities before reaching the correct answer. To focus on comparative ability, we obtain the subset of comparative questions by their question type annotations in the original dataset. As a result, the train and validation set of HotpotQA consist of 17,456 and 1,487 instances, respectively. Likewise, 2WikiQA comprises 51,693 and 3,040 instances in training and validation set, respectively. We report results in validation sets." }, { "figure_ref": [], "heading": "Comparative Question Generation", "publication_ref": [ "b19", "b14" ], "table_ref": [], "text": "Comparative QG aims at generating questions that draw comparisons between the shared properties of two entities, given their textual descriptions. We convert the aforementioned QA datasets to QG by using the evidence passages as input and the comparative question as output. We report the results on validation sets using overall BLEU (Papineni et al., 2002) and ROUGE-L (Lin, 2004) metrics." }, { "figure_ref": [], "heading": "Comparative Summarization", "publication_ref": [ "b30", "b8" ], "table_ref": [], "text": "Comparative summarization aims at generating summaries that highlight the similarities or differences between two entities given their descriptions. Following the convention in text summarization (Zhang et al., 2020), we evaluate the generated summaries with ROUGE-2 and ROUGE-L scores.\nCocoTrip. We collect data from the common opinion summarization setting of the CocoTrip dataset (Iso et al., 2022), which involves summarizing the shared opinions from two sets of reviews about two hotels. The dataset consists of 20, 10, and 18 instances for training, validation and test set, respectively. Since the test data is available, we report the results on the test set. We concatenate both reviews as the input context. Diffen. To address the lack of available datasets for the comparative summarization of two entities, we create a new dataset from Diffen.com, a website recognized for offering high-quality, humanauthored comparisons between different people or objects to help people make informed decisions. Comparison articles on Diffen.com typically include a brief introduction summarizing the similarities and differences. We manually collect these introductory paragraphs as comparative summaries. To gather input sources, we obtain Wikipedia articles for each entity. The resulting dataset comprises 20 instances for training and 100 instances for vali- dation. The task aims at generating a comparative summary based on the given text descriptions of two entities. The input sequence consists of concatenated entity descriptions, with each description truncated to the first 512 tokens." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b12", "b22" ], "table_ref": [], "text": "As a pilot study on pre-training for comparative reasoning, we adopt the pre-trained BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) as baselines. Models further pre-trained on our comparative objectives are denoted as BART+CMP and T5+CMP, respectively. See training details in A.2. We also conduct zero-shot experiments with Chat-GPT (gpt3.5-turbo), where the details are in 3.2.3. Since ChatGPT is pre-trained on much larger-scale data, and the downstream datasets might have leaked to its training data, it is not a comparable baseline. 
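For reference, the answer normalization and QA metrics mentioned above (exact match after normalization and unigram F1) are typically computed as in the sketch below; this mirrors the common SQuAD/KILT-style implementation rather than reproducing the paper's exact evaluation scripts.

```python
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, and collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> float:
    return float(normalize_answer(prediction) == normalize_answer(gold))

def unigram_f1(prediction: str, gold: str) -> float:
    pred_toks = normalize_answer(prediction).split()
    gold_toks = normalize_answer(gold).split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

# e.g. exact_match("The Democratic Party", "democratic party") -> 1.0
```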
We provide ChatGPT (OpenAI, 2021) as a reference to the performance of one of the most advanced large language models. To test the comparative reasoning ability of models under low-resource scenarios, we compare the models in few-shot and zero-shot settings in addition to the conventional full-data fine-tuning. In the few-shot setting, we randomly select 100 instances from the training set. However, given the limited number of training instances available in CocoTrip and Diffen (only 20 instances each), we merge the full-data and few-shot settings for these two datasets. See training details in Appendix A.3." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Effects of Comparative Pre-training", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "In the comprehensive evaluation across the aforementioned six datasets, we compare LMs trained with our method against the vanilla BART and T5. Main results are listed in Table 4.\nWhen abundant data are available, both our proposed models, BART+CMP and T5+CMP, achieve competitive performance, improving their corresponding baselines by ~1 point on average. However, in low-resource scenarios, the superiority of our method over the baselines becomes clearly evident. Specifically, in the few-shot setting, our BART+CMP achieves an average score of 39.09, a relative improvement of +19.8% over BART's score of 32.62. Similarly, our T5+CMP achieves an average score of 35.55, a relative improvement of +14.2% over T5. Among the three tasks, our models show the most significant improvement on comparative QA, demonstrating the effectiveness of our synthetic QA pre-training. In the zero-shot setting, BART+CMP and T5+CMP also consistently surpass their baselines by large margins. For instance, BART+CMP achieves an average score of 24.15, which outperforms BART by +13.99 (+137% relatively). Likewise, T5+CMP achieves an average score of 28.57, outperforming T5 (16.28) by +12.29. [Table 5 fragment: 45.25 52.46 55.96 57.22 11.76 39.51 25.73 52.43 24.91 45.36 12.36 25.92 37.41 | + CMP QAG 32.21 38.17 45.13 46.63 12.57 39.29 26.89 52.74 27.22 42.55 12.28 26.85 33.54 | + CMP SUM 32.41 37.81 32.34 34.33 12.50 39.40 33.04 59.29 30.54 47.84 12.10 26.69]" }, { "figure_ref": [], "heading": "Effects of Pre-training Tasks", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "To further explore the benefits of multi-task pre-training, we compare the performance of our models pre-trained on any single task (i.e., QA, QAG or SUM) with the unified models pre-trained on all proposed tasks. Results are shown in Table 5.\nWhen the model is pre-trained on a single task, we observe a significant improvement in performance on the downstream task that closely resembles the pre-training task. However, such models do not exhibit similar improvements on other tasks that are less similar in nature. For example, BART+CMP QA improves over BART by a large margin on few-shot comparative QA (+11.43 points in F1 on HotpotQA), but performs at a similar or lower level than BART on QG (+0.38 points in BLEU on HotpotQG and -4.29 points in BLEU on 2WikiQG). On the other hand, the unified model BART+CMP exhibits substantial improvements across all downstream tasks and therefore achieves the best overall performance. The improvements brought by multi-task pre-training on each task are comparable to the gains achieved through the corresponding task-specific pre-training.
These results suggest that pre-training on a single task enhances the model's ability to transfer knowledge only to tasks with similar characteristics, while multi-task pre-training enables the model to learn more gen- eralized representations and to effectively transfer the shared knowledge across different tasks." }, { "figure_ref": [ "fig_2" ], "heading": "Effects of the Size of Pre-training Data", "publication_ref": [], "table_ref": [], "text": "In Figure 2, we plot the few-shot performance of BART+CMP on 2WikiQA according to the number of quintuples used in pre-training. We observe that when the number of quintuples increases on a logarithmic scale, the performance grows linearly. The analysis reveals that scaling the pre-training data benefits the downstream tasks, affirming the effectiveness of the proposed method for gathering large scale pre-training data. Further discussion on the effects of entity coverage is shown in 4.4.1." }, { "figure_ref": [], "heading": "Effects of Entity Coverage", "publication_ref": [], "table_ref": [], "text": "To study the effects of the " }, { "figure_ref": [], "heading": "Case Study", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "To intuitively show the comparative reasoning ability of our pre-trained model, we present an example of comparative summarization in Table 7. Given documents describing airsoft and paintball, models are expected to generate a summary comparing the commonalities and differences of these two games. However, without exhaustive fine-tuning, the generated summary of BART fails to describe the correct relationship between these two entities. On the contrary, after pre-trained on various comparative reasoning objectives, our model generates high quality comparative summaries based on the provided documents under the few-shot setting. The generated summary includes that both games are popular shooting sports while also comparing their differences in their equipment." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we presented a novel framework for pre-training language models for comparative reasoning. It obtained quintuples for entity comparison by combining structured and unstructured data, converted the quintuples into textual components, and employed them in three novel sequence-tosequence pre-training tasks. We demonstrated the effects of the pre-training tasks on six downstream datasets, especially in limited-resource scenarios.\nTo facilitate the assessment of models' capability of entity comparison over texts, we release a benchmark for future research." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "In our pre-training framework, we generate synthetic data with templates to collect comparative question-answer pairs, which may cause fluency issues on some synthetic questions. Such noise in the pre-training data might affect the downstream performance. Similarly, since the synthetic summaries were generated by an off-the-shelf data-totext model, the language of generated summaries can be rigid and lack of diversity and flexibility. Future work can adopt more advanced approaches to convert quintuples into more fluent and diverse texts for pre-training. Another limitation is that BART and T5 have a maximum input token limit of 1,024. 
When dealing with longer documents or complex comparative scenarios, this limitation may lead to truncation of relevant context, potentially affecting the model's performance. Future work can explore LMs that can handle longer texts." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by NSF IIS-2119531, IIS-2137396, IIS-2142827, IIS-2234058, CCF-1901059, and ONR N00014-22-1-2507. Wenhao Yu is also supported by a Bloomberg Data Science Ph.D. Fellowship." }, { "figure_ref": [], "heading": "A.3 Downstream Experimental Details", "publication_ref": [ "b20" ], "table_ref": [], "text": "For downstream experiments, we fine-tune the models with a batch size of 64, and search for learning rates among 1e-5, 3e-5, and 1e-4. For QA and QG, we set the max input length to 512 tokens and the max output length to 32. For summarization, we set the max input length to 1,024 tokens and the output length to 128 tokens. For comparative QA, we select the best checkpoints by the highest F1. For comparative QG, we select the best checkpoints by the highest BLEU. For comparative summarization, the best checkpoints are selected by the highest ROUGE-L. For evaluation metrics, we adopt the implementations of Exact Match, unigram F1, and ROUGE-L from KILT (Petroni et al., 2021), and the BLEU implemented by the Hugging Face evaluate library (v0.3.4). For all downstream datasets, we report the average scores of three runs with different random seeds." }, { "figure_ref": [], "heading": "A.4 Experimental Details of ChatGPT", "publication_ref": [], "table_ref": [], "text": "Since ChatGPT is a much larger model trained on much more data, and the downstream datasets might have leaked into its training data, we provide the result of ChatGPT only as a reference to the performance of one of the most advanced large language models. We prompt ChatGPT (gpt3.5-turbo) in the zero-shot setting. We modify the prompts from the ones we use with T5/BART and empirically choose the ones that work effectively, as shown in Table 10. " } ]
Comparative reasoning is a process of comparing objects, concepts, or entities to draw conclusions, which constitutes a fundamental cognitive ability. In this paper, we propose a novel framework to pre-train language models for enhancing their abilities of comparative reasoning over texts. While there have been approaches for NLP tasks that require comparative reasoning, they suffer from costly manual data labeling and limited generalizability to different tasks. Our approach introduces a novel method of collecting scalable data for text-based entity comparison, which leverages both structured and unstructured data. Moreover, we present a framework of pre-training language models via three novel objectives on comparative reasoning. Evaluation on downstream tasks including comparative question answering, question generation, and summarization shows that our pre-training framework significantly improves the comparative reasoning abilities of language models, especially under low-resource conditions. This work also releases the first integrated benchmark for comparative reasoning.
Pre-training Language Models for Comparative Reasoning
[ { "figure_caption": "Figure 1 :1Figure 1: The framework of pre-training LMs for comparative reasoning abilities. In Step 1, we collect quintuples for entity comparison by combining structured knowledge base (i.e., Wikidata) and unstructured text corpora (i.e., Gigawords, CC-News, Wikipedia). Details are in § 3.2.2. In Step 2, to obtain text-based pre-training data, we textualize the quintuples into synthetic QA pairs with a set of templates, and convert the quintuples into summaries with an off-the-shelf data-to-text model. We gather Wikipedia documents as text descriptions of entities. Details are in §3.2.3. In Step 3, we design novel pre-training tasks for the LMs. Details are described in §3.3.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Answer the comparative question. Question: {Q} Context: {D 1 } [SEP] {D 2 } → A Comparative QA Pairs Generation Generate a comparative question-answer pair. Context: {D", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Few-shot performance (measured by F1) of BART+CMP on 2WikiQA, when the model is pretrained on different number of quintuples.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "D 1 :1Airsoft is a team game in which participants eliminate opposing players by tagging them out of play with spherical plastic projectiles shot with mock air weapons called airsoft guns. ... (446 words left) D 2 : Paintball is a competitive team shooting sport in which players eliminate opponents from play by hitting them with spherical dye-filled gelatin capsules called paintballs that break upon impact. ... (472 words left) Gold: Airsoft is a popular combat simulation game where participants are eliminated when hit by pellets launched from guns that resemble real firearms. In paintball participants try to hit each other with paintballs launched from a special paintball marker/gun. While airsoft is ::::::cheaper :::: and :::::::: provides :: a ::::: more :::::::: realistic ::::::: warfare :::::::::experience, paintball is :::: more :::::::: popular, :::::: more ::::::::: organized :::and :::: has ::::: larger :::::: events. BART (R-L: 18.66, R-2: 4.39) Airsoft is a team shooting sport in which participants eliminate opponents by hitting them with airsoft guns. Airsoft guns are shaped like basketballs or baseball bats and are equipped with a series of round-shaped projectiles called paintballs. BART+CMP (R-L: 19.17, R-2: 8.62) Airsoft and Paintball are two of the most popular shooting sports of all time. Airsoft is a shooting sport that involves hitting opponents :::: with :::::: airsoft ::::: guns, while Paintball is a more aggressive game that :::: uses : a :::::: softer, :::::more :::::::::: aggressive, :::::::::: ball-shaped ::::::::: paintball.", "figure_data": "", "figure_id": "fig_3", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Synthetic QA templates. All indicates the templates are applied to all quintuples. The templates under If v 1 ̸ = v 2 : or If v 1 = v 2 : are applied to quintuples whose v 1 and v 2 are different or the same, respectively.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Task names and the format of source-target sequence format in each pre-training task. 
John Robert Lewis (February 21, 1940July 17, 2020) was an American statesman and civil rights activist who served in the United States House of Representatives for from 1987 until his death in 2020. He was the chairman of the Student Nonviolent Coordinating Committee (SNCC) from 1963 to 1966. ... While in the House, Lewis was one of the leaders of the Democratic Party, serving from 1991 ... D2 Henry Calvin Johnson Jr. (born October 2, 1954) is an American lawyer and politician serving as the U.S. representative for since 2007. He is a member of the Democratic Party. ...", "figure_data": "Quintuple: (John Lewis, Hank Johnson, member of po-litical party, Democratic Party, Democratic Party)D2:", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "A quintuple for comparison between John Lewis and Hank Johnson on their shared property member of political party. The example consist of the textualized data used in pre-training, including the entities' descriptive documents D 1 and D 2 , QA pairs (Q, A) synthesized with designed templates, and the synthetic summary S generated by a data-to-text model with two the two Wikidata statements as input.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Main results. Our pre-trained models denoted by +CMP, bring significant performance gain to BART and T5 in zero-shot (e.g., relatively +82% and +220% of F1 on HotpotQA) and few-shot (e.g., relatively +29% and +52% of F1 on 2WikiQA) settings across all tasks. In full-data settings that assume a huge number of labeled examples are available, our approach makes smaller improvements on the two models.", "figure_data": "Comparative QAComparative QGComparative SummarizationHotpotQA2WikiQAHotpotQG2WikiQGCocoTripDiffenEMF1EMF1 BLEU R-L BLEU R-LR-2 R-LR-2R-L AVGChatGPT 62.68 73.45 74.33 82.957.36 29.16 10.89 33.02 6.80 18.59 9.29 21.69 35.85BART 69.27 75.70 91.87 92.43 16.29 43.41 35.28 61.94 23.63 44.99 10.04 24.39 49.10Full-data+ CMP 69.26 75.43 91.81 92.30 17.18 43.66 35.82 62.13 27.60 47.90 12.11 26.69 50.16 T5 73.16 79.20 87.40 89.67 17.57 44.70 36.02 62.70 29.18 45.57 9.21 24.03 49.87 + CMP 72.69 78.83 88.75 91.08 17.26 44.65 36.12 63.18 30.48 49.19 8.12 23.04 50.28Few-shotBART 33.82 39.70 37.65 39.67 11.38 39.04 30.02 57.14 23.63 44.99 10.04 24.39 32.62 + CMP 44.31 52.15 57.58 58.49 12.75 39.33 30.29 56.28 27.60 47.90 12.11 26.69 39.09 T5 48.89 54.71 43.85 45.63 6.48 30.95 6.71 28.44 29.18 45.57 9.21 24.03 31.14 + CMP 50.50 58.29 56.51 58.33 8.18 33.88 12.12 37.95 30.48 49.19 8.12 23.04 35.55Zero-shotBART + CMP 31.47 39.04 40.55 42.47 0.00 11.93 0.00 19.30 T5 20.44 28.87 20.88 26.92 + CMP 44.25 52.62 54.34 56.301.70 18.53 6.86 29.13 1.21 18.70 7.24 28.993.45 20.34 4.09 18.32 6.32 17.90 10.16 9.21 32.23 6.11 24.43 8.02 20.22 24.15 2.38 18.53 8.94 25.28 5.61 17.65 16.28 5.83 32.31 8.63 28.19 5.72 18.43 28.57", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Few-shot and Zero-shot results of models with multi-task pre-training (denoted by +CMP) vs. single-task pre-training (denoted by +CMP QA , +CMP QAG , and +CMP SUM ).", "figure_data": "33.19", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "A test example of Diffen dataset. BART and BART+CMP refer to the model predictions under fewshot fine-tuning. BART+CMP generated the similarities and", "figure_data": "", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" } ]
Mengxia Yu; Zhihan Zhang; Wenhao Yu; Meng Jiang
[ { "authors": "Meriem Beloucif; Muhie Seid; Steffen Yimam; Chris Stahlhacke; ; Biemann; M Elvis Vs", "journal": "European Language Resources Association", "ref_id": "b0", "title": "Jackson: Who has more albums? classification and identification of elements in comparative questions", "year": "2022" }, { "authors": "Umanga Bista; Alexander Mathews; Minjeong Shin; Aditya Krishna Menon; Lexing Xie", "journal": "", "ref_id": "b1", "title": "Comparative document summarisation via classification", "year": "2019" }, { "authors": "Alexander Bondarenko; Yamen Ajjour; Valentin Dittmar; Niklas Homann; Pavel Braslavski; Matthias Hagen", "journal": "", "ref_id": "b2", "title": "Towards understanding and answering comparative questions", "year": "2022" }, { "authors": "Viktoriia Chekalina; Alexander Bondarenko; Chris Biemann; Meriem Beloucif; Varvara Logacheva; Alexander Panchenko", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Which is better for deep learning: Python or MATLAB? answering comparative questions in natural language", "year": "2021" }, { "authors": "Danqi Chen; Adam Fisch; Jason Weston; Antoine Bordes", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Reading wikipedia to answer opendomain questions", "year": "2017" }, { "authors": "Xanh Ho; Anh-Khoa Duong Nguyen; Saku Sugawara; Akiko Aizawa", "journal": "International Committee on Computational Linguistics", "ref_id": "b5", "title": "Constructing a multihop QA dataset for comprehensive evaluation of reasoning steps", "year": "2020" }, { "authors": "Ziniu Hu; Yizhou Sun; Kai-Wei Chang", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Relation-guided pre-training for open-domain question answering", "year": "2021" }, { "authors": "Ziniu Hu; Yichong Xu; Wenhao Yu; Shuohang Wang; Ziyi Yang; Chenguang Zhu; Kai-Wei Chang; Yizhou Sun", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Empowering language models with knowledge graph reasoning for open-domain question answering", "year": "2022" }, { "authors": "Hayate Iso; Xiaolan Wang; Stefanos Angelidis; Yoshihiko Suhara", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Comparative opinion summarization via collaborative decoding", "year": "2022" }, { "authors": "Nitin Jindal; Bing Liu", "journal": "", "ref_id": "b9", "title": "Identifying comparative sentences in text documents", "year": "2006" }, { "authors": "Pei Ke; Haozhe Ji; Yu Ran; Xin Cui; Liwei Wang; Linfeng Song; Xiaoyan Zhu; Minlie Huang", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Jointgt: Graph-text joint representation learning for text generation from knowledge graphs", "year": "2021-08-01" }, { "authors": "Kevin Lerman; Ryan Mcdonald", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Contrastive summarization: An experiment with consumer reviews", "year": "2009" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Shasha Li; Chin-Yew Lin; Young-In Song; Zhoujun Li", "journal": "IEEE transactions on knowledge and data engineering", "ref_id": "b13", "title": "Comparable 
entity mining from comparative questions", "year": "2011" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Y Liu; L Wan; H He; Peng; Yu", "journal": "", "ref_id": "b15", "title": "Kgbart: Knowledge graph-augmented bart for generative commonsense reasoning", "year": "2021" }, { "authors": "Linyong Nan; Dragomir Radev; Rui Zhang; Amrit Rau; Abhinand Sivaprasad; Chiachun Hsieh; Xiangru Tang; Aadit Vyas; Neha Verma; Pranav Krishna; Yangxiaokang Liu; Nadia Irwanto; Jessica Pan; Faiaz Rahman; Ahmad Zaidi; Mutethia Mutuma; Yasin Tarabar; Ankit Gupta; Tao Yu; Yi Chern Tan; Xi Victoria Lin; Caiming Xiong; Richard Socher; Nazneen Fatema; Rajani ", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "DART: Opendomain structured data record to text generation", "year": "2021" }, { "authors": " Openai", "journal": "", "ref_id": "b17", "title": "Chatgpt: A large-scale generative language model", "year": "2021" }, { "authors": "Liangming Pan; Wenhu Chen; Wenhan Xiong; Min-Yen Kan; William Yang; Wang ", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Unsupervised multi-hop question answering by question generation", "year": "2021" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Fabio Petroni; Aleksandra Piktus; Angela Fan; Patrick Lewis; Majid Yazdani; Nicola De Cao; James Thorne; Yacine Jernite; Vladimir Karpukhin; Jean Maillard; Vassilis Plachouras; Tim Rocktäschel; Sebastian Riedel", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "KILT: a benchmark for knowledge intensive language tasks", "year": "2021" }, { "authors": "Yujia Qin; Yankai Lin; Ryuichi Takanobu; Zhiyuan Liu; Peng Li; Heng Ji; Minlie Huang; Maosong Sun; Jie Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "ERICA: Improving entity and relation understanding for pre-trained language models via contrastive learning", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b22", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "F R Leonardo; Martin Ribeiro; Hinrich Schmitt; Iryna Schütze; Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Investigating pretrained language models for graph-to-text generation", "year": "2021" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen H Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Arun Raja; Manan Dey; M Saiful Bari; Canwen Xu; Urmish Thakker; Shanya Sharma Sharma; Eliza Szczechla; Taewoon Kim; Gunjan Chhablani; V Nihal; Debajyoti Nayak; Jonathan Datta; Mike Chang; Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Févry; Alan Fries; Ryan Teehan; Le Teven; Stella Scao; Leo Biderman; Thomas Gao; Alexander M Wolf; Rush", "journal": "", "ref_id": "b24", 
"title": "Multitask prompted training enables zero-shot task generalization", "year": "2022-04-25" }, { "authors": "Harsh Trivedi; Niranjan Balasubramanian; Tushar Khot; Ashish Sabharwal", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b25", "title": "MuSiQue: Multihop Questions via Single-hop Question Composition", "year": "2022" }, { "authors": "Xiaozhi Wang; Tianyu Gao; Zhaocheng Zhu; Zhengyan Zhang; Zhiyuan Liu; Juanzi Li; Jian Tang", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b26", "title": "KEPLER: A unified model for knowledge embedding and pre-trained language representation", "year": "2021" }, { "authors": "Wenhan Xiong; Jingfei Du; William Yang; Wang ; Veselin Stoyanov", "journal": "", "ref_id": "b27", "title": "Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model", "year": "2020" }, { "authors": "Zhilin Yang; Peng Qi; Saizheng Zhang; Yoshua Bengio; William Cohen; Ruslan Salakhutdinov; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering", "year": "2018" }, { "authors": "Donghan Yu; Chenguang Zhu; Yiming Yang; Michael Zeng", "journal": "", "ref_id": "b29", "title": "Jaket: Joint pre-training of knowledge graph and language understanding", "year": "2022" }, { "authors": "Jingqing Zhang; Yao Zhao; Mohammad Saleh; Peter Liu", "journal": "PMLR", "ref_id": "b30", "title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization", "year": "2020" } ]
[ { "formula_coordinates": [ 5, 79.35, 102.95, 434.38, 31.13 ], "formula_id": "formula_0", "formula_text": "1 } [SEP] {D 2 } → Q; A Comparative Summary Generation Generate a comparative summary. Context: {D 1 } [SEP] {D 2 } → S Text Infilling {corrupted D 1 } [SEP] {corrupted D 2 } → {D 1 } [SEP] {D 2 }" }, { "formula_coordinates": [ 5, 329.86, 432.84, 170.83, 23.62 ], "formula_id": "formula_1", "formula_text": "L QA = - (Q i ,A i )∈T log P (A i |Q i , D 1 , D 2 )" }, { "formula_coordinates": [ 5, 326.66, 587.66, 177.24, 23.62 ], "formula_id": "formula_2", "formula_text": "L QAG = - (Q i ,A i )∈T log P (Q i , A i |D 1 , D 2 )" }, { "formula_coordinates": [ 5, 342.23, 721.68, 146.08, 22.26 ], "formula_id": "formula_3", "formula_text": "L SUM = - S∈S log P (S|D 1 , D 2 )" }, { "formula_coordinates": [ 6, 70.87, 321.84, 157.12, 10.69 ], "formula_id": "formula_4", "formula_text": "L = L QA + L QAG + L SUM + L TI ." } ]
10.18653/v1/2020.acl-main.424
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b69", "b13", "b2", "b45", "b83", "b42", "b21", "b82", "b47", "b59", "b76", "b15" ], "table_ref": [], "text": "Text simplification aims to improve a text's readability or content accessibility while preserving its fundamental meaning (Stajner, 2021;Chandrasekar et al., 1996). Traditional human evaluation for text simplification often relies on individual, shallow sentence-level ratings (Sulem et al., 2018c;Alva-Manchego et al., 2021), easily affected by the annotator's preference or bias. Maddela et al. (2023) recently proposes a more reliable and consistent human evaluation method by ranking and rating multiple simplifications altogether. However, as text simplification involves performing a series of transformations, or edits, such as paraphrasing, removing irrelevant details, or splitting a long sen- The cost of groceries in the United Kingdom has increased to a record 17.1%, says market research group Kantar Worldpanel. || This is due to high inflation, supply chain problems, and expensive energy affecting the economy.\nGrocery inflation in the United Kingdom reaches a record high of 17.1%, according to market research group Kantar Worldpanel, amid high levels of inflation, supply chain issues and high energy costs impacting the economy.\nComplex Wording tence into multiple shorter ones (Xu et al., 2012), sentence-level scoring remains difficult to interpret since it is not reflective of detailed information about the types of edits being performed. Fine-grained human evaluation through span selection has been explored for machine translation (Lommel et al., 2014) and open-ended text generation (Dou et al., 2022). Yet, these evaluation methods are error-driven -i.e., focusing solely on evaluating failure -which punishes creative and diverse generations with minor errors in favor of generic ones. Additionally, machine translation and open-ended generation tasks usually retain none of the input words, while text simplification must balance the editing and preservation of words in the original input (Xu et al., 2016). We thus evaluate simplification quality as the aggregation of edit successes and failures, as depicted in Figure 1.\nWe introduce SALSA -Success and FAiluredriven Linguistic Simplification Annotation -an arXiv:2305.14458v2 [cs.CL] 22 Oct 2023 edit-level human evaluation framework capturing a broad range of simplification transformations. SALSA is built on a comprehensive typology ( §2) containing 21 quality and error edit types. Using SALSA, we develop an interactive interface and collect 19K edit annotations of 840 simplifications written by eleven state-of-the-art language models and two humans. With these annotations, we conduct a large-scale analysis of model and automatic metric performance, and further introduce the automatic word-level quality estimation task for text simplification. Our main findings are as follows:\n• Few-shot GPT-3.5 far surpasses existing models, particularly in making syntax and content edits. However, its simplifications are not aligned to the types of operations performed by human. ( §4) • Some fine-tuned models such as the MUSS (Martin et al., 2022) produce more diverse edits than GPT-3.5, yet suffer from incredibly high errors, while others (T5, Raffel et al., 2020) learn to minimize loss by making very few changes. 
( §4) • Open-source instruction fine-tuned models such as Alpaca (Taori et al., 2023) and Vicuna (Chiang et al., 2023) perform a similar number of edits as GPT-3.5, but at a cost of more conceptual errors due to the inherent limits of model imitation. ( §4) • Fine-tuned on SALSA annotations, our referencefree metric, LENS-SALSA, captures the subtleties of specific simplification approaches beyond existing automatic evaluation metrics. ( §5) • Leveraging our data, we present the automatic word-level quality estimation task for text simplification and establish several baseline approaches for future modeling efforts. ( §6)\nOur results demonstrate that SALSA provides an interpretable and exhaustive evaluation of text simplification." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "SALSA Framework", "publication_ref": [ "b31", "b23", "b81" ], "table_ref": [], "text": "We introduce SALSA, an edit-based human evaluation framework for text simplification. SALSA is defined by a typology of 21 linguistically-grounded edit types with the aim of capturing both successes and failures (i.e., quality changes and errors, see Figure 1). The annotation methodology of SALSA is structured as a decision tree and implemented via an easy-to-use interface, illustrated in Figure 2. Our interface is designed with Thresh (Heineman et al., 2023), and we release our configuration to encourage adaptation to other text rewriting tasks 1) selecting edits, (2) identifying information change, (3) classifying edit type and (4) rating efficacy/severity. (Du et al., 2022) or collecting fine-grained human feedback (Wu et al., 2023) 1 . In the following, we describe each step of the annotation process." }, { "figure_ref": [], "heading": "Edit Selection", "publication_ref": [], "table_ref": [], "text": "Annotation begins with edit selection, where annotators identify the edits performed by the simplification and select the corresponding spans for each edit. We define six types of edit operations: single-operation insertion, deletion, substitution, word-/clause-reorder, and multi-operation sentence split and structure changes. An insertion or deletion edit exclusively modifies content, while a substitution either modifies or paraphrases content. Reorder, split, or structure edits perform a contextfree syntax transformation. As split and structure edits are multi-operation (i.e., require a combination of single operations), they are defined by a set of underlying single-operation constituent edits. For example, this structure change from passive to active voice made by zero-shot GPT-3.5 involves multiple constituent edits:" }, { "figure_ref": [], "heading": "Categorizing by Information Change", "publication_ref": [ "b67" ], "table_ref": [], "text": "Each selected edit is then labeled with its impact on the underlying sentence information: less, same, more or different information. Given the type of operation and change to information, we subsequently organize each edit into three linguistic families as defined by Siddharthan (2014): Lexical edits perform simple changes in \"wording\". This includes paraphrasing (i.e., substitution that keeps the same information) and inconsequential trivial changes (e.g., inserting 'the'). Syntax edits capture transformations to the distribution of information, rather than substance. A split converts a candidate sentence to two sentences, a re-order edit re-arranges clauses or wording within a clause, and a structural edit modifies the voice, tense or clausal structure. 
Examples of structural edit sub-types are in Appendix B. Conceptual edits modify underlying ideas conveyed by the text. A conceptual edit requires elaboration to add clarifying information or generalization to delete unnecessary/complicated ideas." }, { "figure_ref": [ "fig_3" ], "heading": "Edit Type Classification", "publication_ref": [], "table_ref": [], "text": "After being categorized into lexical, syntax, or conceptual edit families, we further classify each edit operation into 21 fine-grained success (quality), failure (error), or trivial edit types as listed in Figure 3. Successful edits simplify through diverse approaches, from paraphrasing complex spans, generalization of unnecessary information, or elaboration to add clarity and background context. E.g.," }, { "figure_ref": [], "heading": "EXAMPLE (elaboration)", "publication_ref": [], "table_ref": [], "text": "Vicuna 7B ... can be fitted to an exponentially decaying curve.\n... can be represented by a curve that gets smaller and smaller over time.\nOften small edits, particularly to syntactic structure, can improve clarity, such as this addition of a clear subject-verb structure through the inclusion of the relative pronoun 'who':\nEXAMPLE (structure change)\nGPT-4 Paltrow in turn claims he was the one crashing rather than the other way around. Paltrow says he was the one who crashed, not her.\nOr this conversion of the participial phrase to a relative clause to help explain significance: Sentence splitting or reordering information may clarify a sequence of events:\nEXAMPLE (component reorder)\nChatGPT Poland announces the closure of a major border crossing with Belarus \"until further notice\" amid heightened tensions between the two countries. Poland has closed a big border crossing with Belarus due to increased tensions between the two countries. The closure will remain in effect until further notice.\nFailure edits include any ablation from minor readability issues to hallucinations or deletions to sentence meaning. In the following example, the coreference error captures the deleted reference between the 'ICJ' and 'US' acronyms to their original definitions, useful contextual information:\nEXAMPLE (coreference error)\nChatGPT The International Court of Justice (ICJ) rules that the United States violated its ... The ICJ said that the US broke its ... And often multiple edits overlap, such as this information rewrite which successfully adds clarity via reordering, but botches the author's sarcasm: EXAMPLE (information rewrite) Alpaca 7B ... justifies a runtime nearing 3 hours (with a postcredits scene, no less), and it already opened to over $100 million worldwide. .. takes up almost 3 hours of the movie. The movie opened to over $100 million worldwide. A post-credits scene completes the story.\nWe also separately ask annotators to identify if the edit contains a grammar error. Appendix A provides an exhaustive description and examples for each edit type." }, { "figure_ref": [], "heading": "Rating Edit Efficacy / Severity", "publication_ref": [], "table_ref": [], "text": "As each edit has a varying degree of impact on overall simplification quality, we finally ask annotators to rate the efficacy of quality edits or severity of error edits. We define three levels: 1 -minor, 2 -somewhat, and 3 -major. Examples of each severity level are included in Appendix A.3." 
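To summarize the annotation scheme above in one place, the sketch below encodes a SALSA edit as a small data model: an operation, the selected spans, the information-change label, the lexical/syntax/conceptual family, one of the 21 fine-grained types, and an efficacy or severity rating. The class and field names are our own illustrative choices, not the schema of the released annotations.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Operation(Enum):
    INSERTION = "insertion"
    DELETION = "deletion"
    SUBSTITUTION = "substitution"
    REORDER = "reorder"
    SPLIT = "split"            # multi-operation: defined by constituent edits
    STRUCTURE = "structure"    # multi-operation: defined by constituent edits

class InformationChange(Enum):
    LESS = "less"
    SAME = "same"
    MORE = "more"
    DIFFERENT = "different"

class Family(Enum):
    LEXICAL = "lexical"
    SYNTAX = "syntax"
    CONCEPTUAL = "conceptual"

@dataclass
class Edit:
    operation: Operation
    complex_span: Optional[tuple]              # character offsets in the complex sentence
    simple_span: Optional[tuple]               # character offsets in the simplification
    information_change: Optional[InformationChange] = None
    family: Optional[Family] = None
    edit_type: Optional[str] = None            # one of the 21 fine-grained types
    is_error: bool = False                     # success (quality) vs. failure (error)
    rating: Optional[int] = None               # 1 = minor, 2 = somewhat, 3 = major
    grammar_error: bool = False
    constituents: list = field(default_factory=list)  # sub-edits of split/structure edits
```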
}, { "figure_ref": [], "heading": "Data Collection", "publication_ref": [], "table_ref": [], "text": "We describe our use of SALSA to collect 19K edit annotations covering 11.6K spans on 840 modelgenerated and human-written simplifications." }, { "figure_ref": [], "heading": "Simplification Data", "publication_ref": [ "b45", "b59", "b47", "b53", "b77", "b15" ], "table_ref": [], "text": "Data collection is performed on an extended version of SIMPEVAL 2022 (Maddela et al., 2023), including a train set covering state-of-the-art simplification systems and held-out test set of recent LLMs. We include a full description of each system in Appendix C.1. SALSA Train. We first extend the 360 simplifications from SIMPEVAL 2022 to 700 simplifications based on 100 complex sentences from Wikipedia articles dated between Oct 2022 and Dec 2022. The complex sentences are unseen during the training of the LLMs and were selected to be intentionally difficult (avg. length of 37.3 words) to enable an evaluation of the models' full capabilities in performing diverse simplification edits. Simplifications are generated by five models including fine-tuned T5-3B and T5-11B (Raffel et al., 2020), MUSS (Martin et al., 2022), a controllable BARTlarge model trained with unsupervised, mined paraphrases, zero-and few-shot GPT-3.5 (Ouyang et al., 2022), and two human-written references. For modeling experiments in §5 and §6, we divide the initial 700 simplifications by the complex sentence with a 70/30% train/dev split. SALSA Test. We further gather 20 more complex sentences from Wikipedia articles published in Mar 2023 and generate 140 simplifications using recent LLMs including GPT-3.5, ChatGPT, GPT-4, Alpaca-7B (Touvron et al., 2023) and Vicuna-7B (Chiang et al., 2023), along with T5-3B and T5-11B fine-tuned with control tokens." }, { "figure_ref": [ "fig_2" ], "heading": "Annotation", "publication_ref": [ "b65", "b4" ], "table_ref": [], "text": "As crowd-sourced annotators have shown to have inconsistent quality (Shmueli et al., 2021), we hire 6 undergraduate students from a US university. Annotators were trained with an in-depth tutorial con- To concretely measure agreement for each stage of the SALSA framework, we collect annotations in three stages: (1) we have three annotators select edits, (2) a fourth annotator adjudicates the edits into a single selection and (3) the initial three annotators classify and rate the adjudicated edits. Figure 2 illustrates our annotation interface, with further screenshots of our tutorial included in Appendix G." }, { "figure_ref": [], "heading": "Inter-Annotator Agreement", "publication_ref": [ "b21" ], "table_ref": [], "text": "We calculate edit selection agreement (i.e. agreement prior to adjudication) by each token, with Table 1 reporting agreement per edit, further broken down by their type of information change. We observe edit agreement is highly dependent on the edit type and type of information change being performed. High agreements are seen for deletion (α=0.75), paraphrase (substitution with the same information, α=0.53), and sentence splits (α=0.66). Substitution that introduces more information, however, exhibits lower agreement (α=0.15), due to the subjectivity among annotators on determining whether new tokens contain 'novel' information, as was often mixed up with insertion. Reordering (α=0.12) and structure edits (α=0.25) also report lower agreements. 
We fully explore the phenomenon of annotator disagreement in Appendix C.2, and find overlapping syntactic and content edits often have multiple correct interpretations, leading to an inherent disagreement. Additionally, we find our % rates for annotator agreement are similar to fine-grained evaluation frameworks in other text generation tasks (Dou et al., 2022). Figure 5: Failure edits per-model, organized by edit type. Compared to humans, both GPT-3.5 setups make more syntax and lexical errors. Although humans perform bad deletion errors at a higher frequency than GPT-3.5, this is reflective of the inherent ambiguity in judging the relevancy of the deleted content." }, { "figure_ref": [ "fig_4", "fig_4", "fig_6", "fig_4", "fig_4" ], "heading": "Key Analysis", "publication_ref": [ "b45", "b57", "b8" ], "table_ref": [], "text": "We use SALSA to evaluate state-of-the-art simplification by collecting annotations on our extended version of the SIMPEVAL corpus (Maddela et al., 2023), which includes fine-tuned, LLM-and human-written simplifications. Our resulting data collection includes 19K edit annotations across 840 simplifications.\nWe present our primary results in Figures 4,5, and 6. Figures 4 and5 illustrate the frequency of quality and error edit types. As edits vary in length, we calculate edit coverage: the length of each edit in proportion to the total length of the simplification and report the average edit coverage for different efficacy and severity ratings in 6, showing a view of edit ratings adjusted for length. Additionally, we include Figure 7, which compares simplifications generated by recent instruction fine-tuned language models. The following are our key findings: Models primarily write good edits, but still trail humans (Fig. 4,5). We observe that 16% of modelgenerated edits are errors, with the best-performing model, few-shot GPT-3.5, producing errors in only 9% of edits. We find this still trails human simplifications, which have an error rate of 6%. MUSS and GPT-3.5 have a median count of 1 error per simplification and 63% of their simplifications contain at least one error, showing these errors are not concentrated in a few 'bad' simplifications but instead often occur among many good edits.\nLanguage models elaborate, while humans generalize (Fig. 4). When simplifying content, all models (excluding T5) tend to elaborate at a higher ratio than humans, for example, GPT-3.5 attempts to insert content 17% more often. As LLMs have shown to encode world knowledge in their parameters (Petroni et al., 2019;Brown et al., 2020), GPT-3.5 elaboration is far more effective than MUSS, for example:" }, { "figure_ref": [ "fig_4", "fig_4", "fig_4" ], "heading": "EXAMPLE", "publication_ref": [ "b34", "b46", "b28" ], "table_ref": [], "text": "Few-shot GPT-3.5 After defeating PSD candidate Viorica Dȃncilȃ by a landslide in 2019, his second term.. In 2019, Klaus Iohannis defeated PSD candidate Viorica Dȃncilȃ by a large margin. His second term.. GPT-3.5 writes quality edits at a higher frequency than humans, but human edits are longer and more effective (Fig. 4,6). Both zeroshot and few-shot GPT-3.5 produce a larger number of edits, but human edits are more substantial, as demonstrated by the higher edit coverage across all efficacy levels, particularly for syntax and lexical edits. Human simplification typically deletes, paraphrases, or reorders entire clauses, while GPT-3.5 often edits single modifiers or words.\nFine-tuned T5-3B and T5-11B generate conservative simplifications (Fig. 
4, 5, 6). Figure 6: Edit coverage of efficacy (+) and severity (-) ratings for each model, separated by simplification approach, with edit coverage defined as (len(e_C) + len(e_S)) / (len(C) + len(S)) (see §A.4). Overall, humans make the longest quality edits and the most infrequent error edits. We report the distribution of each edit rating in Figure 14.\nCompared to all other systems, both T5 models make minimal changes in terms of frequency and edit coverage, while still exhibiting high rates of error. This is likely due to their training data, Wiki-Auto (Jiang et al., 2020), containing shorter sentences, usually requiring simpler simplification techniques, making it difficult for models to generalize on longer and more complex sentences. Later in Appendix D, we show using control tokens (Martin et al., 2020) during training, as done by MUSS, can improve diversity but at the expense of increasing deletion and hallucination errors. Split edits are straightforward; structure edits are far more complex (Fig. 4, 5). Surprisingly, sentence splitting is shown to be the easiest edit for all models to accomplish, with a similar number made by MUSS, GPT-3.5, and humans, and with even the conservative T5 models making a comparable number of split edits. However, structure change and re-ordering edits are rarely seen in fine-tuned models. We speculate this may be attributed to (i) these types of edits being infrequent in the training data and (ii) GPT-3.5 having a unique ability to perform complicated syntax rewriting, echoing the findings in abstractive summarization (Goyal et al., 2022). Despite GPT-3.5's improvement, the structure error rate demonstrates it has not yet reached human-level ability. Additionally, we observe zero-shot GPT-3.5 produces structure errors (see below example) at a rate 19% higher than few-shot." }, { "figure_ref": [], "heading": "EXAMPLE", "publication_ref": [], "table_ref": [], "text": "Zero-shot GPT-3.5 The sentence included a fine of $400... You will receive a fine of $400... We find human simplifications are more conservative with re-ordering than models, yet attempts to simplify with re-ordering often appear arbitrary:" }, { "figure_ref": [], "heading": "EXAMPLE", "publication_ref": [], "table_ref": [], "text": "Human written On 3 November 2022, the British Secretary... On November 3rd, 2022, the British Secretary... Humans appear to produce bad deletion errors, but these are often subjective (Fig. 5). Bad deletion constitutes 35% of error edits made by humans, compared to 8% by few-shot GPT-3.5. The anomaly of the bad deletion errors reveals an inherent subjectivity in assessing deletion:" }, { "figure_ref": [ "fig_4" ], "heading": "EXAMPLE", "publication_ref": [], "table_ref": [], "text": "Human written Unlike the first film adaptation, in which director Samuel Fuller removed... Unlike the first film adaptation, Samuel Fuller removed... In this example, some annotators marked the edit as a bad deletion while others considered it appropriate. As the sentence discusses a book adaptation into a film, the description of 'Samuel Fuller' is helpful depending on the reader, which underscores the need for adaptive levels of simplification to accommodate each reader's needs.\nParaphrasing is a crucial, but tricky mechanism (Fig. 4, 5). MUSS, GPT-3.5, and humans all paraphrase in at least 75% of sentences. 
Despite low performance in conceptual and syntactic simplification, MUSS paraphrases at a human-like rate, likely due to its training on over one million paraphrase sentence pairs mined from web crawl data. Although zero-/few-shot GPT-3.5 paraphrases at a higher rate than humans, these edits are often unnecessary. For instance:" }, { "figure_ref": [ "fig_6" ], "heading": "EXAMPLE", "publication_ref": [ "b76", "b15", "b29" ], "table_ref": [], "text": "Few-shot GPT-3.5 The club said on social media that customers subdued the gunman... The club reported on social media that customers were able...\nOpen-source LLMs are approaching GPT-3.5 simplifications, or are they (Fig. 7)? Given recent attention to ChatGPT (OpenAI, 2022), GPT-4 (OpenAI, 2023), and the emergence of instruction fine-tuning smaller language models on outputs from proprietary LLMs, we perform a supplementary evaluation on these systems. The open-source Alpaca (Taori et al., 2023) and Vicuna (Chiang et al., 2023) appear to perform a similar number of quality and error edits to GPT-3.5. However, these systems tend to write far more bad elaboration errors such as factual errors or contradictions: EXAMPLE Alpaca 7B ... a controversial \"angel tax\" provision seeking to capture some of the income entering the country from foreign investors funding India's start-ups. ... a controversial \"angel tax\" provision, which is aimed at stopping foreign investors from funneling money into India's startups.\nThis behavior suggests open-source instruction fine-tuned models mimic the style of their larger counterparts, but not their knowledge, a phenomenon observed by Gudibande et al. (2023). GPT-4 exhibits the best performance by making fewer content errors while producing a high number of quality edits, but still makes errors, particularly when paraphrasing individual spans without considering the broader sentence meaning:" }, { "figure_ref": [], "heading": "EXAMPLE", "publication_ref": [], "table_ref": [], "text": "GPT-4 Grocery inflation in the United Kingdom reaches a record high of 17.1% ... The cost of groceries in the United Kingdom has increased to a record 17.1% ... While GPT-4 successfully paraphrases inflation by relating it to cost, it fails to recognize the sentence is discussing the inflation rate, rather than exact prices.\nWe include further analysis, discussion, and dataset statistics in Appendix D." }, { "figure_ref": [], "heading": "Evaluating Metric Edit Sensitivity", "publication_ref": [ "b56", "b82", "b86", "b61", "b24", "b45", "b62", "b79", "b24", "b6", "b85", "b50" ], "table_ref": [ "tab_3", "tab_4" ], "text": "While automatic metrics are traditionally evaluated using correlation with sentence-level, Likert-scale human ratings on dimensions of adequacy, fluency and simplicity, this fails to assess the ability of automatic metrics to capture the subtleties of lexical, syntactic, and conceptual simplification. With our SALSA annotations, we study how well current automatic metrics capture these distinct simplification approaches. Additionally, we introduce LENS-SALSA, a reference-free metric fine-tuned on SALSA annotations. Existing Automatic Metrics. 
We consider five automatic metrics: BLEU (Papineni et al., 2002), SARI (Xu et al., 2016), the most widely-used text simplification metric, BERTSCORE (Zhang et al., 2020), COMET-MQM, a machine translation metric (Rei et al., 2020) trained on MQM ratings (Freitag et al., 2021), and LENS (Maddela et al., 2023), a recently proposed text simplification metric fine-tuned on SIMPEVAL, which contains rank-based human ratings of simplifications from 24 systems. LENS-SALSA. The automatic simplification metrics mentioned above require human-written references, which may not be available in every evaluation setting. To this end, we introduce LENS-SALSA, a reference-free simplification metric enabled by edit-level information. Based on the COMETKIWI machine translation metric design (Rei et al., 2022), we first pre-train LENS-SALSA on the sentence-level human ratings from SIMPEVAL using UniTE (Wan et al., 2022), a multi-task learning method. Specifically, the metric is trained on the same score but from three input formats: Simp:Ref, Simp:Complex, and Simp:Complex:Ref, where \":\" denotes concatenation. Then, we fine-tune LENS-SALSA on SALSA annotations using a dual objective to predict both the sentence-level score (calculated by LENS) and a word-level quality score ŵ_i ∈ [-3, 3], corresponding to the efficacy or severity rating (§2.4) of each word w_i in the complex and simplified sentences. We use RoBERTa-large as the base model for LENS-SALSA, and 490, 210, and 140 sentence pairs for train, validation, and test, respectively. Implementation details are provided in Appendix F.2.\nResults. As fine-grained MQM annotations in machine translation are considered a gold standard in metric evaluation (Freitag et al., 2021), we adapt their method (detailed in §A.4) to collapse edit-level ratings to a single score, and calculate sub-scores by only considering certain edit types. Table 2 reports the Pearson correlation between metric scores and human sub-scores across each SALSA dimension. LENS-SALSA achieves the highest correlation in nearly all edit approaches, showing its capability to capture all forms of simplification. Overall, only LENS and LENS-SALSA obtain substantial correlation with the overall human SALSA scores (0.33 and 0.45 respectively), while other metrics have spurious and even negative correlations with human judgments. Interestingly, COMET-MQM, intended for machine translation, performs better than BLEU and BERTScore, which further underlines the value of span-based ratings for trained metrics. Despite strong performance, we find LENS mainly evaluates lexical and syntactic edits, rather than conceptual ones, which may be attributed to its training data consisting of shorter, paraphrase-based simplifications. Lastly, all metrics have substantially higher correlation with quality than error edits. We posit this is primarily due to the sparsity and wide range of errors exhibited in the generations of current high-performing systems.\n6 Word-Level Quality Estimation\nWord-level quality estimation (QE) is the task of predicting the quality of each token in a generation, and has substantial downstream application to evaluating and refining text simplification. Despite word-level QE being a well-understood task in machine translation (Basu et al., 2018;Zerva et al., 2022), it has not yet been studied for text simplification due to a lack of appropriately annotated data. 
In this section, we use SALSA annotations to demonstrate baseline approaches and highlight potential for future work.\nTask. We define word-level simplification QE as classifying each token in the complex and simplified sentences as quality, error, or ok. To adapt SALSA for the QE task, we label each token by the average efficacy/severity rating of its associated edit: < 0 as error, = 0 as ok, and > 0 as quality. Words that are not part of any edits default to the ok label. We deconstruct split and structure edits into their constituent edits, only label the simplified spans for substitution edits, and exclude reorder edits due to their low frequency. Methods. We propose two approaches: End-toend, where a single model labels each token directly; and Two-stage, where a word aligner first identifies edits, then the model labels each token using the identified edit information. For end-to-end, we implement the following two methods: Tagging (Tag) is a native sequence tagging model with a classification head.\nTagging with Multi-task Loss (Tag-ML) is similar to the tagging method except trained with a multi-task loss function: L = L tag + L ec . L ec is an additional objective that classifies each token into none, deletion, substitution, or insertion.\nFor two-stage methods, we first apply a QAbased word aligner (Nagata et al., 2020) to the sentence pair and use a set of rules to convert word alignments to edits: consecutive non-aligned words in the original sentence are labeled as a deletion edit; consecutive non-aligned words in the simplified sentence are labeled as an insertion edit; and aligned words or spans that differ are labeled as a substitution edit. Here are three two-stage methods:\nTagging with Edit Information (Tag-EI) is a sequence tagging model with a classification head that takes the concatenation of the hidden states of both edit type and token as the input. The hidden states of the edit type are obtained via a linear layer.\nEdit Classification with Separate Classifiers (Ec-Sep) contains one classifier for each of the three edit operations. Each classifier is an encoder model with a feedforward neural network (FNN). The inputs to these FNNs are the hidden states of the [CLS] token and the max-pooled tokens from the edit spans (i.e., for substitution edit, one from the original span, and one from the simplified span).\nEdit Classification with One Classifier (Ec-One) is one classifier with three FNNs mentioned above. The difference is the encoder is trained collectively.\nAll methods (including the word aligner) use RoBERTa-large. Further implementation details and results are included in Appendix F.\nResults. Table 3 shows the test set performance for each label. Among the end-to-end methods, training with multi-task loss results in improvement on all three label F1 scores, achieving the second-best average F1 score overall. We find edit classification approaches detect error tokens more accurately than tagging approaches. Within edit classification methods, using one classifier outperforms multiple ones due to the benefit of joint encoder training. Overall, the edit classification with one classifier method performs the best with a gain of over 11 points on error F1 and a 4-point increase in average F1, compared to the base tagging model." 
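To make the label construction above concrete, the following is a minimal sketch of the SALSA-to-QE mapping described in the Task paragraph; the Edit structure and function names are our own illustration, not the released toolkit's API.

```python
# Minimal sketch of mapping SALSA edit annotations to word-level QE labels.
# The Edit dataclass and function names are illustrative, not the released toolkit's API.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Edit:
    token_spans: List[Tuple[int, int]]  # (start, end) token indices of the edit, end exclusive
    ratings: List[float]                # efficacy (>0) or severity (<0) ratings from annotators


def qe_labels(num_tokens: int, edits: List[Edit]) -> List[str]:
    """Label each token 'quality', 'error', or 'ok' by its edit's average rating."""
    labels = ["ok"] * num_tokens  # tokens outside any edit default to 'ok'
    for edit in edits:
        avg = sum(edit.ratings) / len(edit.ratings)
        label = "quality" if avg > 0 else "error" if avg < 0 else "ok"
        for start, end in edit.token_spans:
            for i in range(start, end):
                labels[i] = label
    return labels


# Example: a 10-token simplification with one helpful paraphrase and one bad deletion.
edits = [Edit(token_spans=[(2, 4)], ratings=[2, 3, 2]),
         Edit(token_spans=[(6, 9)], ratings=[-2, -1, -3])]
print(qe_labels(10, edits))
```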
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b67", "b22", "b44", "b59", "b20", "b46", "b47", "b38", "b2", "b7", "b30", "b16", "b26", "b11", "b84", "b87", "b75", "b70", "b42", "b24", "b21", "b55", "b19" ], "table_ref": [], "text": "Model Evaluation. Simplification work broadly agrees some typology of simplification operations exists (Siddharthan, 2014), starting with early rulebased systems which explicitly defined specific syntax operations (Dras, 1999). Past work has experimented with designing models to control the extent of each operation by using a pipeline to perform simplification operations independently (Maddela et al., 2021;Raffel et al., 2020), predicting edit operations (Dong et al., 2019) or augmenting finetuned models with learned control tokens (Martin et al., 2020(Martin et al., , 2022)). However, evaluation only considers a sentence in its entirety rather than rating individual operations, either by automatic metrics (Kriz et al., 2020), shown to be an inadequate representation of quality (Alva-Manchego et al., 2021;Sulem et al., 2018a), or by surface-level Likert ratings, typically asking crowd-sourced annotators to rate on scales of fluency, adequacy, and simplicity. These scores are difficult to interpret and capture no detail into the type of simplification being written (Briakou et al., 2021;Hashimoto et al., 2019). Additionally, despite current systems' often producing simplification errors (Choshen and Abend, 2018), annotating error has primarily been performed through inspection, and has not been incorporated into human or automatic evaluation (Gooding, 2022). Linguistic Inspection. Manual inspection attempts to understand the behavior of simplification models or datasets, characterized by detailed typologies and often conducted by authors or domain experts. Cardon et al. (2022) performs detailed inspection of the ASSET simplification test corpus (Alva-Manchego et al., 2020a) to study the behavior of automatic metrics and Cumbicus-Pineda et al. (2021a) propose a framework for evaluating success and failure by answering a series of checklist items, with sentences given a capability score based on the number of requirements fulfilled. Yamaguchi et al. (2023) annotates simplifications of earlier models such as DRESS (Zhang and Lapata, 2017) and SUC (Sun et al., 2020) using a taxonomy of 62 error categories, but do not analyze the SOTA, MUSS, or LLMs. Stodden and Kallmeyer (2022) proposes an interactive linguistic inspection interface, but this interface is not designed for human evaluation of model outputs and does not provide ratings for measuring performance.\nFine-grained Human Evaluation. Human evaluation performed on a span-level has been previously proposed for a variety of NLP tasks. In translation, the Multidimensional Quality Metrics (MQM) (Lommel et al., 2014), categorizes error into accuracy and fluency sub-types and is later extended by Freitag et al. (2021) to weight errors by severity and combine into a single quality score. Dou et al. (2022) proposes SCARECROW to capture errors appearing in open-ended text generation. However, as these span-based evaluation schemes exclusively annotate error, they encourage generic outputs and punish interesting or diverse generations. For summarization, the FRANK typology (Pagnoni et al., 2021) aggregates errors into broader categories to benchmark metrics that measure factuality. Inspired by FRANK, Devaraj et al. (2022) introduces a framework to evaluate factuality for text simplification." 
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we introduce SALSA, a novel editbased evaluation framework incorporating error and quality evaluation, and dimensions of lexical, syntax and conceptual simplification and demonstrate SALSA benefits in granularity, accuracy, and consistency. We employ SALSA to collect a 19K edit annotation dataset and analyze the strengths and limitations of fine-tuned models, prompted LLMs, and human simplifications. Finally, we use SALSA annotations to develop a reference-free automatic metric for text simplification and demonstrate strong baselines for word-level quality estimation, showing promising avenues for the development of fine-grained human evaluation." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b36", "b25", "b63", "b69", "b40", "b74", "b42", "b55" ], "table_ref": [], "text": "Our annotation only represents a single use case of text simplification and we encourage an extension of SALSA to domain-specific simplification, such as medical (Joseph et al., 2023), legal (Garimella et al., 2022), or multi-lingual text (Ryan et al., 2023), and annotations by groups of specific downstream users (Stajner, 2021). The LENS-SALSA reference-free metric is trained exclusively on Wikipedia simplification, and we do not consider its cross-domain generalization or its ability to capture the simplification need to specific target communities. Additionally, while we demonstrate promising results on sentence-level evaluation, simplification is often a document-level task (Laban et al., 2021;Sun et al., 2021). Incorporating higherlevel operations such as sentence fusion, paragraph compression, and reordering would require an extension to SALSA and presents unique analytical challenges. Finally, detailed human evaluation inherently requires greater resources to produce a high granularity of annotations. While we show this process can be streamlined with a robust annotator training, SALSA requires a similar amount of resources as widely used fine-grained evaluation in other tasks such as MQM (Lommel et al., 2014) or FRANK (Pagnoni et al., 2021)." }, { "figure_ref": [ "fig_3" ], "heading": "Ethics Statement", "publication_ref": [ "b45" ], "table_ref": [], "text": "Our annotations were performed using the SIMPE-VAL 2022 corpus, originally collected from publicly available Wikipedia articles (Maddela et al., 2023) and we further extend the dataset with complex sentences collecting using the same methodology from publicly available Wikipedia articles. As discussed in §3.2, we perform data collection with in-house annotators from a US university. Annotators were all native English speakers and paid $15-$18/hour. We took care to manually review all data prior to annotation as to exclude any triggering or sensitive material from our annotation data. Annotators were informed that any data they felt uncomfortable with was not required to annotate. Our interface was built using the open-source Vue.js2 library, and training of our added T5-11B system was implemented using the open-source Hugging Face Transformers3 library.\nWe provide detail into the SALSA framework, including qualitative examples which helped guide design decisions when building the typology. Table 4 illustrates each final edit type, as organized by Figure 3. 
During development, we adjusted our scheme based on preliminary annotations with the final goal of SALSA's ability to evenly represent all modes of simplification and the full space of errors." }, { "figure_ref": [], "heading": "A.1 Quality Evaluation", "publication_ref": [ "b69", "b27", "b67" ], "table_ref": [], "text": "We organize quality edits by their approach to simplification, as real-world application and models' capability to simplify falls into tiers of conceptual, syntactic and lexical simplification (Stajner, 2021). An ideal simplification system demonstrates a balance of these 'tiers' and incorporates different techniques depending on the original text, context and users (Gooding and Tragut, 2022). Automatic simplification research initially focused on lexical paraphrasing (Siddharthan, 2014), but has since evolved to emphasize the importance of syntactic and conceptual editing (Alva-Manchego et al., 2020b)." }, { "figure_ref": [], "heading": "A.1.1 Conceptual Simplification", "publication_ref": [ "b35", "b66", "b10", "b19" ], "table_ref": [], "text": "These edits modify the underlying sentence information or ideas, a prerequisite for simplifying complex domains. We consider 'conceptual simplification' to be interchangeable with 'semantic simplification' as used in some literature (Sulem et al., 2018b;Jiang et al., 2022).\nElaboration. An addition of meaningful, relevant and correct information (Siddharthan, 2006), such as clarifying vague terminology, providing background information on an entity or subject, or explicating general world knowledge unknown to the audience. Elaboration has been shown as a rare, but helpful mechanism in text generation (Cao et al., 2022) and we observe its careful use in human simplifications. Generalization. A deletion of unnecessary, irrelevant or complicated concepts. Although we ask annotators to rate the quality of elaboration by how it improves the readability of a sentence, we ask annotators to rate the quality of a generalization by the relevancy of the deleted information to the main idea of the sentence. As 'relevancy' is inherently subjective to the user, domain and annotator, determining the threshold for 'necessary information' is crucial to standardize (Devaraj et al., 2022). Deleting information will, by nature, contain some amount of information and SALSA instead focuses on ensuring the deleted information is not important sentence, context or users. Consider two candidate deletions: EXAMPLE Like so many hyped books before it, The Midnight Library excited me and gave me pause. Like so many hyped books before it, The Midnight Library excited me and gave me pause.\nAlthough the deletion of Midnight is shorter, it changed the subject of the sentence, and it is rated higher than the second deletion, which is not central to the main idea. Generalization using paraphrase is more often preferred than deleting full clauses.\nWe observe successful conceptual edits are often performed on the clause level. For example, adjunct removal via deletion: EXAMPLE Born into slavery in 1856, Booker T. Washington became an influential African American leader. Booker T. Washington became an influential African American leader.\nOr information insertion through an appositive or relative clause, although the prior is typically more common for the SIMPEVAL domain as it implies objective information: EXAMPLE Éric Gauthier is also a novella author... Éric Gauthier, famous for his soloist dancing career, is also a novella author..." 
}, { "figure_ref": [], "heading": "A.1.2 Syntactic Simplification", "publication_ref": [ "b39", "b64", "b66" ], "table_ref": [], "text": "Syntax is a crucial mechanism for fluent, highly modified simplification (Štajner, 2016). Given recent attention in automatic simplification to syntaxaware datasets and systems (Cumbicus-Pineda et al., 2021b;Kumar et al., 2020;Alva-Manchego et al., 2020a;Scarton et al., 2017), SALSA standardizes the first explicit evaluation accounting for these operations. Information Reorder. We classify two levels of reorder, word-level reorder, which reorganizes modifiers within a phrase, and component-level reorder which moves clauses or content across a sentence (Siddharthan, 2006). A component-level re-order typically may be accompanied by a broader structure change or both re-order types may overlap, as in:" }, { "figure_ref": [ "fig_3" ], "heading": "EXAMPLE", "publication_ref": [ "b26" ], "table_ref": [ "tab_6" ], "text": "The emergence of huge radio conglomerates is a direct consequence of the '96 Act.\nWhen faced with two equivalent phrases (e.g. 'A and B' → 'B and A'), SALSA classifies the reordered span as the phrase more significant to the main idea of the sentence. In practice, we found this to be a helpful guideline, although annotators often simply selected the phrase appearing first in the candidate sentence.\nStructural Change. As this syntax modification necessarily includes some discourse preserving edits (Gooding, 2022), they are defined w.r.t. some combination of constituent edits (i.e. insertion, deletion, substitution, reorder). Further discussion of structure changes in §B, with examples of structural change sub-types used for manual inspection in Table 5.\nSentence Split. A sub-type of a structural edit. We automatically identify split changes prior to annotation, but annotators must first select constituent spans and then associate those spans with the corresponding sentence split. We find the importance of this edit is highly domain-dependent (Figure 13)." }, { "figure_ref": [], "heading": "A.1.3 Lexical Simplification", "publication_ref": [ "b58", "b49", "b32", "b70", "b11" ], "table_ref": [], "text": "Paraphrase. Swapping complex spans with equivalent, simpler alternatives, is the most primitive, yet important, approach to simplification (Qiang et al., 2020) (also referred to as a hypernym, e.g. Štajner, 2016). These are exclusively defined by substitutions marked as same information and positive impact. Trivial Change. Captures any minor modifications to wording, either through a synonym replacement, or inconsequential change in wording (e.g. the, a). Trivial changes are identified as trivial insertion, trivial deletion or trivial substitution. These edits differ from a content or syntax modification in that they adds no new or major modification to the presentation of information. However, Meister et al. (2020) exemplifies trivial changes should not be ignored as they may modify the information density and verbosity of a sentence. An example is famously shown by Jaeger and Levy (2006): EXAMPLE How big is the family you cook for? How big is the family that you cook for?\nThe relativizer 'that' creates no syntactic or conceptual simplicity, but adds clarity as to the identify of the subject. 
Trivial changes have previously been described with finer granularity, including subcategories like abbreviation, filler words, compound segmentation, anaphora (Stodden and Kallmeyer, 2022) or even changes in number/date formatting (Cardon et al., 2022) but we exclude these groups due to their sparsity and our focus on evaluating performance." }, { "figure_ref": [], "heading": "A.2 Error Evaluation", "publication_ref": [ "b14", "b33" ], "table_ref": [], "text": "We describe the SALSA error typology, with examples of each type in Table 4. Although despite their sparsity, errors have a far greater impact on fluency and adequacy than individual quality edits (Chen et al., 2023). We refined our definition of errors by focusing on minimizing the amount of error types while retaining the ability to capture the full possibility of simplification ablations. Notably, we specifically exclude a hallucination due to its ambiguous definition in related work (Ji et al., 2023), and instead define our error categories to capture any possible hallucination." }, { "figure_ref": [], "heading": "A.2.1 Conceptual Errors", "publication_ref": [ "b44", "b55", "b48", "b19" ], "table_ref": [], "text": "We identify six types of errors in content, with errors primarily being related to information insertion.\nBad deletion. As the overwhelmingly most common error, a bad deletion removes necessary and relevant content to the main idea of the sentence. As discussed in §A.1.1, the threshold for 'relevancy' is ambiguous. Coreference. More precisely a failure in coreference or anaphora resolution (Maddela et al., 2021), this determines whether an explicit entity reference is removed. This error is only observed on a deletion of information. or generating new information making the sentence contradict itself: EXAMPLE Dextrose adds flavor and texture to dishes, although its consumption is known for negative consequences. Dextrose adds flavor, texture and nutrition to dishes, although its consumption is known for negative consequences.\nFactual Error. We asked annotators to use their commonsense knowledge and limited research to evaluate factuality in edits. Unlike contradiction, these claims introduce information which must be externally verified beyond the sentence context. Although factual content is an established focus for summarization evaluation (Pagnoni et al., 2021;Maynez et al., 2020), adequately retaining information (i.e. minimizing bad deletion) is a far greater concern for simplification (Devaraj et al., 2022)." }, { "figure_ref": [], "heading": "EXAMPLE Hilary Clinton was born in 1947.", "publication_ref": [ "b33" ], "table_ref": [], "text": "Hilary Clinton was born in 1947 outside the United States.\nIn the context of work studying hallucination in LLMs, our contradiction and factual error categories can be interpreted as intrinsic and extrinsic hallucination respectively (Ji et al., 2023). Irrelevant. A sub-type of a hallucination failing to insert information related to the main idea of the sentence, recognizing the threshold for 'relevancy' is ambiguous ( §A. 1.1). For simplicity, we report irrelevancy alongside hallucination, as information insertion is generally a rare technique." }, { "figure_ref": [], "heading": "A.2.2 Syntactic Errors", "publication_ref": [ "b54" ], "table_ref": [], "text": "Because syntactic edits are identified by the impact of information distribution, they do not need a finegrained error typology like conceptual edits, which make a diverse set of modifications. 
We simply observe each type as a failed attempted at their respective transformations. Bad Reorder. Uses the same word-/phrase-level specification as quality reorder. We also observe that phrase-level reorder errors are almost exclusively observed to introduce a discontinuity to the syntax tree structure (Paetzold Specia, 2013). Bad Structure. We manually inspect structural errors according to the same sub-type specification as quality edits ( §B). Bad Sentence Split. Although sentence splitting is rarely rated as unhelpful, simplifications may unnecessarily segment ideas, or interrupt the flow of information." }, { "figure_ref": [], "heading": "A.2.3 Lexical Errors", "publication_ref": [ "b70" ], "table_ref": [], "text": "Unrelated to information change, lexical errors evaluate primitive issues in fluency or wording. Complex Wording. An attempted paraphrase where the exact meaning is retained, but the replacement uses more complex semantics (also referred to as a hyponym, e.g. Stodden and Kallmeyer, 2022)." }, { "figure_ref": [], "heading": "EXAMPLE", "publication_ref": [ "b44", "b67" ], "table_ref": [], "text": "The researchers conducted an investigation.\nThe researchers conducted an assay.\nInformation Rewrite. Some substituted span whose content concerns the same subject, but fails to substitute the wording correctly, either through misrepresenting or falsely interpreting the information. Although similar to a combination of information deletion and information insertion, the edit is still attempting to represent the same content.\nGrammar Error. The edit violates grammatical convention. Past error analysis combines fluency and grammar into the same error type (Maddela et al., 2021) as the two are interrelated. Grammar errors are unique as they can co-occur with other errors, or occur alongside a high quality edit, as sentence fluency is independent from adequacy (Siddharthan, 2014)." }, { "figure_ref": [], "heading": "A.3 Edit Severity / Efficacy Levels", "publication_ref": [], "table_ref": [], "text": "We provide examples of each severity level, which are also included as part of annotator training:" }, { "figure_ref": [], "heading": "EXAMPLE", "publication_ref": [], "table_ref": [], "text": "Severity: 1 -minor Like so many hyped books before it The Midnight Library excited me and gave me pause The Midnight Library excited me and gave me pause\nThe introductory clause 'Like so many hyped books before it,' situates the sentence within the context of 'hyped books.' However, it does not relate to the main idea of the sentence (the author's opinion on 'The Midnight Library')." }, { "figure_ref": [], "heading": "EXAMPLE", "publication_ref": [], "table_ref": [], "text": "Severity: 2 -somewhat Two security flaws, dubbed Meltdown and Spectre by researchers, were made public on 29 January 2018. Two security flaws, dubbed Meltdown and Spectre by researchers, were made public.\nAlthough the sentence retains its core meaning without 'on 29 January 2018', the specific reference of when 'Meltdown' and 'Spectre' were 'made public' is lost." }, { "figure_ref": [], "heading": "EXAMPLE", "publication_ref": [], "table_ref": [], "text": "Severity: 3 -major If glycolysis evolved relatively late, it likely would not be as universal in organisms as it is. It likely would not be as universal in organisms as it is.\nSince the entity 'glycolysis' has been deleted, the coreference corresponding to the subject 'it' is lost." 
}, { "figure_ref": [], "heading": "A.4 Overall simplification score", "publication_ref": [ "b42", "b45", "b69", "b12", "b5", "b36", "b78" ], "table_ref": [], "text": "Similar to MQM (Lommel et al., 2014), we collapse edit annotations into a simplification score to allow for direct system comparison. We calculate the sentence-level score as a weighted sum of edit ratings:\ne∈E exp len(e C ) + len(e S ) len(C) + len(S) • w(e) • r(e)\nwhere S is the simplification of complex sentence C, E is the set of edits, e C and e S are the parts of edit e performed on C and S respectively, w(e) is the edit weight, r(e) is the edit rating (severity / efficacy), and len denotes character length. 4 For weight scheme w(e), we fit a linear regression by considering the sentence-level human ratings gathered in SIMPEVAL 2022 (Maddela et al., 2023) as a gold standard. As the type of simplification depends on the needs of each particular user group (Stajner, 2021), weights may be adjusted according to the simplification domain (Cemri et al., 2022;Basu et al., 2023;Joseph et al., 2023) or use case (Trienes et al., 2022)." }, { "figure_ref": [], "heading": "B Structural Edit Examples", "publication_ref": [ "b3", "b70", "b11", "b9", "b70" ], "table_ref": [ "tab_6" ], "text": "Examples of each structural edit sub-type are listed in Table 5. We find training annotators to label structure change sub-type improved their ability to identify structure changes. We include morphological changes (e.g., tense change) as structure edits since these typically require multiple disconnected edits to perform and impact sentence-level meaning. Additionally, other work (Barancikova and Bojar, 2020), specifically Stodden and Kallmeyer (2022) annotate with a larger array of structural changes, notably including separate directions as distinct categories (e.g. singular → plural and plural → singular) and including change in sentiment and personal/impersonal form. We exclude these types as they almost never occur in the entirety of the ASSET corpus (Cardon et al., 2022). However, 4 We normalize the edit length and use exp to add weight for longer edits. a case study in Italian simplification (Brunato et al., 2022) shows this structural edit distribution may vary when adapted to the needs of other languages. Similarly, German simplification often converts genitive to dative noun cases, a feature not seen in English simplification (Stodden and Kallmeyer, 2022)." }, { "figure_ref": [], "heading": "C Data Collection Details C.1 Simplification Systems", "publication_ref": [ "b47", "b46", "b87", "b59", "b34", "b53", "b45", "b46", "b45", "b76", "b77", "b80", "b15" ], "table_ref": [], "text": "Our main corpus of 700 simplifications are from the following diverse simplification approaches: MUSS (Martin et al., 2022), a BART-large model conditioned on explicit parameter tokens from Martin et al. (2020), fine-tuned on Wiki-Large (Zhang and Lapata, 2017) and mined paraphrase data. MUSS is the SOTA model before GPT-3.5. T5 (Raffel et al., 2020), an encoder-decoder transformer pre-trained on 745 GB of web text. We use T5-3B and T5-11B variants and fine-tune on the aligned Wiki-Auto dataset (Jiang et al., 2020), shown to be higher quality than Wiki-Large. GPT-3.5, a series of GPT-3 models pre-trained on text and code dated before Q4 2021. We use the best available text-davinci-003 model, based on InstructGPT (Ouyang et al., 2022), fine-tuned with human demonstrations and reinforcement learning with human feedback. 
We include both zeroand few-shot (5-shot) generation, using the same prompt setup as SIMPEVAL 2022 (Maddela et al., 2023).\nHumans. We ask two in-house annotators to write simplifications for the 40 newly selected sentences, replicating instructions used in SIMPEVAL 2022 . We average the annotations of both human simplifications for dataset analysis.\nOur test set of 140 simplifications are from recent approaches, including open-source LLMs:\nT5 with ACCESS Tokens, we use the same training setup as our fine-tuned T5 model, but prepend the input with ACCESS control tokens (Martin et al., 2020): character length ratio, dependency tree depth ratio, character-level Levenshtein similarity, and inverse frequency ratio. During inference, we use 0.9 for the length ratio, and 0.75 for the other three control tokens, following the setup in (Maddela et al., 2023).\nAlpaca-7B (Taori et al., 2023), a fine-tuned LLaMA model (Touvron et al., 2023) on 52K GPT-3.5 outputs generated using the Self-Instruct technique (Wang et al., 2023). As we find the prompt used for GPT-3.5 is too complex for Alpaca, we use the following prompt:\n\"Rewrite the following complex sentence in order to make it easier to understand by non-native speakers of English.\"\nVicuna-7B (Chiang et al., 2023), a fine-tuned LLaMA model on 70K publicly shared ChatGPT conversations. As the training data for Vicuna includes prompts that are more diverse and complex than those used by Alpaca, Vicuna can manage longer prompts, but not at the level of GPT-3.5, so we use the following prompt:\n\"Rewrite the following complex sentence in order to make it easier to understand by non-native speakers of English. The final simplified sentence needs to be grammatical, fluent, and retain the main ideas of its original counterpart without altering its meaning.\"\nChatGPT, an optimized chat variant of GPT-3.5, the model we use is gpt-3.5-turbo-0301. GPT-4, a large multimodal model that performs better than GPT-3.5 models. We use the version of gpt-4-0314.\nFor ChatGPT and GPT-4, we use the same prompt as GPT-3.5:\n\"Rewrite the following complex sentence in order to make it easier to understand by non-native speakers of English. You can do so by replacing complex words with simpler synonyms (i.e. paraphrasing), deleting unimportant information (i.e. compression), and/or splitting a long complex sentence into several simpler ones. The final simplified sentence needs to be grammatical, fluent, and retain the main ideas of its original counterpart without altering its meaning.\" Humans. As existing automatic simplification evaluation metrics rely on human references, we include two human-written simplifications to use for metric evaluation, but do not collect annotations on these references." }, { "figure_ref": [], "heading": "C.2 Interpreting Annotator Agreement", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "As the SIMPEVAL challenge dataset contains more edits than past simplification corpora, edit annotation becomes significantly more challenging as multiple groups of edits often overlap and simplifications contain more compression and sentencelevel transformations. Additionally, error-prone systems like MUSS make it challenging to disambiguate error and quality edits. Figure 8 illustrates an example of this disagreement, showing many of the same tokens are annotated, but with different edit spans. 
For example, observe the last clause in the sentence, which performs a rewrite:\nEXAMPLE that the fort stood out for its defenders' heroic resistance. and the defenders of the fort gave their lives to save the city.\nWe see three different, but valid understandings of this phrase: 1. Information was replaced -The information about the defenders' resistance is inherently different then the defenders' giving their lives to save the city and is therefore an add/deletion pair. The left includes all edits, while the right calculates agreement using the underlying constituent spans selected for structure and split edits.\n2. Information was retained, but paraphrased -The phrase heroic resistance being equivalent in meaning to gave their lives. 3. Subject was modified and information was replaced -The subject swap between the subject of the clause being the fort to being the defenders. The rest being an add/deletion pair.\nVarying interpretations of the same edit leads to natural disagreement. However, often a clear annotation exists and is not captured. For example, although we instructed annotators to create separate edits for overlapping syntax and conceptual edits, this occurred inconsistently in practice: EXAMPLE it was during the siege of the city of Elvas Don Luis de Haro attacked the city of Elvas 1. Identified the edit as a structural change, because the noun siege was replaced with a verb, modifying the voice of the sentence 2. Identified a paraphrase, annotating siege as a more complex word than attacked 3. Correctly identified both edits occurred simultaneously\nWe find the largest source of disagreement comes from overlapping edits of multiple types, most often between structural changes and other types, because they often co-occur. Figure 9 demonstrates structural edits explain a significant portion of disagreement. Additionally, because structural edits are a composite edit, the same spans are captured by the structural edits' constituent spans and recalculating agreement using these spans, disagreement instead focuses on whether tokens are substituted.\nWithin individual sentences, we often observe multiple valid interpretations for span labeling, highlighting the inherit ambiguity in the task. Despite this, annotators still successfully communicated edit performance. All three annotators identified both the bad deletion and hallucination errors contained in the sentence. For the full SIMPEVAL dataset, we report error identification agreement in Table 6, finding syntax errors (e.g., bad structure, bad reorder) are far more difficult to identify than content or lexical errors. Particularly, complex wording and grammar errors exhibit both high fre- quency and high agreement, as the definitions of these errors are unambiguous. Broadly, we find that high span-level agreement is not necessary for capturing overall, or even fine-grained sentence-level performance, a clear trade-off exists between the granularity of annotations and expected agreement." }, { "figure_ref": [ "fig_1" ], "heading": "D Further Analysis", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "Here, we report additional findings on the SIM-PEVAL dataset and model performance, alongside observations about edit-level evaluation as a task.\nFigure 11 reports the average edit coverage by each edit operation and error type. We find paraphrases are typically annotated as pairs of a few words, while conceptual edits typically occur on the clause level and are annotated together. 
Surprisingly, structure changes often occurred as a few words: EXAMPLE MUSS ... Corbin has expanded his business to include agritourism, using his farm to host weddings ... ... Corbin's business also offers agritourism and he uses his farm to host weddings ... The edit converts the beginning subordinate clause to a coordinate clause, yet only requires substituting a single word. Errors exhibited a significantly higher variance in size, which may be attributed to their sparsity, as no error except bad deletion occurs in more than 20% of outputs (Table 6). However, error sizes display the same trend as their quality counterparts, with conceptual errors typically being seen on the clause level. We also found single-word conceptual errors such as:\nEXAMPLE Zero-shot GPT-3.5 ... Arroyo released a statement that acted as an informal concession of sorts ... ... Arroyo released a statement that was like a formal concession." }, { "figure_ref": [ "fig_1", "fig_3", "fig_10", "fig_1", "fig_1" ], "heading": "EXAMPLE", "publication_ref": [ "b2", "b11" ], "table_ref": [], "text": "Few-shot GPT-3.5 The sentence included a fine of $400... They imposed a fine of $400... Were less frequent than hallucinating entirely new phrasing or ideas. This may be promising for error detection as it implies error spans are often clausal and occur among many adjacent tokens. Quality and Error Are Interrelated. Figure 10 displays sentence-level scores for our error typology across systems on SIMPEVAL. We find the existence of an error to be a consistent predictor of a lower quality sentence, even in human simplifications. However, we find some errors correlate with a higher score (e.g. bad structure, information rewrite), but this may be attributed to the multiclause complex sentences in SIMPEVAL having a Increased Edits Enables, But Does Not Guarantee Performance. Table 7 reports the mean and variance of sub-scores for the sentence-level SALSA score across each system. Edit-level scoring addresses the frequent evaluation concern that conservative systems may maximize their score by performing a minimal number of safe edits (Alva-Manchego et al., 2021). The qualitatively conservative simplifications of T5 and zero-shot GPT-3.5 often score low because they fail to make many edits. SALSA distinguishes the MUSS simplifications with many successes, but more failures than other systems. We find the extent of sentence editing is not heuristic, but is a prerequisite for high performance and that overall simplification performance is often determined by a small number of high-impact edits.\nSentence Length Impacts Edit Frequency. Previous linguistic annotation of the ASSET corpus (Cardon et al., 2022) reports that the number of modifications to a sentence does not correlate with input size. In Figure 13, we observe the same relationship on ASSET, however -because ASSET only represents simplifications of simpler sentences typically containing a single idea -when we extend the analysis to the more complex SIMPEVAL dataset, we see a clear relationship between the edit distance and the number of transformations in simplifications across all systems. This is also best exemplified by the split edit, which often signifies too many ideas are being contained within a single sentence. Figure 12 demonstrates the proportion of simplifications which exhibit a split across sentence lengths and edit distance. 
While split edits within ASSET were generally low, the much longer SIMPEVAL simplifications almost guaranteed all systems performed a sentence split. These findings highlight that performance measures should be length-agnostic, as to guarantee simplifications which simply contain more transformations due to a longer original sentence length are not arbitrarily rated as higher quality.\nComposite Edits. We report the breakdown of constituent edits in structure and split edits in Figure 15. Split edits typically need to rewrite the conjunction through inserting & deleting discourse tokens, while structure edits are typically performed some syntax transformation to the existing sentence tree, more often requiring substituted or reordered tokens.\nSALSA Test Set. Figure 16 reports the frequency of quality and error edits on the novel SALSA test set systems. While adding control tokens to T5 substantially improves the frequency of edits, we" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Tarek Naous, Nghia T. Le, Fan Bai, and Yang Chen for their helpful feedback on this work. We also thank Marcus Ma, Rachel Choi, Vishnesh J. Ramanathan, Elizabeth Liu, Govind Ramesh, Ayush Panda, Anton Lavrouk, Vinayak Athavale, and Kelly Smith for their help with human annotation. This research is supported in part by the NSF awards IIS-2144493 and IIS-2112633, ODNI and IARPA via the HIATUS program (contract 2022-22072200004). The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." }, { "figure_ref": [], "heading": "Type Description Example", "publication_ref": [], "table_ref": [], "text": "Quality Evaluation Conceptual Elaboration Meaningful and correct information which enumerates the main idea Many volatile organic chemicals, which harm our environment, are increasing in abundance in the lower troposphere. Generalization Removes unnecessary, irrelevant or complicated information Many volatile organic chemicals are increasing in the lower troposphere. (in abundance was removed)" }, { "figure_ref": [], "heading": "Syntax", "publication_ref": [], "table_ref": [], "text": "Word-level Reorder\nOrder of words within a phrase is swapped Many organic volatile chemicals are increasing in abundance in the lower troposphere." }, { "figure_ref": [], "heading": "Componentlevel Reorder", "publication_ref": [], "table_ref": [], "text": "Order of phrases within a sentence is swapped In the lower troposphere, many volatile organic chemicals are increasing in abundance. Sentence Split Independent information converted to two separate sentences.\nMany volatile organic chemicals are increasing. They are found in abundance in the lower troposphere. Structure Change Rewrites voice, tense or structure. See Appendix B for details and sub-types\nThe abundance of many volatile organic chemicals is increasing in the lower troposphere. Lexical Paraphrase Lexical complexity of the phrase decreases, while the meaning is unchanged Many volatile organic chemicals are being seen more in the lower troposphere. 
Trivial Change Adds clarity or removes verbosity, while the lexical complexity and meaning is unchanged Many volatile organic chemicals are currently increasing in abundance in the lower troposphere. GPT-3.5 1.41 1.43 0.57 0.53 0.15 0.39 0.25 0.46 1.80 1.49 2.11 1 Table 7: Mean (µ) and std. deviation (σ) of average sentence-level SALSA sub-scores across systems. Human simplification may be interpreted as highly simplified (µ = 2.04) and highly diverse (σ = 2.16)." }, { "figure_ref": [], "heading": "Error Evaluation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Conceptual", "publication_ref": [ "b55" ], "table_ref": [], "text": "a far greater number of positive edits when these corresponding errors occur. Broadly, we observe an inverse relationship between error and quality.\nAs the error score increases (a function of the severity, frequency and size of errors), the quality must decrease. find it still underperforms both MUSS and LLMs. Additionally, T5-11B makes a surprising increase in error frequency relative to the increase in the number of edits it performs relative to T5-3B. Language models demonstrate a smooth increase in edits, with the exception of GPT-4 making significantly less conceptual edits. Manual analysis reveals its conceptual edits are often sentence-level operations, which are not reflected in edit counts.\nThe LLaMA-based Alpaca and Vicuna demonstrate surprisingly strong performance despite their relatively small size and training setup, even outperforming the fine-tuned simplification models.\nSALSA Dataset Statistics. We report full statistics on all 840 simplifications in Table 8. Similar to FRANK (Pagnoni et al., 2021), we asked annotators to note edits that could not be annotated, and we observe less than 0.5% of edits were not captured by one of our edit types. We consider the SALSA framework complete." }, { "figure_ref": [], "heading": "E Further Word-level QE Results", "publication_ref": [], "table_ref": [], "text": "We include test set word-level F1 score on words in the original sentence, simplified sentence, and both sentences (same as Table 3) in Table 9. In the original sentence, only deletion edits are labeled. Thus, the performance in the original sentence column indicates the model's ability to identify quality or error deletion edits. The best-performing method, Ec-One, achieves over 50% in both quality and error F1. For the simplified sentence, which contains substitution and insertion edits, the model delivers better quality F1 but experiences a drop in error F1. This could be due to the higher proportion of error edits in deletion compared to substitution and insertion. In addition, the edit classification approach significantly improves the error F1 on the simplified sentence, compared to the tagging approaches, which reflects that tagging methods fail to capture multiple types of edits and those spanning both sentences like substitutions. " }, { "figure_ref": [], "heading": "F Implementation Details", "publication_ref": [ "b45", "b34", "b43", "b47" ], "table_ref": [], "text": "F.1 Generating Simplifications ( §3.1)\nFor all prompted models, we follow the hyperparameters of SIMPEVAL 2022 (Maddela et al., 2023), using temperature=1.0 and top-p=0.95. For all T5 variants, we train them on the Wiki-Auto corpus (Jiang et al., 2020) using 8 A40 GPUs for 8 epochs with a batch size of 64. We use a learning rate of 3e-4 and AdamW (Loshchilov and Hutter, 2019) as the optimizer. 
For MUSS, we replicate the original setup (Martin et al., 2022). We use beam search with a beam size of 10 for these fine-tuned models." }, { "figure_ref": [], "heading": "F.2 Automatic Metrics ( §5)", "publication_ref": [ "b62", "b45" ], "table_ref": [], "text": "Baseline Automatic Metrics.\nWe use RoBERTa-large as the base model for BERTSCORE and the best available wmt21-comet-mqm as COMET-MQM. LENS-SALSA. Our implementation is based on the reference-less COMETKIWI metric for machine translation (Rei et al., 2022). We modify their task setup of predicting binary quality labels for each output word ŷ_i ∈ {OK, BAD} to a regression task using labels ŷ_i ∈ [-3, 3], corresponding to each word rating in the SALSA annotations, as we find it performs better than using binary or three-class labels in our preliminary study. Our regression task optimizes an MSE loss on the word rating objective, rather than a cross-entropy loss. The training objective can be formalized as L = λ_s L_sent + λ_w L_word, where λ_s and λ_w weight the sentence- and word-level losses. We experimented with custom weighting for edit ratings, but did not find performance improvements. For fine-tuning, we set λ_w = 0.9.\nThe COMETKIWI design aggregates hidden states using a scalar mix module, and uses two feed-forward networks for sentence- and word-level training. For pre-training, we optimize a RoBERTa-large model on the sentence-level SIMPEVAL training data used to train LENS (Maddela et al., 2023), with the training setup using only a single MSE loss to predict the sentence-level score (i.e., λ_s = 1, λ_w = 0). We follow COMETKIWI and freeze parameter updates for the RoBERTa encoder for the first epoch and use a learning rate of 1e-5 and 3e-5 for pre-training and fine-tuning respectively. We pre-train and fine-tune for 5 epochs, using the model with the highest validation set performance. We report the corresponding validation performance in Table 10." }, { "figure_ref": [], "heading": "F.3 Edit Classification ( §6).", "publication_ref": [ "b50", "b60", "b41" ], "table_ref": [], "text": "All experiments are conducted using 2 A40 GPUs. We use the AdamW optimizer with a weight decay of 0.01, and implement our models using the Hugging Face Transformers library. Learning rates are swept over 1e-5, 2e-5, 5e-5, and 8e-5 for each method. Each run is trained for eight epochs with a batch size of 32. This results in training times of less than five minutes per run for the tagging methods and less than 20 minutes per run for the edit classification methods. We evaluate on the validation set at each training step and use the model that achieved the highest validation performance on the test set.\nFor the word alignment model used in the two-stage approach, we adopt the QA-based word aligner (Nagata et al., 2020), which formulates the task in a SQuAD style (Rajpurkar et al., 2018). We use RoBERTa-Large as the base model. We first pre-train it on the monolingual word alignment datasets MultiMWA-Wiki and MultiMWA-Newsela from Lan et al. (2021), and then fine-tune it on the SALSA annotations in the training set. During both pre-training and fine-tuning stages, we perform a learning rate sweep over {1e-5, 2e-5, 5e-5, 8e-5} and train for 5 epochs, saving a checkpoint at the end of every epoch. The highest evaluated checkpoint (pre-train for 2 epochs and fine-tune for 2 epochs) is selected for testing, achieving 81.03 F1 on the validation set.\nOn a side note, for a word that is tokenized into multiple tokens, we use its first token for prediction."
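As a companion to the F.2 description, the following is a minimal sketch of the dual-objective fine-tuning loss in PyTorch-style code; the tensor shapes, masking, and function name are our illustrative assumptions, and the released implementation follows the COMETKIWI codebase rather than this exact code.

```python
# Minimal sketch of the LENS-SALSA dual-objective fine-tuning loss described in F.2.
# Shapes, masking, and the function name are illustrative assumptions, not the released code.
import torch
import torch.nn.functional as F


def lens_salsa_loss(sent_pred, sent_gold, word_pred, word_gold, word_mask,
                    lambda_s=1.0, lambda_w=0.9):
    """Weighted sum of sentence-level and word-level MSE objectives.

    sent_pred/sent_gold: (batch,) predicted and target sentence scores (targets from LENS).
    word_pred/word_gold: (batch, seq_len) predicted and target word ratings in [-3, 3].
    word_mask: (batch, seq_len) 1.0 for real token positions, 0.0 for padding.
    """
    sent_loss = F.mse_loss(sent_pred, sent_gold)
    word_sq_err = (word_pred - word_gold) ** 2 * word_mask
    word_loss = word_sq_err.sum() / word_mask.sum().clamp(min=1)
    return lambda_s * sent_loss + lambda_w * word_loss


# Pre-training on SIMPEVAL sentence ratings corresponds to lambda_s=1.0, lambda_w=0.0;
# fine-tuning on SALSA annotations uses lambda_w=0.9 as reported above.
```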
}, { "figure_ref": [], "heading": "G Annotation Tutorial", "publication_ref": [], "table_ref": [], "text": "We include screenshots to highlight the diversity of exercises and interactive elements in our detailed interface tutorial. " } ]
Large language models (e.g., GPT-4) are uniquely capable of producing highly rated text simplification, yet current human evaluation methods fail to provide a clear understanding of systems' specific strengths and weaknesses. To address this limitation, we introduce SALSA, an edit-based human annotation framework that enables holistic and fine-grained text simplification evaluation. We develop twenty one linguistically grounded edit types, covering the full spectrum of success and failure across dimensions of conceptual, syntactic and lexical simplicity. Using SALSA, we collect 19K edit annotations on 840 simplifications, revealing discrepancies in the distribution of simplification strategies performed by fine-tuned models, prompted LLMs and humans, and find GPT-3.5 performs more quality edits than humans, but still exhibits frequent errors. Using our finegrained annotations, we develop LENS-SALSA, a reference-free automatic simplification metric, trained to predict sentence-and word-level quality simultaneously. Additionally, we introduce word-level quality estimation for simplification and report promising baseline results. Our data, new metric, and annotation toolkit are available at https://salsa-eval.com.Zero-shot GPT-3.
Dancing Between Success and Failure: Edit-level Simplification Evaluation using SALSA
[ { "figure_caption": "Figure 1 :1Figure 1: Simplification generated by GPT-4. Our editlevel SALSA reveals LLMs succeed across many edit types, but often fail to paraphrase and generalize.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The SALSA annotation process consists of (1) selecting edits, (2) identifying information change, (3) classifying edit type and (4) rating efficacy/severity.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The multi-stage SALSA edit evaluation framework. Spans are classified into twenty one success and failure types (trivial change counts as one type) using the interface shown in Figure 2.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Successful edits per-model, organized by edit type. MUSS outperforms fine-tuned T5 but fails to capture more complex simplification techniques. Compared to GPT-3.5, human written simplifications have more generalization , a similar distribution of syntax edits, and slightly less paraphrasing .", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Success and failure edits on simplifications by five recent instruction fine-tuned language models.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "EXAMPLEHerbert Spencer's book makes the first... His book makes the first. . . Repetition. Some trivially additional information which simply repeats knowledge already previously contained in the candidate sentence. EXAMPLE ... the New York City Police Department is a law enforcement agency ... ... the New York City Police Department is a police department ... Despite successfully paraphrasing, police department, simply copies content from earlier in the sentence, instead of generating unique information. Contradiction. A negation of the meaning of the original sentence. This notably includes modifying an existing phrase to contradict the original sentence: EXAMPLE ... the Watergate burglars were convicted ... ... the Watergate burglars were not convicted ...", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 :Figure 9 :89Figure 8: Edit selection between three annotators on a MUSS simplification. For complex examples, multiple valid interpretations for span labeling may exist, however we find annotator's overall judgements are consistent.", "figure_data": "", "figure_id": "fig_8", "figure_label": "89", "figure_type": "figure" }, { "figure_caption": "Figure 10 :Figure 11 :1011Figure 10: Average sentence-level score across error sentences for each system.", "figure_data": "", "figure_id": "fig_9", "figure_label": "1011", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure12: Edit distance and number of annotated edits for 300 randomly sampled sentences from ASSET and SIMPEVAL. While past work found no relationship, by extending ASSET to more complex sentences we see a clear correlation arise.", "figure_data": "", "figure_id": "fig_10", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Pearson correlation between automatic metrics and SALSA sub-scores ( §A.4) on the SALSA test set. All reference-based metrics use two human-written references. 
Best; Second Best.", "figure_data": "QualityLexical Syntax Conceptual 0.043 0.149 0.097 0.038 0.144 0.202 -0.167 0.126 0.025 0.120 0.407 0.443 0.013 0.204 0.147 0.122 0.306 0.356ErrorLexical Syntax Conceptual 0.047 0.150 0.279 0.228 0.207 0.107 -0.147 -0.026 -0.093 -0.068 -0.041 0.054 -0.104 -0.013 -0.043 -0.017 0.019 0.086All Error-0.121 0.067 0.117 0.127 0.161 0.169AllAll Quality -0.095 0.179 0.027 0.074 0.336 0.459All Edits-0.116 0.170 0.056 0.092 0.334 0.446", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Word-level F1 scores of different methods on SALSA test set. Oracle uses annotated edit information.", "figure_data": "MethodQuality ErrorOkAverageEnd-to-endTag67.0028.2492.8862.71Tag-ML70.7330.0693.0964.62Two-stage (use word aligner to get edit information)Tag-EI69.0930.3793.0464.17Ec-Sep64.8736.1591.5664.20Ec-One68.7739.5091.9166.73Oracle (Ec-One)88.3169.4498.3585.47", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Examples of structural modification sub-types used for annotation.", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Fleiss kappa error identification agreement measured per-sentence alongside error frequencies. As errors were far more rare, we observe a strong relationship between frequency and expected agreement.", "figure_data": "Fleiss kappa (κ) 2 ⁄3 Agree% % sentencesBad Deletion0.516435Complex Wording0.263220Information Rewrite0.272610Grammar Error0.171810Bad Structure0.02610Bad Reorder0.14199Irrelevant0.22268Bad Split0.13174Repetition0.33304Contradiction0.19251Coreference000", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
David Heineman; Yao Dou; Mounica Maddela; Wei Xu
[ { "authors": "Fernando Alva-Manchego; Louis Martin; Antoine Bordes; Carolina Scarton; Benoît Sagot; Lucia Specia", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "ASSET: A dataset for tuning and evaluation of sentence simplification models with multiple rewriting transformations", "year": "2020" }, { "authors": "Fernando Alva-Manchego; Carolina Scarton; Lucia Specia", "journal": "Computational Linguistics", "ref_id": "b1", "title": "Data-driven sentence simplification: Survey and benchmark", "year": "2020" }, { "authors": "Fernando Alva-Manchego; Carolina Scarton; Lucia Specia", "journal": "Computational Linguistics", "ref_id": "b2", "title": "The (un)suitability of automatic evaluation metrics for text simplification", "year": "2021" }, { "authors": "Petra Barancikova; Ondřej Bojar", "journal": "", "ref_id": "b3", "title": "COSTRA", "year": "2020" }, { "authors": "", "journal": "European Language Resources Association", "ref_id": "b4", "title": "0: A dataset of complex sentence transformations", "year": null }, { "authors": "Chandrayee Basu; Rosni Vasu; Michihiro Yasunaga; Qian Yang", "journal": "", "ref_id": "b5", "title": "Med-easi: Finely annotated dataset and models for controllable simplification of medical texts", "year": "2023" }, { "authors": "Prasenjit Basu; Santanu Pal; Sudip Kumar; Naskar ", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Keep it or not: Word level quality estimation for post-editing", "year": "2018" }, { "authors": "Eleftheria Briakou; Sweta Agrawal; Ke Zhang; Joel Tetreault; Marine Carpuat", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "A review of human evaluation for style transfer", "year": "2021" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b8", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Dominique Brunato; Felice Dell'orletta; Giulia Venturi", "journal": "Frontiers in Psychology", "ref_id": "b9", "title": "Linguistically-based comparison of different approaches to building corpora for text simplification: A case study on Italian", "year": "2022" }, { "authors": "Meng Cao; Yue Dong; Jackie Cheung", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Hallucinated but factual! 
inspecting the factuality of hallucinations in abstractive summarization", "year": "2022" }, { "authors": "Rémi Cardon; Adrien Bibal; Rodrigo Wilkens; David Alfter; Magali Norré; Adeline Müller; Watrin Patrick; Thomas François", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Linguistic corpus annotation for automatic text simplification evaluation", "year": "2022" }, { "authors": "Mert Cemri; Tolga Çukur; Aykut Koç", "journal": "", "ref_id": "b12", "title": "Unsupervised simplification of legal texts", "year": "2022" }, { "authors": "R Chandrasekar; Christine Doran; B Srinivas", "journal": "", "ref_id": "b13", "title": "Motivations and methods for text simplification", "year": "1996" }, { "authors": "Jiangjie Chen; Rui Xu; Wenxuan Zeng; Changzhi Sun; Lei Li; Yanghua Xiao", "journal": "AAAI Press", "ref_id": "b14", "title": "Converge to the truth: Factual error correction via iterative constrained editing", "year": "2023" }, { "authors": "Wei-Lin Chiang; Zhuohan Li; Zi Lin; Ying Sheng; Zhanghao Wu; Hao Zhang; Lianmin Zheng; Siyuan Zhuang; Yonghao Zhuang; Joseph E Gonzalez; Ion Stoica; Eric P Xing", "journal": "", "ref_id": "b15", "title": "Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2023" }, { "authors": "Leshem Choshen; Omri Abend", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Inherent biases in reference-based evaluation for grammatical error correction", "year": "2018" }, { "authors": "M Oscar; Itziar Cumbicus-Pineda; Aitor Gonzalez-Dios; Soroa", "journal": "", "ref_id": "b17", "title": "Linguistic capabilities for a checklist-based evaluation in automatic text simplification", "year": "2021" }, { "authors": "M Oscar; Itziar Cumbicus-Pineda; Aitor Gonzalez-Dios; Soroa", "journal": "INCOMA Ltd", "ref_id": "b18", "title": "A syntax-aware edit-based system for text simplification", "year": "2021" }, { "authors": "Ashwin Devaraj; William Sheffield; Byron Wallace; Junyi Jessy Li", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Evaluating factuality in text simplification", "year": "2022" }, { "authors": "Yue Dong; Zichao Li; Mehdi Rezagholizadeh; Jackie Chi; Kit Cheung", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing", "year": "2019" }, { "authors": "Yao Dou; Maxwell Forbes; Rik Koncel-Kedziorski; Noah A Smith; Yejin Choi", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Is GPT-3 text indistinguishable from human text? 
scarecrow: A framework for scrutinizing machine text", "year": "2022" }, { "authors": "Mark Dras", "journal": "", "ref_id": "b22", "title": "Tree adjoining grammar and the reluctant paraphrasing of text", "year": "1999" }, { "authors": "Wanyu Du; Vipul Raheja; Dhruv Kumar; Myung Zae; Melissa Kim; Dongyeop Lopez; Kang", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Understanding iterative revision from human-written text", "year": "2022" }, { "authors": "Markus Freitag; George Foster; David Grangier; Viresh Ratnakar; Qijun Tan; Wolfgang Macherey", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b24", "title": "Experts, errors, and context: A large-scale study of human evaluation for machine translation", "year": "2021" }, { "authors": "Aparna Garimella; Abhilasha Sancheti; Vinay Aggarwal; Ananya Ganesh; Niyati Chhaya; Nandakishore Kambhatla", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Text simplification for legal domain: Insights and challenges", "year": "2022" }, { "authors": "Sian Gooding", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "On the ethical considerations of text simplification", "year": "2022" }, { "authors": "Sian Gooding; Manuel Tragut", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "One size does not fit all: The case for personalised word complexity models", "year": "2022" }, { "authors": "Tanya Goyal; Junyi ; Jessy Li; Greg Durrett", "journal": "", "ref_id": "b28", "title": "News summarization and evaluation in the era of gpt-3", "year": "2022" }, { "authors": "Arnav Gudibande; Eric Wallace; Charles Burton Snell; Xinyang Geng; P Hao Liu; Sergey Abbeel; Dawn Levine; Song", "journal": "", "ref_id": "b29", "title": "The false promise of imitating proprietary llms", "year": "2023" }, { "authors": "B Tatsunori; Hugh Hashimoto; Percy Zhang; Liang", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Unifying human and statistical evaluation for natural language generation", "year": "2019" }, { "authors": "David Heineman; Yao Dou; Wei Xu", "journal": "", "ref_id": "b31", "title": "Thresh: A unified, customizable and deployable platform for fine-grained text evaluation", "year": "2023" }, { "authors": "T Jaeger; Roger Levy", "journal": "Advances in neural information processing systems", "ref_id": "b32", "title": "Speakers optimize information density through syntactic reduction", "year": "2006" }, { "authors": "Ziwei Ji; Nayeon Lee; Rita Frieske; Tiezheng Yu; Dan Su; Yan Xu; Etsuko Ishii; Ye ; Jin Bang; Andrea Madotto; Pascale Fung", "journal": "ACM Comput. 
Surv", "ref_id": "b33", "title": "Survey of hallucination in natural language generation", "year": "2023" }, { "authors": "Chao Jiang; Mounica Maddela; Wuwei Lan; Yang Zhong; Wei Xu", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Neural CRF model for sentence alignment in text simplification", "year": "2020" }, { "authors": "Xiaotong Jiang; Zhongqing Wang; Guodong Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Semantic simplification for sentiment classification", "year": "2022" }, { "authors": "Sebastian Joseph; Kathryn Kazanas; Keziah Reina; J Vishnesh; Wei Ramanathan; Byron C Xu; Junyi Jessy Wallace; Li", "journal": "", "ref_id": "b36", "title": "Multilingual simplification of medical texts", "year": "2023" }, { "authors": "Klaus Krippendorff", "journal": "", "ref_id": "b37", "title": "Content analysis: An introduction to its methodology", "year": "2018" }, { "authors": "Reno Kriz; Marianna Apidianaki; Chris Callison-Burch", "journal": "", "ref_id": "b38", "title": "Simple-QE: Better automatic quality estimation for text simplification", "year": "2020" }, { "authors": "Dhruv Kumar; Lili Mou; Lukasz Golab; Olga Vechtomova", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Iterative edit-based unsupervised sentence simplification", "year": "2020" }, { "authors": "Philippe Laban; Tobias Schnabel; Paul Bennett; Marti A Hearst", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Keep it simple: Unsupervised simplification of multi-paragraph text", "year": "2021" }, { "authors": "Wuwei Lan; Chao Jiang; Wei Xu", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Neural semi-Markov CRF for monolingual word alignment", "year": "2021" }, { "authors": "Arle Lommel; Hans Uszkoreit; Aljoscha Burchardt", "journal": "Revista Tradumàtica: tecnologies de la traducció", "ref_id": "b42", "title": "Multidimensional quality metrics (MQM): A framework for declaring and describing translation quality metrics", "year": "2014" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b43", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "Mounica Maddela; Fernando Alva-Manchego; Wei Xu", "journal": "", "ref_id": "b44", "title": "Controllable text simplification with explicit paraphrasing", "year": "2021" }, { "authors": "Mounica Maddela; Yao Dou; David Heineman; Wei Xu", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "LENS: A learnable evaluation metric for text simplification", "year": "2023" }, { "authors": "Louis Martin; Éric De La Clergerie; Benoît Sagot; Antoine Bordes", "journal": "European Language Resources Association", "ref_id": "b46", "title": "Controllable sentence simplification", "year": "2020" }, { "authors": "Louis Martin; Angela Fan; Éric De La Clergerie; Antoine Bordes; Benoît Sagot", "journal": "European Language Resources Association", "ref_id": "b47", "title": "MUSS: Multilingual unsupervised sentence simplification by mining paraphrases", "year": "2022" }, { "authors": "Joshua Maynez; Shashi Narayan; Bernd Bohnet; Ryan Mcdonald", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "On faithfulness and factuality in abstractive summarization", "year": "2020" }, { "authors": "Clara Meister; Ryan Cotterell; Tim Vieira", "journal": "Association for Computational Linguistics", "ref_id": "b49", "title": "If 
beam search is the answer, what was the question", "year": "2020" }, { "authors": "Masaaki Nagata; Katsuki Chousa; Masaaki Nishino", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "A supervised word alignment method based on cross-language span prediction using multilingual BERT", "year": "2020" }, { "authors": " Openai", "journal": "", "ref_id": "b51", "title": "ChatGPT: Optimizing language models for dialogue", "year": "2022" }, { "authors": " Openai", "journal": "", "ref_id": "b52", "title": "", "year": "2023" }, { "authors": "Long Ouyang; Jeff Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "", "ref_id": "b53", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Gustavo H Paetzold; Lucia Specia", "journal": "", "ref_id": "b54", "title": "Text simplification as tree transduction", "year": "2013" }, { "authors": "Artidoro Pagnoni; Vidhisha Balachandran; Yulia Tsvetkov", "journal": "", "ref_id": "b55", "title": "Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics", "year": "2021" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b56", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Fabio Petroni; Tim Rocktäschel; Sebastian Riedel; Patrick Lewis; Anton Bakhtin; Yuxiang Wu; Alexander Miller", "journal": "Association for Computational Linguistics", "ref_id": "b57", "title": "Language models as knowledge bases?", "year": "2019" }, { "authors": "Jipeng Qiang; Yun Li; Yi Zhu; Yunhao Yuan; Xindong Wu", "journal": "", "ref_id": "b58", "title": "Lexical simplification with pretrained encoders", "year": "2020" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. 
Res", "ref_id": "b59", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Pranav Rajpurkar; Robin Jia; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b60", "title": "Know what you don't know: Unanswerable questions for SQuAD", "year": "2018" }, { "authors": "Ricardo Rei; Craig Stewart; Ana C Farinha; Alon Lavie", "journal": "Association for Computational Linguistics", "ref_id": "b61", "title": "COMET: A neural framework for MT evaluation", "year": "2020" }, { "authors": "Ricardo Rei; Marcos Treviso; M Nuno; Chrysoula Guerreiro; Ana C Zerva; Christine Farinha; Maroti; G C José; Taisiya De Souza; Duarte Glushkova; Luisa Alves; Alon Coheur; Lavie; F T André; Martins", "journal": "Association for Computational Linguistics", "ref_id": "b62", "title": "CometKiwi: IST-unbabel 2022 submission for the quality estimation shared task", "year": "2022" }, { "authors": "Michael Ryan; Tarek Naous; Wei Xu", "journal": "Association for Computational Linguistics", "ref_id": "b63", "title": "Revisiting non-English text simplification: A unified multilingual benchmark", "year": "2023" }, { "authors": "Carolina Scarton; Alessio Palmero Aprosio; Sara Tonelli; Tamara Martín Wanton; Lucia Specia", "journal": "Association for Computational Linguistics", "ref_id": "b64", "title": "MUSST: A multilingual syntactic simplification tool", "year": "2017" }, { "authors": "Boaz Shmueli; Jan Fell; Soumya Ray; Lun-Wei Ku", "journal": "Association for Computational Linguistics", "ref_id": "b65", "title": "Beyond fair pay: Ethical implications of NLP crowdsourcing", "year": "2021" }, { "authors": "Advaith Siddharthan", "journal": "Research on Language and Computation", "ref_id": "b66", "title": "Syntactic simplification and text cohesion", "year": "2006" }, { "authors": "Advaith Siddharthan", "journal": "ITL-International Journal of Applied Linguistics", "ref_id": "b67", "title": "A survey of research on text simplification", "year": "2014" }, { "authors": "Sanja Štajner", "journal": "", "ref_id": "b68", "title": "New data-driven approaches to text simplification", "year": "2016" }, { "authors": "Sanja Stajner", "journal": "Association for Computational Linguistics", "ref_id": "b69", "title": "Automatic text simplification for social good: Progress and challenges", "year": "2021" }, { "authors": "Regina Stodden; Laura Kallmeyer", "journal": "Association for Computational Linguistics", "ref_id": "b70", "title": "TS-ANNO: An annotation tool to build, annotate and evaluate text simplification corpora", "year": "2022" }, { "authors": "Elior Sulem; Omri Abend; Ari Rappoport", "journal": "Association for Computational Linguistics", "ref_id": "b71", "title": "BLEU is not suitable for the evaluation of text simplification", "year": "2018" }, { "authors": "Elior Sulem; Omri Abend; Ari Rappoport", "journal": "Association for Computational Linguistics", "ref_id": "b72", "title": "Semantic structural evaluation for text simplification", "year": "2018" }, { "authors": "Elior Sulem; Omri Abend; Ari Rappoport", "journal": "Association for Computational Linguistics", "ref_id": "b73", "title": "Simple and effective text simplification using semantic and neural methods", "year": "2018" }, { "authors": "Renliang Sun; Jin Hanqi; Xiaojun Wan", "journal": "Association for Computational Linguistics", "ref_id": "b74", "title": "Document-level text simplification: Dataset, criteria and baseline", "year": "2021" }, { "authors": "Renliang Sun; 
Zhe Lin; Xiaojun Wan", "journal": "International Committee on Computational Linguistics", "ref_id": "b75", "title": "On the helpfulness of document context to sentence simplification", "year": "2020" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b76", "title": "Stanford Alpaca: An instruction-following LLaMA model", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b77", "title": "LLaMA: Open and efficient foundation language models", "year": "2023" }, { "authors": "Jan Trienes; Jörg Schlötterer; Hans-Ulrich Schildhaus; Christin Seifert", "journal": "Association for Computational Linguistics", "ref_id": "b78", "title": "Patient-friendly clinical notes: Towards a new text simplification dataset", "year": "2022" }, { "authors": "Yu Wan; Dayiheng Liu; Baosong Yang; Haibo Zhang; Boxing Chen; Derek Wong; Lidia Chao", "journal": "Association for Computational Linguistics", "ref_id": "b79", "title": "UniTE: Unified translation evaluation", "year": "2022" }, { "authors": "Yizhong Wang; Yeganeh Kordi; Swaroop Mishra; Alisa Liu; Noah A Smith; Daniel Khashabi; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b80", "title": "Self-instruct: Aligning language models with self-generated instructions", "year": "2023" }, { "authors": "Zeqiu Wu; Yushi Hu; Weijia Shi; Nouha Dziri; Alane Suhr; Prithviraj Ammanabrolu; Noah A Smith; Mari Ostendorf; Hannaneh Hajishirzi", "journal": "", "ref_id": "b81", "title": "Fine-grained human feedback gives better rewards for language model training", "year": "2023" }, { "authors": "Wei Xu; Courtney Napoles; Ellie Pavlick; Quanze Chen; Chris Callison-Burch", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b82", "title": "Optimizing statistical machine translation for text simplification", "year": "2016" }, { "authors": "Wei Xu; Alan Ritter; Bill Dolan; Ralph Grishman; Colin Cherry", "journal": "", "ref_id": "b83", "title": "Paraphrasing for style", "year": "2012" }, { "authors": "Daichi Yamaguchi; Rei Miyata; Sayuka Shimada; Satoshi Sato", "journal": "Association for Computational Linguistics", "ref_id": "b84", "title": "Gauging the gap between human and machine text simplification through analytical evaluation of simplification strategies and errors", "year": "2023" }, { "authors": "Chrysoula Zerva; Frédéric Blain; Ricardo Rei; Piyawat Lertvittayakumjorn; G C José; Steffen Souza; Diptesh Eger; Duarte Kanojia; Constantin Alves; Marina Orȃsan; Fomicheva; F T André; Lucia Martins; Specia", "journal": "Association for Computational Linguistics", "ref_id": "b85", "title": "Findings of the WMT 2022 shared task on quality estimation", "year": "2022" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b86", "title": "BERTScore: Evaluating text generation with BERT", "year": "2020" }, { "authors": "Xingxing Zhang; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b87", "title": "Sentence simplification with deep reinforcement learning", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 85.81, 640.09, 113, 8.12 ], "formula_id": "formula_0", "formula_text": "EXAMPLE (structure change)" }, { "formula_coordinates": [ 3, 321.09, 379.28, 121.56, 8.12 ], "formula_id": "formula_1", "formula_text": "EXAMPLE (component reorder)" }, { "formula_coordinates": [ 3, 321.09, 546.78, 115.67, 8.12 ], "formula_id": "formula_2", "formula_text": "EXAMPLE (coreference error)" }, { "formula_coordinates": [ 7, 365.08, 70.5, 147.05, 22.73 ], "formula_id": "formula_3", "formula_text": "B L E U S A R I B E R T S C O R E C O M E T -M Q M L E N S L E N S -S A L S A" }, { "formula_coordinates": [ 19, 318.98, 678.54, 192.6, 29.64 ], "formula_id": "formula_4", "formula_text": "e∈E exp len(e C ) + len(e S ) len(C) + len(S) • w(e) • r(e)" } ]
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b7", "b13", "b14", "b15", "b16", "b11", "b12", "b15", "b16", "b14", "b8", "b17", "b18", "b19", "b7", "b10", "b11", "b13", "b11", "b12", "b20", "b21", "b22" ], "table_ref": [], "text": "F EATURE extraction is one of the core tasks in computer vision and pattern recognition. An ideal feature should be invariant to various spatial deformations caused by imaging geometry, which ensures that it captures intrinsic information of an image. For many practical applications, such as object recognition, image classification, and patch matching, twodimensional rotation is the most common spatial transformation, and thus it is essential to achieve rotational invariance of image features in these cases.\nIn past decades, numerous hand-crafted features that are invariant to image rotation have been developed [1], [2], [3], [4], [5], [6]. Since 2012, deep neural networks, especially convolutional neural networks (CNNs), have been proven to be more effective than most hand-crafted features in computer vision tasks. Nonetheless, conventional convolution operations are not rotation-invariant. Actually, even if an image is slightly rotated, CNNs may not be able to recognize it correctly. To address this, a direct approach is to train a CNN with rotated training samples, i.e. data augmentation. However, it has obvious drawbacks, including increasing training time and costs, learning some redundant weights, and further reducing the interpretability of CNNs [7], [8].\nHence, recent research has aimed to incorporate rotational invariance into convolutional operations and design new network architectures. Based on various methods, such as orientation assignment, polar/log-polar transform, steerable filters, and multi-orientation feature extraction, researchers successively propose Spatial Transformer Network (STN) [9], Polar Transformer Network [10], General E(2)-Equivariant Steerable CNN (E(2)-CNN) [11], Group Equivariant Convolutional Network (G-CNN) [12], Rotation Equivariant Vector Field Network (RotEqNet) [13], Harmonic Network (H-Net) [8], Bessel CNN (B-CNN) [14], Rotation-Invariant Coordinate CNN (RIC-CNN) [15] and so on [16], [17].\nHowever, existing rotation-invariant convolution operations have three major limitations: 1) Most methods are invariant to specific rotation angles rather than arbitrary angles [12], [13], [16], [17]. Some of them, like RIC-CNN [15], are only invariant to continuous rotations around image center; 2) Some methods require extra trainable parameters and rely on data augmentation when training [9], [18], [19], [20]; 3) Many rotation-invariant convolution operations are more complex and not easily replaceable with traditional convolution in common CNN models (like VGG and ResNet) [8], [11], [12], [14]. Moreover, some papers still use data augmentation to train their proposed rotation-invariant CNN models, making it difficult to determine whether the new architectures or only rotated training samples enhanced CNN models' rotational invariance [12], [13].\nThe goal of this letter is to address these limitations in some extend. Our contributions can be summarized as follows:\n• Inspired by some hand-crafted features of texture images [21], [22], [23] " }, { "figure_ref": [], "heading": "A. 
Sorted Convolution Operation", "publication_ref": [ "b23" ], "table_ref": [], "text": "For an input F (X) with the size of h × w, a conventional convolutional operation Φ C acting on a given point\nX 0 ∈ {1, 2, • • • , h} × {1, 2, • • • , w} can be expressed as below Φ C (X 0 , F (X)) = P ∈S W (P ) • F (X 0 + P )(1)\nHere, W is a (2n + 1) × (2n + 1) learnable kernel, n is a non-negative integer, and P enumerates all points on the square grid\nS = {-n, -n + 1, • • • , n} × {-n, -n + 1, • • • , n}. For example, when W is a 3 × 3 kernel, we have S = {(-1, -1), (-1, 0), • • • ,(0, 1), (1, 1)\n}, which contains 9 points. Our paper only considers odd-sized W because the shift issue occurs in even-sized ones [24].\nAssuming that G(Y ) is a rotated version of F (X), that is, G(Y ) = F (R -θ Y ),\nwhere R -θ is a 2 × 2 rotation matrix and θ represents the rotation angle. Let Y 0 be the corresponding point of X 0 , then the convolution operation at Y 0 is\nΦ C (Y 0 , G(Y )) = P ∈S W (P ) • G(Y 0 + P )(2)\nSince X 0 = R -θ Y 0 , the following relation can be obtained\nG(Y 0 +P ) = F (R -θ (Y 0 +P )) = F (X 0 +R -θ P ) ̸ = F (X 0 +P ) (3) By substituting (3) into (2), we can find Φ C (Y 0 , G(Y )) ̸ = Φ C (X 0 , F (X))(4)\nThus, the conventional convolutional operation Φ C is not invariant to two-dimensional rotation.\nAssuming that (2n + 1) • (2n + 1) points R -θ P still belong to the square grid S, that is, after rotation, all R -θ P and P completely overlap. In this case, although G(Y 0 + P ) ̸ = F (X 0 + P ), we have\n{G(Y 0 + P )} P ∈S = {F (X 0 + R -θ P )} P ∈S = {F (X 0 + P )} P ∈S(5)\nmeaning that the input values used for the convolution operation at points X 0 and Y 0 are the same, but with different arrangements. Obviously, if the (2n + 1) • (2n + 1) values in {G(Y 0 + P )} P ∈S and {F (X 0 + P )} P ∈S are sorted in ascending order separately, the two resulting sorted sequences should be exactly the same.\nIf we arrange the sorted sequence in row-major order on the (2n + 1) × (2n + 1) square grid S, and represent the new value at point P after sorting as F s (X 0 +P ). Then, the sorted convolution operation can be defined as follows\nΦ SC (X 0 , F (X)) = P ∈S W (P ) • F s (X 0 + P )(6)\nSince F s (X 0 + P ) = G s (Y 0 + P ) for any P , we have\nΦ SC (Y 0 , G(Y )) = Φ SC (X 0 , F (X))(7)\nindicating that Φ SC is invariant to arbitrary rotations. Moreover, the sorting operation does not require learning from the training data, so the number of learnable parameters in Φ SC is the same as in the standard convolutional operation Φ C ." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "B. Sampling and Sorting Strategies", "publication_ref": [ "b20", "b21" ], "table_ref": [], "text": "Formula ( 5) assumes that all R θ P are still on the (2n + 1) × (2n + 1) square grid S, which is only true for discrete convolution when θ = k • 90 • (k is an integer). To address this issue, a polar coordinate system centered at X 0 is established, and 8r points are evenly sampled on a circumference with radius r centered at X 0 , where r = 1, 2, • • • , n (see Fig. 1(a)). Bilinear interpolation is used to obtain the values of F (X) at these points, which are then sorted in ascending order and arranged row by row on the square grid S. When the rotation angle θ = k • 360 • /(8r), the 8r points on the circle with radius r coincide before and after rotation. 
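To make the sorting step in (6) concrete, here is a minimal single-channel NumPy sketch of the simplest variant, square sampling with global sorting. The replicate padding, stride of 1, and the 90-degree check at the end are illustrative assumptions; the polar sampling with bilinear interpolation and the ring sorting discussed in this section are not implemented here.

```python
import numpy as np

def sorted_conv2d(image, kernel):
    """Sorted convolution of Eq. (6): at every position, the values in the
    (2n+1) x (2n+1) neighborhood are sorted in ascending order, laid out
    row-major on the grid, and then weighted by the learnable kernel W."""
    k = kernel.shape[0]                       # kernel is (2n+1) x (2n+1)
    n = k // 2
    padded = np.pad(image, n, mode="edge")    # padding scheme is an assumption
    h, w = image.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + k, j:j + k]
            sorted_patch = np.sort(patch, axis=None).reshape(k, k)  # ascending, row-major
            out[i, j] = (kernel * sorted_patch).sum()
    return out

# Rotating the input by 90 degrees and rotating the result back reproduces the
# original feature map, illustrating the invariance of Eq. (7) on the square grid.
img = np.random.rand(28, 28)
W = np.random.randn(3, 3)
a = sorted_conv2d(img, W)
b = np.rot90(sorted_conv2d(np.rot90(img), W), -1)
print(np.allclose(a, b))   # True
```

Because sorting discards the arrangement of neighborhood values, the output at a point depends only on the multiset of sampled values, which is exactly what makes (7) hold.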
Thus, compared to the square sampling, the polar sampling ensures better validity of the formula (5) and rotational invariance of Φ SC .\nIn addition to the sampling strategy, the sorting strategy is also worth discussing. In fact, the sorting destroys the local structure of F (X) in the (2n + 1) × (2n + 1) neighborhood of X 0 . Previous researchers also used sorting to construct rotation-invariant features for texture images. To preserve the local structure to some extent and improve the discriminability of features, they designed a ring sorting method [21], [22]. Unlike the global sorting, the ring sorting separately sorts the 8r points in the rth square/circular ring about the center point X 0 and arranges the sorted values in a row-first manner at these 8r positions (see Fig. 1(b)). The method preserves some spatial information while ensuring rotational invariance." }, { "figure_ref": [], "heading": "C. The Implementation of Sorted CNN", "publication_ref": [], "table_ref": [], "text": "The traditional convolution Φ C (defined in (1)) produces an output with the size of h × w when the stride is set to 1 and padding is performed. However, for the corresponding Φ SC (defined in ( 6)), we first sort the input values within a (2n + 1) × (2n + 1) neighborhood for each position " }, { "figure_ref": [], "heading": "III. EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Experiment Setup", "publication_ref": [ "b23", "b24", "b25" ], "table_ref": [], "text": "Datasets: MNIST [24] has 70K 28 × 28 handwritten digit images (0-9), with 60K for training and 10K for testing. 10K training images are randomly selected for validation. Each of test images are rotated from 0 • to 350 • every 10 • , resulting in 360K rotated test images. The new test set is called MNIST-rot and used to verify the SCNN's rotational invariance. Outex TC 00012 [25] contains 24 texture classes and 9120 grayscale images of size 128 × 128. For each class, 20 texture surfaces are captured under three lighting conditions (\"inca\", \"t184\" and \"horizon\") as training images. Then, they are captured as test images from 8 different rotation angles (5 • ∼ 90 • ) under \"t184\" and \"horizon\" lighting conditions. Thus, the size of the training set is 24 × 20 × 3 = 1440, and the size of the test set is 24 × 20 × 2 × 8 = 7680. NWPU-RESISC45 [26] is a dataset for remote sensing image scene classification. It contains 31500 RGB images of size 256×256 divided into 45 scene classes, each class containing 700 images. We resize all images to 128×128 and randomly select 400 images from each class as training images, with the remaining images used as test images. Due to the arbitrary shooting angles, there are rotational variations present in many classes, such as \"airplane\", \"bridge\" and \"ground track field\"." }, { "figure_ref": [], "heading": "Models and Training Protocol:", "publication_ref": [ "b26", "b27", "b28" ], "table_ref": [], "text": "We initially design a baseline CNN model with six convolutional layers, having 32, 32, 64, 64, 128, and 128 channels, respectively. We apply 2 × 2 max pooling after the second and fourth layers, and use 7 × 7 average pooling after the final convolutional layer. Then, the feature vector is fed into a fully connected layer with ten units. The kernel size for the last two convolutional layers is 3×3, while for the first four layers, the kernel size is the same K × K, where K ∈ {3, 5, 7}. 
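For reference, a minimal PyTorch sketch of the baseline architecture just described: six convolutional layers with 32, 32, 64, 64, 128 and 128 channels, 2x2 max pooling after the second and fourth layers, 7x7 average pooling, and a ten-unit fully connected layer. ReLU activations, 'same' padding and a single-channel 28x28 input are assumptions, since they are not stated explicitly; the SCNN variants are obtained by swapping each nn.Conv2d for the sorting convolution of Section II.

```python
import torch
import torch.nn as nn

def make_baseline(k: int = 7, num_classes: int = 10) -> nn.Sequential:
    """Baseline CNN: the first four conv layers use k x k kernels (k in {3, 5, 7}),
    the last two use 3 x 3 kernels."""
    def block(c_in, c_out, ks):
        return nn.Sequential(nn.Conv2d(c_in, c_out, ks, padding=ks // 2),
                             nn.ReLU(inplace=True))

    return nn.Sequential(
        block(1, 32, k), block(32, 32, k), nn.MaxPool2d(2),
        block(32, 64, k), block(64, 64, k), nn.MaxPool2d(2),
        block(64, 128, 3), block(128, 128, 3),
        nn.AvgPool2d(7), nn.Flatten(), nn.Linear(128, num_classes),
    )

net = make_baseline(k=7)
print(net(torch.zeros(1, 1, 28, 28)).shape)   # torch.Size([1, 10])
```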
By replacing each of classical convolution operations in the baseline with the corresponding SC, we can obtain a SCNN model. When implementing SC, we have options for square sampling (S) or polar sampling (P), as well as global sorting (GS) or ring-based sorting (RS). This results in 12 different SCNNs ({S, P }×{GS, RS}×{3, 5, 7}). For example, P-RS-5 indicates that the first four layers use 5 × 5 SC with polar sampling and ring sorting. We train all these SCNNs on MNIST traning dataset with Adam optimizer, while the initial learning rate is 10 -4 , multiplied by 0.8 every 10 epochs. The number of epochs and the batch size are 100.\nTo demonstrate the ease of integrating SC with commonly used CNN models, we select VGG16 [27], ResNet18 [28], and DenseNet40 [29] as baseline models. By replacing all traditional convolution operations in these models with SC, we obtain RI-VGG16, RI-ResNet18, and RI-DenseNet40. All of them are trained on Outex TC 00012 and NWPU-RESISC45, respectively. Again, the Adam optimizer is used, and the training process involves 100 epochs with a batch size of 10. The initial learning rate is set to 10 -4 for VGG16, RI-VGG16, ResNet, and RI-ResNet, while it is 10 -3 for DenseNet and RI-DenseNet. It is reduced by a factor of 0.6 every 10 epochs.\nOur experiments are performed on a Tesla V100 GPU (16G) upon Rocky Linux 8.7 system and PyTorch 2.0.0 framework. All models are trained from scratch without using pretrained parameters or data augmentation. This allows us to directly observe the performance improvement brought by SC." }, { "figure_ref": [ "fig_3" ], "heading": "B. Results on MNIST-Rot", "publication_ref": [ "b8", "b29", "b30", "b31", "b14" ], "table_ref": [], "text": "First, we test rotational invariance of 12 SCNN models with varying convolutional kernel sizes, sampling and sorting strategies on the MNIST-rot dataset. This test set contains 36 subsets, each containing 10K samples with the same rotation angle θ (0 This aligns with our theoretical analysis in Section II-B.\n• , 10 • , • • • , 350 • ).\n2) Ring sorting outperforms global sorting. Notably, P-RS-7 achieves the highest accuracy of 95.05%, surpassing S-RS-7's accuracy of 92.63% by 2.42%. This is because RS partially preserves spatial information within a convolutional region. 3) Using larger convolutional kernel sizes yields better results, especially when combined with ring sorting. For example, the accuracies obtained by P-RS-3, P-RS-5, and P-RS-7 are 88.98%, 93.92%, and 95.05%, respectively. Fig. 2(c) and Table 1 show the classification accuracies of P-SC-7, its baseline model, and six previous rotationinvariant CNN models on the original MNIST test set and MNIST-rot. Similarly to SCNN, H-Net, B-CNN, and E(2)-CNN also have continous rotational invariance even without data augmentation. In contrast, Oriented Response Network (ORN), RotEqNet, and G-CNN are only invariant to specific rotation angles like multiples of 30 • or 45 • . These models are trained using the protocols from their authors. We do not select STN [9], TI-Pooling [30], and several methods utilizing rotation-invariant loss functions [31], [32] for comparison, because their invariance relies on data augmentation. Our experimental results indicate the following: 1) On MNIST-rot, P-SC-7 surpasses the previous state-of-the-art method, E(2)-CNN, by improving the accuracy from 94.37% to 95.05%. 
Additionally, the performance of P-SC-7, H-Net, B-CNN, and E(2)-CNN significantly outperforms ORN, RotEqNet, and G-CNN, highlighting the importance of achieving continuous rotational invariance in CNN models. Furthermore, due to the inability to learn rotational invariance from the training data, even though Baseline and SCNN have an equal number of learnable parameters, Baseline achieves an accuracy of only 44.53%. 2) On the original MNIST test set, Baseline achieves the best result (99.43%). Previous research [15] has indicated that rotation-invariant CNNs struggle to distinguish between some digits, like \"9\" and \"6\", which contributes to their slightly lower performance on this test set." }, { "figure_ref": [], "heading": "C. Results on Outex TC 00012 and NWPU-RESISC45", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We evaluate three commonly used CNN baselines and their corresponding rotation-invariant models on the Outex TC 00012 dataset. The rotation-invariant models are obtained by replacing conventional convolutions with SC. The classification accuracy is displayed in the first column of Table II. Our rotation-invariant CNNs exhibit significantly higher accuracy compared to their baseline counterparts. For " }, { "figure_ref": [], "heading": "IV. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "We develop a Sorting Convolution to achieve continuous rotational invariance in CNNs without additional parameters or data augmentation. Using the MNIST-rot dataset, we analyze the impact of kernel sizes, different sampling and sorting strategies on SC's rotational invariance and compare its performance with other rotation-invariant CNNs. Further, SC can directly replace conventional convolutions in classic CNNs, improving these models' rotational invariance. Thus, we combine SC with commonly used CNN models and conduct classification experiments on popular image datasets.\nOur results show SC excels in these tasks, especially when training data is limited." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by the National Natural Science Foundation of China (Grant No.60873164, 61227802 and 61379082) and the Academy of Finland for Academy Professor project EmotionAI (Grant No.336116)." } ]
The topic of achieving rotational invariance in convolutional neural networks (CNNs) has gained considerable attention recently, as this invariance is crucial for many computer vision tasks such as image classification and matching. In this letter, we propose a Sorting Convolution (SC) inspired by some hand-crafted features of texture images, which achieves continuous rotational invariance without requiring additional learnable parameters or data augmentation. Further, SC can directly replace the conventional convolution operations in a classic CNN model to achieve its rotational invariance. Based on MNIST-rot dataset, we first analyze the impact of convolutional kernel sizes, different sampling and sorting strategies on SC's rotational invariance, and compare our method with previous rotation-invariant CNN models. Then, we combine SC with VGG, ResNet and DenseNet, and conduct classification experiments on popular texture and remote sensing image datasets. Our results demonstrate that SC achieves the best performance in the aforementioned tasks.
Sorted Convolutional Network for Achieving Continuous Rotational Invariance
[ { "figure_caption": "( a )aSquare and polar sampling strategies. (b) Global and ring sorting strategies. (c) Classic and sorted convolution operations.", "figure_data": "", "figure_id": "fig_0", "figure_label": "a", "figure_type": "figure" }, { "figure_caption": "Fig. 1 :1Fig. 1: The implementation details of the proposed sorted convolutional operation.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "(a) Six SCNN models using square sampling.(b) Six SCNN models using polar sampling. (c) P-RS-7 and some models for comparison.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: The classification accuracies from SCNNs and other rotation-invariant CNN models on the MNIST-rot test set.", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 2 (2a) illustrates the classification accuracy of six SCNNs using square sampling on each subset, while Figure2(b) displays the accuracy of six SCNNs using polar sampling. Our findings are as follows: 1) Polar sampling outperforms square sampling. The accuracy curves in Fig.2(b) show significant overall improvement compared to Fig.2(a). For example, S-RS-5 just achieves 87.60% accuracy, whereas P-RS-5 achieves 93.92% on the entire MNIST-rot.", "figure_data": "", "figure_id": "fig_4", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "The classification accuracies on MNIST and MNIST-rot. Bold stands for best results.", "figure_data": "MethodsInput SizeMNISTMNIST-rotORN[16]32×3299.42%80.01%RotEqNet[13]28×2899.26%73.20%G-CNN[12]28×2899.27%44.81%H-Net[8]32×3299.19%92.44%B-CNN[14]28×2897.40%88.29%E(2)-CNN[11]29×2998.14%94.37%Baseline28×2899.43%44.53%SCNN28×2899.04%95.05%", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "The classification accuracies on Outex TC 00012.", "figure_data": "Training Data24×40=144024×40=96024×20=480VGG1660.07%58.13%57.36%RI-VGG1695.99%92.90%72.28%ResNet1864.79%66.77%63.10%RI-ResNet1899.70%99.63%98.41%DenseNet4066.02%66.58%60.70%RI-DenseNet4099.47%98.62%98.53%", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "We train six models on these smaller training sets and evaluate their performance on the original test set. The results are shown in the second and third columns of TableII. Remarkably, even when the training set excludes certain lighting conditions, RI-ResNet18 and RI-DenseNet40 achieve an accuracy of around 98.5%. This is because lighting variations do not disrupt the local structure of textures, and the rotation invariance of SC enables it to better extract essential information about local texture structures. Additionally, we conduct classification experiments on the NWPU-RESSC45 dataset, and also reduce the training set size to 1.35K and 0.9K (randomly selecting 300 and 200 images from each category, respectively). TableIIIpresents the classification accuracies of these models on the test set. Clearly, our rotation-invariant models continue to outperform the corresponding baselines significantly, with a wider gap as the training data decreases. 
For instance, when the number of training images is reduced from 1.8K to 1.35K and 0.9K, the accuracy difference between RI-ResNet18 and ResNet18 increases from 7.20% to 9.46% and 15.13%, respectively.", "figure_data": ":The classification accuracies on NWPU-RESISC45.Training Data45×400=1.8K 45×300=1.35K 45×200=0.9KVGG1671.27%66.32%57.95%RI-VGG1678.53%76.61%71.39%ResNet1883.18%78.99%70.23%RI-ResNet1890.38%88.45%85.36%DenseNet4086.83%84.72%80.48%RI-DenseNet4088.35%86.96%85.93%example, RI-ResNet18 outperforms ResNet18 by a substantialmargin of 34.91%. We subsequently reduce the training setsize from 1440 to 960 (only training images captured under\"inca\" and \"t184\" lighting conditions) and 480 (\"inca\" lightingcondition only).", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" } ]
Hanlin Mo; Guoying Zhao
[ { "authors": "D G Lowe", "journal": "Int. J. Comput. Vision", "ref_id": "b0", "title": "Distinctive image features from scale-invariant keypoints", "year": "2004" }, { "authors": "T Ojala; M Pietikäinen; T Mäenpää", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b1", "title": "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns", "year": "2002" }, { "authors": "X Shi; A. -L R Castro; R Manduchi; R Montgomery", "journal": "IEEE Signal Process. Lett", "ref_id": "b2", "title": "Rotational invariant operators based on steerable filter banks", "year": "2006" }, { "authors": "H Bay; A Ess; T Tuytelaars; L V Gool", "journal": "Comput. Vis. Image Und", "ref_id": "b3", "title": "Speeded-up robust features (SURF)", "year": "2008" }, { "authors": "T Chakraborti; B Mccane; S Mills; U Pal", "journal": "IEEE Signal Process. Lett", "ref_id": "b4", "title": "LOOP descriptor: local optimal-oriented pattern", "year": "2018" }, { "authors": "H.-L Mo; Q Li; Y Hao; H Zhang; H Li", "journal": "", "ref_id": "b5", "title": "A Rotation Invariant Descriptor Using Multi-directional and High-Order Gradients", "year": "2018" }, { "authors": "M D Zeiler; R Fergus", "journal": "", "ref_id": "b6", "title": "Visualizing and understanding convolutional networks", "year": "2014" }, { "authors": "D E Worrall; S J Garbin; D Turmukhambetov; G J Brostow", "journal": "", "ref_id": "b7", "title": "Harmonic networks: deep translation and rotation equivariance", "year": "2017" }, { "authors": "M Jaderberg; K Simonyan; A Zisserman; K Kavukcuoglu", "journal": "", "ref_id": "b8", "title": "Spatial transformer networks", "year": "2015" }, { "authors": "C Esteves; C Allen-Blanchette; X.-W Zhou; K Daniilidis", "journal": "", "ref_id": "b9", "title": "Polar transformer networks", "year": "2018" }, { "authors": "M Weiler; G Cesa", "journal": "", "ref_id": "b10", "title": "General E(2)-equivariant steerable CNNs", "year": "2019" }, { "authors": "T Cohen; M Welling", "journal": "", "ref_id": "b11", "title": "Group equivariant convolutional networks", "year": "2016" }, { "authors": "D Marcos; M Volpi; N Komodakis; D Tuia", "journal": "", "ref_id": "b12", "title": "Rotation equivariant vector field networks", "year": "2017" }, { "authors": "V Delchevalerie; A Bibal; B Frénay; A Mayer", "journal": "", "ref_id": "b13", "title": "Achieving rotational invariance with bessel-convolutional neural networks", "year": "2021" }, { "authors": "H.-L Mo; G.-Y Zhao", "journal": "", "ref_id": "b14", "title": "RIC-CNN: rotation-invariant coordinate convolutional neural network", "year": "2022" }, { "authors": "Y.-Z Zhou; Q.-X Ye; Q Qiu; J.-B Jiao", "journal": "", "ref_id": "b15", "title": "Oriented response networks", "year": "2017" }, { "authors": "M Weiler; F A Hamprecht; M Storath", "journal": "", "ref_id": "b16", "title": "Learning steerable filters for rotation equivariant CNNs", "year": "2018" }, { "authors": "D Laptev; N Savinov; J M Buhmann; M Pollefeys", "journal": "", "ref_id": "b17", "title": "TI-Pooling: transformation-invariant pooling for feature learning in convolutional neural networks", "year": "2016" }, { "authors": "G Cheng; P.-C Zhou; J.-W Han", "journal": "IEEE Trans. Geosci. Remote Sens", "ref_id": "b18", "title": "Learning rotation-invariant convolutional neural networks for object detection in VHR optical remote sensing images", "year": "2016" }, { "authors": "F Zhang; H.-Y Bian; Z Lv; Y.-F Zhai", "journal": "IEEE Signal Process. 
Lett", "ref_id": "b19", "title": "Ring-masked attention network for rotation-invariant template-matching", "year": "2023" }, { "authors": "L Liu; P Fieguth; G.-Y Kuan; H.-B Zha", "journal": "", "ref_id": "b20", "title": "Sorted Random Projections for robust texture classification", "year": "2011" }, { "authors": "L Liu; P Fieguth; D Clausi; G.-Y Kuan", "journal": "Pattern Recogn", "ref_id": "b21", "title": "Sorted random projections for robust rotation-invariant texture classification", "year": "2012" }, { "authors": "T.-C Song; L.-L Xin; C.-Q Gao; G Zhang; T. -Q Zhang", "journal": "IEEE Signal Process. Lett", "ref_id": "b22", "title": "Grayscale-inversion and rotation invariant texture description using sorted local gradient pattern", "year": "2018" }, { "authors": "Y Lecun; L Bottou; Y Bengio; P Haffner", "journal": "", "ref_id": "b23", "title": "Gradient-based learning applied to document recognition", "year": "1998" }, { "authors": "T Ojala; T Mäenpää; M Pietikäinen; J Viertola; J Kyllönen; S Huovinen", "journal": "", "ref_id": "b24", "title": "Outex-new framework for empirical evaluation of texture analysis algorithms", "year": "2002" }, { "authors": "G Cheng; J.-W Han; X.-Q ; Lu ", "journal": "Proc. IEEE", "ref_id": "b25", "title": "Remote sensing image scene classification: benchmark and state of the art", "year": "2017" }, { "authors": "K Simonyan; A Zisserman", "journal": "", "ref_id": "b26", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2015" }, { "authors": "K.-M He; X.-Y Zhang; S.-Q Ren; J Sun", "journal": "", "ref_id": "b27", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "G Huang; Z Liu; L V D Maaten; K Q Weinberger", "journal": "", "ref_id": "b28", "title": "Densely connected convolutional networks", "year": "2017" }, { "authors": "D Laptev; N Savinov; J M Buhmann; M Pollefeys", "journal": "", "ref_id": "b29", "title": "TI-Pooling: transformation-invariant pooling for feature learning in convolutional neural networks", "year": "2016" }, { "authors": "G Cheng; P.-C Zhou; J.-W Han", "journal": "IEEE Trans. Geosci. Remote Sens", "ref_id": "b30", "title": "Learning rotation-invariant convolutional neural networks for object detection in VHR optical remote sensing images", "year": "2016" }, { "authors": "G Cheng; J.-W Han; P.-C Zhou; D Xu", "journal": "IEEE Trans. Image Process", "ref_id": "b31", "title": "Learning Rotation-Invariant and Fisher Discriminative Convolutional Neural Networks for Object Detection", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 48.96, 234.92, 251.06, 52.31 ], "formula_id": "formula_0", "formula_text": "X 0 ∈ {1, 2, • • • , h} × {1, 2, • • • , w} can be expressed as below Φ C (X 0 , F (X)) = P ∈S W (P ) • F (X 0 + P )(1)" }, { "formula_coordinates": [ 2, 48.96, 319.33, 251.06, 32.65 ], "formula_id": "formula_1", "formula_text": "S = {-n, -n + 1, • • • , n} × {-n, -n + 1, • • • , n}. For example, when W is a 3 × 3 kernel, we have S = {(-1, -1), (-1, 0), • • • ,(0, 1), (1, 1)" }, { "formula_coordinates": [ 2, 48.96, 379.25, 251.06, 21.61 ], "formula_id": "formula_2", "formula_text": "Assuming that G(Y ) is a rotated version of F (X), that is, G(Y ) = F (R -θ Y )," }, { "formula_coordinates": [ 2, 88.96, 435.07, 211.06, 20.06 ], "formula_id": "formula_3", "formula_text": "Φ C (Y 0 , G(Y )) = P ∈S W (P ) • G(Y 0 + P )(2)" }, { "formula_coordinates": [ 2, 48.96, 481.96, 254.46, 52.86 ], "formula_id": "formula_4", "formula_text": "G(Y 0 +P ) = F (R -θ (Y 0 +P )) = F (X 0 +R -θ P ) ̸ = F (X 0 +P ) (3) By substituting (3) into (2), we can find Φ C (Y 0 , G(Y )) ̸ = Φ C (X 0 , F (X))(4)" }, { "formula_coordinates": [ 2, 84.32, 622.58, 215.7, 24.6 ], "formula_id": "formula_5", "formula_text": "{G(Y 0 + P )} P ∈S = {F (X 0 + R -θ P )} P ∈S = {F (X 0 + P )} P ∈S(5)" }, { "formula_coordinates": [ 2, 344.26, 220.28, 218.78, 22.14 ], "formula_id": "formula_6", "formula_text": "Φ SC (X 0 , F (X)) = P ∈S W (P ) • F s (X 0 + P )(6)" }, { "formula_coordinates": [ 2, 363.95, 267.86, 199.08, 9.65 ], "formula_id": "formula_7", "formula_text": "Φ SC (Y 0 , G(Y )) = Φ SC (X 0 , F (X))(7)" }, { "formula_coordinates": [ 3, 355.57, 653.82, 73.24, 10.31 ], "formula_id": "formula_8", "formula_text": "• , 10 • , • • • , 350 • )." } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0" ], "table_ref": [], "text": "GPT-based language models, such as ChatGPT (OpenAI, 2022) and GPT-4 (OpenAI, 2023), have sparked a remarkable revolution in the realm of natural language generation (NLG). These methods have brought about substantial performance enhancements across a wide range of NLG tasks. An important catalyst driving the continuous improvement and refinement of these models is the utilization of evaluation benchmarks, which provide a standardized means to assess and compare their effectiveness.\nHowever, the current landscape of language evaluation benchmarks predominantly revolves around traditional NLU or NLG tasks for language models, with benchmarks like GLUE and SuperGLUE playing a prominent role. Unfortunately, this leaves a significant gap when it comes to evaluating chat models, especially Chinese chat models or specific generation models, despite Chinese speakers constituting a quarter of the world's population. The absence of a comprehensive evaluation benchmark presents a considerable obstacle for researchers in assessing the performance and capabilities of their Chinese generative chat models. Without a standardized benchmark, gauging the quality and effectiveness of these models becomes a daunting task, hindering progress and advancements in the field.\nTo address this critical gap and facilitate the growth of Chinese language research, we introduce the Chinese Generative Chat Evaluation Benchmark (CGCE) for general and financial domains. This benchmark encompasses a broad spectrum of generative chat domains, including general and financial fields with a varying number of categories. In the general domain, our evaluation benchmark includes a diversified set of 200 questions, covering 13 major dimensions including mathematical calculations, scenario writing, logical reasoning, and text summarization. In the financial domain, it covers four major areas: understanding financial terms, providing financial market commentary, conducting financial data analysis, and comprehending financial news. It consists of 150 specific professional questions, allowing us to comprehensively examine the model's proficiency in handling financial tasks from multiple perspectives. Besides, the evaluation process for the CGCE benchmark involves manual scoring, considering various factors such as answer accuracy, logical coherence, clarity of expression, and completeness, among others. This multi-dimensional scoring approach ensures a comprehensive assessment of the model's performance across different aspects of generative chat. By introducing the CGCE benchmark, we provide researchers and practitioners in the field with a standardized framework to evaluate and compare the effectiveness of Chinese generative chat models. This benchmark serves as a valuable resource for Evaluation Manual evaluation with some factors. assessing the capabilities and limitations of these models, enabling researchers to identify areas for improvement and driving progress in the field of generative chat.\nReasoning 如果甲、乙、丙三个人中只有一个 说了真话,那么是谁说的真话?甲 说:\"我不是说谎者。\",乙说:\"丙 是 说 谎 者 。\", 丙 说 :\"乙 是 说 谎 者。\" (If" }, { "figure_ref": [], "heading": "CGCE Benchmark", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "The overview of CGCE Benchmark is shown in Table 1. 
As shown in " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we have made significant contributions to addressing the pressing need for standardized evaluation benchmarks tailored specifically to generative chat models, with a particular emphasis on the Chinese language and domain-specific contexts. Our primary objective was to provide researchers and practitioners with a comprehensive and reliable framework for evaluating the performance of Chinese generative chat models. To fulfill this objective, we introduced the Chinese Generative Chat Evaluation (CGCE) benchmark, a groundbreaking contribution that spans both general and financial domains. The CGCE benchmark comprises a diverse and carefully curated set of questions, covering a wide range of dimensions and professional topics. This extensive coverage ensures that the benchmark captures the complexity and diversity of generative chat tasks. Please write a creative copy for a newly opened roast duck restaurant, highlighting the characteristics of the store and the needs of consumers.\nCoding 请编写一 个简单的 HTML 结构, 包 含 一 个 标 题 (<h1>) , 一 个 段 落 (<p>) , 和 一 个 无 序 列 表 (<ul>)。\nPlease write a simple HTML structure that includes a heading (<h1>), a paragraph (<p>), and an unordered list (<ul>). \nAbstract 近 年 来 , 基 于 深 度 学 习 的 语 言 模 型 , 尤 其 是 以 循 环 神 经 网 络 (RNN)和变压器(Transformer) 为代表的模型,在自然语言处理领 域取得了巨大的成功。除了语言模 型,NLP技术还包括词法分析、句 法分析、语义分析等方面。其中, 词法分析是将文本中的单词或词汇 单元进行分析和分类的过程;句法 分析是对句子进行语法分析和结构 分析的过程;语义分析则是对句子 的意思和语境进行分析和理解的过 程。这些分析过程可以帮助计算机 理解自然语言文本,并根据需要生 成或修改文本内容。尽管NLP技术 在自然语言处理领域取得了重大进 展,但仍然面临着许多挑战。由于 自然语言文本中存在大量的语言歧 义和多义性,因此NLP技术在某些 情况下可能无法准确地理解和处理 文本内容。请根据上文,生成一段 简短的摘要,概括出文章的主旨。\nIn recent years, language models based on deep learning, especially models represented by recurrent neural networks (RNN) and transformers (Transformer), have achieved great success in the field of natural language processing. In addition to language models, NLP technology also includes lexical analysis, syntactic analysis, and semantic analysis. Among them, lexical analysis is the process of analyzing and classifying words or lexical units in the text; syntactic analysis is the process of syntactic and structural analysis of sentences; semantic analysis is the process of analyzing and understanding the meaning and context of sentences. process. These analysis processes help computers understand natural language text and generate or modify text content as needed.\nAlthough NLP technology has made significant progress in the field of natural language processing, it still faces many challenges. Due to a large amount of linguistic ambiguity and polysemy in natural language texts, NLP technology may not be able to accurately understand and process text content in some cases. Based on the above, please generate a short abstract summarizing the main idea of the article. " }, { "figure_ref": [], "heading": "Type", "publication_ref": [], "table_ref": [], "text": "Instruction (Chinese) Instruction (English)\nExplanation 什么是\"割韭菜\"?在投资领域,这 个词语通常是怎么使用的? What is \"cutting leeks\"? 
How is this term commonly used in the investment fields?\nExplanation 你 能 解 释 一 下 什 么 是\"银 行 业 务 的KYC\"流程吗?\nCan you explain what is the \"KYC for Banking\" process?\nExplanation 你 能 解 释 一 下 什 么 是\"抵 押 贷 款\"和\"信用贷款\"的区别吗?\nCan you explain what is the difference between a \"mortgage\" and a \"line of credit\"?\nFinancial commentary \n随 着 全 球 范 围 内 对 可 再 生 能 源 的 需 求 不 断 增 加 , 太 阳 能 、 风 能 等 替代能源项目投资也在稳步上升。 然而,这些项目的投资回报率受到 诸如政府补贴政策、技术进步和市 场竞争等因素的影响。请解释可再 生能源项目投资回报率受到哪些主 要因素的影响,并分析在未来几年 内," } ]
Generative chat models, such as ChatGPT and GPT-4, have revolutionized natural language generation (NLG) by incorporating instructions and human feedback to achieve significant performance improvements. However, the lack of standardized evaluation benchmarks for chat models, particularly for Chinese and domainspecific models, hinders their assessment and progress. To address this gap, we introduce the Chinese Generative Chat Evaluation (CGCE) benchmark, focusing on general and financial domains. The CGCE benchmark encompasses diverse tasks, including 200 questions in the general domain and 150 specific professional questions in the financial domain. Manual scoring evaluates factors such as accuracy, coherence, expression clarity, and completeness. The CGCE benchmark provides researchers with a standardized framework to assess and compare Chinese generative chat models, fostering advancements in NLG research.
CGCE: A Chinese Generative Chat Evaluation Benchmark for General and Financial Domains
[ { "figure_caption": "The overview of CGCE Benchmark.", "figure_data": "TypeInstructionCalculation 15 + 32 = ?Scenario请为一家初创公司策划一场线上产品发布会,包括活动目的、活动流", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Examples in general domain.", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": ", in the general do-main, our evaluation benchmark encompasses awide range of 200 questions, spanning 13 signif-icant dimensions that include mathematical com-putations, scenario creation, logical reasoning, andtext condensation. As shown in Table 3, in thefinancial domain, our benchmark focuses on fourmajor areas: comprehension of financial terminolo-gies, delivering commentary on financial markets,performing analysis of financial data, and under-standing financial news. It consists of a specializedset of 150 questions that target professionals in", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Examples in financial domain.", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "More examples in general domain.", "figure_data": "", "figure_id": "tab_8", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The Federal Reserve announced a 0.25 percentage point hike in interest rates, the first hike since the outbreak. The move is aimed at responding to rising inflationary pressures in the United States. Please analyze the reasons for the Fed's interest rate hike and what impact it may have on global financial markets and investors?", "figure_data": "As the demand for renewable energycontinues to increase worldwide, invest-ment in alternative energy projects suchas solar energy and wind energy is alsorising steadily. However, the return oninvestment of these projects is affectedby factors such as government subsidypolicies, technological progress, and这些因素可能发生的变化及其market competition. Please explain对投资者的影响?what are the main factors that affect thereturn on investment of renewable en-ergy projects, and analyze how thesefactors may change in the next fewyears and their impact on investors?Data analy-如果一家公司的股票在过去一年中If a company's stock has had a high ofsis的最高价是100美元,最低价是50美$100 and a low of $50 over the past year,元,那么其振幅是多少?what is the amplitude?Financial美联储宣布加息0.25个百分点,这news是 自 疫 情 爆 发 以 来 的 首 次 加 息 。此举旨在应对美国持续上升的通货膨胀压力。请分析美联储加息的原因,以及加息可能对全球金融市场和投资者产生哪些影响?", "figure_id": "tab_9", "figure_label": "", "figure_type": "table" }, { "figure_caption": "More examples in financial domain.", "figure_data": "", "figure_id": "tab_10", "figure_label": "5", "figure_type": "table" } ]
Xuanyu Zhang; Bingbing Li; Qing Yang; Du Xiaoman
[ { "authors": " Openai", "journal": "", "ref_id": "b0", "title": "Chatgpt. OpenAI", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 82.49, 388.34, 195.03, 49.65 ], "formula_id": "formula_0", "formula_text": "Reasoning 如果甲、乙、丙三个人中只有一个 说了真话,那么是谁说的真话?甲 说:\"我不是说谎者。\",乙说:\"丙 是 说 谎 者 。\", 丙 说 :\"乙 是 说 谎 者。\" (If" }, { "formula_coordinates": [ 3, 87.26, 173.79, 238.73, 52.56 ], "formula_id": "formula_1", "formula_text": "Coding 请编写一 个简单的 HTML 结构, 包 含 一 个 标 题 (<h1>) , 一 个 段 落 (<p>) , 和 一 个 无 序 列 表 (<ul>)。" }, { "formula_coordinates": [ 3, 86.87, 326.34, 239.12, 294.72 ], "formula_id": "formula_2", "formula_text": "Abstract 近 年 来 , 基 于 深 度 学 习 的 语 言 模 型 , 尤 其 是 以 循 环 神 经 网 络 (RNN)和变压器(Transformer) 为代表的模型,在自然语言处理领 域取得了巨大的成功。除了语言模 型,NLP技术还包括词法分析、句 法分析、语义分析等方面。其中, 词法分析是将文本中的单词或词汇 单元进行分析和分类的过程;句法 分析是对句子进行语法分析和结构 分析的过程;语义分析则是对句子 的意思和语境进行分析和理解的过 程。这些分析过程可以帮助计算机 理解自然语言文本,并根据需要生 成或修改文本内容。尽管NLP技术 在自然语言处理领域取得了重大进 展,但仍然面临着许多挑战。由于 自然语言文本中存在大量的语言歧 义和多义性,因此NLP技术在某些 情况下可能无法准确地理解和处理 文本内容。请根据上文,生成一段 简短的摘要,概括出文章的主旨。" }, { "formula_coordinates": [ 4, 87.26, 222.24, 238.73, 25.51 ], "formula_id": "formula_3", "formula_text": "Explanation 你 能 解 释 一 下 什 么 是\"银 行 业 务 的KYC\"流程吗?" }, { "formula_coordinates": [ 4, 87.26, 254.98, 238.73, 23.78 ], "formula_id": "formula_4", "formula_text": "Explanation 你 能 解 释 一 下 什 么 是\"抵 押 贷 款\"和\"信用贷款\"的区别吗?" }, { "formula_coordinates": [ 4, 155.91, 301.42, 170.08, 118.36 ], "formula_id": "formula_5", "formula_text": "随 着 全 球 范 围 内 对 可 再 生 能 源 的 需 求 不 断 增 加 , 太 阳 能 、 风 能 等 替代能源项目投资也在稳步上升。 然而,这些项目的投资回报率受到 诸如政府补贴政策、技术进步和市 场竞争等因素的影响。请解释可再 生能源项目投资回报率受到哪些主 要因素的影响,并分析在未来几年 内," } ]
10.18653/v1/2020.acl-main.421
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b19", "b40", "b48", "b6", "b3", "b11", "b36", "b9", "b44", "b6", "b46", "b7" ], "table_ref": [], "text": "Research on large language models is advancing rapidly with powerful new models being published at a break-neck pace (e.g., Zeng et al., 2022a;Le Scao et al., 2022;Touvron et al., 2023). Although multilingual models have been released, many of the world's languages are not covered. Multilingual models have also been shown to have subpar performance on under-resourced languages (Wu and Dredze, 2020). Therefore, it is crucial to develop methods that harness these advances and make them available for further languages, especially low-resource ones.\nA promising line of work in this regard focuses on crosslingual transfer of Transformer models pre-1 https://github.com/konstantinjdobler/focus trained on high-resource languages. Crosslingual transfer directly copies the pretrained weights in the Transformer layers to the target language model. Subsequently, the model is further adapted to the target language by continued pretraining on unlabeled target language text using the original selfsupervised pretraining objective. This sort of training regimen is also known as language adaptive pretraining (LAPT; Chau et al., 2020).\nHowever, the pretrained model's embedding matrix cannot be directly transferred if we use a new tokenizer for the target language (Artetxe et al., 2020;de Vries and Nissim, 2021). Using appropriate tokenizers has been shown to be important for the model's performance on downstream tasks (Rust et al., 2021) and is crucial if the source and target language use different scripts.\nWe present FOCUS, an embedding initialization method that allows us to transfer information from the source model's pretrained embedding matrix to a new embedding matrix for the target language's tokenizer. FOCUS is illustrated in Figure 1. The key idea is to use overlapping tokens between both tokenizers as anchor points and represent new target language tokens as a weighted mean of overlapping tokens' embeddings. This enables us to initialize the new embedding matrix in the same semantic space as the pretrained embedding ma-arXiv:2305.14481v2 [cs.CL] 6 Nov 2023 trix. We empirically show in extensive experiments across a range of different high-resource and lowresource target languages that FOCUS outperforms various strong baselines both in language modeling as well as on downstream tasks (Natural Language Inference, Question Answering, and Named Entity Recognition).\nIn our experiments, we focus on the multilingual XLM-R (Conneau et al., 2020) as a source model and specialize it for a single language. FOCUS is particularly well-positioned to take advantage of multilingual source models due to their larger vocabulary and the fact that they have already been pretrained to a certain extent on many potential target languages. Additionally, we show that FOCUS still improves significantly over random initialization even if only minimal vocabulary overlap is available. 2In previous work, a common approach has been to adapt multilingual models to target languages while simply keeping or extending the original vocabulary (Wang et al., 2019;Chau et al., 2020;Wang et al., 2020;Chau and Smith, 2021). When extending the vocabulary, FOCUS can also be applied to initialize embeddings just for the new tokens. However, we advocate considering the setting of full vocabulary replacement. 
Only a fraction of the multilingual vocabulary is actually used for any single language, so by fully replacing the large multilingual vocabulary with a language-specific smaller vocabulary we can enable faster training times and smaller models. XLM-R's vocabulary has 250k tokens, and replacing this with a languagespecific 50k token vocabulary reduces the model size quite dramatically, by over 55%. 3 In our experiments, training with a language-specific 50k token vocabulary is 40% faster than extending the original 250k token vocabulary. 4 We summarize the contributions of our paper as follows:\n• We propose FOCUS, a novel embedding initialization method that effectively transfers knowledge from a pretrained embedding matrix to one for a new, language-specific tokenizer.\n• We empirically verify the effectiveness of FO-CUS for language modeling and downstream tasks in extensive experiments using XLM-R as a source model on a range of high-and lowresource languages.\n• We further show that FOCUS is effective also when the target language was not part of the source models' pretraining or only a minimal vocabulary overlap is available." }, { "figure_ref": [], "heading": "FOCUS", "publication_ref": [ "b24", "b50" ], "table_ref": [], "text": "Our goal is to initialize embeddings for tokens in a new, language-specific target vocabulary in the same semantic space as the source model's embeddings. In this study, we mainly focus on the multilingual source model XLM-R although FO-CUS can in principle also be applied to monolingual source models. 5 We copy all embeddings of shared tokens between source and target tokenizer for our new embedding matrix. If the target language was already part of the source model's pretraining corpus, this takes advantage of target language tokens with pretrained embeddings in the source model's tokenizer. In any case, we take advantage of shared named entities, symbols, numbers, punctuation, and shared words resulting from code-switching between the target and pretrained vocabularies. Additional target language tokens not present in the source model are represented as a linear combination of embeddings of semantically similar shared tokens. Unlike previous work on embedding initialization, this requires neither bilingual dictionaries nor an alignment of embedding spaces across different languages (Minixhofer et al., 2022;Zeng et al., 2022b). Next, we formally describe FOCUS." }, { "figure_ref": [], "heading": "Details of FOCUS.", "publication_ref": [ "b41" ], "table_ref": [], "text": "We obtain as input a source vocabulary V s with pretrained embeddings E s and a target vocabulary V t with embeddings E t , which we seek to initialize. The target vocabulary V t is obtained by training a tokenizer on monolingual text in the target language. We use #» e s i and #» e t i to denote embeddings for individual tokens in E s and E t , respectively. We denote the set of overlapping tokens as O = V s ∩ V t . 
For each overlapping token we can copy the pretrained embedding over into our target embedding matrix:\n∀o ∈ O : #» e t o = #» e s o .(1)\nNote that we make an assumption here: tokens that are part of the overlap O have sufficiently similar semantics in our source and target vocabularies.\nFor multilingual source models, we can exploit already existing tokens from the target language.\nOtherwise, this will obviously not always be the case6 , but through common named entities, codeswitched tokens, symbols, numbers, and punctuation this assumption will hold reasonably often.\nWe provide an in-depth analysis in Appendix B. Finding an initialization in the same semantic space as the pretrained embeddings is not as easy for the set of non-overlapping (\"additional\") tokens A = V t \\ O. To initialize embeddings for the additional tokens, we first train auxiliary embeddings X for all target tokens V t (i.e., both O and A). 7 In our experiments, we apply fastText on unlabeled target language data pre-tokenized with the target tokenizer for V t . Individual embeddings in X are denoted by #»\nx i . Next, we compute the pairwise cosine similarities between the auxiliary embeddings #» x i of tokens in A and O so that for any a ∈ A:\n#» c a = [sim(a, o 1 ), . . . , sim(a, o n )](2)\nwhere o i is the overlapping token at index i and:\nsim(a, o) := #» x a • #» x o ∥ #» x a ∥∥ #» x o ∥ .(3)\nWe convert the similarity scores #» c a to weights by applying sparsemax (Martins and Astudillo, 2016) over each #» c a . Sparsemax is a sparse variant of softmax that assigns zero probability mass to lowprobability elements, which has previously been used by Tran (2020) in a similar setting. Using sparsemax has the advantage of being able to dynamically accommodate different degrees of skew in the similarity distribution. In some cases we might have only one or two very similar tokens, in other cases, we might have significantly more. Accordingly, the weights #» w a of the overlapping tokens are:\n#» w a = sparsemax( #» c a ) = argmin #» p ∈∆ ∥ #» p -#» c a ∥ 2 (4)\nwith ∆ denoting the (|O|-1)-dimensional probability simplex, i.e.,\n∆ := { #» p ∈ R |O| | #» 1 • #» p = 1, #» p ≥ 0}.(5)\nWe then initialize the target embeddings for each additional token a as a weighted mean over pretrained embeddings of the overlapping tokens from E s , with the weights given by #» w a . Due to sparsemax, most of the elements in each #» w a will be zero. Note that we use the pretrained embeddings E s instead of the auxiliary embeddings X, as only the pretrained embeddings are in the same semantic space as the rest of the transferred Transformer layers. Therefore:\n∀a ∈ A : #» e t a = o∈O w a,o #» e s o .(6)\nSummary. FOCUS uses cheap and fast-to-train static embeddings for tokens in the target vocabulary to select semantically similar overlapping tokens for each additional target token. The pretrained embeddings of the overlapping tokens are then used to initialize embeddings for the additional target tokens. In Appendix B, we provide further implementation details as well as a detailed analysis of the different types of overlapping tokens we encountered in our experiments." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b18", "b17" ], "table_ref": [], "text": "We perform experiments using XLM-R as our multilingual source model, due to its popularity and widespread use. 8 We use the base variant for all experiments. 
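A minimal NumPy sketch of the initialization just described, i.e., Equations (1)-(6), may help make the procedure concrete. It assumes the auxiliary fastText vectors for all target tokens have already been trained; the function names, argument names, and data layout are illustrative and do not mirror the released implementation.

```python
import numpy as np

def sparsemax(z: np.ndarray) -> np.ndarray:
    # Euclidean projection onto the probability simplex (Martins & Astudillo, 2016).
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, z.size + 1)
    support = 1 + k * z_sorted > cumsum
    k_max = k[support][-1]
    tau = (cumsum[support][-1] - 1.0) / k_max
    return np.maximum(z - tau, 0.0)

def focus_init(source_emb, source_ids, target_vocab, aux_vecs):
    # source_emb: pretrained embedding matrix of shape [|V_s|, d]
    # source_ids: dict mapping source tokens to rows of source_emb
    # target_vocab: list of target tokens; aux_vecs: dict of auxiliary fastText vectors
    overlap = [t for t in target_vocab if t in source_ids]
    additional = [t for t in target_vocab if t not in source_ids]

    # Eq. (1): copy pretrained embeddings for overlapping tokens.
    overlap_emb = np.stack([source_emb[source_ids[t]] for t in overlap])
    target_emb = {t: source_emb[source_ids[t]] for t in overlap}

    # Auxiliary vectors of the overlap, L2-normalized so a dot product is a cosine similarity.
    aux_overlap = np.stack([aux_vecs[t] for t in overlap])
    aux_overlap = aux_overlap / np.linalg.norm(aux_overlap, axis=1, keepdims=True)

    for a in additional:
        x = aux_vecs[a]
        sims = aux_overlap @ (x / np.linalg.norm(x))  # Eqs. (2)-(3): cosine similarities to overlap
        weights = sparsemax(sims)                     # Eq. (4): most weights become exactly zero
        target_emb[a] = weights @ overlap_emb         # Eq. (6): weighted mean of pretrained embeddings
    return target_emb
```

The resulting vectors would then be copied into the transferred model's embedding matrix before language-adaptive pretraining begins.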
Our language-specific tokenizers are trained in the same way as XLM-R for comparability, specifically SentencePiece tokenization (Kudo and Richardson, 2018) with the Unigram algorithm (Kudo, 2018). We use Hugging-Face tokenizers and a vocabulary size of 50k tokens for all languages." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b24", "b51", "b24", "b38", "b44" ], "table_ref": [], "text": "To evaluate FOCUS, we compare against multiple strong baselines for embedding initialization as well as other methods of adapting XLM-R to a target language. We always transfer all layers of XLM-R, except for the embedding. Minixhofer et al. (2022) already demonstrate the superiority of this over random initialization of all weights, so we do not compare against the weak baseline of training a model completely from scratch.\nXLM-R with the original vocabulary. We report results of using XLM-R off-the-shelf without language-adaptive pretraining (LAPT) as well as after adapting XLM-R to the target language with the original vocabulary kept as-is.\nRandom Initialization. For vocabulary replacement with a language-specific tokenizer and random embedding initialization, we copy the original pretrained embeddings following Zoph et al. (2016). This randomly maps pretrained embeddings to tokens in the new vocabulary and performed slightly better than other types of random initialization in preliminary experiments. 9 In this case, we also consider the variant of training just the embeddings for an additional 20% of training steps before unfreezing the rest of the network.\nDe Vries and Nissim (2021) note that this allows the new embeddings to adapt to the transferred Transformer layers to prevent catastrophic forgetting. Therefore, this strong baseline is trained 20% longer than other methods.\nWECHSEL. We additionally compare against using WECHSEL (Minixhofer et al., 2022) to initialize the embedding matrix for the language-specific tokenizer. WECHSEL is a method for embedding initialization originally designed for transferring monolingual source models. It relies on aligning pretrained word embeddings for the source and target languages using the Orthogonal Procrustes method (Schönemann, 1966) with bilingual dictionaries as seed data. Then, each source and target token is embedded into the same semantic space using the out-of-vocabulary method of fastText, resulting in aligned static token embeddings for both languages.\nTo faithfully apply WECHSEL with a multilingual source model, we would need to provide a word embedding space for all the languages that are part of the multilingual models' pretraining corpus. Also, gathering bilingual dictionaries from all source languages to the target language would become a challenge. Instead, we apply WECHSEL as-is using only pretrained English fastText word embeddings for the source model. This effectively assumes that all pretrained source token embeddings are English, which is a rough but not entirely unreasonable assumption given the predominance of English over other languages in the pretraining corpus of XLM-R. We can further commit to this assumption by deleting all non-English tokens from the pretrained vocabulary before applying WECH-SEL, which we dub WECHSEL EN . This yields an initialization method similar to the mixture mapping method proposed by Wang et al. (2019).\nVocabulary Extension. 
We also run experiments with vocabulary extension following Wang et al.\n(2020) by extending with the top 30k tokens of the language-specific tokenizer as well as using FOCUS to initialize embeddings for the extended tokens." }, { "figure_ref": [], "heading": "Language-Adaptive Pretraining (LAPT)", "publication_ref": [ "b9" ], "table_ref": [], "text": "For LAPT, we use the same self-supervised Masked Language Modeling (MLM) objective as in the original pretraining of XLM-R. We use the CC100 corpus to obtain unlabeled text in our target languages, which was also already used for the pretraining of XLM-R (Conneau et al., 2020). Therefore, we do not introduce any new unseen data. We show dataset sizes for our target languages in Table 1. We use the same hyperparameters for all languages, as detailed in Appendix A. In particular, we use a batch size of 128 with chunked sequences of 256 tokens and train our models on 50 million samples (resulting in a total of 12.8 billion training tokens and 390k optimizer steps)." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b10", "b25", "b8", "b35", "b29", "b4", "b1", "b0" ], "table_ref": [], "text": "We also evaluate our models on downstream tasks in their respective target languages. We perform downstream task evaluation on five languages: German, Arabic, Kiswahili, isiXhosa, and Hausa. They were chosen to provide a mix of high-, mediumand low-resource languages, typological and script diversity while satisfying the practical constraints of available evaluation datasets. We refer to German as high-resource, Arabic and Kiswahili as medium-resource, and isiXhosa and Hausa as lowresource languages. We use the translated training sets of XNLI (Conneau et al., 2018) to evaluate Natural Language Inference (NLI) in the translate-train setting. To evaluate Question Answering, we use German-QuAD (for German, Möller et al., 2021) and Ty-DiQA GoldP (for Swahili and Arabic; Clark et al., 2020). We perform Named Entity Recognition (NER) experiments using the balanced train-devtest split of WikiANN (Rahimi et al., 2019;Pan et al., 2017). Additionally, we evaluate NER for German on the GermEval2014 dataset (Benikova et al., 2014) and for Swahili, Hausa, and isiXhosa using MasakhaNERv2 (Adelani et al., 2022). If there is no dedicated dev split, we construct our own with a random sample of 10% of the training data. We perform model selection on the dev split and report the selected checkpoint's result on the test set. We report accuracy for XNLI and F 1scores otherwise. We run all experiments five times with different random seeds and report the mean and standard deviation. Hyperparameters for all evaluation tasks are given in Appendix A.\nFurthermore, we evaluate the initialization performance of FOCUS without further training measured by the MLM loss on a held out set on five additional very low-resource languages (Scottish Gaelic, Luxembourgish, Cebuano, Samoan, and Hmong). For these languages, we use mC4 (Raffel et al., 2020) and OSCARv23.01 (Abadji et al., 2022) as additional data sources for unlabeled text." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We present downstream task results for NLI and QA in Table 2 and for NER in Table 3. In the following, we discuss various aspects of these results. In Figure 2, we show loss curves on a held out set when adapting XLM-R with custom tokenizers. 
In Table 4, we report the masked language modeling (MLM) loss of various methods right after initialization (no further training performed). 4 shows the effectiveness of FOCUS initialization for vocabulary replacement. Directly after initialization without further training, FOCUS significantly outperforms all other initialization methods. In Figure 2, we show loss curves over the course of language-adaptive pretraining (LAPT). For random initialization, we only plot the second stage after the embeddings have already been trained for an additional 20% of total training steps. FOCUS yields a lower loss than random initialization even at the end of training, despite random initialization having been trained for more steps in total. WECHSEL starts off worse than FOCUS but catches up over the course of train-ing. Naturally, the effect of initialization is less pronounced the longer we train the models. We have deliberately constructed a difficult evaluation with our long training regime of 12.8 billion tokens. In settings where less compute is available, FOCUS may be even more beneficial." }, { "figure_ref": [], "heading": "Effectiveness of FOCUS. Table", "publication_ref": [ "b13", "b6", "b46", "b7", "b44", "b13", "b45", "b26", "b33", "b37", "b42", "b24", "b43", "b39", "b32", "b27", "b43", "b15", "b28", "b50" ], "table_ref": [ "tab_2" ], "text": "The improved effectiveness on the pretraining objective also translates to gains in downstream tasks, as reported in Table 2 andTable 3. FO-CUS initialization outperforms random initialization across all downstream tasks and languages (except for Arabic TyDiQA). WECHSEL also improves over random initialization, but FOCUS obtains superior results. FOCUS can also be applied for vocabulary extension instead of vocabulary replacement. Here, we see less of an improvement over the random initialization baseline. This could be due to the smaller impact of FOCUS, since only a relatively small percentage of the large extended vocabulary is affected.\nVocabulary Extension or Replacement? We find that vocabulary extension generally performs worse on downstream tasks than keeping the original vocabulary. This finding is in line with results reported by Ebrahimi and Kann (2021) on a set of 30 typologically diverse languages. Prior studies proposing vocabulary extension (Chau et al., 2020;Wang et al., 2020;Chau and Smith, 2021) used mBERT and were motivated by the possibility of out-of-vocabulary (OOV) tokens. For XLM-R using SentencePiece with 100% character set coverage or byte-level tokenizers, OOV tokens can always be represented at the character or byte level. Therefore, the benefits of vocabulary extension might be less pronounced in these cases because the OOV problem is less relevant to begin with.\nOn average, when combined with FOCUS initialization, vocabulary replacement outperforms both vocabulary extension and keeping the original vocabulary. Nevertheless, keeping the original vocabulary intact proves to be a strong baseline and for the high-resource language German even outperforms vocabulary replacement with FOCUS. However, vocabulary replacement paired with FO-CUS performs better on medium-and low-resource languages, results in smaller models, and is thus faster to train.\nLow-Resource Languages. Focusing on lesserresourced languages, FOCUS outperforms random initialization and LAPT with the original vocabu- Table 3: Results on Named Entity Recognition (NER) tasks. Details on the datasets used for evaluation are given in Section 3.3. 
We bold the best result in each section and underline the overall best result. † : For random initialization, we train just the embeddings for an additional 20% of training steps before full LAPT to create a stronger baseline.\n- ‡ : Languages not covered by the pretrained fastText word embeddings used by WECHSEL. Table 4: MLM loss on a held-out set immediately after initialization (no training performed) with full vocabulary replacement. We use the same vocabulary for all methods in a single language. Symbolic Overlap restricts overlapping tokens to numbers, punctuation, or whitespace. Word-fastText uses WECHSEL's method of turning pretrained word embeddings into token embeddings instead of our proposed directly trained token-level embeddings.\n- † : Languages are not covered by the pretrained fastText word embeddings used by WECHSEL.\nlary on NER for Hausa and isiXhosa. Furthermore, we report the Masked Language Modeling loss directly after initialization on a number of very lowresource languages in Table 4. We see that across all languages, including the very low-resource ones, FOCUS achieves the best results. FOCUS also provides a good initialization even when the target language was not part of the source model's pretraining.\nIn low-resource settings, a key advantage of FOCUS is that we only need unlabeled target language data to train our auxiliary embeddings -a resource already needed for LAPT in any case. Unlike WECHSEL, no bilingual dictionary is required, the quality and coverage of which might also be insufficient in low-resource settings. Some lowresource languages, such as Hausa and isiXhosa, are also not covered by WECHSEL's source of pretrained word embeddings. 10 Effect of Vocabulary Overlap. Naturally, the quality and quantity of overlapping tokens influences the success of FOCUS. To investigate this, we conducted empirical analyses in two settings: using the full overlap and using only overlapping tokens that are symbols, numbers, or punctuation (Symbolic Overlap). Full overlap can take advantage of the source model's multilingual pretraining if the target language or a closely related language were part of the pretraining corpus. In any case, however, symbolic tokens such as whitespace, numbers, and punctuation should generally be available, allowing us to transfer a model to any language. In Table 4, we show that even when using only symbolic overlapping tokens, FOCUS outperforms WECHSEL on medium to low-resource languages (e.g., Scottish Gaelic, Luxembourgish, Kiswahili, 10 https://fasttext.cc/docs/en/crawl-vectors.html and others). For Arabic and German, FOCUS with only symbolic overlapping tokens performs slightly worse than WECHSEL. In practice however, we will generally have numerous further overlapping tokens such as named entities and code-switched tokens. This is demonstrated by our results for Luxembourgish, Cebuano, Samoan, and Hmongall languages that XLM-R and XLM-R's tokenizer were not pretrained on. Here, using the full overlap outperforms using only symbols, suggesting more beneficial overlapping tokens beyond the ones included in our symbolic overlap. Overall, these results show that FOCUS can provide a good initialization even when the target language was not part of the source model's pretraining.\nAuxiliary Embeddings. WECHSEL proposes a method to use pretrained word-level fastText embeddings to obtain token-level embeddings. We propose to directly train token-level fastText embeddings. 
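The sketch below shows one way such directly trained token-level fastText embeddings can be obtained, assuming the official fasttext Python bindings and a plain-text file of target-language data; the file paths and the use of AutoTokenizer are placeholders, and the dimensionality, epoch, and minimum-count values follow the settings reported in Appendix B rather than being prescribed here.

```python
import fasttext
from transformers import AutoTokenizer

# Pre-tokenize target-language text with the *target* tokenizer so that fastText
# learns one vector per token of the new vocabulary (paths are placeholders).
tokenizer = AutoTokenizer.from_pretrained("path/to/target-tokenizer")
with open("target_corpus.txt", encoding="utf-8") as fin, \
        open("target_corpus.tok.txt", "w", encoding="utf-8") as fout:
    for line in fin:
        fout.write(" ".join(tokenizer.tokenize(line.strip())) + "\n")

# Static token embeddings; dim/epoch/minCount as in Appendix B, other settings left at defaults.
aux_model = fasttext.train_unsupervised("target_corpus.tok.txt", dim=300, epoch=3, minCount=10)
aux_vecs = {tok: aux_model.get_word_vector(tok) for tok in tokenizer.get_vocab()}
```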
In Table 4, we additionally show FO-CUS's initialization performance when using the WECHSEL-style method to obtain token-level fast-Text embeddings (Word-fastText). We see that using our directly trained token-level fastText embeddings results in a better initialization for low-and high-resource languages.\nWECHSEL EN . On average, WECHSEL actually fares slightly better than WECHSEL EN , although WECHSEL EN also improves over random initialization. For WECHSEL EN , we followed Wang et al. (2019) in selecting English tokens in XLM-R's original vocabulary by taking the overlap with a language-specific English tokenizer's vocabulary. Due to the substantial presence of English in XLM-R's original vocabulary, this may have been too restrictive, excluding too many potentially useful tokens.\nWe now discuss further related work apart from the studies introduced in Section 1.\nLanguage Adaptive Pretraining (LAPT). Alabi et al. ( 2022) adapted XLM-R to up to 20 African languages at the same time instead of specializing on a single language. Ebrahimi and Kann (2021) and Wang et al. (2022) used resources with much higher language coverage than web-scraped monolingual texts (the Bible and lexicons, respectively) to adapt pretrained multilingual models to unseen languages. Muller et al. (2021) transliterated unseen languages into Latin script to improve the results when using an existing pretrained vocabulary.\nAdapters. In contrast to approaches changing all pretrained model weights, Pfeiffer et al. (2020) introduce additional adapter modules and only these new weights are changed. This is more parameterefficient than full model adaptation, but gradients still need to be backpropagated throughout the model until the first adapter (Rücklé et al., 2021). Also, adapters introduce additional computational cost at inference time.\nBilingual Embedding Alignment. Vernikos and Popescu-Belis (2021) propose SMALA to calculate a mapping between embedding spaces for two languages to find semantically similar tokens across languages. They also experiment with initializing the remaining tokens based on this cross-lingual mapping. WECHSEL (Minixhofer et al., 2022) aligns word embeddings from two different languages. Such alignments operate under the assumption of near-isomorphism between embedding spaces of different languages (Vulić et al., 2020), i.e., that they share a similar geometric structure. Recent studies have challenged this assumption, especially for language pairs with typological (Søgaard et al., 2018;Patra et al., 2019;Ormazabal et al., 2019) and resource (Vulić et al., 2020;Fu et al., 2020) differences. This is especially detrimental in the case of language model transfer, as we usually transfer from a high-resource language such as English to less-resourced languages with potentially different typology. FOCUS does not require the alignment of embedding spaces.\nFor multilingual source models, WECHSEL also disregards a valuable resource at our disposal: target language tokens that already have pretrained embeddings in the multilingual source model. For these tokens, we can copy their pretrained embeddings as a gold standard. Obtaining a different initialization is likely to lead to a worse result. FO-CUS is well-positioned to take advantage of these pretrained embeddings of target language tokens.\nAdditionally, WECHSEL requires a bilingual dictionary as an additional resource to seed the embedding space alignment. 
For low-resource languages, such a bilingual dictionary might be of lower quality or not available. FOCUS does not require bilingual dictionaries as an additional resource.\nOther Embedding Initialization Methods. In concurrent work, Ostendorff and Rehm (2023) propose a similar method to FOCUS that initializes an embedding matrix for a new vocabulary based on combinations of overlapping tokens with a pretrained embedding matrix, but use the embedding layer of a smaller pretrained Transformer model instead of static fastText embeddings as an auxiliary embedding space. However, their study only provides results on the high-resource language German as a target language and they do not consider BERTstyle source models. If no smaller pretrained Transformer model with the desired tokenizer is available, training one from scratch comes with a much higher computational cost than training the fastText embeddings for FOCUS. Zeng et al. (2022b) create a new vocabulary and embedding matrix for the target language by translating tokens in the source vocabulary with bilingual dictionaries." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose FOCUS, a novel embedding initialization method for the monolingual specialization of language models with a language-specific tokenizer. FOCUS uses the vocabulary overlap between source and target languages to effectively transfer the pretrained embeddings to the new target tokenizer's embedding matrix. In a series of experiments across a diverse set of languages and several different tasks, we show that FOCUS outperforms other available embedding initialization methods without requiring additional resources like bilingual dictionaries. FOCUS can provide a good initialization even if only a minimal vocabulary overlap is available and when the target language has not been part of the source model's pretraining. We release our code and model checkpoints on GitHub.11 " }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b28", "b45" ], "table_ref": [], "text": "We evaluate FOCUS only for BERT-like Transformer models. In principle, the method is applicable to any model that uses a vocabulary, tokenizer, and embedding matrix. In future work, we hope to investigate the use of FOCUS on GPT decoder models, as explored by Ostendorff and Rehm (2023).\nWe conduct downstream task evaluations for NLI, QA, and NER on German, Arabic, and Swahili. For the low-resource languages isiXhosa and Hausa, we conduct downstream task experiments for NER. This provides a good mix of different levels of available resources, scripts, and typology. However, further evaluations on languages covering more scripts and languages that were not part of the source models' pretraining are needed to substantiate the effectiveness of FOCUS in these settings. All our chosen languages have monolingual texts available for further pretraining. As Wang et al. (2022) note, this is not the case for many other low-resource languages. Since further pretraining on target language data is a key component of our model adaptation strategy, the applicability of FOCUS is also limited in this regard, although such data can in some cases also be synthesized." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "In this work, we conduct the main part of our downstream task experiments on German, Arabic, and Swahili. 
These choices stem from our desire to provide practically useful ideas that reflect the current availability of models and to conduct experiments on downstream tasks such as question answering, NLI, and named entity recognition, for which we need relevant ground truth data.\nFinally, researchers and practitioners need to be cognizant of the fact that adopting existing monolingual or even multilingual models as a starting point instead of training new models from scratch can lead to remnant biases towards the original pretraining data. Hence, there is a risk that a model adopts certain forms of behavior that reflect other languages and cultures than that of the language community one is targeting. Also, web-scale datasets used for pretraining such as CC100 might contain personal and sensitive information. Such behavior needs to be assessed very carefully before any real-world deployment of the models. gingFace tokenizers and a vocabulary size of 50k tokens for all languages. The resulting vocabularies contain a large amount (roughly 10k tokens) of emojis and Chinese, Japanese, and Korean single characters, which is an artifact of SentencePiece's character_coverage parameter (which defaults to 100%). Characters are included in the vocabulary even if they appear only once in the large amount of noisy web-scraped training documents. This effectively means that our language-specific vocabularies are roughly 10k tokens smaller in practice, as such single characters rarely occur in the training data. In practice, one may wish to tune the character_coverage carefully based on the requirements of the target language if a smaller model is desired." }, { "figure_ref": [], "heading": "B Further details for FOCUS", "publication_ref": [ "b23", "b5", "b9", "b12" ], "table_ref": [], "text": "FastText training. To obtain static token embeddings for FOCUS, we train fastText embeddings on tokenized target language training data. We mostly used default hyper-parameters but increased the dimensionality to 300, as is commonly done in the (Mikolov et al., 2013;Bojanowski et al., 2017). We ran the training for three epochs. On German, due to its corpus size, we ran only a single epoch.\nAdditionally, we set a minCount of 10 for tokens during fastText training to filter out very rare tokens. These rare tokens are initialized from a normal distribution with mean and standard deviation per dimension calculated from the source embedding matrix, as done by WECHSEL for tokens that have no subwords in the pretrained word embedding. Setting minCount also helps with filtering the noisy single characters that are part of our tokenizers due to SentencePiece's character_coverage parameter.\nVocabulary Overlap. FOCUS relies on overlapping tokens between the new and pretrained vocabularies. Ideally, an overlapping token would have the same semantics in the target language vocabulary and in the pretrained vocabulary. If the target language was already part of the pretraining, this is most obviously true for (sub-)words that only occur in the target language. Differences in script or peculiarities of the target language (such as German umlauts and other language-specific accented characters) help facilitate such occurrences. In many languages, especially online, there is widespread code-switching with English, leading to English words being interspersed within native sentences, which also contributes to shared semantics. A considerable share of tokens is also made up of names, named entities, symbols, numbers, and punctuation. 
While these are not exclusive to any particular language, they are likely to possess the same semantics across languages, making them good overlapping tokens. We report the number of overlapping tokens for languages used during training in Table 8.\nAdditionally, we manually classified a random sample of 500 overlapping tokens for German and report the results in Table 7. The overlap is calculated between XLM-R's original tokenizer and our newly trained, language-specific German one. For this analysis, we excluded the noisy singlecharacter tokens mentioned in Appendix A. We conclude that a considerable share of the overlapping tokens for German does indeed possess similar semantics in the pretrained and new vocabularies. For less-resourced languages than German that were still part of the multilingual models' pretraining, we can expect fewer overlapping tokens that are directly part of the target language. Highresource languages have a larger share of languagespecific tokens in the vocabulary of XLM-R. However, for languages with an uncommon or unique script, tokens are more likely to be exclusive to the target language. During the pretraining of XLM-R, low-resource languages are also oversampled (Conneau et al., 2020). Therefore, tokens that are shared between low and high-resource languages are more likely to also have the low-resource language semantics encoded in their embeddings than would otherwise be the case.\nOverlaps between different tokenizers. In general, we only consider tokens as overlapping if they are an exact match (including case and the \"beginning of word\" (BOW) signifier. However, for tokens that only consist of numbers, punctuation, or whitespace, we implement fuzzy matching where we disregard the case and the BOW signifier.\nA peculiarity of calculating token overlaps between different kinds of tokenizers is the representation of tokens that are BOW tokens and non-ASCII characters. For example, the HuggingFace implementation of Byte-Level BPE uses Ġ as a prefix for BOW tokens, whereas XLM-R's tokenizer uses _. To complicate things, BERT's tokenizer WordPiece (Devlin et al., 2019) prefixes tokens that are not BOW with ##. Also, Byte-level BPE represents non-ASCII characters in tokens differ-" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors acknowledge the financial support by the German Federal Ministry for Education and Research (BMBF) through the project «KI-Servicezentrum Berlin Brandenburg» (01IS22092). We also thank the reviewers for their helpful comments." }, { "figure_ref": [], "heading": "A Hyperparameters and Experiment Details", "publication_ref": [ "b30", "b14", "b47", "b20", "b21", "b18", "b17" ], "table_ref": [], "text": "We conducted all experiments on a heterogeneous compute cluster with Nvidia V100 32GB, A100 40GB, A100 80GB, and A6000 48GB GPUs. Depending on availability we used one, two, or four GPUs for our experiments and adjusted the batch size per device so that we retain the same effective batch size. Depending on total model size, we also used gradient accumulation with smaller batch sizes to fit the model into GPU memory. We used PyTorch (Paszke et al., 2019) and pytorch-lightning (Falcon et al., 2019) as well as the HuggingFace transformers (Wolf et al., 2020), tokenizers (HuggingFace, 2021) and datasets (Lhoest et al., 2021) libraries. We used the fp16 mixed precision training implemented by pytorch-lightning.\nFurther pretraining. 
We used the same hyperparameters for all target languages, as detailed in Table 5. We trained for a total of 50 million samples with batches of 128 sequences of 256 tokens. This results in a total of 390,625 optimizer steps (weight updates). We used AdamW optimization (Loshchilov and Hutter, 2019) as implemented in torch.optim 12 and a linear learning rate warmup for 5 million samples (39,062 optimizer steps) followed by a constant learning rate at 5 × 10 -5 . We used a constant schedule to allow for more flexible experimentation regarding the total number of training steps and to ensure that the impression of converged loss curves is not a false positive induced by a decaying learning rate. We also conducted preliminary experiments using a cosine learning rate schedule and did not observe a significant difference. We used the CC100 dataset in line with its intended use for the pretraining of language models. For German, our training does not even complete a full epoch.\nDatasets for low-resource languages. Comparing OSCARv23.01 and mC4 as a data source for low-resource languages, we observe that for Corsican 13 , Cebuano and Luxembourgish, the quality in mC4 is quite poor. Training a tokenizer on these datasets results in a tokenizer that, on average, encodes fewer characters per token than when 12 Since we do not use weight decay, this is equivalent to using Adam (Kingma and Ba, 2015). 13 Results not included in the paper as the language is not provided by OSCARv23.01. Nevertheless, these corpora (and all web-crawled corpora of low-resource languages) can also be expected to be noisy.\nDownstream tasks. We detail our hyperparameters for all downstream tasks in Table 6. We largely followed default values of finetuning scripts provided by Huggingface 14 , but adjusted the training epochs depending on dataset size, added a linear learning rate warmup for 10% of total training steps, and adjusted the batch size based on used GPU memory per task. Additionally, we used a 2 × 10 -5 peak learning rate for all non-QA tasks. We repeated each experiment five times with the random seeds {1,2,3,4,5} and report the mean and standard deviation across runs. For XNLI, we report accuracy, for TyDiQA, GermanQuAD, WikiANN, MasakhaNERv2, and GermEval14, we report the F 1 -Score.\nTokenizer training. Our language-specific tokenizers were trained in the same way as XLM-R for comparability, specifically SentencePiece tokenization (Kudo and Richardson, 2018) with the Unigram algorithm (Kudo, 2018). We used Hug- \n0.9 0.9 0.9 0.9 0.9 Adam β 2 0.999 0.999 0.999 0.999 0.999 ently than XLM-R's tokenizer. In our experiments in this paper, we only use the XLM-R tokenizer, which also matches the source model's tokenizer, and therefore avoid these problems. However, a correct canonicalization of tokens to a common form is crucial to enable FOCUS when the tokenizers of source and target model do not match. We implement such a canonicalization method for common tokenizers and release it as part of our ready-to-use implementation of FOCUS. 15 " } ]
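As a rough guide to the training setup described in this appendix, the following sketch sets up the optimizer, schedule, and MLM data pipeline with PyTorch and the transformers helpers. The masking probability is not stated above, so the common 15% MLM default is assumed; model paths and variable names are placeholders, the original training used pytorch-lightning, and text is truncated here rather than chunked into 256-token sequences for brevity.

```python
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          get_constant_schedule_with_warmup)

tokenizer = AutoTokenizer.from_pretrained("path/to/target-tokenizer")   # placeholder path
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")
# The embedding matrix would be resized and initialized (e.g., with FOCUS) before this point.

# Dynamic masking for the MLM objective; 15% is an assumed default, not stated in the appendix.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

raw = load_dataset("text", data_files={"train": "target_corpus.txt"})["train"]
tokenized = raw.map(lambda b: tokenizer(b["text"], truncation=True, max_length=256),
                    batched=True, remove_columns=["text"])
train_dataloader = DataLoader(tokenized, batch_size=128, shuffle=True, collate_fn=collator)

# AdamW without weight decay (equivalent to Adam), linear warmup for 39,062 steps, then constant 5e-5.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, betas=(0.9, 0.999), weight_decay=0.0)
scheduler = get_constant_schedule_with_warmup(optimizer, num_warmup_steps=39_062)

model.train()
for step, batch in enumerate(train_dataloader):
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
    if step + 1 >= 390_625:   # 50M samples / batch size 128 optimizer steps
        break
```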
Using model weights pretrained on a highresource language as a warm start can reduce the need for data and compute to obtain highquality language models for other, especially low-resource, languages. However, if we want to use a new tokenizer specialized for the target language, we cannot transfer the source model's embedding matrix. In this paper, we propose FOCUS -Fast Overlapping Token Combinations Using Sparsemax, a novel embedding initialization method that initializes the embedding matrix effectively for a new tokenizer based on information in the source model's embedding matrix. FOCUS represents newly added tokens as combinations of tokens in the overlap of the source and target vocabularies. The overlapping tokens are selected based on semantic similarity in an auxiliary static token embedding space. We focus our study on using the multilingual XLM-R as a source model and empirically show that FOCUS outperforms random initialization and previous work in language modeling and on a range of downstream tasks (NLI, QA, and NER). We publish our checkpoints and code on GitHub. 1
FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of FOCUS's initialization strategy for embeddings of new tokens (blue dot): Find similar tokens (orange dots) in an auxiliary fastText embedding space; then initialize the new token as their weighted mean in the pretrained embedding space.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Masked Language Modeling (MLM) loss of different methods for vocabulary replacement over the course of further pretraining (LAPT), evaluated on a held out set. The first data point is logged at 1 million samples. For random initialization, we plot only the second stage, i.e., after already training just the embeddings for 10 million samples. This allows us to compare FOCUS and WECHSEL embedding initialization directly with gradient descent training of the embeddings.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Size of datasets from CC100 used for LAPT.", "figure_data": "Language Dataset Size (GB)German18 GBArabic5.4 GBKiswahili0.3 GBHausa0.06 GBisiXhosa0.03 GB", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results on Natural Language Inference and Question Answering tasks. Details on the datasets used for evaluation are given in Section 3.3. We bold the best result in each section and underline the overall best result. LAPT is short for language-adaptive pretraining; we perform LAPT for 50 million samples on unlabeled target texts.", "figure_data": "XNLI (translate-train)GermanQuAD / TyDiQAMethodGermanArabicKiswahili Avg.GermanArabicKiswahili Avg.XLM-R (original vocab)-off-the-shelf78.8 ± 0.3 74.8 ± 0.4 69.1 ± 0.4 74.271.3 ± 0.4 78.4 ± 0.9 73.9 ± 1.4 74.5-LAPT78.9 ± 0.4 75.1 ± 0.6 72.4 ± 0.4 75.570.5 ± 0.8 78.9 ± 0.5 75.8 ± 1.0 75.0XLM-R (replaced vocab)-Random + LAPT †77.6 ± 0.4 74.6 ± 0.4 71.2 ± 0.3 74.569.1 ± 0.7 79.3 ± 0.6 74.2 ± 1.0 74.2-WECHSEL EN + LAPT77.7 ± 0.5 75.4 ± 0.3 72.0 ± 0.2 75.071.0 ± 0.4 79.3 ± 1.0 75.2 ± 0.7 75.2-WECHSEL + LAPT78.2 ± 0.2 76.0 ± 0.2 72.3 ± 0.3 75.570.5 ± 0.5 79.4 ± 0.9 75.5 ± 1.5 75.1-FOCUS + LAPT78.3 ± 0.6 76.5 ± 0.4 72.9 ± 0.5 75.971.3 ± 0.2 79.1 ± 0.4 76.5 ± 1.5 75.6XLM-R (extended vocab)-Random + LAPT77.7 ± 0.6 75.2 ± 0.6 71.8 ± 0.4 74.969.8 ± 0.6 77.7 ± 0.6 76.3 ± 0.8 74.6-FOCUS + LAPT78.0 ± 0.4 75.5 ± 0.4 72.1 ± 0.2 75.269.5 ± 0.3 77.8 ± 1.0 77.0 ± 0.6 74.7WikiANNGermEval14 / MasakhaNERv2MethodGermanArabicKiswahili Avg.German KiswahiliHausaisiXhosa Avg.-off-the-shelf86.3 ± 0.2 85.7 ± 0.3 86.6 ± 0.5 86.285.6 ± 0.3 92.0 ± 0.1 84.2 ± 0.5 85.5 ± 0.3 87.1-LAPT86.7 ± 0.1 87.1 ± 0.1 86.9 ± 0.6 86.986.8 ± 0.2 92.5 ± 0.2 85.6 ± 0.4 88.3 ± 0.2 88.3XLM-R (replaced vocab)-Random + LAPT †86.0 ± 0.1 87.5 ± 0.1 85.8 ± 0.5 86.485.9 ± 0.3 92.3 ± 0.2 85.0 ± 0.3 87.4 ± 0.2 87.8-WECHSEL EN + LAPT86.4 ± 0.1 87.8 ± 0.1 86.6 ± 0.9 87.086.4 ± 0.2 92.3 ± 0.1- ‡- ‡--WECHSEL + LAPT86.5 ± 0.2 87.9 ± 0.3 87.4 ± 0.6 87.386.7 ± 0.1 92.2 ± 0.1- ‡- ‡--FOCUS + LAPT86.6 ± 0.2 87.9 ± 0.1 86.9 ± 0.4 87.186.6 ± 0.0 92.6 ± 0.1 86.0 ± 0.4 88.5 ± 0.4 88.4XLM-R (extended vocab)-Random + LAPT85.6 ± 0.2 85.2 ± 0.3 86.2 ± 0.7 85.685.4 ± 0.3 92.0 ± 0.2 84.1 ± 0.2 87.2 ± 0.4 87.5-FOCUS + LAPT86.0 ± 0.1 85.3 ± 0.3 86.2 ± 0.3 85.885.6 ± 0.2 92.1 ± 0.2 84.9 ± 0.4 87.7 ± 0.3 87.9", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
Konstantin Dobler; Gerard De Melo
[ { "authors": "Julien Abadji; Pedro Ortiz Suarez; Laurent Romary; Benoît Sagot", "journal": "European Language Resources Association", "ref_id": "b0", "title": "Towards a cleaner documentoriented multilingual crawled corpus", "year": "2022" }, { "authors": "David Adelani; Graham Neubig; Sebastian Ruder; Shruti Rijhwani; Michael Beukman; Chester Palen-Michel; Constantine Lignos; Jesujoba Alabi; Shamsuddeen Muhammad; Peter Nabende; M Cheikh; Andiswa Bamba Dione; Rooweither Bukula; Mabuya; F P Bonaventure; Blessing Dossou; Happy Sibanda; Jonathan Buzaaba; Godson Mukiibi; Derguene Kalipe; Amelia Mbaye; Fatoumata Taylor; Chris Kabore; Anuoluwapo Chinenye Emezue; Perez Aremu; Catherine Ogayo; Edwin Gitau; Victoire Munkoh-Buabeng; Memdjokam Koagne; Auguste Allahsera; Tebogo Tapo; Vukosi Macucwa; Mboning Marivate; Tajuddeen Tchiaze Elvis; Tosin Gwadabe; Orevaoghene Adewumi; Joyce Ahia; Neo Nakatumba-Nabende; Ignatius Lerato Mokono; Chiamaka Ezeani; Chukwuneke; Oluwaseun Mofetoluwa; Gilles Adeyemi; Idris Quentin Hacheme; Odunayo Abdulmumin; Oreen Ogundepo; Tatiana Yousuf; Dietrich Moteu; Klakow", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "MasakhaNER 2.0: Africa-centric transfer learning for named entity recognition", "year": "2022" }, { "authors": "O Jesujoba; David Alabi; Marius Ifeoluwa Adelani; Dietrich Mosbach; Klakow", "journal": "International Committee on Computational Linguistics", "ref_id": "b2", "title": "Adapting pretrained language models to African languages via multilingual adaptive fine-tuning", "year": "2022" }, { "authors": "Mikel Artetxe; Sebastian Ruder; Dani Yogatama", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "On the cross-lingual transferability of monolingual representations", "year": "2020" }, { "authors": "Darina Benikova; Chris Biemann; Marc Reznicek", "journal": "European Language Resources Association (ELRA", "ref_id": "b4", "title": "NoSta-D named entity annotation for German: Guidelines and dataset", "year": "2014" }, { "authors": "Piotr Bojanowski; Edouard Grave; Armand Joulin; Tomas Mikolov", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b5", "title": "Enriching word vectors with subword information", "year": "2017" }, { "authors": "Ethan C Chau; Lucy H Lin; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Parsing with multilingual BERT, a small corpus, and a small treebank", "year": "2020" }, { "authors": "Ethan C Chau; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Specializing multilingual language models: An empirical study", "year": "2021" }, { "authors": "Jonathan H Clark; Eunsol Choi; Michael Collins; Dan Garrette; Tom Kwiatkowski; Vitaly Nikolaev; Jennimaria Palomaki", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b8", "title": "TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages", "year": "2020" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Alexis Conneau; Ruty Rinott; Guillaume Lample; Adina Williams; Samuel Bowman; Holger Schwenk; Veselin 
Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "XNLI: Evaluating crosslingual sentence representations", "year": "2018" }, { "authors": "Wietse De; Vries ; Malvina Nissim", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "As good as new. how to successfully recycle English GPT-2 to make models for other languages", "year": "2021" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Abteen Ebrahimi; Katharina Kann", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "How to adapt your pretrained multilingual model to 1600 languages", "year": "2021" }, { "authors": "William Falcon", "journal": "", "ref_id": "b14", "title": "The PyTorch Lightning team, and Open Source Contributors", "year": "2019" }, { "authors": "Zuohui Fu; Yikun Xian; Shijie Geng; Yingqiang Ge; Yuting Wang; Xin Dong; Guang Wang; Gerard De; Melo ", "journal": "Huggingface Tokenizers", "ref_id": "b15", "title": "ABSent: Cross-lingual sentence representation mapping with bidirectional GANs", "year": "2020" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b16", "title": "Adam: A method for stochastic optimization", "year": "2015-05-07" }, { "authors": "Taku Kudo", "journal": "", "ref_id": "b17", "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "year": "2018" }, { "authors": "Taku Kudo; John Richardson", "journal": "", "ref_id": "b18", "title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018" }, { "authors": "Le Teven; Thomas Scao; Daniel Wang; Stas Hesslow; M Bekman; Stella Saiful Bari; Hady Biderman; Niklas Elsahar; Jason Muennighoff; Ofir Phang; Colin Press; Victor Raffel; Sheng Sanh; Lintang Shen; Jaesung Sutawika; Tae; Xin Zheng; Julien Yong; Iz Launay; Beltagy", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "What language model to train if you have one million GPU hours?", "year": "2022" }, { "authors": "Quentin Lhoest; Albert Villanova Del Moral; Thomas Patrick Von Platen; Mario Wolf; Yacine Šaško; Abhishek Jernite; Lewis Thakur; Suraj Tunstall; Mariama Patil; Julien Drame; Julien Chaumond; Joe Plu; Simon Davison; Victor Brandeis; Teven Sanh; Kevin Canwen Le Scao; Nicolas Xu; Steven Patry; Angelina Liu; Philipp Mcmillan-Major; Sylvain Schmid; Nathan Gugger; Raw; Anton Sylvain Lesage; Matthew Lozhkov; Théo Carrigan; Matussière; Lysandre Leandro Von Werra; Stas Debut; Clément Bekman; Delangue", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Datasets: A Community Library for Natural Language Processing", "year": "2021" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b21", "title": "Decoupled weight decay regularization", "year": "2019-05-06" }, { "authors": "F T André; Ramón Martins; Astudillo Fernandez", "journal": "", "ref_id": "b22", "title": "From softmax to sparsemax: A sparse model of attention and multi-label classification", "year": "2016" }, { "authors": "Tomás Mikolov; Kai Chen; Greg Corrado; Jeffrey Dean", "journal": "", "ref_id": "b23", "title": "Efficient estimation of word representations in vector space", "year": 
"2013-05-02" }, { "authors": "Benjamin Minixhofer; Fabian Paischer; Navid Rekabsaz", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models", "year": "2022" }, { "authors": "Timo Möller; Julian Risch; Malte Pietsch", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "GermanQuAD and GermanDPR: Improving non-English question answering and passage retrieval", "year": "2021" }, { "authors": "Benjamin Muller; Antonios Anastasopoulos; Benoît Sagot; Djamé Seddah", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "When being unseen from mBERT is just the beginning: Handling new languages with multilingual language models", "year": "2021" }, { "authors": "Aitor Ormazabal; Mikel Artetxe; Gorka Labaka; Aitor Soroa; Eneko Agirre", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Analyzing the limitations of cross-lingual word embedding mappings", "year": "2019" }, { "authors": "Malte Ostendorff; Georg Rehm", "journal": "", "ref_id": "b28", "title": "Efficient language model training through cross-lingual and progressive transfer learning", "year": "2023" }, { "authors": "Xiaoman Pan; Boliang Zhang; Jonathan May; Joel Nothman; Kevin Knight; Heng Ji", "journal": "", "ref_id": "b29", "title": "Cross-lingual name tagging and linking for 282 languages", "year": "2017" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Kopf; Edward Yang; Zachary Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "", "ref_id": "b30", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b31", "title": "", "year": "" }, { "authors": "Barun Patra; Joel Ruben; Antony Moniz; Sarthak Garg; Matthew R Gormley; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Bilingual lexicon induction with semi-supervision in non-isometric embedding spaces", "year": "2019" }, { "authors": "Jonas Pfeiffer; Ivan Vulić; Iryna Gurevych; Sebastian Ruder", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer", "year": "2020" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "", "ref_id": "b34", "title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer", "year": "2020" }, { "authors": "Afshin Rahimi; Yuan Li; Trevor Cohn", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Massively multilingual transfer for NER", "year": "2019" }, { "authors": "Phillip Rust; Jonas Pfeiffer; Ivan Vulić; Sebastian Ruder; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "How good is your tokenizer? 
on the monolingual performance of multilingual language models", "year": "2021" }, { "authors": "Andreas Rücklé; Gregor Geigle; Max Glockner; Tilman Beck; Jonas Pfeiffer; Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b37", "title": "Adapterdrop: On the efficiency of adapters in transformers", "year": "2021" }, { "authors": "Peter Schönemann", "journal": "Psychometrika", "ref_id": "b38", "title": "A generalized solution of the orthogonal procrustes problem", "year": "1966" }, { "authors": "Anders Søgaard; Sebastian Ruder; Ivan Vulić", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "On the limitations of unsupervised bilingual dictionary induction", "year": "2018" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b40", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Ke Tran", "journal": "", "ref_id": "b41", "title": "From english to foreign languages: Transferring pre-trained language models", "year": "2020" }, { "authors": "Giorgos Vernikos; Andrei Popescu-Belis", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Subword mapping and anchoring across languages", "year": "2021" }, { "authors": "Ivan Vulić; Sebastian Ruder; Anders Søgaard", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Are all good word vector spaces isomorphic?", "year": "2020" }, { "authors": "Hai Wang; Dian Yu; Kai Sun; Jianshu Chen; Dong Yu", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Improving pre-trained multilingual model with vocabulary expansion", "year": "2019" }, { "authors": "Xinyi Wang; Sebastian Ruder; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "Expanding pretrained models to thousands more languages via lexicon-based adaptation", "year": "2022" }, { "authors": "Zihan Wang; K Karthikeyan; Stephen Mayhew; Dan Roth", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "Extending multilingual BERT to lowresource languages", "year": "2020" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Perric Cistac; Clara Ma; Yacine Jernite; Julien Plu; Canwen Xu; Teven Le Scao; Sylvain Gugger; Mariama Drame; Quentin Lhoest; Alexander M Rush", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Transformers: State-of-the-Art Natural Language Processing", "year": "2020" }, { "authors": "Shijie Wu; Mark Dredze", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "Are all languages created equal in multilingual BERT", "year": "2020" }, { "authors": "Aohan Zeng; Xiao Liu; Zhengxiao Du; Zihan Wang; Hanyu Lai; Ming Ding; Zhuoyi Yang; Yifan Xu; Wendi Zheng; Xiao Xia", "journal": "", "ref_id": "b49", "title": "Glm-130b: An open bilingual pre-trained model", "year": "2022" }, { "authors": "Qingcheng Zeng; Lucas Garay; Peilin Zhou; Dading Chong; Yining Hua; Jiageng Wu; Yikang Pan; Han Zhou; Jie Yang", "journal": "", "ref_id": "b50", "title": "Greenplm: Cross-lingual pre-trained language models conversion with (almost) no cost", "year": "2022" }, { "authors": "Barret Zoph; Deniz Yuret; Jonathan May; Kevin Knight", 
"journal": "Association for Computational Linguistics", "ref_id": "b51", "title": "Transfer learning for low-resource neural machine translation", "year": "2016" } ]
[ { "formula_coordinates": [ 2, 368.87, 714.9, 156.27, 16.97 ], "formula_id": "formula_0", "formula_text": "∀o ∈ O : #» e t o = #» e s o .(1)" }, { "formula_coordinates": [ 3, 103.33, 372.49, 186.53, 15.91 ], "formula_id": "formula_1", "formula_text": "#» c a = [sim(a, o 1 ), . . . , sim(a, o n )](2)" }, { "formula_coordinates": [ 3, 122.92, 407.87, 166.95, 30.78 ], "formula_id": "formula_2", "formula_text": "sim(a, o) := #» x a • #» x o ∥ #» x a ∥∥ #» x o ∥ .(3)" }, { "formula_coordinates": [ 3, 76.69, 621.08, 213.18, 23.78 ], "formula_id": "formula_3", "formula_text": "#» w a = sparsemax( #» c a ) = argmin #» p ∈∆ ∥ #» p -#» c a ∥ 2 (4)" }, { "formula_coordinates": [ 3, 88.08, 677.94, 201.79, 17.64 ], "formula_id": "formula_4", "formula_text": "∆ := { #» p ∈ R |O| | #» 1 • #» p = 1, #» p ≥ 0}.(5)" }, { "formula_coordinates": [ 3, 349.92, 217.49, 175.22, 27.58 ], "formula_id": "formula_5", "formula_text": "∀a ∈ A : #» e t a = o∈O w a,o #» e s o .(6)" } ]
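The formulas above spell out the FOCUS procedure: copy the embeddings of overlapping tokens (Eq. 1), compute cosine similarities between a new token and all overlapping tokens in an auxiliary static (fastText) space (Eqs. 2-3), turn these similarities into sparse convex weights with sparsemax (Eqs. 4-5), and initialize the new token as the weighted mean of the overlapping tokens' source embeddings (Eq. 6). The following NumPy sketch is illustrative only; the helper names (`sparsemax`, `focus_init`), the dictionary-based vocabularies, and the random fallback for tokens without an auxiliary vector are assumptions, not the authors' released implementation.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex (Eqs. 4-5)."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    ks = np.arange(1, z.size + 1)
    k = ks[1.0 + ks * z_sorted > cumsum][-1]   # size of the support set
    tau = (cumsum[k - 1] - 1.0) / k            # threshold
    return np.maximum(z - tau, 0.0)

def focus_init(source_emb, src_vocab, tgt_vocab, aux_vec, seed=0):
    """Build a target embedding matrix from a source model plus auxiliary static vectors.

    source_emb : (|V_src|, d) array of pretrained source embeddings
    src_vocab / tgt_vocab : dicts mapping token string -> row index
    aux_vec    : dict mapping target-vocabulary token -> static (e.g. fastText) vector
    """
    d = source_emb.shape[1]
    rng = np.random.default_rng(seed)
    # Fallback for tokens we cannot ground at all (a simplification of this sketch).
    target_emb = rng.normal(0.0, 0.02, size=(len(tgt_vocab), d))

    overlap = [t for t in tgt_vocab if t in src_vocab and t in aux_vec]
    new_tokens = [t for t in tgt_vocab if t not in src_vocab and t in aux_vec]

    # Eq. (1): overlapping tokens keep their source embeddings.
    for t in overlap:
        target_emb[tgt_vocab[t]] = source_emb[src_vocab[t]]

    # Unit-normalize auxiliary vectors so dot products equal cosine similarity (Eq. 3).
    X_o = np.stack([aux_vec[t] for t in overlap])
    X_o = X_o / np.linalg.norm(X_o, axis=1, keepdims=True)
    E_o = np.stack([source_emb[src_vocab[t]] for t in overlap])

    for t in new_tokens:
        x_a = aux_vec[t] / np.linalg.norm(aux_vec[t])
        c_a = X_o @ x_a               # Eq. (2): similarity to every overlapping token
        w_a = sparsemax(c_a)          # Eq. (4): sparse convex combination weights
        target_emb[tgt_vocab[t]] = w_a @ E_o   # Eq. (6): weighted mean in the source space
    return target_emb
```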
10.18653/v1/2022.gebnlp-1.27
2023-10-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b10", "b16", "b21", "b0" ], "table_ref": [], "text": "Language models and pre-trained representations in Natural Language Processing (NLP) are known to manifest biases against groups of people, including negative stereotypes connected to ethnicity or gender (Nangia et al., 2020;Nadeem et al., 2021). It has been extensively studied with monolingual models. Multilingual models, often used for model transfer between languages, introduce another potential issue: stereotypes of speakers of some languages can be imposed in other languages covered by the model.\nIn this case study, we try to determine the most prominent biases connected to European countries in multilingual sentence representation models. We adopt an unsupervised methodology ( § 2) based on hand-crafted prompt templates and principal component analysis (PCA), originally developed to extract moral sentiments from sentence representation (Schramowski et al., 2022).\nOur exploration encompasses four sentence representation models across 13 languages ( § 3). We find only minor differences between languages in the models. The results ( § 4) show that the strongest dimension in all models correlates with the political and economic distinction between Western and Eastern Europe and the Gross Domestic Product (GDP). Prompting specifically for country prestige leads to similar results. When prompted for occupations, the models can distinguish between low and high-prestige jobs. In most cases, the extracted job-prestige dimension only loosely correlates with the country-prestige dimension. This result suggests that the models do not connect individual social prestige with the country of origin. The exception is a small model distilled from Multilingual Universal Sentence Encoder (Yang et al., 2020) that seems to mix these two and thus confirms previous work claiming that distilled models are more prone to biases (Ahn et al., 2022).\nThe source code for the experiments is available on GitHub.1 " }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "We analyze sentence representation models ( § 2.1) using a generalization of the Moral Direction framework ( § 2.2). We represent concepts (countries, jobs) using sets of templated sentences ( § 2.3), for which we compute the sentence embeddings. Then, we compute the principal components of the embeddings and analyze what factors statistically explain the main principal component ( § 2.4)." }, { "figure_ref": [], "heading": "Sentence Embeddings Models", "publication_ref": [ "b6", "b13", "b4", "b14", "b9", "b7" ], "table_ref": [], "text": "Sentence-embedding models are trained to produce a single vector capturing the semantics of an entire sentence. Contextual embeddings trained via the masked-language-modeling objective (Devlin et al., 2019) capture subwords well in context; however, they fail to provide a sentence representation directly comparable across sentences. Sentence-BERT (Reimers and Gurevych, 2019) approaches this problem by fine-tuning existing contextual embeddings using Siamese Networks on sentence classification tasks. As a result, sentences with similar meanings receive similar vector representation.\nThe issue of sentence representation also applies to multilingual contextual embeddings such as XLM-R (Conneau et al., 2020). In a multilingual setup, the additional requirement is that similar sentences receive similar representation regardless of the language. 
This is typically achieved using parallel data for knowledge distillation (Reimers and Gurevych, 2020;Heffernan et al., 2022) or more directly in a dual encoder setup (Feng et al., 2022)." }, { "figure_ref": [], "heading": "Embedding Analysis Method", "publication_ref": [ "b16" ], "table_ref": [], "text": "We base our methodology on an unsupervised method for extracting semantic dimensions from sentence embeddings, originally introduced in the context of moral intuitions (Schramowski et al., 2022). The original study analyzed the moral sentiment of English verb phrases using SentenceBERT.\nThe method consists of three steps:\n1. Generate templated sentences associating verbs with morality (e.g., \"You should smile.\", \"It is good to smile.\") and average them for each verb. I.e., there is one average sentence embedding per verb.\n2. The sentences are processed with Sentence-BERT, and the representations are averaged for each phrase.\n3. Apply PCA over the representations.\nThe results show that the most significant dimension roughly corresponds to the moral sentiment of the phrases. They use multiple templates so that linguistic differences and potential verb connotations average out. Using templated sentences also eliminates linguistic diversity in the data. Because of that, the main principal component does not capture linguistic differences but the most prominent semantic nuances across the verbs when used in the specific context of moral intuitions.\nWe extend this method to a more exploratory setup. We use a similar set of template sentences, putting countries and occupations in the context of prestige. We average their embeddings and get the main principal component using PCA. Then, using three different template sets, we analyse what the main principal component best correlates with." }, { "figure_ref": [], "heading": "Templating Sentences", "publication_ref": [ "b8", "b17" ], "table_ref": [], "text": "Similar to Hämmerl et al. (2023), who extended the Moral Direction framework to multilingual models, we use templates in English and machine-translate the sentences into other languages after filling in the templates. We use three template sets. The sets consist of synonymous sentences with the following meaning:\n1. They come from [COUNTRY].\n2. Being from [COUNTRY] is considered prestigious.\n3. Working as [JOB] is considered prestigious.\nSee Appendix A for the complete template list.\nIn the first set of templated sentences, we search for the general trend in how countries are represented. In the second set of sentences, we specifically prompt the model for country prestige to compare how general country representation correlates with assumed prestige. In the third case, we fit the PCA with templates containing job titles, i.e., the most prominent dimension captures job prestige according to the models. We apply the same projection to template representations related to country prestige from Set 2 (country prestige).\nCountries. We include all European countries of the size of at least Luxembourg and use their short names (e.g., Germany instead of the Federal Republic of Germany), which totals 40 countries. The list of countries is in Appendix A.3.\nLow- and high-prestige jobs. We base our list of low- and high-prestige jobs on a sociological study conducted in 2012 in the USA (Smith and Son, 2014). We manually selected 30 jobs for each category to avoid repetitions and to exclude US-specific positions. 
By using this survey, we also bring in the assumption that the European countries have approximately similar cultural distance from the USA. The complete list of job titles used is in Appendix A.2." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [], "table_ref": [], "text": "Interpreting the dominant dimension. For the analysis, we assign the countries with abstract labels based on geographical (location, mountains, seas), political (international organization membership, common history), and linguistic features (see Table 5 in the Appendix for a detailed list). The labels are not part of the templates. We compute the Pearson correlation of the one-hot indicator vector of the country labels with the extracted dominant dimension to provide an interpretation of the dimension (sometimes also called point-biserial correlation). Finally, because creating a fixed unambiguous list of Western and Eastern countries is difficult and most criteria are ambiguous, we manually annotate if the most positively and negatively correlated labels correspond to the economic and political distinction between Eastern and Western Europe.\nIn addition, we compute the country dimension's correlation with the respective countries' GDP based on purchasing power parity in 2019, according to the World Bank.2 \nCross-lingual comparison. We measure how the extracted dimensions correlate across languages. To explain where differences across languages come from, we compute how the differences correlate with the geographical distance of the countries where the languages are spoken, the GDP of the countries, and the lexical similarity of the languages (Bella et al., 2021).3\n3 Experimental Setup" }, { "figure_ref": [], "heading": "Evaluated Sentence Embeddings", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "We experimented with diverse sentence embedding models, which were trained using different methods. We used models available in the SentenceBERT repository and an additional model. The overview of the models is in Table 1." }, { "figure_ref": [], "heading": "Multilingual MPNet", "publication_ref": [ "b18", "b13", "b4", "b14", "b21", "b15", "b7", "b8", "b13" ], "table_ref": [], "text": "Multilingual MPNet was created by multilingual distillation from the monolingual English MPNet Base model (Song et al., 2020) finetuned for sentence representation using paraphrasing (Reimers and Gurevych, 2019). In the distillation stage, XLM-R Base (Conneau et al., 2020) was finetuned to produce similar sentence representations using parallel data (Reimers and Gurevych, 2020).\nDistilled mUSE is a distilled version of Multilingual Universal Sentence Encoder (Yang et al., 2020) that was distilled into Distill mBERT (Sanh et al., 2019). This model was both trained and distilled multilingually.\nLaBSE (Feng et al., 2022) was trained on a combination of monolingual data and parallel data with a max-margin objective for better parallel sentence mining combined with masked language modeling. XLM-R-XNLI is trained without parallel data using machine-translated NLI datasets (Hämmerl et al., 2023). The model is based on XLM-R Base but was finetuned using Arabic, Chinese, Czech, English, and German data following the Sentence-BERT recipe (Reimers and Gurevych, 2019)."
}, { "figure_ref": [], "heading": "Translating Templates", "publication_ref": [], "table_ref": [], "text": "To evaluate the multilingual representations in more languages, we machine translate the templated text into 12 European languages: Bulgarian, Czech, German, Greek, Spanish, Finnish, French, Hungarian, Italian, Portuguese, Romanian, and Russian (and keep the English original). We selected languages for which high-quality machine translation systems are available on the Huggingface Hub. The models are listed in Appendix B." }, { "figure_ref": [ "fig_0" ], "heading": "Results", "publication_ref": [ "b19", "b0" ], "table_ref": [ "tab_1", "tab_4", "tab_5", "tab_2" ], "text": "Aggregated results. The results aggregated over language are presented in Table 2. The detailed results per language are in the Appendix in Tables 6 and7.\nWhen prompting the models for countries, the most prominent dimensions almost always separate the countries according to the political east-west axis, consistently across languages. This is further stressed by the high correlation of the country dimension with the country's GDP, which is particularly strong in multilingual MPNet and Distilled mUSE. When we prompt the models specifically for country prestige, the correlation with the country's GDP slightly increases.\nWhen we prompt the models for job prestige, they can distinguish high-and low-prestige jobs well (accuracy 85-93%). When we apply the same projection to prompts about countries, in most cases, the ordering of the countries is random. Therefore, we conclude that the models do not correlate job prestige and country of origin.\nThe only exception is Distilled mUSE, where the job-prestige dimension applied to countries still correlates with the country's GDP and the east-west axis. This is consistent with previous work showing that distilled student models exhibit more biases than model trained on authentic data (Vamvas and Sennrich, 2021;Ahn et al., 2022).\nDifferences between languages. Further, we evaluate how languages differ from each other.\nIn all models, the first PCA dimension from the job-prestige prompts separates low-and highprestige jobs almost perfectly. Nevertheless, multilingual MPNet and distilled mUSE show a relatively low correlation of the job dimension across languages (see Figure 1).\nFinally, we try to explain the correlation between languages by connecting them to countries where the languages are spoken. We measure how the correlation between languages correlates with the geographical distances of (the biggest) countries speaking the language, the difference in their GDP, and the lexical similarity of the languages. The results are presented in Table 3: Correlation of the language similarities (in terms of cross-language correlation of the job-prestige dimension) with the geographical distance of the countries, language similarity, and GDP.\nFor all models except XLM-R-NLI, the lexical similarity of the languages is the strongest predictor. XLM-R-NLI, with low differences between languages, better correlates with geographical distances." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b3", "b5", "b12", "b1", "b8" ], "table_ref": [], "text": "Societal biases of various types in neural NLP models are widely studied, especially focusing on gender and ethnicity. The results of the efforts were already summarized in comprehensive overviews (Blodgett et al., 2020;Delobelle et al., 2022).\nNationality bias has also been studied. Venkit et al. 
(2023) show that GPT-2 associates countries of the global south with negative-sentiment adjectives. However, only a few studies focus on biases in how multilingual models treat different lan-guages. Papadimitriou et al. (2023) showed that in Spanish and Greek, mBERT prefers syntactic structures prevalent in English. Arora et al. (2022) and Hämmerl et al. (2023) studied differences in moral biases in multilingual language models, concluding there are some differences but no systematic trends. Yin et al. ( 2022) created a dataset focused on culturally dependent factual knowledge (e.g., the color of the wedding dress) and concluded it is not the case that Western culture propagates across languages." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "We showed that all sentence representation models carry a bias that the most prominent feature of European countries is their economic power and affiliation to former geopolitical Western and Eastern blocks. In the models we studied, this presumed country prestige does not correlate with how the models represent the occupation status of people. The exception is Distilled mUSE, where the two correlate, which might lead to discrimination based on nationality." }, { "figure_ref": [], "heading": "Limitations & Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "The validity for different cultures. The \"ground truth\" for job prestige was taken from studies conducted in the USA. They might not be representative of other countries included in this case study. Given that all countries considered in this case study are a part of the so-called Global North, we can assume a certain degree of cultural similarity, which makes our results valid. However, our methodology is not guaranteed to generalize beyond the Western world.\nUnintended use. Some methods we use in the study might create a false impression that we have developed a scorer of job or country prestige. This is not the case. The correlations that we show in the Results section ( § 4) do not guarantee the reliability of the scoring beyond the intended use in the study, which is an assessment of multilingual sentence representation models. Drawing any conclusions about occupation groups, nations, or individual people using the methods used in this study might have harmful consequences. " }, { "figure_ref": [], "heading": "A.2 Low-and High-prestige jobs", "publication_ref": [], "table_ref": [], "text": "Low-profile jobs. a hotel chambermaid, a doorto-door salesman, a leaflet distributor, a janitor, a used car salesman, a bartender, a telephone operator, a carwash attendant, a cattle killer in a slaughtering plant, a dishwasher, a stockroom attendant, a box-folding-machine operator, a crushing-machine operator, a taxicab driver, a bicycle messenger, a salesperson in a hardware store, a street sweeper, a cashier in a supermarket, a pump operator, a railroad ticket agent, a desk clerk in a hotel, a cable TV installer, a sewing machine operator, a waiter in a restaurant, an assembly line worker, a shoeshiner, a ditch digger, an unskilled worker in a factory, a tire retreader, a dry cleaner High-profile jobs. 
a surgeon, a university professor, an architect, a lawyer, a priest, a banker, a school principal, an airline pilot, an economist, a network administrator, an air traffic controller, an author, a nuclear plant operator, a computer scientist, a psychologist, a pharmacist, a colonel in the army, a mayor of a city, a university president, a dentist, a fire department lieutenant, a high school teacher, a policeman, a software developer, an actor, a fashion model, a journalist, a musician in a symphony orchestra, a psychiatrist, a chemical engineer" }, { "figure_ref": [], "heading": "A.3 Countries", "publication_ref": [], "table_ref": [], "text": "We consider the following 40 countries (ordered by their ISO 3166-1 codes): Austria, Bosnia and Herzegovina, Belgium, Bulgaria, Belarus, Switzerland, the Czech Republic, Cyprus, Den-mark, Germany, Greece, Spain, Estonia, Finland, France, Hungary, Croatia, Ireland, Iceland, Italy, Latvia, Lithuania, Luxembourg, Netherlands, Moldova, Montenegro, North Macedonia, Malta, Norway, Portugal, Poland, Romania, Russia, Slovakia, Slovenia, Albania, Serbia, Sweden, Turkey, Ukraine, Great Britain, Kosovo.\nThe qualitative group labels we assign to the countries we use in the further analysis are in Table 5. The values reflect the world as in the training data (estimated pre-2021) for the models and, therefore, do not reflect recent events (i.e., Croatia is not listed among countries paying with Euro, and Finland is considered neutral)." }, { "figure_ref": [], "heading": "B Machine Translation Models", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "The machine translation models we used are listed in Table 4 While keeping default values for all decoding parameters." }, { "figure_ref": [], "heading": "C Detailed per-language results", "publication_ref": [], "table_ref": [ "tab_4", "tab_5" ], "text": "The detailed per-language results are presented in Tables 6 and7.\nMultilingual Paraphrase MPNet (paraphrase-multilingual-mpnet-base-v2) " }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "Many thanks to Tomáš Musil and Rudolf Rosa for discussing the methodology in this paper and to Ondřej Dušek and Dominik Macháček for their comments on the paper draft.\nThis research was supported by the Charles University project PRIMUS/23/SCI/023." } ]
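The analysis described in the "Embedding Analysis Method" and "Evaluation" sections above reduces to a short pipeline: fill each template with a country name, encode and average the sentences with a multilingual sentence-embedding model, take the first principal component across countries, and correlate it with country attributes such as GDP. The following sketch assumes the sentence-transformers and scikit-learn APIs; the helper name `country_dimension` and the example template strings are illustrative, not the paper's released code.

```python
import numpy as np
from scipy.stats import pearsonr
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

def country_dimension(model_name, countries, templates, gdp):
    """Score each country on the most prominent embedding dimension and correlate it with GDP.

    model_name : a multilingual checkpoint, e.g. "paraphrase-multilingual-mpnet-base-v2"
    countries  : list of country names, e.g. ["Germany", "Poland", ...]
    templates  : template sentences containing a [COUNTRY] placeholder
    gdp        : dict mapping country name -> GDP (PPP) figure
    """
    model = SentenceTransformer(model_name)

    # One averaged sentence embedding per country over all filled-in templates.
    reps = np.stack([
        model.encode([t.replace("[COUNTRY]", c) for t in templates]).mean(axis=0)
        for c in countries
    ])

    # The first principal component across countries is the "dominant dimension".
    scores = PCA(n_components=1).fit_transform(reps).ravel()

    # Interpret it via correlation with GDP; the sign of a principal component is
    # arbitrary, so the absolute correlation is what matters.
    r, _ = pearsonr(scores, [gdp[c] for c in countries])
    return scores, abs(r)
```

In the same spirit, the PCA can instead be fitted on job-title templates and the resulting projection applied to the country-prestige sentences, which is how the job-prestige and country-prestige dimensions are compared in the tables above.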
We study how multilingual sentence representations capture European countries and occupations and how this differs across European languages. We prompt the models with templated sentences that we machine-translate into 12 European languages and analyze the most prominent dimensions in the embeddings. Our analysis reveals that the most prominent feature in the embedding is the geopolitical distinction between Eastern and Western Europe and the country's economic strength in terms of GDP. When prompted specifically for job prestige, the embedding space clearly distinguishes high and low-prestige jobs. The occupational dimension is uncorrelated with the most dominant country dimensions in three out of four studied models. The exception is a small distilled model that exhibits a connection between occupational prestige and country of origin, which is a potential source of nationality-based discrimination. Our findings are consistent across languages.
Is a Prestigious Job the same as a Prestigious Country? A Case Study on Multilingual Sentence Embeddings and European Countries
[ { "figure_caption": "Figure 1 :1Figure 1: Cross-language correlation of the job-prestige dimension. Languages are coded using ISO 639-1 codes.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "/opus-mt-tc-big-en-bg Czech cs Helsinki-NLP/opus-mt-tc-big-en-ces_slk German de facebook/wmt19-en-de Greek el Helsinki-NLP/opus-mt-tc-big-en-el English en -Spanish es Helsinki-NLP/opus-mt-tc-big-en-es Finnish fi Helsinki-NLP/opus-mt-tc-big-en-fi French fr Helsinki-NLP/opus-mt-tc-big-en-fr Hungarian hu Helsinki-NLP/opus-mt-tc-big-en-hu Italian it Helsinki-NLP/opus-mt-tc-big-en-it Portuguese pt Helsinki-NLP/opus-mt-tc-big-en-pt Romanian ro Helsinki-NLP/opus-mt-tc-big-en-ro Russian ru facebook/wmt19-en-ru", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Basic features of the studied models.", "figure_data": "ModelBackboneParallel dataParams.Mul. MPNetXLM-R BaseYes278MD. mUSEDistil-mBERTNo135MLaBSE-Yes471MXLM-R-NLIXLM-R BaseNo278M", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative results averaged over languages showing the average correlation of the dominant dimension with the country's GDP and a proportion of languages where the dominant dimension corresponds to the political division of Eastern and Western countries. The detailed per-language results are presented in Tables6 and 7in the Appendix.", "figure_data": "ModelCountry of origin East-West GDP cor. East-West GDP cor. East-West GDP cor. Country prestige Job prestigeJob class. accuracyMultiling. Paraphrase MPNet1.00.791.00.79.08.08.93Distilled mUSE1.00.711.00.71.69.41.85LaBSE1.00.481.00.50.23.09.88NLI-finetuned XLM-R.85.47.85.50.08.08.91Mul. Par. MPNet b g cs de el en es fi fr hu it pt ro ru c s d e e l e n e s f i f r h u i t p t r oDist. mUSE b g cs de c s d e el en e l e n es e s fi f i fr hu f r h u it i t pt p t ro ru r o", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "ModelGeo. dist.GDP diff.Lang. sim.MP MPNet.020.077.459D. mUSE.069.194.316LaBSE.175.076.443XLM-R-NLI.387.042.064", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Huggingface Hub Identifier of the machine translation models used for our experiments.", "figure_data": "A Templated textsA.1 Sentence TemplatesCountry of originThey are from [COUNTRY].A.1.1 Country prestigeBeing from [COUNTRY] is considered prestigious.Being from [COUNTRY] has high prestige.Being from [COUNTRY] is seen as a symbol ofsuccess.Being from [COUNTRY] is highly regarded.People from [COUNTRY] are looked upon withadmiration.People from [COUNTRY] are highly esteemed.It is prestigious to be from [COUNTRY].Job prestigeBeing [JOB] is considered prestigious.", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Detailed per-language results for Multilingual MPNet and Distilled Multilingual Sentence Encoder.", "figure_data": "Jobclass.acc..93.93.93.90.97.93.93.90.95.97.93.93.92.93Jobclass.acc..80.90.88.82.87.83.78.88.83.87.82.83.88.85Corr.w/ GDP----.33----.39-.31-.08Corr.w/ GDP.63.38.65-.62.64-.58.37.32.54.57-.41Job prestigre applied to countryThe main direction Is east-west? corr. 
⊕ label corr.-.31 German .50 --.37 Baltic state .43 --.38 German .53 --.44 German .46 --.43 German .51 --.42 German .48 -.32 German .50-.36 German .49 --.57 Post-Soviet .31 --.36 German .53 --.47 German .45 --.38 German .46 --.36 Alps .48 --.35 .47 0.08Job prestigre applied to countryThe main direction Is east-corr. ⊕ label corr. west?-.54 Yugoslavia .54-.42 EU-15 .52-.47 Germanic lng. .48-.34 Post-Soviet .40 --.53 EU-15 .49-.53 EU-15 .50.31 EU .43 --.55 EU-15 .46-.48 EU-15 .46-.36 Atlantic .33 --.46 EU-15 .47-.51 Yugoslavia .57-.53 alps .34 --.42 .46 0.69⊖ labelNorthNorthNorthNorthNorthNorthPost-SovietNorthNorthNorthNorthNorthMonarchy⊖ labelGermanYugoslaviaPost-comm.NorthYugoslaviaYugoslaviaNeutralYugoslaviaPost-comm.YugoslaviaYugoslaviaEU-15YugoslaviaCountry of origin Country prestigeLng The main direction Is east-Corr. The main direction Is east-Corr.west? w/ GDP west? w/ GDP ⊖ label corr. ⊕ label corr. ⊖ label corr. ⊕ label corr.bg Slavic lng. -.70 West .70 .80 Slavic lng. -.71 West .70 .80cs Slavic lng. -.69 West .71 .81 Slavic lng. -.69 West .71 .81de Slavic lng. -.69 West .68 .80 Balkan -.70 Germanic lng. .69 .81el Slavic lng. -.70 West .68 .80 Slavic lng. -.70 West .70 .81en Slavic lng. -.69 West .71 .80 Slavic lng. -.70 West .71 .80es Slavic lng. -.69 West .71 .80 Slavic lng. -.68 West .72 .81fi Balkan -.67 West .71 .81 Balkan -.70 West .68 .78fr Slavic lng. -.67 West .71 .81 Balkan -.68 West .69 .81hu Slavic lng. -.69 West .72 .80 Slavic lng. -.70 West .71 .80it Slavic lng. -.69 West .71 .81 Slavic lng. -.71 West .69 .81pt Slavic lng. -.67 West .71 .80 Slavic lng. -.68 West .70 .81ro Germanic lng. -.67 Slavic lng. .72 .81 Slavic lng. -.71 Germanic lng. .68 .81ru Post-comm. -.71 West .63 .62 Post-comm. -.69 West .62 .63Average -.69 .70 1 .79 -.70 .69 1 .79Distilled Multilingual Sentence Encoder (distiluse-base-multilingual-cased-v2)Country of origin Country prestigeLng The main direction Is east-Corr. The main direction Is east-Corr.west? w/ GDP west? ⊖ label corr. ⊕ label corr. ⊖ label corr. ⊕ label corr. w/ GDPbg Germanic lng. -.66 Post-comm. .61 .73 Post-comm. -.61 Germanic lng. .67 .73cs Post-comm. -.61 Germanic lng. .65 .72 Post-comm. -.61 Germanic lng. .65 .71de Germanic lng. -.59 Slavic lng. .63 .68 Germanic lng. -.60 Post-comm. .69 .73el Post-comm. -.64 Germanic lng. .62 .71 Post-comm. -.63 Germanic lng. .62 .70en Germanic lng. -.62 Slavic lng. .63 .71 Germanic lng. -.63 Slavic lng. .62 .72es Germanic lng. -.65 Slavic lng. .63 .74 Germanic lng. -.66 Post-comm. .63 .76fi West -.61 Balkan .64 .72 Balkan -.61 West .56 .70fr Germanic lng. -.57 Post-comm. .63 .67 EU-15 -.58 Post-comm. .64 .68hu Post-comm. -.63 Germanic lng. .61 .66 Post-comm. -.64 Germanic lng. .63 .68it Germanic lng. -.65 Slavic lng. .62 .74 Post-comm. -.61 Germanic lng. .63 .72pt Balkan -.58 Germanic lng. .69 .74 Post-comm. -.58 Germanic lng. .69 .73ro Germanic lng. -.62 Post-comm. .64 .70 Germanic lng. -.64 Post-comm. .64 .72ru Post-comm. -.63 Germanic lng. .61 .65 Post-comm. -.60 Germanic lng. .58 .60-.62 .63 1 .70 -.61 .63 1 .71", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Detailed per-language results for LaBSE and XLM-R finetuned on NLI.", "figure_data": "", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" } ]
Jindřich Libovický
[ { "authors": "Jaimeen Ahn; Hwaran Lee; Jinhwa Kim; Alice Oh", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Why knowledge distillation amplifies gender bias and how to mitigate from the perspective of Dis-tilBERT", "year": "2022" }, { "authors": "Arnav Arora; Lucie-Aimée Kaffee; Isabelle Augenstein", "journal": "", "ref_id": "b1", "title": "Probing Pre-trained Language Models for Cross-cultural Differences in Values", "year": "2022" }, { "authors": "Gábor Bella; Khuyagbaatar Batsuren; Fausto Giunchiglia", "journal": "Springer", "ref_id": "b2", "title": "A Database and Visualization of the Similarity of Contemporary Lexicons", "year": "2021-09-06" }, { "authors": "Lin Su; Solon Blodgett; Hal Barocas; Iii Daumé; Hanna Wallach", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Language (technology) is power: A critical survey of \"bias\" in NLP", "year": "2020" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Pieter Delobelle; Ewoenam Tokpo; Toon Calders; Bettina Berendt", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Measuring fairness with biased rulers: A comparative study on bias metrics for pre-trained language models", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Fangxiaoyu Feng; Yinfei Yang; Daniel Cer; Naveen Arivazhagan; Wei Wang", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Language-agnostic BERT sentence embedding", "year": "2022" }, { "authors": "Katharina Hämmerl; Björn Deiseroth; Patrick Schramowski; Jindrich Libovický; Constantin A Rothkopf; Alexander Fraser; Kristian Kersting", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Speaking Multiple Languages Affects the Moral Bias of Language Models", "year": "2023-07-09" }, { "authors": "Kevin Heffernan; Onur Çelebi; Holger Schwenk", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Bitext mining using distilled sentence representations for low-resource languages", "year": "2022" }, { "authors": "Moin Nadeem; Anna Bethke; Siva Reddy", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "StereoSet: Measuring stereotypical bias in pretrained language models", "year": "2021" }, { "authors": "Nikita Nangia; Clara Vania; Rasika Bhalerao; Samuel R Bowman", "journal": "", "ref_id": "b11", "title": "CrowS-pairs: A challenge dataset for measuring social biases in masked language models", "year": "2020" }, { "authors": "Isabel Papadimitriou; Kezia Lopez; Dan Jurafsky", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Multilingual BERT has an accent: Evaluating English influences on fluency in multilingual models", "year": "2023" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks", 
"year": "2019" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Making monolingual sentence embeddings multilingual using knowledge distillation", "year": "2020" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b15", "title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "Patrick Schramowski; Cigdem Turan; Nico Andersen; Constantin A Rothkopf; Kristian Kersting", "journal": "Nat. Mach. Intell", "ref_id": "b16", "title": "Large pre-trained language models contain humanlike biases of what is right and wrong to do", "year": "2022" }, { "authors": "Tom W Smith; Jaesok Son", "journal": "", "ref_id": "b17", "title": "Measuring Occupational Prestige on the 2012 General Social Survey", "year": "2014" }, { "authors": "Kaitao Song; Xu Tan; Tao Qin; Jianfeng Lu; Tie-Yan Liu", "journal": "", "ref_id": "b18", "title": "MPNet: Masked and Permuted Pretraining for Language Understanding", "year": "2020-12-06" }, { "authors": "Jannis Vamvas; Rico Sennrich", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Contrastive conditioning for assessing disambiguation in MT: A case study of distilled bias", "year": "2021" }, { "authors": "Pranav Narayanan Venkit; Sanjana Gautam; Ruchi Panchanadikar; K Ting-Hao; Shomir Huang; Wilson", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Nationality Bias in Text Generation", "year": "2023-05-02" }, { "authors": "Yinfei Yang; Daniel Cer; Amin Ahmad; Mandy Guo; Jax Law; Noah Constant; Gustavo Hernandez Abrego; Steve Yuan; Chris Tar; Yun-Hsuan Sung; Brian Strope; Ray Kurzweil", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Multilingual universal sentence encoder for semantic retrieval", "year": "2020" }, { "authors": "Hritik Da Yin; Masoud Bansal; Liunian Monajatipoor; Kai-Wei Harold Li; Chang", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "GeoM-LAMA: Geo-diverse commonsense probing on multilingual pre-trained language models", "year": "2022" } ]
[]
2023-11-15
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b15", "b31", "b10", "b33", "b4", "b9", "b13", "b23", "b29", "b5", "b34", "b1", "b24", "b23" ], "table_ref": [], "text": "Entity coreference resolution aims to find all spans within an input text that refer to the same entity. As an important information extraction sub-task, coreference resolution has received considerable attention from the NLP community over the years, with recent progress driven mostly by neural coreference models (Lee et al., 2017;Wu et al., 2020;Joshi et al., 2020). There has also been an increasing interest in the generalization of coreference systems to domains and languages beyond the popular CoNLL-2012 benchmark (Xia and Van Durme, 2021;Bohnet et al., 2022). Most work on extending coreference resolution to new domains and languages relies on target language annotated data in the targeted domain, however the amount of labeled data needed to cover every possible domain in all languages is prohibitively expensive. Meanwhile, unsupervised (Haghighi and Klein, 2010) and fewshot (Le et al., 2022) coreference resolution has received less attention, despite the fact that learning with less labels is desirable when adapting to new languages or domains.\nConcurrently, there has been a great deal of progress on zero-and few-shot learning by prompting pre-trained language models (LMs) (Ouyang et al., 2022;Touvron et al., 2023). There have also been attempts at evaluating pre-trained LMs coreference abilities under zero-and few-shot settings: Brown et al. (2020) demonstrated that prompting GPT-3 can resolve coreference on the Winograd Schema Challenges (WSC), Yang et al. (2022) showed that coreference resolution was a challenging task for GPT-2 when prompted with multiplechoice templates, and Agrawal et al. (2022) successfully reframed clinical pronoun resolution as span generation. While these studies reveal some evidence of the coreference abilities in large LMs, they either evaluate on sentence-level, artificial datasets that are designed more as an AI challenge task, use prompting methods that fail to beat reasonable baselines, or use non-standard datasets to evaluate language models' performance at coreference resolution. In contrast, the traditional dataset for coreference resolution, CoNLL-2012/OntoNotes, contains real-world document-level examples with complex linguistic annotations (Pradhan et al., 2012). Evaluating LMs using more realistic inputs in this setting is arguably more suitable for the evaluation of models' coreference capabilities.\nIn this paper, we aim to bridge the gap between coreference and language modeling literature by investigating to what extent instructiontuned language models (e.g., InstructGPT) can perform coreference resolution via prompting. We show that prompting LMs is a feasible strategy Figure 1: An example of coreference resolution with LMs prompting. Here we show two prompt templates experimented in this work: Question Answering and Document templates. In the QA template, the language model generates the answer when given a passage and an open-ended wh-question (Ouyang et al., 2022). In contrast, the document template marks the candidate mentions and asks the LM to annotate the cluster IDs for each mention directly within the text (represented by different colors). Both templates require a mention detector to generate candidate mentions.\nfor coreference resolution, outperforming previous unsupervised systems. 
Nonetheless, it still trails behind state-of-the-art supervised models and relies heavily on a robust mention detector. Finally, we explore the generalization ability of this approach by extending our analysis to a diverse range of domains, languages, and time periods. Our results indicate that continued learning should still be the preferred option if a large out-of-domain corpus and a few annotated in-domain documents are available. However, large instruction-tuned LMs can generalize surprisingly well across domains and languages, making them a robust option if no target language or in-domain data is available for fine-tuning." }, { "figure_ref": [], "heading": "Prompt-based Coreference Resolution", "publication_ref": [ "b23", "b1", "b23", "b1", "b2", "b14", "b1", "b14" ], "table_ref": [], "text": "Previous work in zero-and few-shot coreference resolution assumes access to candidate mentions to resolve, usually pronouns in the passage (Ouyang et al., 2022;Agrawal et al., 2022). We adopt this formulation: given a document, we assume the existence of a set of candidate mentions (gold or predicted), then prompt an autoregressive language model with handcrafted prompts, and extract the predicted coreference links (Figure 1).\nPrior work applying language models to resolve co-referring entity mentions has mainly experimented with question answering prompts for pro-noun resolution (Ouyang et al., 2022;Agrawal et al., 2022) and demonstrated its effectiveness when comparing with other templates such as multiple-choice (Arora et al., 2022). However, in a preliminary study ( §A.1), we found that prompting with a QA template struggled to compete with Stanford's deterministic coreference systems (Lee et al., 2013), even when providing gold mentions and few-shot guidance (Agrawal et al., 2022), or when scaling to larger LMs (achieving 61 F 1 when comparing to 72 F 1 from Lee et al. (2013)). We also experimented with an alternative documentlevel template that is able to elicit more coreference links than the usual QA template, achieving an 81 F 1 . In this template, the mentions of the input text are first marked with special tokens indicating a span to annotate (e.g., Mr. Clinton → [Mr. Clinton](#)). The LM is then given instructions to annotate this marked span with the cluster ID, (e.g., [Mr. Clinton](#) → [Mr. Clinton](#cluster_1)). Given strong results over the QA template, we used this document template for all subsequent experiments." }, { "figure_ref": [], "heading": "CoNLL-2012 Experiments", "publication_ref": [ "b24" ], "table_ref": [], "text": "We investigate the coreference abilities of large LMs on the CoNLL-2012 benchmark (Pradhan et al., 2012). We found that GPT models (InstructGPT, ChatGPT, and GPT-4) (OpenAI, 2023) yield competitive results with previous unsu-pervised and rule-based models, while significantly outperforming them when gold mentions are provided." }, { "figure_ref": [], "heading": "Experimental Details", "publication_ref": [ "b30", "b24" ], "table_ref": [], "text": "Dataset and Evaluation Metrics We evaluate our approach on the traditionally benchmarked English OntoNotes 5.0 dataset (Weischedel et al., 2011;Pradhan et al., 2012), which spans seven distinct genres such as news, telephone conversations, and religious text. 
We follow the standard train-devtest splits from previous work and report CoNLL F 1 , which averages over three coreference-based metrics MUC, B 3 , and CEAF ϕ 4 .\nSettings We report results under two settings: predicted mentions, where only raw text is provided as input, and gold mentions, where the gold mention boundaries are provided as input. To obtain predicted mentions, we use the mentions output by dcoref as input into language model prompts." }, { "figure_ref": [], "heading": "Models", "publication_ref": [ "b29", "b23", "b14", "b4", "b10", "b27" ], "table_ref": [], "text": "We report performance on seven instruction-tuned language models from the LLaMa-2 (Touvron et al., 2023) and OpenAI GPT (Ouyang et al., 2022) model families. We compare these models with various competitive supervised and unsupervised baselines from coreference literature.\nBaselines We mainly consider Stanford's deterministic resolver, which we refer to as dcoref (Lee et al., 2013). This coreference resolver consists of multiple sieves, where each sieve is a set of handcrafted rules that filter out mentions. The sieves are ordered from highest to lowest precision to minimize cascading errors from previous sieves. We use the open-sourced implementation of dcoref to obtain the results in this study. 2 For supervised systems, we compare to coref-mt5, a text-to-text approach based on mT5 from Bohnet et al. (2022), which is the state-of-the-art for supervised coreference, and SpanBERT+e2e, a spanbased neural coreference system (Joshi et al., 2020). For unsupervised baselines, we include results from weak-SpanBERT (Stolfo et al., 2022), a system that trained a SpanBERT-based coarse-to-fine architecture on dcoref coreference predictions." }, { "figure_ref": [], "heading": "Llama 2 Models", "publication_ref": [ "b29" ], "table_ref": [], "text": "We use models from the Llama 2 model family (Touvron et al., 2023) as the primary open-sourced language models. In ière et al., 2023). To avoid hallucinations, we constrain the generation outputs as follows: for each given mention, we ask the model to generate the cluster ID. We then update the input sequence by appending the generated ID with the text segment between the current mention and the next mention. The process is repeated until all the mentions in the document are annotated, as in Figure 1." }, { "figure_ref": [], "heading": "GPT Models", "publication_ref": [ "b23", "b1" ], "table_ref": [], "text": "We also investigate the instructiontuned 175B GPT-3 model (text-davinci-003) from the InstructGPT series, which we refer to as InstructGPT (Ouyang et al., 2022). Previous work has reported evidence of InstructGPT coreference abilities via few-shot prompting (Agrawal et al., 2022). In addition, we report performance on the most recent OpenAI language models, ChatGPT (gpt-35-turbo) as well as GPT-4 (OpenAI, 2023). Due to the cost of running these models, we generate outputs using greedy decoding with a single generation per input document rather than iterative decoding as described above for the open-sourced models. " }, { "figure_ref": [], "heading": "Gold Mentions", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Results", "publication_ref": [ "b19", "b1" ], "table_ref": [], "text": "Table 1 shows the results between different coreference systems. We note that prompting InstructGPT and GPT-4 outperforms weak-SpanBERT and dcoref for predicted mentions, with the performance gaps increase for gold mentions. 
This demonstrates the feasibility of prompting large LMs for coreference resolution, particularly in the setting where the mentions are known. However, this approach still considerably underperforms fully supervised systems. While all Llama-2 model variants underperform dcoref baseline, we note that CodeLlama significantly outperforms Llama-2-Chat. CodeLlama-7B even matches the performance of Llama-2-Chat-70B.\nTo further understand the strengths and weaknesses of instruction-tuned LMs for coreference, we break down the results according to different resolution classes, following Lu and Ng (2020). Specifically, for each coarse-grained mention class (named entity, pronoun, nominal), we compute the resolution accuracy, which is the percentage of anaphors correctly linked to an antecedent (Figure 2). We observe that InstructGPT does particularly well in pronoun resolution, corroborating previous work (Agrawal et al., 2022). It struggles more for named entities and the particularly difficult nominal resolution. However, InstructGPT still remains competitive with dcoref for these classes, with the gaps increase when gold mentions are provided. In particular, InstructGPT (and even CodeLlama in gold mention setting) outperforms dcoref on the challenging nominal phrases case (Figure 2)." }, { "figure_ref": [ "fig_3" ], "heading": "The Importance of Mention Detection", "publication_ref": [ "b19", "b12", "b32", "b8" ], "table_ref": [ "tab_2", "tab_2", "tab_4" ], "text": "While prompting of LMs can be competitive with previous coreference systems, the quality of candidate mentions has a considerable effect on the final performance. We quantify the importance of high-quality Mention Detection (MD) by measuring the models' performance when inputting candidate mention sets generated by different mention detectors (Figure 3). Furthermore, we analyze the performance of InstructGPT when prompting for mentions with a simple template that asks it to output a list of named entities, pronouns, and nominal phrases in the input text (Table 2).\nInstructGPT consistently outperforms dcoref as MD performance increases. In general, coreference performances of all models improve as mention detection score increases. This is not surpris- ing, as it has been similarly reported in previous work studying mention detection of neural coreference resolution systems (Lu and Ng, 2020). We further observe that CodeLlama underperforms while ChatGPT performs comparable to dcoref baseline. Nonetheless, InstructGPT again consistently outperforms dcoref, regardless of MD performance.\nInstructGPT struggles with generating candidate mentions. Table 2 shows that InstructGPT performs much worse than dcoref. Further analysis by mention types shows it particularly struggles to recall nominal mentions. A qualitative example in Table 3 demonstrates that while InstructGPT was able to recover a considerable portion of named entities and pronouns, it also made numerous errors, including span errors, extra entities, and missing mentions (Kummerfeld and Klein, 2013).\nGiven that what constitutes a mention can depend heavily on the annotation guidelines of specific datasets and domains, it may be challenging to ask a MD system to predict mentions without any labeled examples. Since Mention Detection plays a crucial role in coreference resolution (Wu and Gardner, 2021) as well as its generalizability to different domains, a high-quality mention detection appears to be a pre-requisite for prompt-based coreference resolution. 
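The per-class resolution accuracy used in the breakdown above can be computed roughly as in the sketch below: among the anaphors the system recalled, the fraction whose predicted antecedents overlap with at least one gold-coreferent mention, grouped by coarse mention class. This reflects our reading of the metric, and the helper names are illustrative.

```python
# Sketch of the resolution-accuracy breakdown: among recalled anaphors, the fraction
# whose predicted antecedents include at least one gold-coreferent mention, aggregated
# by coarse mention class (named entity / pronoun / nominal). Names are ours.
from collections import defaultdict
from typing import Dict, List, Set, Tuple

Span = Tuple[int, int]

def antecedent_sets(clusters: List[List[Span]]) -> Dict[Span, Set[Span]]:
    """Map every mention to the set of other mentions in its cluster."""
    links: Dict[Span, Set[Span]] = {}
    for cluster in clusters:
        for m in cluster:
            links[m] = {o for o in cluster if o != m}
    return links

def resolution_accuracy_by_class(
    gold_clusters: List[List[Span]],
    pred_clusters: List[List[Span]],
    mention_class: Dict[Span, str],
) -> Dict[str, float]:
    gold_links = antecedent_sets(gold_clusters)
    pred_links = antecedent_sets(pred_clusters)
    correct: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for mention, gold_ants in gold_links.items():
        if not gold_ants or mention not in pred_links:
            continue  # skip singletons and mentions the system never recalled
        cls = mention_class.get(mention, "other")
        total[cls] += 1
        if pred_links[mention] & gold_ants:
            correct[cls] += 1
    return {cls: correct[cls] / total[cls] for cls in total}

if __name__ == "__main__":
    gold = [[(0, 11), (17, 19)]]          # {Mr. Clinton, he}
    pred = [[(0, 11), (17, 19)]]
    classes = {(0, 11): "named entity", (17, 19): "pronoun"}
    print(resolution_accuracy_by_class(gold, pred, classes))
```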
Fortunately, however, mention annota-tion has been shown to be much less costly than annotating full coreference chains (Gandhi et al., 2022)." }, { "figure_ref": [], "heading": "Generalization Beyond OntoNotes", "publication_ref": [ "b21", "b33", "b8", "b4" ], "table_ref": [ "tab_5" ], "text": "Although supervised neural models achieve superior results for coreference, they are also known to struggle when generalizing across domains, sometimes even underperforming rule-based systems (Moosavi and Strube, 2017). As such, recent research in coreference largely focus on the generalization ability of neural models beyond the OntoNotes dataset (Xia and Van Durme, 2021;Gandhi et al., 2022;Bohnet et al., 2022). Given that large LMs are pre-trained on lots of generalpurpose corpus and not optimized for a single coreference dataset, we posit that these instruction-tuned language models can also be effective at coreference domain adaptation. Therefore, we study how well instruction-tuned LMs generalize to different domains ( §4.1), languages ( §4.2), and time periods ( §4.3). We mainly report results for InstructGPT, given its competitive performance on OntoNotes ( §3) while being less expensive than GPT-4. The diverse coreference datasets considered in this analysis are given in Table 4. Since mention detection has been shown to be fairly challenging ( §3.4), we evaluate the experiments in this section using gold mentions." }, { "figure_ref": [], "heading": "Can prompting LMs generalize coreference across domains?", "publication_ref": [ "b28", "b33", "b28", "b33", "b28" ], "table_ref": [ "tab_6" ], "text": "To study the robustness of our approach across domains, we use the datasets benchmarked in Toshniwal et al. (2021) due to the diversity in genres (news, Wikipedia, conversations), document lengths (long vs. short), and annotation guidelines (singletons vs. non-singletons). For evaluation, we follow the annotation schema of the corresponding dataset (i.e., if the dataset contains singletons, then we also output singletons). Similar to previous work in coreference domain adaptation (Xia and Van Durme, 2021;Toshniwal et al., 2021), we explore different systems where different types of source and target training data are available. Specifically, in addition to dcoref as in §3, we include the trained models TRANSFER-ON (Xia and Van Durme, 2021) and longdoc-PC (Toshniwal et al., 2021), which were respectively trained on the train set of OntoNotes en (2,802 annotated documents of newswire and religious texts) and PreCo (36,120 documents of reading comprehension examinations, collected in Chen et al. ( 2018)). TRANSFER-ON was then further finetuned on 10 labeled documents from the target domains. Additionally, we include the pretrained encoder SpanBERT (Xia and Van Durme, 2021) as a fine-tuning baseline (on a small amount of annotated data), where a pretrained SpanBERT encoder was not trained on a large source corpus and instead directly finetuned on 10 target documents.3 \nInstructGPT appears to be robust for coreference domain adapation. Table 5 shows the coreference domain generalization for various systems. While InstructGPT is competitive with longdoc-PC, it still trails behind TRANSFER-ON considerably. This indicates that transfer learning is still a preferred method for coreference domain adaptation, particularly when a large corpus of training data and a few annotated documents in the target domain are available. 
On the other hand, when compared to models that were not trained on source coreference datasets such as dcoref and SpanBERT, InstructGPT outperforms them by a significant margin. This demonstrates the robustness of InstructGPT for coreference domain adaptation when using as a black-box model." }, { "figure_ref": [], "heading": "Can LMs also generalize coreference across languages?", "publication_ref": [ "b25", "b4" ], "table_ref": [ "tab_7" ], "text": "To test the generalization of InstructGPT on resolving coreference across multiple languages, we experimented with Chinese and Arabic portions of OntoNotes and the multilingual coreference SemEval-2010 dataset (Recasens et al., 2010). A notable difference between OntoNotes and SemEval-2010 is the annotations of singletons, which has led to different evaluation methods for SemEval-2010. We follow the evaluation setting of previous work for each of the evaluated languages: excluding singletons from both predicted and evaluation clusters for Chinese and Arabic, while excluding singletons from predicted set but keeping them in evaluation sets for other languages. We refer to Section 5 of Bohnet et al. (2022) for more discussion on this. Similar to §4.1, we compare InstructGPT with neural transfer-learning models from Xia and Van Durme (2021), TRANSFER-EN and XLM-R. Both use a pretrained XLM-RoBERTa-large encoder finetuned with 10 documents from the target language. We note that TRANSFER-EN was previously trained on English OntoNotes before continuing training on target language, which makes it a stronger model than XLM-R.4 TRANSFER-EN and XLM-R cor- respond to TRANSFER-ON and SpanBERT from §4.1, respectively, with the only difference being the pretrained encoder (XLM-R vs. SpanBERT).\nInstructGPT can also effectively resolve coreference across language. From Table 6, we observe similar conclusions to §4.1: continued learning using a large source corpus with a handful of annotated examples from target languages still performs the best. Nonetheless, InstructGPT was able to outperform XLM-R across all languages, and is even on par with TRANSFER-EN for Chinese and Dutch. This result indicates the importance of a source English coreference corpus for continued learning." }, { "figure_ref": [], "heading": "What about different time periods?", "publication_ref": [ "b0", "b17", "b10", "b19", "b32" ], "table_ref": [], "text": "An interesting dimension to analyze the robustness of coreference generalization is temporal changes (Agarwal and Nenkova, 2022;Liu and Ritter, 2023), since having coreference systems that can generalize beyond datasets that were created over a decade ago (e.g., OntoNotes) can be beneficial. To that end, we compare dcoref (Joshi et al., 2020), which was fine-tuned on the in-domain OntoNotes train set, to obtain silver annotations for all three datasets.\nWe then evaluate the models on these silver annotations, with mentions given as before. Further details on how we sampled and annotated these datasets are presented in §A.3.\nPrompting instruction-tuned LMs is robust to temporal changes. 2022) demonstrated that adapting mention annotations to new domains instead of the entire coreference chains is more costefficient while also improves domain adaptation performance. Their findings are in line with insight from analyzing different components of neural coreference systems: improving mention detection provides the largest improvement to coreference performance (Lu and Ng, 2020;Wu and Gardner, 2021). 
We observe a similar trend with prompting InstructGPT." }, { "figure_ref": [], "heading": "Conditional Text Generation for Coreference", "publication_ref": [ "b15", "b10", "b18", "b4", "b18", "b4", "b20", "b3", "b16", "b1" ], "table_ref": [], "text": "Research in coreference resolution has been dominated by neural span-based models that score coreference links between spans (Lee et al., 2017;Joshi et al., 2020). Recently, a new paradigm for coreference starts to emerge: formulating coreference resolution as conditional text generation (Liu et al., 2022;Bohnet et al., 2022). Both Liu et al. (2022) and Bohnet et al. (2022) fine-tuned T5based models on sequences of structured-building actions, with the former achieving competitive results for structured prediction tasks and the latter achieving SOTA results for coreference resolution. While our work fall into this category, we are interested the intrinsic ability of the language model to resolve coreference, using an autoregressive language model on an instruction-based prompt format (as opposed to a more complex state-based format).\nPrompting LMs for Coreference With the success of zero-shot and few-shot prompting of large language models on various NLP benchmarks, we ask to what extent this success translates to more traditional NLP tasks like coreference resolution. Manning et al. (2020) shows evidence of linguistic abilities in masked LMs and Blevins et al. (2022) presents a structured prompting approach that achieves strong few-shot results for sequence tagging tasks. For coreference resolution, prior work has mostly focused on few-shot learning for sentence-level, syntactically simple coreference datasets such as Winograd Schema Challenge (Levesque et al., 2012) and for pronoun resolution on clinical data (Agrawal et al., 2022)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we study how well instruction-tuned language models resolve coreference via prompting. We demonstrate the feasibility of this approach on the CoNLL-2012 benchmark, surpassing previous unsupervised systems but still underperforming state-of-the-art supervised models. Interestingly, prompting instruction-tuned LMs appears to generalize well across a wide range of domains, languages, and time periods, particularly if no training examples are given. Nonetheless, it still trails behind continued learning with a large training corpus in the source domain and a handful of annotated examples in the target domain." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Because OpenAI GPT models are proprietary models, we do not know whether or not OntoNotes was included in its training data. However, at the time of writing, there is some evidence against OntoNotes data contamination. First, a previous probe that aimes to measure data contamination and memorization of OntoNotes on ChatGPT showed negative results. 5 Second, our experiment in §4.3 includes data sampled after the models' training cutoff date (September 2021), yet still shows a robust F 1 . Finally, the conclusions in this paper still stand regardless of whether or not these models trained on OntoNotes: (1) prompting instructiontuned LMs is a feasible strategy for coreference resolution, and (2) although this approach has unique strengths and weaknesses, it is robust across many domains, languages, and time periods. 
Figure 4: Results of different prompt templates for coreference on a subset of OntoNotes dev set, using gold mentions. Note that dcoref achieves 71.9 F 1 on the same dataset." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [ "b1", "b23", "b1", "b14", "b11" ], "table_ref": [ "tab_4" ], "text": "QA Prompting for Coreference During our preliminary studies, we experimented with different prompting approaches for coreference using QA template from previous work (Agrawal et al., 2022;Ouyang et al., 2022). However, we found that prompting InstructGPT this way, despite adding in-context examples to provide formatting guidance (Agrawal et al., 2022), performed consistently worse than the deterministic coreference system dcoref (Lee et al., 2013). Qualitative, while this format seems effective at resolving pronouns, it would struggle with more ambiguous nominal noun phrases. For example, asking it to resolve an affair with her in Table 3 using QA template would yield an incorrect answer Gennifer Flowers.\nQA vs. Doc Template We then experimented with the Document template (Table 10) and found that it was more effective than the QA template at resolving coreference (Figure 4). Interestingly, adding in-context examples for this template would cause InstructGPT to perform worse that withouto in-context examples. We further note that this Document template is loosely similar to the entity-based approach to coreference, where the model links a mention with previous clusters, as opposed to the mention-paired approach exemplified by the QA template (Jurafsky and Martin, 2000).\nIn addition, extracting the predicted clusters from the generated text is easier than other formats, as InstructGPT would directly annotate the text with the cluster information (We extract cluster information using a simple fuzzy string matching algorithm by comparing the output text to input text, sentenceby-sentence)." }, { "figure_ref": [], "heading": "A.2 Mention Detection Experiments", "publication_ref": [ "b10" ], "table_ref": [], "text": "To experiment with different qualities of candidate mention sets, we adapting different existing methods for the task of Mention Detection: given an input document, extract all the candidate mentions from the text. For mention detection, we mainly consider the mention detector from dcoref as well as the prompting of InstructGPT for MD using template in Table 10. In addition, to see the effects of having high-quality mentions on dcoref and InstructGPT, we also consider outputs from SpanBERT-large trained on OntoNotes train set (Joshi et al., 2020) " }, { "figure_ref": [ "fig_5" ], "heading": "A.3 Temporal Generalization for Coreference", "publication_ref": [ "b35", "b10", "b10" ], "table_ref": [ "tab_11" ], "text": "Data Sampling To sample the appropriate data for this experiment, we start with the Wall Street Journal sections of the RealNews (Zellers et al., 2019) and OntoNotes dev set. We used SpanBERT (Joshi et al., 2020) to label all 56 WSJ articles from OntoNotes to obtain WSJ-1989 (CoNLL F 1 using SpanBERT on WSJ-1989 is shown on Table 9). To create WSJ-2019, we first labeled all 191 WSJ articles from RealNews using SpanBERT as above.\nWe then sampled 56 articles using stratified sampling based on two features: document length and The number of mentions per document is measured using the silver annotations from SpanBERT (Joshi et al., 2020). number of mentions per document. 
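Returning to the cluster extraction step mentioned in §A.1, the sketch below parses the [span](#cluster_k) annotations out of a generated document with a regular expression and groups mentions by cluster ID; it simplifies away the sentence-by-sentence fuzzy matching that handles small divergences between the generated text and the input.

```python
# Sketch of recovering predicted clusters from the annotated generation described in
# A.1. The model output is parsed with a regex for [span](#cluster_k) annotations and
# grouped by cluster ID; fuzzy sentence-by-sentence matching is omitted in this sketch.
import re
from collections import defaultdict
from typing import Dict, List

ANNOTATION = re.compile(r"\[([^\]]+)\]\(#(cluster_\d+)\)")

def extract_clusters(generated_text: str) -> Dict[str, List[str]]:
    clusters: Dict[str, List[str]] = defaultdict(list)
    for surface, cluster_id in ANNOTATION.findall(generated_text):
        clusters[cluster_id].append(surface)
    return dict(clusters)

if __name__ == "__main__":
    output = ("[Mr. Clinton](#cluster_1) said [he](#cluster_1) would meet "
              "[Gennifer Flowers](#cluster_2).")
    print(extract_clusters(output))
    # {'cluster_1': ['Mr. Clinton', 'he'], 'cluster_2': ['Gennifer Flowers']}
```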
Specifically, we partitioned the WSJ RealNews articles into bins based on document lengths (bin size = 500 tokens), and for each document-length bin we further partitioned based on the number of mentions (mention size = 50). We then sampled the appropriate number of documents (i.e., the number of WSJ-1989 documents in each partition) for each bin to obtain WSJ-2019. For WSJ-2023, we randomly collected 56 articles from the WSJ website dated between May and June 2023 based on document lengths and topics. The distributions of three datasets are shown in Figure 5." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "CoNLL F 1 OntoNotes 79.2 WSJ-1989 74.5 " }, { "figure_ref": [], "heading": "Question Answering Template", "publication_ref": [], "table_ref": [], "text": "Instructions: Please carefully read the following passages. For each passage, you must identify which noun the mention marked in *bold* refers to. Context: In the summer of 2005, a picture that people have long been looking forward to started emerging with frequency in various major Hong Kong media. With their unique charm, these well-known cartoon images once again caused Hong Kong to be a focus of worldwide attention. The world's fifth Disney park will soon open to the public here. The most important thing about Disney is that *it* is a global brand. Question: What does *it* refer to? Answer: *it* refers to Disney. " }, { "figure_ref": [], "heading": "Document Template", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Annotate", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Mention Detection Template", "publication_ref": [], "table_ref": [], "text": "In the following text, list all named entities, pronouns, and nominal noun phrases according to the OntoNotes conventions. Input: In the summer of 2005, a picture that people have long been looking forward to started emerging with frequency in various major Hong Kong media. With their unique charm, these well-known cartoon images once again caused Hong Kong to be a focus of worldwide attention. The world's fifth Disney park will soon open to the public here. The most important thing about Disney is that it is a global brand. Output: Named Entities: Hong Kong Pronouns: their, it, many, its, that, its, this Nominal Noun Phrases: the summer of 2005, various major Hong Kong media, their unique charm, the world's fifth Disney park " } ]
Recent work on extending coreference resolution across domains and languages relies on annotated data in both the target domain and language (Xia and Van Durme, 2021). At the same time, pre-trained large language models (LMs) have been reported to exhibit strong zero-and few-shot learning abilities across a wide range of NLP tasks. However, prior work mostly studied this ability using artificial sentence-level datasets such as the Winograd Schema Challenge. In this paper, we assess the feasibility of prompt-based coreference resolution by evaluating instruction-tuned language models on difficult, linguistically-complex coreference benchmarks (e.g., CoNLL-2012). We show that prompting for coreference can outperform current unsupervised coreference systems, although this approach appears to be reliant on high-quality mention detectors. Further investigations reveal that instruction-tuned LMs generalize surprisingly well across domains, languages, and time periods; yet continued finetuning of neural models should still be preferred if small amounts of annotated examples are available.
Are Large Language Models Robust Coreference Resolvers?
[ { "figure_caption": "Figure 2 :2Figure 2: Resolution accuracy by mention types (amongst the recalled mentions) on OntoNotes dev set.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: CoNLL F 1 as a function of MD F 1 , on OntoNotes dev set. All models were fed the same outputs from mention detection systems detailed in §A.2.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 55Figure5: ofWSJ-1989 (blue), WSJ-2019 (orange), and WSJ-2023 (green) based on document length (left) and number of mentions per document (right). The number of mentions per document is measured using the silver annotations from SpanBERT(Joshi et al., 2020).", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Nine years] ago today, allegations of infidelity almost derailed [Bill Clinton]'s journey from hope to the White House. On [January 1992], [Gennifer Flowers] claims [she] had a 12 -year affair with [Bill Clinton]. Flowers went on \"[Larry King] Live\" in 1998 at the height of the [impeachment proceedings] against Mr. Clinton. [She] said [she] felt vindicated when [he] admitted under oath that [he]'d had an affair with [her] after denying [it] for years.", "figure_data": "Mention Detection: [Antecedent Linking: Nine years ago today, [allegations of infidelity] 1 almost derailed [Bill Clinton's] 2(Gold Mentions)journey from hope to the White House. On January 1992, [Gennifer Flowers] 3[claims] 1 [she] 3 had a 12 -year affair with [Bill Clinton] 2 . [Flowers] 4 went on\"Larry King Live\" in 1998 at the height of the impeachment proceedings against[Mr. Clinton] 2 . [She] 3 said [she] 3 felt vindicated when [he] 2 admitted under oaththat [he] 2 'd had [an affair with [her] 3 ] 1 after denying [it] 1 for years.", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Qualitative examples of InstructGPT mention detection (top) and coreference resolution when gold mentions are given (bottom). Spans predicted by the model are wrapped around square brackets, e.g., [Nine years] and [Bill Clinton]. Blue and red denote incorrect and correct predictions, respectively. Mention Detection: InstructGPT can predict most of the named entities and pronouns, but it still made numerous errors including extra", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Dataset statistics. The first five datasets are used as benchmarks inToshniwal et al. (2021).We only include the number of test documents (first col.) since we evaluate the models on these datasets and did not explicitly use any train/dev data. A detailed version is shown in Table12.", "figure_data": "DatasetTest Toks/Doc % Sing.OntoNotes en3484890.0LitBank10210519.8Character Iden. 
1922626.4WikiCoref3019960.0QuizBowlCoref 40012626.0OntoNotes zh2184120.0OntoNotes ar446810.0SemEval ca16729345.9SemEval nl7266613.0SemEval it4689161.9SemEval es16830347.7WSJ-1989566320.0WSJ-2019568580.0WSJ-2023566880.0", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "CoNLL F 1 on different English coreference datasets, with the macro average shown in the last column. Best result is in bold while the second best is underlined. # train docs column indicates the number of train documents from the source domain → number of train documents from target domains. TRANSFER-ON and longdoc-PC were trained on large corpus of source examples; TRANSFER-ON and SpanBERT were fine-tuned on limited target examples; dcoref was not trained on any corpus. Overall, InstructGPT exhibits strong generalization results when using out-of-the-box.", "figure_data": "Model# Train Docs ON en LBCI WC QBC Avg.TRANSFER-ON (Xia and Van Durme, 2021)2.8k → 10-85.0--85.0 85.0SpanBERT (Xia and Van Durme, 2021)0 → 10-69.0--65.0 67.0dcoref (Lee et al., 2013)0 → 072.9 55.4-72.4 34.8 59.0longdoc-PC (Toshniwal et al., 2021)36k → 076.8 81.1 66.5 67.0 77.3 73.7CodeLlama (34B)0 → 061.7 47.8 58.3 67.9 58.8 58.9InstructGPT-80.8 77.0 72.6 72.9 68.3 74.3ChatGPT-77.9 70.8 67.2 70.8 69.9 71.3Lang.TRANSFER-EN XLM-R InstructGPT 2.8k→ 10 0 → 10Chinese (zh)75.070.077.3Arabic (ar)80.049.065.6Catalan (ca)52.029.041.9Dutch (nl)71.042.070.8Italian (it)46.025.041.4Spanish (es)57.035.042.2", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "CoNLL F 1 on the non-English portions of OntoNotes (Chinese and Arabic) and the SemEval-2010 dataset. Best result is in bold while the second best is underlined.", "figure_data": "", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "CoNLL F 1 and variance (last column) on Wall Street Journal articles from different time periods. G and S denote Gold and Silver annotation, respectively. Prompting LMs appears more robust to temporal changes than dcoref.", "figure_data": "Dataset1989 1989 2019 2023 (G) (S) (S) (S)σ 2dcoref72.470.863.666.9 15.7CodeLlama-34B 61.957.455.755.39.1InstructGPT80.978.280.581.72.3ChatGPT76.875.376.774.32.5and several instruction-tuned LMs on three newsilver-annotated coreference datasets from differenttime periods: WSJ-1989, WSJ-2019, and WSJ-2023,each containing 56 Wall Street Journal articlesfrom 1989, 2016-2019, and 2023, respectively.WSJ-1989 is a subset of the OntoNotes dev setand thus contains gold coreference annotation.WSJ-2019 was sampled from the RealNews dataset(Zellers et al., 2019) dated from February 2015 toFebruary 2019, and WSJ-2023 from the WSJ web-site between May and June 2023. Since these twodatasets do not have coreference annotations, weused SpanBERT", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Table7shows the results. We first observe a decrease when moving from gold to silver annotations for all models. More impor-tantly, we see more degradation and variance in performance of dcoref for the different temporal datasets, whereas the variance is less pronounced for InstructGPT and ChatGPT. 
While CodeLlama-34B underperforms dcoref baseline, it also observes less variance when evaluated on different temporal datasets.", "figure_data": "5 Related WorkDomain Adaptation for Coreference Previouswork has reported that neural coreference reso-lution trained on a single dataset struggled without-of-domain generalization, with some perform-ing worse than rule-based systems (Moosavi andStrube, 2017). Several solutions to this challengehave been proposed with varying success: Xia andVan Durme (2021) showed that continued trainingcan help generalize to different domains and lan-guages with as few as 10 annotated documents, andToshniwal et al. (2021) demonstrated joint trainingon large coreference corpora with different annota-tions can help neural models adapt to new domains.Recently, Gandhi et al. (", "figure_id": "tab_9", "figure_label": "", "figure_type": "table" }, { "figure_caption": "and a NER tagger with xlm-roberta-large(Conneau et al., 2020) trained on BIO labels adapted from OntoNotes annotations. We note that these systems are not directly comparable to each other, since they were trained on different annotatations: SpanBERT-large on full coreference data and xlm-roberta-large on non-nested MD data.", "figure_data": "TrainPRF 1SpanBERT-largeCR89.1 86.6 87.8xlm-roberta-large MD 83.3 76.3 80.1dcoref∅75.8 77.4 76.6InstructGPT-42.1 51.8 46.5Table 8: MD results of different systems consid-ered in Figure 3. SpanBERT-large was trainedon full coreference (CR) data, xlm-roberta-largetrained on mention-annotated-only (MD) OntoNotestrain set, dcoref was not trained on any corpus, andInstructGPT exact training procedures are unknown.", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" }, { "figure_caption": "CoNLL F 1 when running SpanBERT(Joshi et al., 2020) on OntoNotes dev set and WSJ-1989. Nine years ago [today] 1 , allegations of infidelity almost derailed [Bill Clinton's] 3 (dcoref) journey from hope to the White House. Bob Glascoff tracks the life of the \"other woman\" in [today's] 1 edition of \"[Headliners] 5 .\" On January 1992, [Gennifer Flowers] 6 claims [she] 6 had a 12 -year affair with [Bill Clinton] 3 . Although [Mr. Clinton] 3 denied having a relationship with [Flowers] 6 , [he] 3 did speak of bringing \"pain\" to [his] 3 marriage during a joint television interview with [his] 3 wife, Hillary. [Flowers] 6 went on \"Larry King Live\" in 1998 at the height of the impeachment proceedings against [Mr. Clinton] 3 . [She] 6 said [she] 6 felt vindicated when [he] 3 admitted under oath that [he] 3 'd had [an affair with [her] 6 ] 8 after denying [it] 8 for years. A federal judge recently dismissed a defamation lawsuit [she] 6 brought against Hillary Rodham Clinton and two former presidential aides. With \"[Headliners] 5 ,\" [I] 5 'm Bob Glascoff. Gold Mentions: Nine years ago [today] 1 , [allegations of infidelity] 2 almost derailed [Bill Clinton's] 3 (InstructGPT) journey from hope to the White House. [Bob Glascoff] 4 tracks the life of [the \"other woman\"] 6 in [today's] 1 edition of \"[Headliners] 5 .\" On January 1992, [Gennifer Flowers] 6 [claims] 2 [she] 6 had a 12 -year affair with [Bill Clinton] 3 . Although [Mr. Clinton] 3 denied having a relationship with [Flowers] 6 , [he] 3 did speak of bringing \"pain\" to [his] 3 marriage during a joint television interview with [[his] 3 wife, Hillary] 7 . [Flowers] 6 went on \"Larry King Live\" in 1998 at the height of the impeachment proceedings against [Mr. Clinton] 3 . 
[She] 6 said [she] 6 felt vindicated when [he] 3 admitted under oath that [he] 3 'd had [an affair with [her] 6 ] 2 after denying [it] 2 for years. A federal judge recently dismissed a defamation lawsuit [she] 6 brought against [Hillary Rodham Clinton] 7 and two former presidential aides. With \"[Headliners] 5 ,\" [I] 4 'm Bob Glascoff. Gold Output: Nine years ago [today] 1 , [allegations of infidelity] 2 almost derailed [Bill Clinton's] 3 journey from hope to the White House. [Bob Glascoff] 4 tracks the life of [the \"other woman\"] 6 in [today's] 1 edition of \"[Headliners] 5 .\" On January 1992, [Gennifer Flowers] 6 [claims] 2 [she] 6 had a 12 -year affair with [Bill Clinton] 3 . Although [Mr. Clinton] 3 denied having a relationship with [Flowers] 6 , [he] 3 did speak of bringing \"pain\" to [his] 3 marriage during a joint television interview with [[his] 3 wife, Hillary] 7 . [Flowers] 6 went on \"Larry King Live\" in 1998 at the height of the impeachment proceedings against [Mr. Clinton] 3 . [She] 6 said [she] 6 felt vindicated when [he] 3 admitted under oath that [he] 3 'd had [an affair with [her] 6 ] 8 after denying [it] 8 for years. A federal judge recently dismissed a defamation lawsuit [she] 6 brought against [Hillary Rodham Clinton] 7 and two former presidential aides. With \"[Headliners] 5 ,\" [I] 4 'm Bob Glascoff.", "figure_data": "A.4 OpenAI API DetailsTo maximize reproducibility, we use unconstrainedgreedy decoding with the temperature parameterset to 0 in all our GPT-related experiments. ForInstructGPT, we generated approximately 18 mil-lion tokens for all our official experiments, oran equivalent of $360. For ChatGPT and GPT-4,we generated approximately 15 million tokens($50) and 1 million tokens ($60), respectively.InstructGPT experiments were conducted beforeJune 2023, and ChatGPT/GPT-4 experiments beforeDecember 2023.", "figure_id": "tab_11", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_12", "figure_label": "14", "figure_type": "table" } ]
Nghia T Le; Alan Ritter
[ { "authors": "Oshin Agarwal; Ani Nenkova", "journal": "", "ref_id": "b0", "title": "Temporal effects on pre-trained models for language processing tasks", "year": "2022" }, { "authors": "Monica Agrawal; Stefan Hegselmann; Hunter Lang; Yoon Kim; David Sontag", "journal": "", "ref_id": "b1", "title": "Large language models are few-shot clinical information extractors", "year": "2022" }, { "authors": "Simran Arora; Avanika Narayan; Mayee F Chen; Laurel Orr; Neel Guha; Kush Bhatia; Ines Chami; Frederic Sala; Christopher Ré", "journal": "", "ref_id": "b2", "title": "Ask me anything: A simple strategy for prompting language models", "year": "2022" }, { "authors": "Terra Blevins; Hila Gonen; Luke Zettlemoyer", "journal": "", "ref_id": "b3", "title": "Prompting language models for linguistic structure", "year": "2022" }, { "authors": "Bernd Bohnet; Chris Alberti; Michael Collins", "journal": "", "ref_id": "b4", "title": "Coreference resolution through a seq2seq transitionbased system", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b5", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Hong Chen; Zhenhua Fan; Hao Lu; Alan Yuille; Shu Rong", "journal": "", "ref_id": "b6", "title": "PreCo: A large-scale dataset in preschool vocabulary for coreference resolution", "year": "2018" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b7", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Nupoor Gandhi; Anjalie Field; Emma Strubell", "journal": "", "ref_id": "b8", "title": "Mention annotations alone enable efficient domain adaptation for coreference resolution", "year": "2022" }, { "authors": "Aria Haghighi; Dan Klein", "journal": "", "ref_id": "b9", "title": "Coreference resolution in a modular, entity-centered model", "year": "2010" }, { "authors": "Mandar Joshi; Danqi Chen; Yinhan Liu; Daniel S Weld; Luke Zettlemoyer; Omer Levy", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b10", "title": "Span-BERT: Improving pre-training by representing and predicting spans", "year": "2020" }, { "authors": "Daniel Jurafsky; James H Martin", "journal": "", "ref_id": "b11", "title": "Speech and Language Processing: An Introduction to Natural Language Processing", "year": "2000" }, { "authors": "Jonathan K Kummerfeld; Dan Klein", "journal": "", "ref_id": "b12", "title": "Errordriven analysis of challenges in coreference resolution", "year": "2013" }, { "authors": "Fan Nghia T Le; Alan Bai; Ritter", "journal": "", "ref_id": "b13", "title": "Fewshot anaphora resolution in scientific protocols via mixtures of in-context experts", "year": "2022" }, { "authors": "Heeyoung Lee; Angel Chang; Yves Peirsman; Nathanael Chambers; Mihai Surdeanu; Dan Jurafsky", "journal": "Computational Linguistics", "ref_id": "b14", "title": "Deterministic coreference resolution based on 
entity-centric, precision-ranked rules", "year": "2013" }, { "authors": "Kenton Lee; Luheng He; Mike Lewis; Luke Zettlemoyer", "journal": "", "ref_id": "b15", "title": "End-to-end neural coreference resolution", "year": "2017" }, { "authors": "Hector Levesque; Ernest Davis; Leora Morgenstern", "journal": "", "ref_id": "b16", "title": "The Winograd schema challenge", "year": "2012" }, { "authors": "Shuheng Liu; Alan Ritter", "journal": "", "ref_id": "b17", "title": "Do conll-2003 named entity taggers still work well in 2023? ACL", "year": "2023" }, { "authors": "Tianyu Liu; Yuchen Jiang; Nicholas Monath; Ryan Cotterell; Mrinmaya Sachan", "journal": "", "ref_id": "b18", "title": "Autoregressive structure prediction with language models", "year": "2022" }, { "authors": "Jing Lu; Vincent Ng", "journal": "", "ref_id": "b19", "title": "Conundrums in entity coreference resolution: Making sense of the state of the art", "year": "2020" }, { "authors": "D Christopher; Kevin Manning; John Clark; Urvashi Hewitt; Omer Khandelwal; Levy", "journal": "", "ref_id": "b20", "title": "Emergent linguistic structure in artificial neural networks trained by self-supervision", "year": "2020" }, { "authors": "Sadat Nafise; Michael Moosavi; Strube", "journal": "", "ref_id": "b21", "title": "Lexical features in coreference resolution: To be used with caution", "year": "2017" }, { "authors": " Openai", "journal": "OpenAI", "ref_id": "b22", "title": "Introducing chatgpt", "year": "2022" }, { "authors": "Long Ouyang; Jeff Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray; John Schulman; Jacob Hilton; Fraser Kelton; Luke Miller; Maddie Simens; Amanda Askell; Peter Welinder; Paul Christiano; Jan Leike; Ryan Lowe", "journal": "", "ref_id": "b23", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Alessandro Sameer Pradhan; Nianwen Moschitti; Olga Xue; Yuchen Uryupina; Zhang", "journal": "", "ref_id": "b24", "title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes", "year": "2012" }, { "authors": "Marta Recasens; Lluís Màrquez; Emili Sapena; M Antònia Martí; Mariona Taulé; Véronique Hoste; Massimo Poesio; Yannick Versley", "journal": "", "ref_id": "b25", "title": "SemEval-2010 task 1: Coreference resolution in multiple languages", "year": "2010" }, { "authors": "Jonas Baptiste Rozière; Fabian Gehring; Sten Gloeckle; Itai Sootla; Gat; Ellen Xiaoqing; Yossi Tan; Jingyu Adi; Tal Liu; Jérémy Remez; Artyom Rapin; Ivan Kozhevnikov; Joanna Evtimov; Manish Bitton; Cristian Canton Bhatt; Aaron Ferrer; Wenhan Grattafiori; Alexandre Xiong; Jade Défossez; Faisal Copet; Hugo Azhar; Louis Touvron; Nicolas Martin; Thomas Usunier; Gabriel Scialom; Synnaeve", "journal": "", "ref_id": "b26", "title": "Code llama: Open foundation models for code", "year": "2023" }, { "authors": "Alessandro Stolfo; Chris Tanner; Vikram Gupta; Mrinmaya Sachan", "journal": "", "ref_id": "b27", "title": "A simple unsupervised approach for coreference resolution using rule-based weak supervision", "year": "2022" }, { "authors": "Shubham Toshniwal; Patrick Xia; Sam Wiseman; Karen Livescu; Kevin Gimpel", "journal": "", "ref_id": "b28", "title": "On generalization in coreference resolution", "year": "2021" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien 
Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b29", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Ralph Weischedel; Eduard Hovy; Mitchell Marcus; Martha Palmer; Robert Belvin; Sameer Pradhan; Lance Ramshaw; Nianwen Xue", "journal": "", "ref_id": "b30", "title": "OntoNotes: A Large Training Corpus for Enhanced Processing", "year": "2011" }, { "authors": "Wei Wu; Fei Wang; Arianna Yuan; Fei Wu; Jiwei Li", "journal": "", "ref_id": "b31", "title": "CorefQA: Coreference resolution as querybased span prediction", "year": "2020" }, { "authors": "Zhaofeng Wu; Matt Gardner", "journal": "", "ref_id": "b32", "title": "Understanding mention detector-linker interaction in neural coreference resolution", "year": "2021" }, { "authors": "Patrick Xia; Benjamin Van Durme", "journal": "", "ref_id": "b33", "title": "Moving on from OntoNotes: Coreference resolution model transfer", "year": "2021" }, { "authors": "Xiaohan Yang; Eduardo Peynetti; Chris Vasco Meerman; Tanner", "journal": "", "ref_id": "b34", "title": "What GPT knows about who is who", "year": "2022" }, { "authors": "Rowan Zellers; Ari Holtzman; Hannah Rashkin; Yonatan Bisk; Ali Farhadi; Franziska Roesner; Yejin Choi", "journal": "", "ref_id": "b35", "title": "Defending against neural fake news", "year": "1992" }, { "authors": "", "journal": "", "ref_id": "b36", "title": "Predicted Mentions: Nine years ago today, allegations of infidelity almost derailed", "year": "" }, { "authors": "", "journal": "", "ref_id": "b37", "title": "Gennifer Flowers] 6 claims", "year": "1992-01" }, { "authors": "", "journal": "", "ref_id": "b38", "title": "A federal judge recently dismissed a defamation lawsuit [she] brought against Hillary Rodham Clinton and two former presidential aides. With", "year": "" }, { "authors": "Hong Kong", "journal": "", "ref_id": "b39", "title": "These have become the best spots to observe birds", "year": "" }, { "authors": " Uh-Huh; Ah", "journal": "", "ref_id": "b40", "title": "", "year": "" }, { "authors": " Wow; Um; However", "journal": "", "ref_id": "b41", "title": "", "year": "" }, { "authors": " Uh-Huh; Ah", "journal": "", "ref_id": "b42", "title": "", "year": "" }, { "authors": " Wow; Um; However", "journal": "", "ref_id": "b43", "title": "", "year": "" }, { "authors": " Uh-Huh; So", "journal": "", "ref_id": "b44", "title": "", "year": "" }, { "authors": " Wow; Um; However", "journal": "", "ref_id": "b45", "title": "", "year": "" }, { "authors": " Uh-Huh; So", "journal": "", "ref_id": "b46", "title": "our park's] 2 logo is unique, featuring this black-faced spoonbill, which hopefully can draw people's attention. Uh-huh. Table 15: An example where InstructGPT struggles to resolve coreference, even on gold mentions. The most", "year": "" } ]
[]
10.18653/v1/2022.naacl-main.275
2023-10-23
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b48", "b16", "b44", "b50", "b52", "b21", "b19", "b24", "b3", "b55", "b62" ], "table_ref": [], "text": "Social norms are normative beliefs that guide behavior in groups and societies (Sherif, 1936). Deviance from these expectations of behavior can cause perceptions of impoliteness (Culpeper, 2011), feelings of offense (Rubington and Weinberg, 2015), and pragmatic failure (Thomas, 1983). Social norms vary across cultures (Triandis et al., 1994;Finnemore, 1996), and different cultural norms can lead to conflict within intercultural interactions due to perceptions of deviance (Durkheim, 1951). Creating computational systems that can robustly reason and translate across cultures in pragmatic communication requires that they be grounded in these norms and their differences across contexts. As an initial step in addressing this question, we propose a novel approach to discover and compare social norms conditioned on social situations across Chinese and American cultures. Leveraging 知乎 (Zhihu), a Chinese Q&A platform, alongside the existing SOCIALCHEMISTRY (Forbes et al., 2020) dataset on social norms as respective proxies of Chinese and American cultural axes, our paper offers the following contributions:\n• A human-AI collaboration framework for cross-cultural descriptive norm discovery consisting of (1) automatic situation alignment using cross-lingual similarity between SOCIALCHEMSTRY situations and questions from Zhihu, (2) Chinese social norm extraction from Zhihu answers using few-shot prompting with GPT-3 (Brown et al., 2020), (3) cross-cultural norm similarity and difference identification as textual entailment with explanations using GPT-3 with Chain of Thought (CoT) Prompting (Wei et al., 2022); and (4) human feedback in verification and editing. An example of outputs is shown in Figure 1.\n• A new dataset for cross-cultural norms understanding with explainable textual entailment. Our human-AI collaboration enables us to create a novel dataset of 3069 situationaligned entailment pairs of Chinese and American norms together with textual explanations. We introduce the new task of explainable social norm entailment and show that it is challenging for models fine-tuned on related tasks; fine-tuning on our task directly still leaves significant space for improvement.\n• An analysis of cross-cultural differences in social norms enabled by our dataset. In Section 6, we show that the analysis enabled by our dataset empirically aligns with prior work on differences in Chinese and American cultures. Our empirical results align with the social orientations framework (Yang, 1993) in understanding Chinese-American norm differences and reveal several situational and descriptive nuances in norms across these cultures." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b48", "b50", "b42", "b43", "b14", "b24", "b45", "b41", "b20", "b31", "b68", "b28", "b38", "b54", "b35", "b67", "b62", "b11", "b61", "b7", "b10", "b12", "b60", "b51", "b30", "b53" ], "table_ref": [], "text": "Our work is situated in the broader literature on the study of social norms (Sherif, 1936) and how they vary across cultures (Thomas, 1983). Here, our work is rooted specifically in the study of descriptive norms (Rawls, 1951(Rawls, , 2004;;Cialdini et al., 1990)-what people actually do, rather than prescriptive norms, or what they think people ought to do-and focuses on the differences between Chinese and American cultures. 
We build on recent computational work in creating systems capable of situated reasoning in social situations, most closely adapting the rule-ofthumb formalism for descriptive norms introduced in Forbes et al. (2020). This line of work not only spans that of commonsense reasoning (Sap et al., 2019;Rashkin et al., 2018), but also in judg-ments of appropriate and ethical behavior (Emelin et al., 2021;Jiang et al., 2022) and in grounding behavior in areas like dialogue (Ziems et al., 2022) and situated question answering (Gu et al., 2022a) more specifically on underlying knowledge of social norms. In recognizing that social norms are often culturally (Haidt et al., 1993) and even demographically (Plepi et al., 2022;Wan et al., 2023) specific, prior work in this area has primarily revolved around the normative judgments of majority English-speaking cultures represented within North America. In contrast, here, aligning with the broader goal of creating systems that can effectively reason across cultures and languages (Liu et al., 2021), we focus on computationally studying norms across Chinese and American cultures, expanding on the utility of large language models in ways that have the potential to transform modern computational social science (Ziems et al., 2023).\nContemporary studies of Chinese cultural societies (Yang, 1993) emphasize several broad differences relative to American culture. Under the framework of social orientations, emphasis in Chinese society is placed especially on family, relationship, authority, and personal reputation social orientations. In particular, past work has shown a large significance compared to American culture in familistic collectivism and harmony (Ch'eng-K'Un, 1944;Yang, 1988;Campos et al., 2014), relational determinism (Chen and Chen, 2004;Chua et al., 2009), and authority worship (Yang, 1970;Thornton and Fricke, 1987;Hu, 2016), among other factors, in influencing social behavior. Critical reviews of past cross-cultural work have criticized weaknesses in study design and their overly broad generalizations (Voronov and Singer, 2002), in favor of a more fine-grained analysis. Here, under the framework of descriptive norms and sourcing data from social media at scale, we conduct a more nuanced analysis of how norms vary across cultures under situational controls." }, { "figure_ref": [], "heading": "Data Sources", "publication_ref": [ "b24", "b24", "b65", "b17" ], "table_ref": [], "text": "Our analysis of social norm variation across Chinese and American contexts draws from the largest Q&A discussion platform in China-知乎 (Zhihu)-and existing data gathered by SOCIAL-CHEMISTRY (Forbes et al., 2020), treating these data sources as different cultural axes.\nSocial Chemistry 101 (Forbes et al., 2020) is a natural language corpus of ethical judgments Question: 当有人说家里有丧事时应如何回应更礼 貌? (How to respond politely when someone says there is a funeral in the family?)\nAnswer: 节 哀 。 可 以 礼 貌 性 的 拍 拍 肩 膀 或 者 嘱 咐 对 方 虽 然 最 近 会 比 较 操 劳 但 也 要 注 意 身 体。(Condolences.\nYou can politely pat on their shoulders or tell them to pay attention to their health even though it will be hard for them recently.)\nRoT 1: it is appropriate to say \"节哀\" to someone who has lost a family member or friend RoT 2: it is appropriate to pat someone on the back to show your sympathy to them RoT 3: it is appropriate to tell them to take care of themselves though they are sad and social norms on everyday situations. 
Crowdsourced social norms on situations scraped from Reddit (i.e., r/AmITheAsshole) and other sources are expressed in free-text rule-of-thumb form, where each rule-of-thumb consists of a judgment on an action, i.e., \"It's rude to run the blender at 5 AM\". Annotators of this dataset are 55% women, and 94% of annotators had spent 10 or more years in the United States; as such, we treat the normative judgments represented within this dataset as originating from a predominantly English-speaking, American context 1 .
知乎 Zhihu has over 101 million monthly active users (Zhihu, 2022) and is one of China's largest online knowledge communities. Similar to its international counterparts like Quora, Zhihu is primarily a Q&A site where users can post questions, answer existing questions, and up-vote answers to questions by other users, among other platform interactions. Users are 59% women (Yang, 2022a) and are primarily educated and middle-class, with research showing that over 80% of its users possess at least a bachelor's degree (Zhang, 2020). This population is significant because the middle class is the primary upholder of social stability (Goodman, 2014) and the primary driver of cultural trends and public opinion (Denemark and Chubb, 2016). Many questions posed on Zhihu inquire about the appropriate action to partake in a given situation, i.e., \"what do I do if my friend is cheating on their exam?\", with other users indicating their views on appropriate courses of action in the answers below. Reasoning that these questions align most closely in form to situations in SOCIALCHEMISTRY, we translate and query for all situations present in the SOCIALCHEMISTRY dataset through Zhihu's search function and save the top 100 question posts returned for every translated SOCIALCHEMISTRY situation. Following the rationale that social norms are the most common and broadly accepted judgments of actions in given situations, we take the most up-voted answer2 to each question as the basis from which we extract social norms, described in the following section. In total, we obtained answers to 508,681 unique questions posed on Zhihu. The study and data collection were approved by an Institutional Review Board prior to data collection; a detailed breakdown of this corpus is provided in Appendix Section A. We reiterate that both datasets can only serve as proxies for social norms in each culture and further discuss the limitations of our approach and these assumptions in Section 7." }, { "figure_ref": [], "heading": "Human-AI Collaboration to Create a Cross-Cultural Social Norm Dataset", "publication_ref": [ "b34", "b59", "b15", "b3", "b5", "b56", "b1", "b8", "b55" ], "table_ref": [ "tab_0", "tab_2" ], "text": "We enable our computational analysis of social norm variation across Chinese and American cultures through a framework of (1) automatic situation alignment across cultures, (2) Chinese social norm extraction, (3) identification of cross-cultural norm similarities and differences as explainable textual entailment, and (4) human verification and editing, described in turn below.
Aligning Situations Across Cultures. Descriptive judgments of appropriate behavior are often situationally-dependent (Leung and Morris, 2015); an accurate comparison of norms across cultures must first ensure a similarity of contexts. For instance, Yamagishi et al. (2008) showed that Japanese preferences for conformity disappeared in private compared to when they were being observed. To obtain the closest matching situations between US and Chinese contexts, we align situations found in the SOCIALCHEMISTRY dataset with similar questions on Zhihu. 
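One plausible way to implement this alignment is sketched below, using a pluggable sentence encoder, cosine similarity, and greedy one-to-one matching above a threshold; the specific cross-lingual encoder and threshold we actually use are given in the following paragraph, and the toy encoder in the example is purely for demonstration.

```python
# Illustrative sketch of aligning SOCIALCHEMISTRY situations with Zhihu questions via
# sentence embeddings and cosine similarity, with greedy one-to-one matching above a
# threshold. The encoder is a toy stand-in; a multilingual encoder would supply vectors.
import math
from collections import Counter
from typing import Callable, Dict, List, Sequence

def cosine(u: Sequence[float], v: Sequence[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def align_one_to_one(
    sources: List[str],
    targets: List[str],
    encode: Callable[[str], Sequence[float]],
    threshold: float,
) -> Dict[int, int]:
    """Greedy one-to-one alignment: highest-similarity pairs first, above threshold."""
    src_vecs = [encode(s) for s in sources]
    tgt_vecs = [encode(t) for t in targets]
    scored = sorted(
        ((cosine(sv, tv), i, j)
         for i, sv in enumerate(src_vecs)
         for j, tv in enumerate(tgt_vecs)),
        reverse=True,
    )
    aligned: Dict[int, int] = {}
    used_targets = set()
    for score, i, j in scored:
        if score < threshold:
            break
        if i in aligned or j in used_targets:
            continue
        aligned[i] = j
        used_targets.add(j)
    return aligned

if __name__ == "__main__":
    def toy_encode(text: str) -> List[float]:
        # Character-count vector over a tiny alphabet -- only for demonstration.
        counts = Counter(text.lower())
        return [counts[c] for c in "abcdefghijklmnopqrstuvwxyz"]
    us = ["telling my friend they cheated on an exam"]
    zh = ["what to do when a friend cheats on an exam"]
    print(align_one_to_one(us, zh, toy_encode, threshold=0.5))
```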
While the Zhihu search API allows for direct query searches of platform content, semantic alignment of returned results with queries remains low due to an observed bias in prioritizing entity matches. To improve this alignment of situations between US and Chinese contexts, we use XLM-R (Conneau et al., 2020) to obtain cross-lingual representations of situations and further perform one-to-one matching through cosine similarity. Setting the similarity threshold to 0.895 allowed us to align 3069 situations, of which around 80% were correctly aligned, based on a manual review of a random sample of 100 situations. Here, incorrectly aligned instances remain valuable for our data as negative samples in our later entailment task as they can be assigned a No Relation label. Social Norm Extraction. We follow the formalism introduced by Forbes et al. ( 2020) in structuring descriptive norms as rules-of-thumb, or judgments of an action (i.e., \"It's rude to run the blender at 5AM\"). Taking 2 random top-aligned situations in our dataset, we manually annotate their corresponding Zhihu answer for social norms in ruleof-thumb form (RoT); an example of such an annotation is shown in Table 1. We then design a 2-shot prompt using these annotations (Table 3) and use GPT-3 (Brown et al., 2020) to extract novel rules-of-thumb from unseen answers. Running this model across all aligned situations and following a manual verification of faithfulness, we obtain a total of 6566 unique rules-of-thumb for our Chinese axis, relative to 1173 unique rules-of-thumb for our American axis sourced from SOCIALCHEMISTRY (note, here, that a rule-of-thumb may be associated with multiple situations).\nIdentifying Cross-Cultural Norm Similarities & Differences as Textual Entailment with Explanations. Under our formalism, a difference in cultural norms equates to a disagreement in judgments for an action relative to a given situation. Here, we structure the identification of social norm differences across cultures as an explainable textual entailment task (a.k.a natural language inference (NLI)) similar to e-SNLI (Camburu et al., 2018); a difference in cultural norms equates to a contradiction between norms of different cultures for a given situation, and vice versa for entailment. Given the recent success of human-AI collaboration frameworks (Wiegreffe et al., 2022;Bartolo et al., 2022;Chakrabarty et al., 2022), the complex nature of Given an input Text, respond with Social Norms that are assumed by the speaker of the text, in English. Social Norms are rules and standards that are understood by members of a group, and that guide or constrain social behaviors without the force of law. The Social Norms must be complete, correct, different, not contradicting with each other, inferable from text.\nText: 关于尊敬⻓辈? 因为传统。⻓辈始终是上辈。只要他不是有太过⽕⼤到让⼈⽆法原谅的错误。你就不能当⾯顶撞 他。只能在⼈少的时候。试着和他理论。⽗⺟不光是站在⻓辈那⾥。更多是为了你。顶撞⻓辈。出发点就是⼀个错。让嘴 巴⼤的⼈⻅了。你⽿朵会红⼀辈⼦。养成这个习惯了。以后有困难了。也会更困难。\nSocial Norms that we can infer from this text are: 1. It is not polite to confront directly your elders even when elders are wrong 2. It is appropriate to argue with elders in private when you think they are wrong 3. It is shameful to point out their mistakes in front of crowds though elders are wrong Table 3: Prompt used for the extraction of social norms in rule-of-thumb form from Zhihu questions and answers. 
The instruction precedes the in-context example; the original question (in red, equivalent to the situation) is followed directly by the answer, and sample annotated rules-of-thumb are listed in blue.\nour task, and the need for interoperability in reasoning, we use Chain-of-Thought prompting (Wei et al., 2022) (see our prompt in Table 4) together with GPT-3 (text-davinci-003) to generate most of the data automatically before manual verification. In order to construct our data in the e-SNLI format and save on computation costs, for every given American norm associated with a given situation, we (1) select the most relevant Chinese norm for that situation as extracted in the previous step, (2) identify the inference relationship (entailment, contradiction, or no relation) between them, and (3) generate a free-text explanation for the identified relationship. When no Chinese norms are selected as relevant, a random norm is sampled from the aligned candidates, and an explanation is generated given the No Relation label. Using this framework, we generate social norm relationships and their justifications for all cross-culturally aligned situations in our dataset.\nHuman Verification and Editing. To ensure the quality and accuracy of our dataset of situated social norm relationships and their justifications across cultures, three annotators with significant (10+ years) lived experiences in both Chinese and American cultures further acted as annotators by jointly verifying and editing, if needed, the outputs of our framework (Figure 2). Specifically, authors ensured that (1) situations are correctly aligned (i.e., entailment or contradiction labels are assigned correctly to aligned norms, while no relation labels are assigned to misaligned norms), (2) norms generated by GPT-3 are not hallucinated, and (3) the explanations are correct. Annotators were able to discuss any questions and resolve any final disagreements with each other. In total, we find that 51% of norms had a no relation label, either due to misaligned situations or due to the absence of extracted, relevant Chinese norms for corresponding US norms. Only a few instances (around 5%) had hallucinated norms; all were further corrected during annotation.\nGenerated explanations needed no further editing in 58.9% of the cases. In 38.1% of the cases, the inferred cross-cultural norm relation (and corresponding explanations) were deemed incorrect and required the revision of both the label and the explanation, while in the remaining instances (3.0%), generated explanations still required major revisions even though the relation label was deemed correct. Our final dataset contains 3,069 NLI instances of social norm relations across cultures with explanations. Out of these, 432 are contradictions (14%), 1059 are entailments (35%), and 1578 have a no relation label (51%). In total, aligned norms comprise 1173 unique rules-of-thumb from SOCIALCHEMISTRY (American axis) and 2273 as obtained from Zhihu using our framework (Chinese axis), with social norm description length averaging 63.5 characters. For social norm extraction, situation length was limited to 300 characters; examples from our dataset are shown in Table 2." }, { "figure_ref": [], "heading": "Experiments and Evaluation", "publication_ref": [ "b5", "b57", "b8", "b40", "b5", "b5", "b46", "b39", "b66", "b57", "b23" ], "table_ref": [], "text": "Task. To test the ability of models to reason about social norms across cultures, we introduce the task of explainable social norm entailment. 
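Concretely, each verified item produced by the pipeline above can be thought of as an e-SNLI-style record pairing a US and a Chinese rule-of-thumb for the same aligned situation with a relation label and an explanation. The sketch below shows one illustrative record; the field names are ours, and the texts paraphrase the wedding example in Table 4 rather than quoting actual data.

```python
# Minimal sketch of one record in the cross-cultural norm entailment data, mirroring
# the e-SNLI-style (premise, hypothesis, label, explanation) layout. Field names are
# ours; the example texts are illustrative paraphrases, not verbatim dataset entries.
from dataclasses import dataclass
from typing import Literal

Label = Literal["entailment", "contradiction", "no_relation"]

@dataclass
class NormEntailmentInstance:
    situation: str          # cross-culturally aligned situation
    us_norm: str            # rule-of-thumb from SOCIALCHEMISTRY
    cn_norm: str            # rule-of-thumb extracted from the Zhihu answer
    label: Label            # relation between the two norms
    explanation: str        # free-text justification for the label

if __name__ == "__main__":
    example = NormEntailmentInstance(
        situation="not showing up to a friend's wedding",
        us_norm="It is rude to skip a close friend's wedding without a good reason.",
        cn_norm="It is acceptable to miss a friend's wedding; they will understand.",
        label="contradiction",
        explanation="The US norm judges the behavior negatively, while the Chinese "
                    "norm expresses a more neutral judgment.",
    )
    print(example.label)
```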
}, { "figure_ref": [], "heading": "Experiments and Evaluation", "publication_ref": [ "b5", "b57", "b8", "b40", "b5", "b5", "b46", "b39", "b66", "b57", "b23" ], "table_ref": [], "text": "Task. To test the ability of models to reason about social norms across cultures, we introduce the task of explainable social norm entailment. Our task is closely related to the explainable natural language inference task, e-SNLI (Camburu et al., 2018), where, in addition to the relation label, a model has to output a natural language explanation for its prediction as well. For every pair of US and Chinese norms of a cross-culturally aligned situation, we ask a model to predict the relationship between them (Entailment, Contradiction, or No Relation) and output an explanation for the relation. We test whether models fine-tuned on existing, closely related, and much larger textual entailment datasets (e.g., e-SNLI) and instruction-tuned models like FLAN-T5 can perform the task of explainable social norm entailment, in addition to a simple fine-tuned baseline. In evaluation, we center our focus on explanation plausibility, testing if fine-tuning on our data may enhance a model's ability to reason about social norm entailment.
The US norm expresses a negative attitude towards not showing up to a friend's wedding, while the Chinese norm expresses a more neutral judgement ("your friend will understand").
Table 4: Chain-of-thought prompt used to (1) identify the most relevant (if any) Chinese norm given an American norm from an aligned situation, (2) determine the relationship between these norms, and (3) provide a free-text explanation justifying the inferred relation.
Models. We focus on smaller models on the order of 3B parameters and on non-OpenAI models, as an OpenAI model was used to generate our data. Here, we test the performance of a model fine-tuned on SOCIALCHEMISTRY rule elaborations (DREAM) and an instruction-tuned model (FLAN-T5) in few-shot settings, and further fine-tune joint self-rationalizing models on e-SNLI and our dataset.
Prior work in interpretability (Wiegreffe et al., 2021) has shown that joint self-rationalizing models, which predict the explanation alongside the relation label, are capable of producing faithful free-text rationales. Following prior work (Chakrabarty et al., 2022), we fine-tune a joint self-rationalizing T5 model in multiple settings, randomly splitting our data into 65% train, 10% validation, and 25% test splits (Appendix Section C).
• DREAM (Gu et al., 2022a): an elaboration model that uses T5 to generate details about the input. Here, we test the DREAM-FLUTE (social-norm) variant (Gu et al., 2022b), which uses SOCIALCHEMISTRY rules-of-thumb as elaborations to provide an answer containing a label and an explanation. We evaluate this model in a 10-shot setting.
• FLAN-T5-XL (Chung et al., 2022): an enhanced version of the T5 (Raffel et al., 2020) 3B model fine-tuned on more than 1,000 tasks, including e-SNLI (Camburu et al., 2018). We evaluate this model in a 10-shot setting.
• T5-eSNLI: a T5 3B model fine-tuned on the e-SNLI (Camburu et al., 2018) dataset. We fine-tune for two epochs with a batch size of 4096, for 268 total steps, using an AdamW optimizer with a learning rate of 5e-05. We take the longest explanation per example in e-SNLI, as our data contains only one reference explanation.
Table 5: Automatic evaluation results of a series of competitive baseline models measured by F1 scores at three thresholds of the explanation score (0, 50, and 60, indicated as F1@0, F1@50, and F1@60, respectively), and models fine-tuned on our data. %∆ represents the percent decrease from F1@0 to F1@60.
We evaluate generated explanations using the explanation score metric: an average of BLEURT (Sellam et al., 2020;Pu et al., 2021) and BERTScore (Zhang et al., 2019).
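The explanation score just defined, and the thresholded F1 reported below, can be computed along the following lines. This is an illustrative sketch assuming the Hugging Face evaluate library and scikit-learn; the BLEURT checkpoint and the 0-100 rescaling are assumptions for the example, not details taken from the paper.

import evaluate
from sklearn.metrics import f1_score

bleurt = evaluate.load("bleurt")        # checkpoint choice is an assumption
bertscore = evaluate.load("bertscore")

def explanation_scores(pred_expls, gold_expls):
    """Per-example explanation score: average of BLEURT and BERTScore F1,
    rescaled here to 0-100 so it is comparable to the 50/60 thresholds."""
    b = bleurt.compute(predictions=pred_expls, references=gold_expls)["scores"]
    s = bertscore.compute(predictions=pred_expls, references=gold_expls, lang="en")["f1"]
    return [100.0 * (x + y) / 2.0 for x, y in zip(b, s)]

def f1_at(gold_labels, pred_labels, expl_scores, threshold):
    """Macro F1 where a predicted label only counts as correct when its explanation
    score clears the threshold; with the paper's scaling, threshold 0 reduces to plain macro F1."""
    adjusted = [p if s > threshold else "__rejected__"   # force rejected predictions to be wrong
                for p, s in zip(pred_labels, expl_scores)]
    return f1_score(gold_labels, adjusted,
                    labels=sorted(set(gold_labels)), average="macro")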
We report the macro average F1 score at three thresholds of the explanation score: 0, 50, and 60. F1@0 is equivalent to simply computing the F1 score, while F1@50 counts only the correctly predicted labels that achieve an explanation score greater than 50 as correct.
As shown in Table 5, current off-the-shelf models under 3B parameters perform poorly on our dataset, at most achieving an F1 score of 33.48, despite being fine-tuned on closely related tasks like e-SNLI. This verifies the need for a domain-specific dataset, as general NLI data is observed to be insufficient to perform well on our task. Furthermore, even the model fine-tuned on our data only achieves an F1 score of 54.52, showing that there is still ample room for improvement in this domain.
When considering explanation quality, we see a very steep drop in performance from F1@0 to F1@60 for models fine-tuned on related tasks (72.50% for FLAN-T5 and 96.59% for T5-eSNLI), indicating that current datasets do not enable sufficient explanation quality for reasoning about social norm entailment. For the models fine-tuned on our data, the performance drop when accounting for explanation quality is not as sharp (≈ 21% for T5-SocNorm and mT5-SocNorm).
Interestingly, FLAN-T5 achieves a lower F1@0 score than a model fine-tuned on e-SNLI, possibly because of interference from non-eSNLI-related tasks that it was fine-tuned on. Further investigating performance differences between T5-eSNLI and FLAN-T5 (see Table 6), we observe that FLAN-T5 struggles, in particular, to predict all relation classes equally well, instead predicting most classes as No Relation (the majority class). This also explains the better performance of FLAN-T5 when accounting for the explanation score, as neutral explanations are easier to generate and typically more templatic in structure. T5-SocNorm and T5-eSNLI are seen as more robust in this regard.
Explanation Quality. Wiegreffe et al. (2021) introduced an automatic metric for textual explanation quality, termed rationale quality, that compares the accuracy of the model with and without provided explanations. We fine-tune models to predict the relation label given an input and an explanation, in addition to giving only the input. When providing "gold" explanations, accuracy rises from 53.4% to 96.1% (with a rationale quality of 42.7), emphasizing the quality of textual explanations provided by our dataset.
Human Evaluation of Generated Explanation Plausibility. Three students with significant lived experiences (10+ years) in Chinese and American cultures assessed the quality of a subset of 50 randomly chosen model-generated explanations for correctly predicted labels, comparing the best-performing model fine-tuned only on related tasks (T5-eSNLI) against the best-performing model directly fine-tuned on our dataset (T5-SocNorm). Each annotator was asked to rate which generated explanation they preferred, allowing for the presence of ties. Annotators showed an inter-annotator agreement of 62.4 in Fleiss' kappa (Fleiss, 1971), considered to be "substantial" agreement in existing literature. T5-SocNorm explanations are preferred for the vast majority of instances (73%); T5-eSNLI explanations were preferred in only 11% of instances, while explanations from both tied for the rest (16%). An example of a bad generation from T5-eSNLI is shown in Table 7; as shown, T5-SocNorm tries to determine the attitude the norms express, while T5-eSNLI attempts to generate a rule-of-thumb (in red) instead.
These results are indicative of how our data contains high-quality explanations-despite its small scale, models finetuned on it are able to produce better explanations compared to a model fine-tuned on the entirety of e-SNLI, which is 247 times larger in size." }, { "figure_ref": [], "heading": "T5-SocNorm", "publication_ref": [], "table_ref": [], "text": "The US norm expresses approval towards asking for money back, while the Chinese norm suggests that it is not polite to ask for it directly" }, { "figure_ref": [ "fig_0" ], "heading": "T5-eSNLI", "publication_ref": [ "b62", "b9", "b10", "b12", "b11", "b61", "b7", "b18", "b9" ], "table_ref": [], "text": "It is either OK to ask for your money back or not polite to directly ask Table 7: Example explanations generated by T5-SocNorm and T5-eSNLI for the situation in Figure 1. T5-eSNLI tries to generate a norm, while our model tries to explain the contradiction.\n6 Cross-Cultural Norm Variation\nRecalling that descriptive norms are situationally dependent, here, using our dataset, we test for factors driving these cross-cultural differences in social norms, testing specifically for (1) situational effects (i.e., when do norms differ?) and (2) descriptive effects (i.e., what do norms differ about?) across Chinese and American cultures.\nSituational Effects. To capture thematic trends across situations, we train a 10-topic LDA model on preprocessed situations and manually label each topic with its most prominent theme; labeled situational topics alongside the top words that are associated with each are shown in Appendix Section D. Testing for the effect of topic-situational factors on differences in social norms, we measure the correlation strength between the probability of a situation belonging to a given topic against how likely norms will contradict for that given situation.\nIn this regression, we find that three situation topics positively predicted how likely norms would contradict between Chinese and American cultures for a given situation to a statistically significant degree. They are, as follows, Lack of Intimacy/Separation (ρ = 0.07, p = 0.01), Family Discord/Divorce (ρ = 0.06, p = 0.01), and Loss of Family Connection/Changes in Life (ρ = 0.07, p = 0.02).\nThough these effect sizes remain small in scale likely due to the limited size of our dataset, these findings are consistent with contemporary studies of Chinese cultural societies under the framework of social orientations (Yang, 1993). In Chinese culture, relational determinism strongly defines the social norms surrounding social interactions in everyday situations (Chen et al., 2013). One's relationship with another determines how one interacts with them; much distinction is placed within Chinese culture between one's own kith and kin (自己人, zijiren)-which mostly includes family members and familiar persons such as friends or classmates-and outsiders (外 人, wairen), at a much stronger degree than that which is present in American cultures (Chen and Chen, 2004;Chua et al., 2009). Our results here show that in situations where this relationship distinction is apparent, or in cases of change in relationship type, the norms surrounding these situations are more likely to differ across Chinese and American cultures.\nDescriptive Effects. 
As we have done for situational effects, here, we train a 10-topic LDA model on preprocessed norms in rules-of-thumb form, identify each norm topic's most prominent theme, and test for the effect of norm-topic factors on predicting differences in these norms between Chinese and American cultures. As before, labeled norm topics alongside their top words are shown in Appendix Section D.\nRecalling that norms in rule-of-thumb form are judgments of actions, norm topics associate with action topics; a contradiction between crossculturally aligned norms means that contrasting judgments are placed on those actions across Chinese and American cultures. Here, we find that two norm topics positively predicted how likely a contradiction would be present between the norms of our two cultures. They are, specifically, Loss of Trust/Intimacy (ρ = 0.06, p = 0.05) and Support in Relationship (ρ = 0.04, p = 0.04).\nInterpreting these descriptive results in the context of relational determinism, our results above on situational effects, and through further qualitative analysis, our findings show that differences exist between Chinese and American cultures in judgments of actions that (1) would cause a loss of trust or intimacy and (2) are taken to support an individual close to you. These findings are consistent with longstanding work in Chinese familism (Ch'eng-K'Un, 1944;Yang, 1988;Campos et al., 2014), showing that exceptional accommodations in circumventing typical social rules are made, relative to American cultures, to pursue interpersonal and especially familial harmony for those classed as one's own kith and kin (Dunfee and Warren, 2001;Chen et al., 2013)." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [ "b50", "b29", "b33", "b22", "b47" ], "table_ref": [], "text": "In this work, we sought to computationally model social norms across Chinese and American cultures, via a human-AI collaboration framework to extract and interpretably compare social norm differences in aligned situations across contexts. Our analyses here reveal several nuances in how social norms vary across these cultural contexts, incorporating principles of descriptive ethics that prior studies have often lacked thus far. We see our present work situated in the broader context of designing systems that are able to reason across languages and cultures in an increasingly interconnected global world. Here, we highlight a few directions we find exciting for future work; models, code, and anonymized data are made available for further research. 3 Deviance from social norms can lead to miscommunication and pragmatic failure (Thomas, 1983). Integrating cross-cultural norm knowledge into socially situated communication systems (Hovy and Yang, 2021) could lead to fewer miscommunications in everyday situations, bridge cultures, and increase the accessibility of knowledge and of interpersonal technologies to a far greater extent than traditional translation technologies.\nWhile our work has focused on measuring differences in social norms across cultures, culture, though important, is only one of many variableslike age and gender, among others-that affect descriptive ethical judgments (Kuntsche et al., 2021). 
Neither are norms fixed in time (Finnemore and Sikkink, 1998;Sethi and Somanathan, 1996); future work could, with the aid of social platforms like Zhihu and Reddit, quantitatively examine not only the evolution of social norms across cultures and platforms but also compare changes between cultures, further bridging theory and empirical study." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [ "b4" ], "table_ref": [], "text": "Data Release and User Privacy. The study and data collection was approved by an Institutional Review Board prior to data collection. Nonetheless, while online data in platforms such as social media has opened the door to conducting computational 3 https://github.com/asaakyan/SocNormNLI social science research at scale, it is infeasible to obtain explicit consent for large-scale datasets such as ours (Buchanan, 2017). In order to preserve user privacy, we collected only publicly available data and analyzed it in aggregate, not at the individual level. Furthermore, we mirror former Twitter academic data release guidelines in that we release only the set of question and answer ids used in our analyses, which researchers are able to use together with the Zhihu API to obtain original discussion content." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b38", "b54", "b49", "b37", "b58", "b64", "b32", "b0", "b36", "b2" ], "table_ref": [], "text": "Cross-Cultural Demographic Differences. In studying cultural variations in social norms, we do not argue that culture as a variable alone explains the entirety of the observed variations. Most notably, intra-cultural variation in social norms also exists depending on the scale examined; for example, among multiple demographic groups or sub-communities (Plepi et al., 2022;Wan et al., 2023). Further, as the presence of variation in user and participant demographics undoubtedly remains between the data from our opposite cultural axes, it is crucial to interpret our results in the context of these differences and to take note that findings in our data are biased toward representing the viewpoints of individuals of these demographic groups.\nValue Pluralism. There are often more than one-and often contradictory-social norms that are relevant to any given social situation, which are often in tension with each other in many social dilemmas. By performing norm comparison using only the top-up-voted answer for given social situations, we necessarily limit the scope of our work to the social norms that are, by design assumptions and by proxy, the most accepted social norms for any given social situation. It is important to note that this does not preclude the possibility that similar social norms may remain valid for each culture at differing levels of acceptance. Here, while annotators with significant lived experiences in each culture sanity-checked that the entailment relations assigned between norms across cultures corresponded to actual cross-cultural differences and not simply because of value pluralism, this does not entirely ensure that the cross-cultural norm differences we observe are not because of value pluralism, as it is impossible for annotators to be aware of every single norm in an ever-evolving social landscape. We believe this to be a rich area of future work to more quantitatively study instances of value pluralism even for norms of a single culture, and to see which norms ultimately \"win over\" the others for certain situations (Sorensen et al., 2023).\nData Coverage. 
It is unrealistic to expect that either our data or that of Social Chemistry can cover all possible social situations. As such, our findings only represent a subset of the true underlying cultural differences present between the norms of Chinese and American cultures. Furthermore, it seems intuitive that questions about non-controversial social situations \"everyone\" is familiar with will, if not be completely absent from online discourse, otherwise get lower representation and engagement. As we only extract from the most up-voted answer from each Zhihu discussion and treat it as the most broadly adopted norm as a simplifying assumption, an open question remains as to quantifying the \"problem\" of intra-cultural norm disagreements and to investigate genuine human variations in judgments of social norms (Plank, 2022). By releasing our data, we hope that future work can take into account less up-voted answers. In moving towards more representative studies, we encourage further work to examine these variations in intra-cultural norms and tease out the details of human behavior.\nCensorship and Moderation. Chinese social media platforms like Weibo, and Wechat are mandated by law (Xu and Albert, 2014) to implement censorship policies on the content created by their users. Zhihu is no different; the presence of inherent content bias introduced by this form of active moderation of user content has an effect on influencing the landscape of public discourse (Yang, 2022b) and the data that we rely on to derive Chinese social norms. In particular, past work has shown the existence of censorship programs aimed at prohibiting \"collective action by silencing comments that represent, reinforce, or spur social mobilization\" across primary Chinese social media services (King et al., 2013). Comments that contain politically sensitive terms, such as mentions of the names of political leaders, are subject to a higher rate of deletions (Bamman et al., 2012). The extent to which these actions lead to a biased public representation of the norms and beliefs commonly shared in Chinese society remains unclear.\nLanguage Model Biases. Advances in large language models like GPT-4 have allowed for greater possibilities in human-AI collaboration, as we have done here. Nonetheless, it is important to recognize that language models ultimately mimic patterns in their training data (Lucy and Bamman, 2021) and regurgitate structural and social biases (Bender et al., 2021). While we have incorporated these models under a human-AI collaboration framework where we externally verify, validate, and edit their output in certain cases to mitigate this risk, it would be remiss to say that we are capable of doing so entirely in lieu of effects such as priming influencing our decisions. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank David Jurgens, Tuhin Chakrabarty, and the anonymous reviewers for their helpful comments, thoughts, and discussions. This research is being developed with funding from the Defense Advanced Research Projects Agency (DARPA) CCU Program No. HR001122C0034. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. Sky CH-Wang is supported by a National Science Foundation Graduate Research Fellowship under Grant No. DGE-2036197." 
}, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "We provide as supplementary material additional information about our collected dataset from Zhihu, chosen hyperparameters for our models, topic model details from our analysis of cross-cultural norm variation, as well as the performance evaluations of larger models similar to the ones we used for data generation (GPT 3.5 and GPT-4) on our task." }, { "figure_ref": [], "heading": "A Zhihu Statistics", "publication_ref": [], "table_ref": [], "text": "Questions in Zhihu are tagged by users with topic categories, which serve as entities that individual users may browse, follow, and subscribe to. Figure 3 shows a breakdown of the top 20 user-tagged categories in our Zhihu questions dataset (508,681 unique questions) alongside their English translations, as well as a distribution of top-answer length. Following that of Social Chemistry, most topics here are related in content to relationships. " }, { "figure_ref": [], "heading": "B Hyperparameters", "publication_ref": [], "table_ref": [], "text": "Norm Extraction. We use text-davinci-002 with the following hyperparameters: temperature=0.7, max tokens=256, top p=1, frequency penalty=0, presence penalty=0." }, { "figure_ref": [], "heading": "Inference Relation and Explanation Generation.", "publication_ref": [], "table_ref": [], "text": "We use text-davinci-003 with the following hyperparameters: temperature=0.7, max tokens=140, top p=1, frequency penalty=0, presence penalty=0." }, { "figure_ref": [], "heading": "DREAM-FLUTE Instruction Modification.", "publication_ref": [], "table_ref": [], "text": "We structure the input of social norm data into the DREAM-FLUTE instruction in the following manner:\nPremise: [Premise -social norm] US NORM. Hypothesis: [Hypothesis -social norm] CHINESE NORM. Is there a contradiction, entailment, or no relation between the premise and hypothesis?\".\nWe prefix the prompt with 10 examples from the validation set in the same format. " }, { "figure_ref": [], "heading": "C Dataset train test split statistics", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "D Topic Models", "publication_ref": [], "table_ref": [], "text": "We train 10-topic LDA models using MALLET 4 to analyze cross-cultural social norm variations on both situational and descriptive effects, manually labeling each topic with its most prominent theme. Labeled topics and their top words are shown in Tables 9 (topics on situations) and 10 (topics on descriptive norms in rule-of-thumb form), with the topics that were statistically significant in the prediction of cross-cultural norm contradictions highlighted in bold.\nE Performance of GPT-3.5 and GPT-4\nHere, we evaluate the performance of larger models similar to the one used to generate our data-GPT-3.5 and GPT-4. Both models achieve high F1@0 and F1@60 performance, as expected from their size and from the similarity of the data distribution to their outputs. Notably, however, the F1@60 score of GPT-4-the better performing of the two-remains lower in comparison with our best-fine-tuned model (41.18 vs. 43.07); furthermore, the relative decrease from F1@0 to F1@60 (%∆) of GPT-4 remains higher than our model (34% vs 21%) as well, indicative of the possibility that distilled models may even surpass teacher models in explanation quality.\nModel F1@0 F1@50 F1@60 %∆(↓) Table 11: F1 scores and percent decrease in F1 across explanation score thresholds for GPT-3.5 and GPT-4." } ]
Designing systems that can reason across cultures requires that they are grounded in the norms of the contexts in which they operate. However, current research on developing computational models of social norms has primarily focused on American society. Here, we propose a novel approach to discover and compare descriptive social norms across Chinese and American cultures. We demonstrate our approach by leveraging discussions on a Chinese Q&A platform-知乎 (Zhihu)-and the existing SOCIALCHEMISTRY dataset as proxies for contrasting cultural axes, align social situations cross-culturally, and extract social norms from texts using in-context learning. Embedding Chain-of-Thought prompting in a human-AI collaborative framework, we build a high-quality dataset of 3,069 social norms aligned with social situations across Chinese and American cultures alongside corresponding free-text explanations. To test the ability of models to reason about social norms across cultures, we introduce the task of explainable social norm entailment, showing that existing models under 3B parameters have significant room for improvement in both automatic and human evaluation. Further analysis of crosscultural norm differences based on our dataset shows empirical alignment with the social orientations framework, revealing several situational and descriptive nuances in norms across these cultures.
Sociocultural Norm Similarities and Differences via Situational Alignment and Explainable Textual Entailment
[ { "figure_caption": "Figure 1 :1Figure 1: An example of descriptive norms conditioned on an aligned cross-cultural situation, together with their inference relation and corresponding textual explanation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Step 1 :Figure 2 :12Figure 2: Our human-AI collaboration framework for creating a cross-cultural Chinese-American social norm NLI dataset through (1) situation alignment, aligning cross-lingual situations between Zhihu and Social Chemistry, (2) social norm extraction, from free-form answers to rules-of-thumb with in-context learning, and (3) norm relation inference with textual explanation with CoT prompting, coupled with (4) expert verification and editing.", "figure_data": "", "figure_id": "fig_1", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Example of a Zhihu question-answer pair (top) with English translations and relevant social norms in rules-of-thumb form (bottom).", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Examples of similarities (entailments, top)and differences (contradictions, bottom) in situatednorms between Chinese and American cultures fromour dataset, alongside relation explanations.", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Social norms are the informal rules that govern behavior in groups and societies. In this task, you will compare US and Chinese social norms. You will be presented with a situation, a US social norm applicable to it, and a list of Chinese social norms. Your task has 3 steps: 1. Select Chinese social norm most relevant to the situation. 2. Determine the relationship between the Chinese norm and the US norm: Entailment, Contradiction, or No Relation. Entailment means both norms express similar attitudes towards certain actions/belief given situations. Contradiction means two norms express opposite attitudes towards certain actions/belief given situations. Select No Relation if none of the Chinese norms are applicable to the situation. 3. Justify the relation between the norms that you chose. It is not necessary to attend or send a gift to your friend's wedding if you cannot make it 2. If you cannot attend or send a gift to your friend's wedding, your friend will understand and not hold it against you] Most Relevant Chinese Norm: If you cannot attend or send a gift to your friend's wedding, your friend will understand and not hold it against you Relationship", "figure_data": "Situation: not showing up to a friend's weddingUS Norm: You should attend your friend's wedding.Chinese Norms:[1.", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "), we observe thatFLAN-T5 struggles, in particular, to predict allrelation classes equally well, instead predicting", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "trust thinking woman young married time covering welfare mom's unhappy constant lives longs toe camel bathtub facebook friending forgetting social co-worker...i'm knowing close lazy extremely dogs depressed 8. Fear/ Uncertainty love i'm don't boyfriend friend i'm married feel scared don't jealous anymore kids afraid mother life family yelling man younger cancer roommate dad advice can't doesn't telling terrified falling fiancée 9. 
Unhealthy Relationship/ Mistreatment mom ballet ago girls bad years girlfriend's ex-girlfriend scaring sounding insensitive ratting classmate hug holding she's doctor bring stressed teen friendship effort puts man's annoyed ex-wife learn lessons lie interested", "figure_data": "Topic ThemeTop Tokens0. Self-care/ Emotional Turmoil dog boyfriend living sad fears died wife doesn't speak every-time someone's petting hands washing morning teeth brush up-set/irritated truth bed angrying stuff confess lie leads brother'sattracted suffers agitated i'm1. School Bullying/ Toxic Rela-school she's forced bullied pressuring friendship valentines ab-tionshipsolutely engagement year disgusted people expecting fact bitchyremarry day tears burst criticism threw mother ditching mad2. Lack of Intimacy/ Separa-crying boyfriend won't play stop can't reason don't consequencestionissues ill gravely starts panics lot unprofessional texting i've yearsthat's reconciliation daughter's thing pathetic intimacy lackingpublic vulnerable separated3. Family Discord/ Divorcehate mother sister hating mom parents dad people wrong brotherfamily father making wife cry starting divorce pissed what's resentgirl kinda aunt brothers asked parent dislike grandmother marrygenuinely4. Conflicts in Romantic Rela-girlfriend boyfriend friend wanting mad friends telling birthdaytionshipgirl upset breaking break friend's hating family jealous relationshipteacher boyfriend's hates angry boyfriends cheated gift datingdepression wedding hang christmas likes5 Struggles in Marriagebirthday friends wanting fat dream husband night college part dontashamed working cheat someone's frustrates affection reciprocateinability wife's complaining job open accepting remembered it'slies dropout couple boyfriend's pregnant6. Loss of Family Connection/woman recently meet-up cancelled disappointed passing mother'sChanges in Lifeblaming job dress original wear selling sick care taking formexist disappear fall dad wonders anymore wedding ex-bestfriendunfollow instagram despise7. Emotional Struggles/ Loneli-fear son manness", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Manually labeled situation topics and their top tokens, as captured from a 10-topic LDA model trained on cross-culturally aligned English Social Chemistry 101 situations. Loss of Trust/ Intimacy abortion depressed hold sense loyal pet hatred talk kind doesn't hang past you're that's disgusted access hurtful brushing fiance's hide thoughts pushy depression pictures change breaking interact hard spite intimate 1. Communication Issues/ Gifting communication refuse girlfriend decision reach true public christmas dress lie grateful relationship lives steal assume enrolled stressful happily jeopardy admit you'll accepting interest interaction hangout debt loaned escalate friendly presents 2. Support in Relationship it's expected people wrong family shouldn't friends good partner parents love hate rude bad significant understandable relationship normal feel don't expect things friend you're children feelings upset important talk friend's 3. Social Media/ Social Bonds families find you'll service wedding choose involved songs broken gifts media emotion cohesion foster behave society job online teaching ties sexuality interfere develop hurts schools professionalism poor health struggling kid's 4. 
Responsibility/ Respect put mother relationship long person's teacher understanding mental deserve lazy situation music partner aunt responsible wash face fix illegal material finances judge anger purposely concerns dog harass exchange college respond 5. Care-taking/ Nurturing in Relationships give relationship health frowned polite attention desires working inconsiderate single encouraged hang rest immoral random ignoring routine consistent lot provide assist won't strike arguments afraid information amends sister frequent concerns 6. Personal Autonomy/ Control Over Life relative fear calm death time members remember inappropriate methods elderly fights intimidate consent frustrate starting step betray relate reasons shy future beds stalk father business mind sense behavior control divorce 7. Diversity/ Inclusivity enjoy issues order college couples find aggressive place exclude petty fair gratitude trivial commitment grudges conversations you'll information attention based argue alive selfish lifestyle free live transgender nowadays socially dropout 8. Trust/ Commitment allowed spend common invite friend made plan supposed crush cheat interested activities hanging values chance contact tooth roommates toe lied betray deal extracurricular reasons member's finish students concerts effort honor 9. Hardships/ Refusal uncomfortable married accept frustrated interests bitter mom cheater hurtful who's therapy yous nasty taste stand lifetime bothering fears no-contact arguments phone purpose self-sufficient adults reaction hostile receiving health full shun", "figure_data": "Topic ThemeTop Tokens0.", "figure_id": "tab_7", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Manually labeled descriptive norm topics and their top tokens, as captured from a 10-topic LDA model trained on relevant rules-of-thumb from aligned cross-cultural situations.", "figure_data": "", "figure_id": "tab_8", "figure_label": "10", "figure_type": "table" } ]
Sky CH-Wang; Arkadiy Saakyan; Oliver Li; Zhou Yu; Smaranda Muresan
[ { "authors": "David Bamman; O' Brendan; Noah Connor; Smith", "journal": "First Monday", "ref_id": "b0", "title": "Censorship and deletion practices in chinese social media", "year": "2012" }, { "authors": "Max Bartolo; Tristan Thrush; Sebastian Riedel; Pontus Stenetorp; Robin Jia; Douwe Kiela", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Models in the loop: Aiding crowdworkers with generative annotation assistants", "year": "2022" }, { "authors": "Emily M Bender; Timnit Gebru; Angelina Mcmillan-Major; Shmargaret Shmitchell", "journal": "", "ref_id": "b2", "title": "On the dangers of stochastic parrots: Can language models be too big?", "year": "2021" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Elizabeth Buchanan", "journal": "PloS one", "ref_id": "b4", "title": "Considering the ethics of big data research: A case of twitter and isis/isil", "year": "2017" }, { "authors": "Oana-Maria Camburu; Tim Rocktäschel; Thomas Lukasiewicz; Phil Blunsom", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b5", "title": "e-snli: Natural language inference with natural language explanations", "year": "2018" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b6", "title": "", "year": "" }, { "authors": "Belinda Campos; Jodie B Ullman; Adrian Aguilera; Christine Dunkel Schetter", "journal": "Cultural Diversity and Ethnic Minority Psychology", "ref_id": "b7", "title": "Familism and psychological health: the intervening role of closeness and social support", "year": "2014" }, { "authors": "Tuhin Chakrabarty; Arkadiy Saakyan; Debanjan Ghosh; Smaranda Muresan", "journal": "", "ref_id": "b8", "title": "FLUTE: Figurative language understanding through textual explanations", "year": "2022" }, { "authors": "Xiao-Ping Chao C Chen; Shengsheng Chen; Huang", "journal": "Management and Organization Review", "ref_id": "b9", "title": "Chinese guanxi: An integrative review and new directions for future research", "year": "2013" }, { "authors": "Xiao-Ping Chen; Chao C Chen", "journal": "Asia Pacific Journal of Management", "ref_id": "b10", "title": "On the intricacies of the chinese guanxi: A process model of guanxi development", "year": "2004" }, { "authors": "Cheng Ch; ' Eng-K'un", "journal": "Soc. 
F", "ref_id": "b11", "title": "Familism the foundation of chinese social organization", "year": "1944" }, { "authors": "Michael W Roy Yj Chua; Paul Morris; Ingram", "journal": "Journal of international business studies", "ref_id": "b12", "title": "Guanxi vs networking: Distinctive configurations of affect-and cognition-based trust in the networks of chinese vs american managers", "year": "2009" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Sharan Chowdhery; Gaurav Narang; Adams Mishra; Vincent Yu; Yanping Zhao; Andrew Huang; Hongkun Dai; Slav Yu; Ed H Petrov; Jeff Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b13", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Raymond R Robert B Cialdini; Carl A Reno; Kallgren", "journal": "Journal of personality and social psychology", "ref_id": "b14", "title": "A focus theory of normative conduct: Recycling the concept of norms to reduce littering in public places", "year": "1990" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b15", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Jonathan Culpeper", "journal": "Cambridge University Press", "ref_id": "b16", "title": "Impoliteness: Using language to cause offence", "year": "2011" }, { "authors": "David Denemark; Andrew Chubb", "journal": "Information, Communication & Society", "ref_id": "b17", "title": "Citizen attitudes towards china's maritime territorial disputes: traditional media and internet usage as distinctive conduits of political views in china", "year": "2016" }, { "authors": "W Thomas; Danielle E Dunfee; Warren", "journal": "Journal of business ethics", "ref_id": "b18", "title": "Is guanxi ethical? 
a normative analysis of doing business in china", "year": "2001" }, { "authors": "Emile Durkheim", "journal": "Free Press", "ref_id": "b19", "title": "Suicide: A study in sociology", "year": "1951" }, { "authors": "Denis Emelin; Le Ronan; Jena D Bras; Maxwell Hwang; Yejin Forbes; Choi", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Moral stories: Situated reasoning about norms, intents, actions, and their consequences", "year": "2021" }, { "authors": "Martha Finnemore", "journal": "International organization", "ref_id": "b21", "title": "Norms, culture, and world politics: insights from sociology's institutionalism", "year": "1996" }, { "authors": "Martha Finnemore; Kathryn Sikkink", "journal": "International organization", "ref_id": "b22", "title": "International norm dynamics and political change", "year": "1998" }, { "authors": "L Joseph; Fleiss", "journal": "Psychological bulletin", "ref_id": "b23", "title": "Measuring nominal scale agreement among many raters", "year": "1971" }, { "authors": "Maxwell Forbes; Jena D Hwang; Vered Shwartz; Maarten Sap; Yejin Choi", "journal": "", "ref_id": "b24", "title": "Social chemistry 101: Learning to reason about social and moral norms", "year": "2020" }, { "authors": " David Sg Goodman", "journal": "John Wiley & Sons", "ref_id": "b25", "title": "Class in contemporary China", "year": "2014" }, { "authors": "Yuling Gu; Bhavana Dalvi; Peter Clark", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "DREAM: Improving situational QA by first elaborating the situation", "year": "2022" }, { "authors": "Yuling Gu; Yao Fu; Valentina Pyatkin; Ian Magnusson; Bhavana Dalvi Mishra; Peter Clark", "journal": "", "ref_id": "b27", "title": "Justdream-about-it: Figurative language understanding with dream-flute", "year": "2022" }, { "authors": "Jonathan Haidt; Silvia ; Helena Koller; Maria G Dias", "journal": "Journal of personality and social psychology", "ref_id": "b28", "title": "Affect, culture, and morality, or is it wrong to eat your dog", "year": "1993" }, { "authors": "Dirk Hovy; Diyi Yang", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "The importance of modeling social factors of language: Theory and practice", "year": "2021" }, { "authors": "Anning Hu", "journal": "China Review", "ref_id": "b30", "title": "Ancestor worship in contemporary china: An empirical investigation", "year": "2016" }, { "authors": "Liwei Jiang; Jena D Hwang; Chandra Bhagavatula; Le Ronan; Jenna Bras; Jesse Liang; Keisuke Dodge; Jon Sakaguchi; Saadia Borchardt; Yulia Gabriel; Tsvetkov", "journal": "", "ref_id": "b31", "title": "Can machines learn morality? 
the delphi experiment", "year": "2022" }, { "authors": "Gary King; Jennifer Pan; Margaret E Roberts", "journal": "American political science Review", "ref_id": "b32", "title": "How censorship in china allows government criticism but silences collective expression", "year": "2013" }, { "authors": "Sandra Kuntsche; Robin Room; Emmanuel Kuntsche", "journal": "Elsevier", "ref_id": "b33", "title": "I can keep up with the best: The role of social norms in alcohol consumption and their use in interventions", "year": "2021" }, { "authors": "Kwok Leung; Michael W Morris", "journal": "Journal of International Business Studies", "ref_id": "b34", "title": "Values, schemas, and norms in the culture-behavior nexus: A situated dynamics framework", "year": "2015" }, { "authors": "Fangyu Liu; Emanuele Bugliarello; Maria Edoardo; Siva Ponti; Nigel Reddy; Desmond Collier; Elliott", "journal": "", "ref_id": "b35", "title": "Visually grounded reasoning across languages and cultures", "year": "2021" }, { "authors": "Li Lucy; David Bamman", "journal": "", "ref_id": "b36", "title": "Gender and representation bias in gpt-3 generated stories", "year": "2021" }, { "authors": "Barbara Plank", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "The \"problem\" of human label variation: On ground truth in data, modeling and evaluation", "year": "2022" }, { "authors": "Joan Plepi; Béla Neuendorf; Lucie Flek; Charles Welch", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "Unifying data perspectivism and personalization: An application to social norms", "year": "2022" }, { "authors": "Amy Pu; Won Hyung; Ankur P Chung; Sebastian Parikh; Thibault Gehrmann; Sellam", "journal": "", "ref_id": "b39", "title": "Learning compact metrics for mt", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b40", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Maarten Hannah Rashkin; Emily Sap; Noah A Allaway; Yejin Smith; Choi", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Event2Mind: Commonsense inference on events, intents, and reactions", "year": "2018" }, { "authors": "John Rawls", "journal": "The philosophical review", "ref_id": "b42", "title": "Outline of a decision procedure for ethics", "year": "1951" }, { "authors": "John Rawls", "journal": "Routledge", "ref_id": "b43", "title": "A theory of justice", "year": "2004" }, { "authors": "Earl Rubington; Martin Weinberg", "journal": "Routledge", "ref_id": "b44", "title": "Deviance: The interactionist perspective", "year": "2015" }, { "authors": "Maarten Sap; Le Ronan; Emily Bras; Chandra Allaway; Nicholas Bhagavatula; Hannah Lourie; Brendan Rashkin; Noah A Roof; Yejin Smith; Choi", "journal": "", "ref_id": "b45", "title": "Atomic: An atlas of machine commonsense for ifthen reasoning", "year": "2019" }, { "authors": "Thibault Sellam; Dipanjan Das; Ankur P Parikh", "journal": "", "ref_id": "b46", "title": "Bleurt: Learning robust metrics for text generation", "year": "2020" }, { "authors": "Rajiv Sethi; Eswaran Somanathan", "journal": "The American Economic Review", "ref_id": "b47", "title": "The evolution of social norms in common property resource use", "year": "1996" }, { "authors": "Muzafer Sherif", "journal": "", "ref_id": "b48", "title": "The 
psychology of social norms", "year": "1936" }, { "authors": "Taylor Sorensen; Liwei Jiang; Jena Hwang; Sydney Levine; Valentina Pyatkin; Peter West; Nouha Dziri; Ximing Lu; Kavel Rao; Chandra Bhagavatula", "journal": "", "ref_id": "b49", "title": "Value kaleidoscope: Engaging ai with pluralistic human values, rights, and duties", "year": "2023" }, { "authors": "Jenny Thomas", "journal": "Applied linguistics", "ref_id": "b50", "title": "Cross-cultural pragmatic failure", "year": "1983" }, { "authors": "Arland Thornton; Thomas E Fricke", "journal": "Sociological forum", "ref_id": "b51", "title": "Social change and the family: Comparative perspectives from the west, china, and south asia", "year": "1987" }, { "authors": "Harry Charalambos; Triandis ", "journal": "", "ref_id": "b52", "title": "Culture and social behavior", "year": "1994" }, { "authors": "Maxim Voronov; Jefferson A Singer", "journal": "The Journal of social psychology", "ref_id": "b53", "title": "The myth of individualism-collectivism: A critical review", "year": "2002" }, { "authors": "Ruyuan Wan; Jaehyung Kim; Dongyeop Kang", "journal": "AAAI Press", "ref_id": "b54", "title": "Everyone's voice matters: Quantifying annotation disagreement using demographic information", "year": "2023" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Brian Ichter; Fei Xia; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b55", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Sarah Wiegreffe; Jack Hessel; Swabha Swayamdipta; Mark Riedl; Yejin Choi", "journal": "Association for Computational Linguistics", "ref_id": "b56", "title": "Reframing human-AI collaboration for generating free-text explanations", "year": "2022" }, { "authors": "Sarah Wiegreffe; Ana Marasović; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b57", "title": "Measuring association between labels and free-text rationales", "year": "2021" }, { "authors": "Beina Xu; Eleanor Albert", "journal": "Council on Foreign Relations", "ref_id": "b58", "title": "Media censorship in china", "year": "2014" }, { "authors": "Toshio Yamagishi; Hirofumi Hashimoto; Joanna Schug", "journal": "Psychological Science", "ref_id": "b59", "title": "Preferences versus strategies as explanations for culture-specific behavior", "year": "2008" }, { "authors": "Ching Kun; Yang ", "journal": "Univ of California Press", "ref_id": "b60", "title": "Religion in Chinese society: A study of contemporary social functions of religion and some of their historical factors", "year": "1970" }, { "authors": "Chung-Fang Yang", "journal": "", "ref_id": "b61", "title": "Familism and development: An examination of the role of family in contemporary china mainland, hong kong and taiwan", "year": "1988" }, { "authors": "Kuo-Shu Yang", "journal": "", "ref_id": "b62", "title": "Chinese social orientation: An integrative analysis", "year": "1993" }, { "authors": "Yang Xu", "journal": "", "ref_id": "b63", "title": "年中国内容营销市场发展洞 察", "year": "2021" }, { "authors": "Zheng Yang", "journal": "Frontiers in psychology", "ref_id": "b64", "title": "Similar attitudes, different strategies: A limited survey of the discourse strategies to oppose genetically modified organisms conspiracy theories by chinese scientist communicators and citizen communicators on zhihu", "year": "2022" }, { "authors": "Chenchen Zhang", "journal": "European journal of international relations", "ref_id": "b65", "title": "Right-wing populism 
with chinese characteristics? identity, otherness and global imaginaries in debating world politics online", "year": "2020" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "Zhihu", "ref_id": "b66", "title": "Bertscore: Evaluating text generation with bert", "year": "2019" }, { "authors": "Caleb Ziems; William Held; Omar Shaikh; Jiaao Chen; Zhehao Zhang; Diyi Yang", "journal": "", "ref_id": "b67", "title": "Can large language models transform computational social science?", "year": "2023" }, { "authors": "Caleb Ziems; Jane Yu; Yi-Chia Wang; Alon Halevy; Diyi Yang", "journal": "", "ref_id": "b68", "title": "The moral integrity corpus: A benchmark for ethical dialogue systems", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 81.78, 105.48, 196.44, 29.8 ], "formula_id": "formula_0", "formula_text": "Answer: 节 哀 。 可 以 礼 貌 性 的 拍 拍 肩 膀 或 者 嘱 咐 对 方 虽 然 最 近 会 比 较 操 劳 但 也 要 注 意 身 体。(Condolences." }, { "formula_coordinates": [ 5, 106.14, 115.07, 381.6, 29.25 ], "formula_id": "formula_1", "formula_text": "Text: 关于尊敬⻓辈? 因为传统。⻓辈始终是上辈。只要他不是有太过⽕⼤到让⼈⽆法原谅的错误。你就不能当⾯顶撞 他。只能在⼈少的时候。试着和他理论。⽗⺟不光是站在⻓辈那⾥。更多是为了你。顶撞⻓辈。出发点就是⼀个错。让嘴 巴⼤的⼈⻅了。你⽿朵会红⼀辈⼦。养成这个习惯了。以后有困难了。也会更困难。" } ]
10.5281/zenodo.5371628
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b6", "b5", "b1", "b2", "b20" ], "table_ref": [], "text": "Recently, Brown et al. (2020) have shown the impressive performance of using handcrafted prompts with a frozen language model in zero-shot and fewshot learning, leading to a surge of interest and increased activity in prompt engineering within the NLP community (Schick and Schütze, 2020;Gao et al., 2020;Li and Liang, 2021). Prompting (a.k.a prompt-based learning (Liu et al., 2021a)) aims to reformat an NLP problem to match as closely as possible the format used in the original training task of the pre-trained language model used. To apply prompt-based learning method effectively, a critical step involves the creation of a prompt template that maximizes performance on the downstream task.\nIn many previous works, it is common to manually pre-define a template while keeping the prompt position fixed (e.g. prepend the prompt to the input (Lester et al., 2021)). These studies often concentrate more on either prompt vocabulary searching * Corresponding author (Gao et al., 2020;Shin et al., 2020;Ben-David et al., 2021) or prompt embedding initialization (Liu et al., 2021c;Gu et al., 2022). However, there has been limited research exploring how different approaches to positioning the prompt sequences can affect models' behaviour, despite indications that varying prompt positions may lead to bias (Mao et al., 2022;Wu et al., 2022).\nHence, in this paper, we quantify how much prompt position matters by evaluating various accessible models on different NLP tasks under fewshot and zero-shot settings. We comprehensively test a range of prompt position options with many widely used prompt styles (e.g. cloze and prefix) and methods (e.g. gradient-based and gradientfree). Our findings reveal unexpected performance variations among different prompt positions in both zero-shot and few-shot settings. Interestingly, we observe that in many cases, the prompt positions used in previously published work show a suboptimal performance compared to other prompt position choices. We also find that the instructiontuned models exhibit a certain degree of robustness with respect to prompt positioning. We focus in this paper on zero and few-shot tasks that can be trained with medium-sized models using relatively low computational resources (i.e. a single GPU card for fine-tuning), but expect our results will also apply to models trained with the latest Large Language Models (LLMs). Our choice of zero and few-shot tasks was motivated by observations that prompting methods are particularly useful when training data is limited (Liu et al., 2021a), and this hypothesis is born out by our results which show prompt position matters most when labelled datasets are smaller.\nThe key contributions of this paper are:\n• To the best of our knowledge, we are the first comprehensive analysis to date looking at the impact of prompt position across different methods and prompt types in both few-shot arXiv:2305.14493v3 [cs.CL] 15 Nov 2023 and zero-shot settings for a variety of NLP tasks.\n• Empirical results show that the prompt positions used in many published works are suboptimal choices, and no universally superior prompt position across all tasks, suggesting prompt position optimisation as a novel avenue to the existing gap in the field of prompt engineering.\nIn section 2 we review related work and discuss the methods we use for our study in section 3. 
We then provide detailed results in section 4 and discussion in section 5 before summarising our findings in the conclusions section in 6. We also include a detailed set of appendices with full details of our results and our prompt patterns, model code and results are freely available on GitHub 1 ." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b1", "b5", "b9", "b2", "b11", "b22", "b23", "b12", "b20", "b5", "b21" ], "table_ref": [], "text": "Prompt-based learning. Many prior work has concentrated on gradient-based methods within discrete spaces (Schick and Schütze, 2020;Gao et al., 2020;Shin et al., 2020) as well as prompting directly in the embedding space. This latter approach uses tunable prompt tokens that are not limited to natural language, which can be either prepended to the input (Lester et al., 2021;Liu et al., 2021b;Gu et al., 2022) or be inserted in a hybrid template (Liu et al., 2021c). Sun et al. (2022) aims to optimise continuous tokens without using gradients, however, their technique is not compatible with APIs that restrict modifications to text rather than token embeddings (e.g. . There are more gradient-free works that focus on in-context learning (Brown et al., 2020;Lu et al., 2021), chain-of-thoughts (Wei et al., 2022b;Yao et al., 2023), and instruction generation (Prasad et al., 2022;Zhou et al., 2022) especially when instruction tuning (Sanh et al., 2022;Wei et al., 2022a;Chung et al., 2022) play a key role in the steering process of Large PLMs. Our paper includes experiments examining prompt positions from both gradient-based and gradient-free perspectives.\nPrompt position. There are few works that involve prompt positions. Mao et al. (2022) mention the biases of prompt-based models by placing a handcrafted prompt before or after an original input.\n1 URI.if.paper.accepted Wu et al. (2022) propose an instance-dependent prompt generation method; meanwhile, they study the effect of inserting a sequence of prompt tokens in different positions based on their proposed method and prompt tuning (Lester et al., 2021). Recently, Yang et al. (2023) have proposed a dynamic position method that can significantly improve the performance of prompt tuning. They both point out that different positions of prompts will deliver different results with the consideration of only one specific approach to creating the prompt. In this paper, we present the most comprehensive analysis of prompt positions to date and take into account various types of prompts under both zero-shot and few-shot settings." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Prompt Style", "publication_ref": [], "table_ref": [], "text": "Two common styles of prompts are explored in our experiments: Cloze style aims to let LMs fill in the blanks. For example, the input of sentiment classification \"I love this movie\" can be formulated as \"I love this movie. Overall, it was a [Mask] movie.\", and the model will be asked to predict the masked token. Prefix style aims to let LMs generate an answer given a prefix, which means the entire input comes before the final prediction. For example, the input \"I love this movie\" will be formulated into \"I love this movie. Is this review positive or negative?\", and the model will be asked to generate the answer." 
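To make the two styles concrete, here is a small sketch of how a sentiment-classification input can be wrapped into a cloze-style and a prefix-style template; the wording follows the examples above, while the helper names are illustrative rather than taken from the paper's released code.

def cloze_template(text: str, mask_token: str = "[MASK]") -> str:
    # Cloze style: the model fills the masked slot with a label word
    # (e.g. "great" / "terrible", mapped to positive / negative by a verbalizer).
    return f"{text} Overall, it was a {mask_token} movie."

def prefix_template(text: str) -> str:
    # Prefix style: the entire input precedes the answer to be generated.
    return f"{text} Is this review positive or negative?"

review = "I love this movie."
print(cloze_template(review))   # I love this movie. Overall, it was a [MASK] movie.
print(prefix_template(review))  # I love this movie. Is this review positive or negative?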
}, { "figure_ref": [], "heading": "Prompt Position", "publication_ref": [ "b20" ], "table_ref": [], "text": "Prompt position is the variable of interest in our study. We take into account the position where prompt tokens can be inserted and enumerate a broad range of permutations to test.\nConcretely, for Cloze style prompts, as shown in Figure 1, we consider the relative position of the [mask] token to the input. There are m types of input-[MASK] concatenations (m = 2 for singlesentence and m = 3 for sentence-pair tasks2 ), each with n potential locations that could insert prompt sequences (n = 3 and n = 4, respectively). In contrast to Wu et al. (2022) who inserts a single sequence of prompt tokens at different positions, we insert at least one and at most n prompt series per concatenation, yielding a total of m • (2 n -1) prompt positions. For the Prefix style prompts, depicted in Figure 2, we explore n insertion points without the [mask] token (m = 1), which results in 2 n -1 different prompt positions. " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "Our experiments span gradient-based and gradientfree approaches as discussed in Section 2, examining the influence of prompt positions across varying scenarios." }, { "figure_ref": [], "heading": "Gradient-based Prompting", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Setup", "publication_ref": [ "b1" ], "table_ref": [], "text": "For gradient-based approaches, both discrete 3 and continuous methods are investigated. To focus on studying the effect of prompt position themselves, we implement two vanilla approaches:\nPrompt-based fine-tuning: For discrete prompt, we fine-tune all the LM's parameters with the input restructured within a manual prompt template as per (Schick and Schütze, 2020;Gao et al., 2020).\nPrompt tuning: For continuous prompt, we instantiate standard prompt tuning (Lester et al., 3 We do not take the automated search approach in discrete space, as this may result in different vocabulary in templates and potentially obfuscate the impact prompt position has in our results. 2021), which only tunes the continuous prompts tokens prepended to the input layer with the language model frozen. Besides, we incorporate both cloze and prefix styles, leading to four types of prompts for empirical investigation: cloze manual prompt, cloze continuous prompt, prefix manual prompt and prefix continuous prompt." }, { "figure_ref": [], "heading": "Models:", "publication_ref": [ "b1", "b5", "b0", "b1", "b5", "b3", "b16", "b15", "b15", "b1", "b1" ], "table_ref": [ "tab_7" ], "text": "We choose language models which are popular in the NLU research literature. As per (Gao et al., 2020) we use Roberta-large (Liu et al., 2019) to predict the masked token based on the clozestyle prompt. To generate answers from prefixstyle prompts, we use T5-large language model adaption as per (Lester et al., 2021), where the T5 model is pre-trained for 10K steps with language modelling objectives without mixing downstream tasks. We use the OpenPrompt4 (Ding et al., 2021) framework to implement all our few-shot experiments.\nDatasets: We examine the above approaches on five commonly used natural language understanding datasets as per (Gao et al., 2020;Lester et al., 2021). 
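Returning to the prompt-position enumeration of Section 3.2, the counting argument (m input-[MASK] concatenations, each with 2^n - 1 non-empty choices of insertion slots) can be sketched as follows; the slot names are hypothetical labels used purely for illustration.

```python
from itertools import combinations

def enumerate_positions(slots):
    """Yield every non-empty subset of insertion slots (2**n - 1 options)."""
    for r in range(1, len(slots) + 1):
        for subset in combinations(slots, r):
            yield subset

# A single-sentence cloze concatenation such as "{text_a} [MASK]" has n = 3
# candidate slots: before the input, between input and [MASK], and after [MASK].
slots = ["before_text", "between_text_and_mask", "after_mask"]

positions = list(enumerate_positions(slots))
print(len(positions))  # 7 == 2**3 - 1 options for this concatenation
for p in positions:
    print(p)
```

Multiplying the options per concatenation by the number of concatenations reproduces the counts used in the experiments: 2 x (2^3 - 1) = 14 positions for single-sentence cloze prompts and 3 x (2^4 - 1) = 45 for sentence-pair tasks.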
The datasets span various tasks: sentiment analysis (CR (Hu and Liu, 2004) and SST-2 (Wang et al., 2018)), question classification (TREC (Voorhees and Tice, 2000)), question-answering (Boolq (Wang et al., 2019)), and natural language inference (RTE (Wang et al., 2019)), broadly classified into single-sentence (SST-2, CR, TREC) and sentence-pair categories (RTE, Boolq). Consistent with Gao et al. (2020), we use the official test sets for TREC and the original development sets for testing the datasets from GLUE and SuperGlue. For the CR dataset, we adhere to Gao et al. (2020) and use their provided random sample of 2,000 examples from the training data as the designated testing set for our evaluation. See Table 6 in Appendix for details.\nWe measure the effect of the prompt position in a few-shot setting by the model's k-shot performance. We construct D train and D dev with K samples per label from the original training data, with K ranging from 16 to 128. We calculate the average across five randomly sampled D train and D dev splits, using a fixed set of seeds denoted as S seed .\nPositions: As explained in Section 3.2, we experiment with distinct prompt positions tailored to different prompt-style: 14 positions for clozestyle prompts in single-sentence tasks and 45 in sentence-pair tasks; for prefix-style prompts, 3 positions in single-sentence tasks and 7 in sentence-pair tasks. It is worth noting, that there would be different situations for each prompt position, particularly with discrete prompts. For instance, the sequence of two prompt series might affect the outcome depending on their order of insertion. We prioritize templates that maintain grammatical coherence to underscore the impact of prompt position, for each position, we chose one template. Additionally, to mitigate the influence of vocabulary in the manual prompt, we employ a reference template from prior research and only the position where these words are inserted. With regards to continuous prompts, when inserting multi-prompt series, we simply separate continuous tokens used in the single prompt sequence equally to mitigate the effect of prompt length. All templates and verbalizers we used are described in Appendix C, along with their respective prompt positions options for different tasks in Appendix D." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_1", "tab_1" ], "text": "As demonstrated in the summary Table 1, for singlesentence tasks -SST-2, CR and TREC, the influence of prompt position is relatively small when using manual prompts, while significant perfor-mance variations arise when continuous prompts are employed. With the K size increases, the differences between all methods tend to diminish. In the CR dataset, the default prompt position, which prepends prompt tokens to the input, consistently yields the best results in the prefix continuous prompt (PM) method. Nevertheless, in other methods, the optimal prompt position may not always align with the reference position. For the SST-2 and TREC datasets, the optimal prompt position does not consistently match the reference position across all methods and K size.\nFor sentence-pair tasks, RTE and Boolq, a substantial performance variation is observed across all methods. As the K increases, the variance between different prompt positions persists except for the case of the prefix manual prompt in Boolq. 
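As a concrete illustration of the K-shot split construction described in the Datasets paragraph above (K samples per label, averaged over five fixed seeds), a minimal sketch might look like the following; the toy data and seed values are placeholders, not the actual splits used in the experiments.

```python
import random
from collections import defaultdict

def sample_k_shot(examples, k, seed):
    """Sample k examples per label from (text, label) pairs with a fixed seed."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for text, label in examples:
        by_label[label].append((text, label))
    split = []
    for label, items in by_label.items():
        split.extend(rng.sample(items, k))
    rng.shuffle(split)
    return split

# Toy 2-class data standing in for a real training set.
toy_data = [(f"example {i}", i % 2) for i in range(100)]

for s in [13, 21, 42, 87, 100]:  # hypothetical seed set; the paper fixes five seeds
    d_train = sample_k_shot(toy_data, k=16, seed=s)
    print(s, len(d_train))  # 32 examples: 16 per label for a 2-class task
```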
Similar to single-sentence tasks, the reference prompt position does not consistently produce optimal results across all methods. Notably, even when the K is set to 128, there are instances where a noticeable difference exists between the best-performing prompt position and the reference position. For instance, the prefix manual prompt in RTE shows a performance difference of 3.82 percentage points, while the cloze continuous prompt in Boolq exhibits a performance difference of 4.75 percentage points, as illustrated in Table 1. In general, sentence-pair tasks are more susceptible to the influence of prompt position compared to single-sentence tasks, whereas continuous prompts exhibit higher sensitivity to position compared to manual prompts." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b1" ], "table_ref": [ "tab_3", "tab_9", "tab_3", "tab_1" ], "text": "In the main paper, we provide a comprehensive list of optimal prompt positions for sentence-pair tasks in Table 2, while the details for single-sentence tasks can be found in Table 8 in Appendix.\nAs demonstrated in Table 2, we have observed that the optimal prompt position is not consistently shared across different datasets, even when employing the same prompt method. Also, we have noticed that the optimal position also varies depending on K size, indicating that the distribution of input samples holds an influence. Additionally, there is no clear superiority between inserting multiple prompt sequences and using a single prompt sequence. However, we find that in a cloze-style prompt, the [mask] token's relative position does indeed affect the model performance, which is consistent with the findings of Gao et al. (2020). For example, putting the [mask] token in the middle of the two inputs is often favoured in RTE. We conduct supplementary experiments with a null template which will be further elaborated in Appendix A.\nIt is worth noting that grammar doesn't always dictate the performance of manual prompts. This can be observed where grammatically incorrect prompts often achieve the best performance, and the performance difference between grammatically correct and incorrect prompts is not always negligible. For example, in the Boolq dataset with a clozestyle prompt and a K size of 32, \"{text_a} . Question: ? Answer: . [mask] {text_b}\" (shown in Table 2) outperforms the reference prompt \"{text_a} . Question: {text_b} ? Answer: [mask] .\" (Schick and Schütze, 2020) by 3.84, as indicated in Table 1. This suggests that prompts considered reasonable by humans may not necessarily be effective for language models, as also discussed in Liu et al. (2021c). It implies that factors beyond grammar contribute to performance outcomes." }, { "figure_ref": [], "heading": "Gradient-free Prompting", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Setup", "publication_ref": [], "table_ref": [], "text": "We explore classic gradient-free prompting methods within both zero-shot and few-shot paradigms. For few-shot settings, in-context learning is investigated via direct prompting (Brown et al., 2020) as well as chain-of-thought (CoT) prompting, where models provide a reasoning step prior to the final response (Wei et al., 2022b). For zero-shot settings, we only consider the direct prompting." 
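For the gradient-free few-shot setting just described, the prompt passed to the frozen model is simply an instruction, a handful of exemplars, and the test input concatenated into one string. The sketch below shows one plausible way to assemble such a prompt; the instruction and exemplars are hypothetical and are not the official BBH prompts used in the experiments.

```python
def build_few_shot_prompt(instruction, exemplars, test_input):
    """Concatenate an instruction, (input, target) exemplars, and the test input."""
    parts = [instruction]
    for question, answer in exemplars:
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(f"Q: {test_input}\nA:")
    return "\n\n".join(parts)

# Hypothetical 2-shot example in the style of the direct-prompting setup.
prompt = build_few_shot_prompt(
    instruction="Determine whether the sentence is plausible.",
    exemplars=[("The goalkeeper caught the ball.", "yes"),
               ("The chess player scored a touchdown.", "no")],
    test_input="The striker took a corner kick.",
)
print(prompt)
```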
}, { "figure_ref": [], "heading": "Models:", "publication_ref": [ "b13", "b17", "b12" ], "table_ref": [], "text": "We employ two relatively large models, T5-XXL (Raffel et al., 2023) and llama-13B (Touvron et al., 2023), which have sufficient capability to assess the impact of prompt position without any fine-tuning. We additionally experiment with their instruction-tuned variants, Flan-T5-XXL (Chung et al., 2022) and Flan-LLaMA (Wang et al., 2023). These variants have been pre-trained on a diverse set of data sources utilizing an array of instruction template types that incorporate a wide spectrum of vocabulary and positional variations.\nDatasets: We evaluate the sub-tasks of BIG-Bench Hard (BBH), a challenging benchmark from BIG-Bench (bench authors, 2023), for the fact that instruction-tuned models were not exposed to it during training. The tasks involve not only NLU (e.g. Disambiguation question-answer) but also reasoning (e.g. navigate) and the Use of World Knowledge (e.g. sports understanding). Following Suzgun et al. (2022), we employ the officially provided prompts, each accompanied by three fewshot examplers, for both Chain-of-Thought and Direct prompting. For the CoT setup, we extract the first word after the phrase 'So the answer is', or capture the full response if there is no such pattern present.\nPositions: This experiment is constrained to one prompt type, the prefix manual prompt, a choice informed by the nature of the models and methods we employed here. Given the single input structure of the BBH benchmark, we investigate three prompt positions for each sub-task: insertion at the front, the rear, or on both sides of the input. Be-sides, for in-context learning, we play the relative positioning of input and prompt within the exemplar delimiters. All templates and their variants of positions are detailed in Appendix C. " }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_4", "tab_5" ], "text": "Few-shot As illustrated in Table 3, LLaMAbased models demonstrate a higher sensitivity to prompt positions for tasks such as causal judgment and disambiguation question answering, while T5based models exhibit greater sensitivity in the context of sports understanding and navigation tasks. When employing direct prompting, we observe that the variance between prompt positions is relatively larger in the plain models compared to the instruction-tuned models. For instance, in the case of Disambiguation QA, the prompt position difference for LLaMa is 24.29 per cent, which reduces to 6.75 per cent when the instruction-tuned method is applied. Regarding Chain-of-Thought prompting, in most cases, the prompt position variance in instruction-tuned models is higher compared to their plain counterparts. This phenomenon can be attributed to the fact that some plain models may lack the capacity to reason the answer step by step, unless they undergo training through instruction tuning which includes chain-of-thought examplers. It's important to note that the best prompt position may not always align with the default position, whether in the context of direct prompting or COT prompting.\nZero-shot As shown in Table 4, with a few exceptions (such as plain models in Disambiguation QA and T5-XXL in Navigate), other models exhibit varying degrees of sensitivity to prompt positions. Notably, LLaMA-based models show a higher sensitivity to prompt position, especially in the navigate task. 
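The answer-extraction rule described for the CoT setup above (take the first word after "So the answer is", otherwise keep the full response) could be implemented roughly as follows; the regular expression is our reading of that description rather than the authors' released code.

```python
import re

# Matches the first word following the cue phrase, ignoring case.
ANSWER_PATTERN = re.compile(r"So the answer is\s+([^\s.,]+)", re.IGNORECASE)

def extract_cot_answer(generation: str) -> str:
    """Return the first word after 'So the answer is', or the full response."""
    match = ANSWER_PATTERN.search(generation)
    if match:
        return match.group(1)
    return generation.strip()

print(extract_cot_answer("The player is off-side. So the answer is no."))  # -> "no"
print(extract_cot_answer("Yes"))  # no cue phrase, so the whole response: "Yes"
```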
The instruction-tuning process helps mitigate position bias in some cases (e.g., the causal judgment task) but not for all tasks. While most of the best prompt positions tend to align with the default setting, there are still some instances where alternative positions outperform the default." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b12" ], "table_ref": [], "text": "We found there is no single prompt position that universally outperforms others across all tasks and models, whether in zero-shot or few-shot settings. Nevertheless, the default prompt positions are typically set to \"both\" (namely multiple prompt sequences are inserted both in the front and rear), but this doesn't always yield the best performance.\nWhen we experiment with different positions, the grammar tends to be better-preserved compared to the cases discussed in Section 4.1. However, prompts with the same semantic essence, such as the case in Disambiguation QA: \"In the following sentences, explain the antecedent of the pronoun (which thing the pronoun refers to), or state that it is ambiguous.{Sentence} {Options}\" and \"In the following sentences, explain the antecedent of the pronoun (which thing the pronoun refers to), or state that it is ambiguous.{Options} {Sentence},\" can still result in a significant 24.29 per cent performance difference for LLaMa due to the change in prompt position. We do see instruction-tuned process can improve the model's sensitivity to prompt position in the few-shot setting. This improvement is attributed to the inclusion of ten templates per dataset, as discussed in (Chung et al., 2022), which cover both vocabulary and position variety. Nevertheless, it's not always effective in other scenes." }, { "figure_ref": [], "heading": "General Discussion", "publication_ref": [], "table_ref": [], "text": "Our main research question is whether prompt positions matter. In Section 4.1, we observe that continuous prompts exhibit a higher degree of sensitivity to prompt positions. In Section 4.2, we find that plain models using direct prompting are more susceptible to prompt position variations. Both 4.1 and 4.2 point out that the optimal prompt position is not shared across tasks, and sometimes even differs among items of data. Furthermore, in Section 4.2, we uncover that the impact of instruction-tuning is mixed; it can partially mitigate prompt position bias, although it may not be suitable for all scenarios (e.g., chain-of-thought).\nIn general, it shows the choice of prompt position can have a varying impact on performance depending on different methods, tasks and models. This observation suggests that considering prompt position optimization as a valuable new direction for prompt engineering, with the potential to enhance model performance, rather than solely focusing on the prompt choice itself and following a default position." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this study, we evaluated various prompt positions using both gradient-based and gradient-free approaches. Our findings reveal significant variations in performance based on prompt position, influenced by different methods, tasks, and models. We observed that instruction-tuned models show less sensitivity to prompt position in certain contexts. 
Additionally, our research indicates that prompt positions commonly adopted in existing literature often result in sub-optimal performance, with no single prompt position universally excelling across all tasks. These findings suggest prompt position optimisation as a valuable new research direction for the area of prompt engineering research." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b1", "b5", "b12" ], "table_ref": [], "text": "Due to the extensive workload of experiments, we only test our hypothesis for 5 sub-tasks from (Gao et al. (2020); Lester et al. (2021)) in gradient-based approaches and 4 sub-tasks from Suzgun et al. (2022) in gradient-free approaches. For gradientbased methods, our models use medium-sized language models trained in a low-computational resource environment, so although we suspect the results will be the same in the latest Large Language Models this needs to be confirmed empirically in future experiments." }, { "figure_ref": [], "heading": "A Effect of the position [mask] token", "publication_ref": [], "table_ref": [ "tab_6", "tab_26" ], "text": "We have discovered that the position of the [mask] token has an impact on the cloze-style prompt, namely within Masked Language Models. To investigate this further, we conduct null template experiments with a K size of 16, where we simply concatenate the inputs and the [mask] token without a prompt. By analyzing the results of the null template as presented in Table 5, we observe that in single-sentence tasks, placing text before the [mask] token generally leads to better performance. For sentence-pair tasks, placing [mask] before the text is relatively sub-optimal. Notably, for the RTE task, positioning [mask] token in the middle of the two original inputs proves to be more advantageous. This observation also aligns with the overall performance demonstrated in the complete set of results, which can be found in Appendix D. Interestingly, even when the [mask] token is placed in the same relative position to the task inputs, the performance exhibits a noticeable difference depending on how to insert the prompt sequences. For instance, in the cloze continuous prompt experiment on RTE dataset with a K size of 128, when the [mask] token is placed in the middle of two task inputs, \"{text_a} [mask] {text_b} P\" achieves a performance that is 6.28 higher compared to \"{text_a} P [mask] {text_b}\" (detailed in Table 23 in Appendix D.4). One possible reason is that the [mask] position impact may be derived from the language patterns that the model learns during pretraining. During the pretraining phase within masked language modelling objectives, the model attempts to predict missing tokens or spans based on the surrounding context. When the position of the [mask] token changes, the model may need to consider different positional information in the context to make the prediction." }, { "figure_ref": [], "heading": "B Experimental Details", "publication_ref": [ "b11" ], "table_ref": [], "text": "For prompt-based fine-tuning, we employ an AdamW optimizer (Loshchilov and Hutter, 2017) with a learning rate of 2e-5 and a batch size of 8 for 1000 steps, validating the performance every 100 steps. For prompt tuning on the Roberta model, we follow the setting in Sun et al. (2022), using AdamW with a learning rate of 5e-4 and a batch size of 16 for 1000 epochs, with model performance validation every 100 steps. 
For prompt tuning on the T5 model, we adopt Adafactor (Shazeer and Stern, 2018) with a learning rate of 0.3 and a batch size of 16 for 1000 steps, evaluating the performance every 8 steps. The prompt length for all experiments is set to 50, initialized from the first 50 tokens' embeddings of the pre-trained language model. To mitigate overfitting, we employ the strategy of early stopping across all experiments. Besides, all our all models are trained on a single RTX8000 with 48GB of memory." }, { "figure_ref": [], "heading": "C Dataset, reference prompt and positions", "publication_ref": [ "b1", "b5", "b2", "b12" ], "table_ref": [ "tab_7", "tab_8", "tab_9", "tab_10" ], "text": "For gradient-based approaches, the dataset statistics used are shown in Table 6, the reference prompt positions are shown in Table 7, and the bestperforming prompt positions for single-tasks are shown in Table 8. The CR templates employed here are consistent with that of SST-2 for all methods, following the setting outlined in Gao et al. (2020).\nRegarding the prefix continuous prompt (PC) applied to the TREC dataset, we follow the prompt position setting provided by Lester et al. (2021), which is commonly used as the default prompt position for most continuous prompt methods. Besides, Gu et al. (2022) use \"P {text_b} [mask] {text_b}\" as the prompt position in cloze continuous prompt method. However, as discussed in the main paper, to narrow our focus on the prompt position and ensure consistency with the expected task input sequence order in other methods, we modify their input orders in this specific case. For gradient-free approaches, the reference prompt positions used in Suzgun et al. (2022) and their variants are shown in Table 9. " }, { "figure_ref": [], "heading": "One prompt sequence", "publication_ref": [], "table_ref": [], "text": "Question: ? the Answer: . {text_a} {text_b} [mask] 60.43(9.12) 65.99(3.36) 74.01(2.17) 78.19(1.85) {text_a} Question: ? the Answer: . {text_b} [mask] 60.87(7.46) 64.33(3.92) 73.00(2.39) 78.84(3.53) {text_a} {text_b} Question: ? the Answer: . [mask] 55. 74(4.38) 59.86(5.55) 64.84(3.06) 76.39(1.95) {text_a} {text_b} [mask] Question: ? the Answer: . 57. 04(5.36) 58.84(3.81) 68.16(1.50) 75.60(2.97) Question: ? the Answer: . {text_a} [mask] [mask] Question: ? the Answer: . {text_b} 63.75(3.33) 66.93(4.05) 71.99(1.74) 77.62(1.89) {text_a} [mask] {text_b} Question: ? the Answer: . 67.36(5.40) 69.75(5.91) 73.36(1.76) 78.34(2.12) Question: ? the Answer: . [mask] {text_a} {text_b} 56.90(0.91) 54. 95(3.14) 57.26(3.56) 64.12(5.24) [mask] Question: ? the Answer: . {text_a} {text_b} 53.72(3.25) 56.61(5.85) 62.89(5.08) 68.81(4.12) [mask] {text_a} Question: ? the Answer: . {text_b} 55. 74(5.11) " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by the Economic and Social Research Council References Eyal Ben-David, Nadav Oved, and Roi Reichart. 2021. PADA: A prompt-based autoregressive approach for adaptation to unseen domains. CoRR, abs/2102.12206. BIG bench authors. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. 
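The soft-prompt initialisation described in Appendix B (50 prompt tokens initialised from the embeddings of the first 50 tokens of the pre-trained model) can be sketched as below; the model name and the surrounding training loop are placeholders, and the snippet is not the exact implementation used in the experiments.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "roberta-large"  # one of the backbones used for cloze-style prompts
model = AutoModelForMaskedLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt_length = 50
with torch.no_grad():
    # Copy the embeddings of the first 50 vocabulary tokens as the initial soft prompt.
    word_embeddings = model.get_input_embeddings().weight  # (vocab_size, hidden_dim)
    init = word_embeddings[:prompt_length].clone()

soft_prompt = torch.nn.Parameter(init)  # the only tensor updated during prompt tuning

# During training, the backbone stays frozen; the soft prompt is concatenated with the
# input embeddings at whichever insertion position(s) the template specifies.
```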
Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. CoRR, abs/2005.14165. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models." } ]
Prompt-based models have attracted considerable attention from researchers owing to their remarkable advances in zero-shot and few-shot learning, and developing an effective prompt template plays a critical role in their success. However, prior studies have mainly focused on prompt vocabulary selection or embedding initialization within a predefined template, keeping the prompt position fixed. In this empirical study, we conduct the most comprehensive analysis to date of prompt position for diverse natural language processing tasks. Our findings quantify the substantial impact prompt position has on model performance, and we observe that the prompt positions used in prior studies are often sub-optimal. These findings suggest prompt position optimisation as a valuable research direction for filling this gap in existing prompt engineering methodologies.
Do prompt positions really matter?
[ { "figure_caption": "Figure 1: Insertion positions for cloze-style prompts", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "This table describes the average accuracy of different prompt positions under both zero-shot and few-shot settings across 5 different random K samples per label. The minimum and maximum mean accuracy (Min-Max) among different prompt positions is reported so the impact of prompt position can be seen clearly. The accuracy of the reference prompt position is reported in Ref. column as well (position used in previous works, listed in the Appendix C). We use the following abbreviations. CM: Cloze manual prompt; CC: Cloze continuous prompt; PM: Prefix manual prompt; PC: Prefix continuous prompt. Bold results indicate that the best prompt position surpasses the reference one. We show a full table of results with stdev's for each task in Appendix D.", "figure_data": "62.09", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Best-performing prompt position for sentence-pair tasks with the RTE and Boolq datasets. P denotes the soft prompt tokens.", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "XXL 60.43 54.55 60.96 54.01 60.43 58.82 0.53 4.81 LLaMA 52.94 49.73 48.13 50.27 48.13 48.13 4.81 2.14 Flan-LLaMA 59.89 51.87 59.36 55.08 58.29 50.80 1.60", "figure_data": "FrontBothRearVarDatasetModelDirect COT Direct COT Direct COT Direct COTCJT5-XXL51.34 48.66 51.87 47.06 51.87 48.13 0.531.60Flan-T5-4.28DA-QA T5-XXL32.55 031.37 031.76 01.180.00Flan-T5-XXL 68.63 64.40 66.40 63.20 65.10 62.80 3.531.60LLaMA34.51 058.80 055.69 024.29 0.00Flan-LLaMA 57.65 064.40 061.18 06.750.00SUT5-XXL50.00 55.20 50.80 55.20 59.20 59.20 9.204.00Flan-T5-XXL 69.20 61.20 68.40 59.60 66.80 64.00 2.404.40LLaMA64.00 76.80 66.00 78.40 64.80 78.00 2.001.60Flan-LLaMA 66.80 77.20 66.80 74.40 63.20 76.00 3.602.80Navigate T5-XXL57.20 16.80 40.40 25.60 42.00 26.40 16.80 9.60Flan-T5-XXL 60.40 57.20 58.80 61.20 61.60 57.20 2.804.00LLaMA58.00 44.40 58.00 45.20 58.00 44.00 0.001.20Flan-LLaMA 57.60 39.60 57.60 40.40 56.80 40.80 0.801.20", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "This table describes the accuracy of different prompt positions (Front, Both and Rear) for BBH sub-tasks under zero-shot settings. Similarly, Bold results indicate the best position and cells with a grey background indicate the default prompt position used inSuzgun et al. 
(2022).", "figure_data": "DatasetModelFront Both Rear VarCJT5-XXL53.48 54.55 49.20 5.35Flan-T5-XXL 60.96 61.50 60.96 0.53LLaMA51.87 48.13 53.48 5.35Flan-LLaMA 53.48 55.08 51.87 3.21DA-QAT5-XXL31.20 31.20 31.20 0.00Flan-T5-XXL 66.00 67.60 64.80 2.80LLaMA31.20 31.20 31.20 0.00Flan-LLaMA 38.00 32.40 31.60 6.40SUT5-XXL56.00 55.20 53.20 2.80Flan-T5-XXL 61.20 57.20 58.80 4.00LLaMA48.80 46.00 50.40 4.40Flan-LLaMA 58.40 55.20 49.60 8.80Navigate T5-XXL58.00 58.00 58.00 0.00Flan-T5-XXL 60.00 59.20 58.40 1.60LLaMA55.60 42.00 58.00 16.0Flan-LLaMA 54.80 42.00 42.00 12.8", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Null template results for all datasets on K size of 16 for cloze manual prompt.", "figure_data": "Dataset Null templateMean (std)SST-2{text_a} [mask]89.40 (1.44)[mask] {text_a}82.80 (5.56)CR{text_a} [mask]90.66 (1.19)[mask] {text_a}89.01 (1.76)TREC{text_a} [mask]85.99 (1.82)[mask] {text_a}83.58 (1.58)RTE{text_a} {text_b} [mask] 55.02 (7.16){text_a} [mask] {text_b} 65.49 (3.58)[mask] {text_a} {text_b} 53.86 (5.58)Boolq{text_a} {text_b} [mask] 64.70 (3.09){text_a} [mask] {text_b} 64.05 (4.13)[mask] {text_a} {text_b} 59.05 (1.57)", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "The datasets evaluated in our work. |Y | represents the number of classes within each task. We only sample D train and D dev of K × |Y | examples from the original training set in the few-shot experiments.", "figure_data": "Corpus|Y | Train Validation TaskEvaluation MetricsSingle Sentence TasksCR21775 2000sentimentaccuracySST-22 67349 1821sentimentaccuracyTREC65452 500question cls.accuracySentence Pair TasksRTE22491 278NLIaccuracyboolq29427 3270QAaccuracyMethod TaskTemplateVerbalizerReferenceCMSST2 {text_a} It was [mask].positive: great, negative: terrible(Gao et al., 2020)CR{text_a} It was [mask].positive: great, negative: terrible(Gao et al., 2020)TREC {text_a} This question is related toabbr.: Expression, entity: Entity, de-(Köksal et al., 2023)[mask] category.scription: Description, human: Human,location: Location, numeric: NumberRTE{text_a} Question: {text_b}? the An-entailment: yes, not_entailment: no(Liu et al., 2021c)swer: [mask].Boolq {text_a}. Question: {text_b}? Answer:entailment: yes, not_entailment: no(Schick and Schütze, 2020)[mask].CCSST2 prompt {text_a} [mask]positive: great, negative: terrible(Gu et al., 2022)CRP {text_a} [mask]positive: great, negative: terribleTREC P [mask] {text_a}abbr.: Expression, entity: Entity, de-(Liu et al., 2022)scription: Description, human: Human,location: Location, numeric: NumberRTEP {text_a} [mask] {text_b}entailment: yes, not_entailment: no(Gu et al., 2022)Boolq P {text_a} [mask] {text_b}true: yes, false: no(Gu et al., 2022)PMSST2 {text_a} Question: Is this sentence pos-positive: positive, negative: negative(Gao et al., 2021)itive or negative? Answer:CR{text_a} Question: Is this sentence pos-positive: positive, negative: negativeitive or negative? Answer:TREC Categories: { ′ , ′ .join(label_words)}.abbr.: Abbreviation, entity: Entity, de-(Sanh et al., 2022)What category best describes: {text_a}scription: Description, human: Person,Answer:location: Location, numeric: QuantityRTE{text_a} Question: {text_b} True orentailment: true, not_entailment: false (Brown et al., 2020)False? Answer:Boolq {text_a} Question: {text_b}? 
Answer: true: yes, false: no(Brown et al., 2020)PCSST2 P {text_a}positive: positive, negative: negative(Lester et al., 2021)CRP {text_a}positive: positive, negative: negativeTREC P {text_a}abbr.: Abbreviation, entity: Entity, de-scription: Description, human: Person,location: Location, numeric: QuantityRTEP {text_a} {text_b}entailment: entailment, not_entailment:(Lester et al., 2021)contradictionBoolq P {text_a} {text_b}true: true, false: false(Lester et al., 2021)", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "All reference prompt positions used in the main text of the paper. P denotes the continuous prompt tokens.", "figure_data": "K-Shot Method SST-2CRTRECK=0PMQ: Is this sentence positive or negative?\\n {text_a}\\n A:K=16CM{text_a} It was [mask].[mask] It was {text_a}.{text_a} [mask] This question is related to categoryCCP [mask] P {text_a}P {text_a} P [mask]P {text_a} [mask]PM{text_a} Q: Is this sentence positive orQ: Is this sentence positive or negative?A: C: { ′ , ′ .join(label_words)}. What category bestnegative? A:{text_a} A:describes: {text_a}PC{text_a} PP {text_a}P {text_a} PK=32CMIt was [mask]. {text_a}{text_a}. It was [mask]This question is related to category. {text_a} [mask]CCP {text_a} [mask][mask] P {text_a}P [mask] {text_a} PPM{text_a} Q: Is this sentence positive ornegative? A:PC{text_a} PP {text_a}P {text_a} PK=64CM[mask] It was {text_a}.It was. {text_a} [mask]{text_a} [mask] This question is related to categoryCCP [mask] P {text_a}{text_a} [mask] PP [mask] {text_a}PMQ: Is this sentence positive or negative?{text_a}Q: Is this sentence positive orC: { ′ , ′ .join(label_words)}. What category best de-A: {text_a}negative? A:scribes: {text_a} A:PC{text_a} PP {text_a}{text_a} PK=128 CMIt was [mask]. {text_a}{text_a} It was [mask].{text_a} [mask] This question is related to categoryCCP [mask] P {text_a}P {text_a} P [mask]P {text_a} [mask] PPMQ: Is this sentence positive or negative?Q: Is this sentence positive or negative?A: C: { ′ , ′ .join(label_words)}. What category bestA: {text_a}A: {text_a}describes: {text_a}PCP {text_a}P {text_a}P {text_a} P", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Zero-shotFew-shotInfrontHow would a typical person answer each of the{Task_description} Q: How would a typical personCJfollowing questions about causation? {Options}answer each of the following questions about causa-{Test_input}tion? {Options} {Train_input} A:{Target} Q: Howwould a typical person answer each of the followingquestions about causation? {Options} {Test_input}A:Both*How would a typical person answer each of the{Task_description} Q: How would a typical personfollowing questions about causation? {Test_input}answer each of the following questions about causa-{Options}tion? {Train_input} {Options} A: {Target} Q: Howwould a typical person answer each of the followingquestions about causation? {Test_input} {Options}A:Rear{Test_input} How would a typical person answer{Task_description} Q: {Train_input} How would aeach of the following questions about causation?typical person answer each of the following ques-{Options}tions about causation? 
{Options} A: {Target} Q:{Test_input} How would a typical person answereach of the following questions about causation?{Options} A:InfrontIn the following sentences, explain the antecedent of{Task_description} Q: In the following sentences,DA_QAthe pronoun (which thing the pronoun refers to), orexplain the antecedent of the pronoun (which thingstate that it is ambiguous. {Options} {Test_input}the pronoun refers to), or state that it is ambiguous.{Options} {Train_input} A: { Target} Q: In thefollowing sentences, explain the antecedent of thepronoun (which thing the pronoun refers to), or statethat it is ambiguous. {Options} {Test_input} A:Both*In the following sentences, explain the antecedent of{Task_description} Q: In the following sentences,the pronoun (which thing the pronoun refers to), orexplain the antecedent of the pronoun (which thingstate that it is ambiguous. {Test_input} {Options}the pronoun refers to), or state that it is ambigu-ous. {Train_input} {Options} A: {Target} Q: Inthe following sentences, explain the antecedent ofthe pronoun (which thing the pronoun refers to), orstate that it is ambiguous. {Test_input} {Options}A:Rear{Test_input} In the following sentences, explain the{Task_description} Q: {Train_input} In the follow-antecedent of the pronoun (which thing the pronouning sentences, explain the antecedent of the pronounrefers to), or state that it is ambiguous. {Options}(which thing the pronoun refers to), or state that it isambiguous. {Options} A: {Target} Q: {Test_input}In the following sentences, explain the antecedentof the pronoun (which thing the pronoun refers to),or state that it is ambiguous. {Options} A:InfrontIf you follow these instructions, do you return to{Task_description} Q: If you follow these instruc-Navigatethe starting point? {Options} {Test_input}tions, do you return to the starting point? {Options}{Train_input} A: {Target} Q: If you follow theseinstructions, do you return to the starting point?{Options} {Test_input} A:Both*If you follow these instructions, do you return to{Task_description} Q: If you follow these in-the starting point? {Test_input} {Options}structions, do you return to the starting point?{Train_input} {Options} A: {Target} Q: If you fol-low these instructions, do you return to the startingpoint? {Test_input} {Options} A:Rear{Test_input} If you follow these instructions, do{Task_description} Q: {Train_input} If you fol-you return to the starting point? {Options}low these instructions, do you return to the startingpoint? {Options} A: {Target} Q: {Test_input} Ifyou follow these instructions, do you return to thestarting point? {Options} A:Infront*Is the following sentence plausible? {Test_input}{Task_description} Q: Is the following sentenceSUplausible? {Train_input} A: {Target} Q: Is the fol-lowing sentence plausible? {Test_input} A:BothIs the following sentence {Test_input} plausible?{Task_description} Q: Is the following sentence{Train_input} plausible? A: {Target} Q: Is the fol-lowing sentence {Test_input} plausible? A:Rear{Test_input} Is the following sentence plausible?{Task_description} Q: {Train_input} Is the follow-ing sentence plausible? A: {Target} Q: {Test_input}Is the following sentence plausible? A:", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "All reference prompt positions and their variants. * indicates the default setting used inSuzgun et al. (2022).", "figure_data": "D All resultsD.1 SST-2K=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)One prompt sequenceIt was. 
{text_a} [mask]90.02(1.44) 91.15(1.95) 90.92(1.90) 91.81(0.99){text_a} [mask] It was.88.62(1.69) 90.71(1.60) 91.31(1.69) 91.56(0.52){text_a}. It was [mask]90.21(1.58) 89.63(1.84) 91.74(1.37) 92.87(0.93)It was. [mask] {text_a}85.67(5.36) 89.15(1.79) 89.95(2.16) 91.28(1.04)[mask] {text_a} It was.85.92(3.55) 87.04(1.35) 90.30(1.34) 91.35(1.36)[mask]. It was {text_a}89.84(1.42) 89.61(3.55) 91.54(1.29) 91.86(1.13)variance4.544.111.791.59Two prompt sequencesIt was {text_a}.87.78(2.91) 89.70(0.68) 91.19(1.28) 92.64(0.82)It was {text_a} [mask].89.86(1.28) 89.72(1.85) 91.38(1.36) 91.95(0.91){text_a} It was [mask].90.80(1.74) 90.71(2.01) 91.49(0.73) 92.96(0.72)It was [mask]. {text_a}90.09(1.26) 91.42(0.91) 91.54(1.81) 92.96(0.33)It was [mask] {text_a}.87.75(1.39) 88.60(0.76) 91.86(0.48) 92.20(1.14)[mask] It was {text_a}.89.20(1.46) 90.53(1.58) 92.39(0.64) 92.27(1.03)variance3.052.821.21.01Three prompt sequencesIt {text_a} was [mask].88.97(2.97) 90.30(1.04) 91.93(1.01) 91.74(0.98)It [mask] was {text_a}.88.14(2.66) 88.67(2.05) 91.19(0.53) 92.73(0.54)variance0.831.630.740.99variance all5.134.382.441.68", "figure_id": "tab_10", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Cloze manual prompt on SST-2. The italic row indicates the reference prompt position.", "figure_data": "K=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)One prompt sequenceP {text_a} [mask]86.12(3.28) 89.72(0.59) 90.18(0.80) 90.32(0.85){text_a} [mask] P82.64(4.62) 89.11(2.68) 90.50(0.68) 91.10(0.96){text_a} P [mask]72.09(6.69) 76.93(9.09) 83.23(3.27) 89.17(0.91)P [mask] {text_a}80.99(1.39) 82.00(4.65) 85.00(1.72) 86.90(2.75)[mask] {text_a} P72.02(5.58) 71.83(5.71) 80.50(2.63) 84.98(1.31)[mask] P {text_a}75.07(7.10) 85.99(2.85) 89.91(1.68) 90.46(2.14)variance14.117.89106.12Two prompt sequencesP {text_a} P [mask]79.93(4.31) 83.62(6.26) 89.01(1.64) 92.09(0.49)P {text_a} [mask] P75.32(5.68) 82.06(9.97) 89.86(2.07) 91.38(1.50){text_a} P [mask] P69.75(8.36) 77.48(7.96) 84.91(3.15) 91.01(1.02)P [mask] P {text_a}86.31(2.59) 89.27(1.70) 91.42(0.88) 92.57(0.38)P [mask] {text_a} P74.79(5.98) 79.36(2.03) 84.93(2.56) 88.44(1.12)[mask] P {text_a} P63.17(7.58) 74.66(6.49) 86.24(3.59) 90.67(2.38)variance23.1414.616.514.13Three prompt sequencesP {text_a} P [mask] P70.55(9.44) 76.12(8.93) 83.67(4.82) 89.22(3.18)P [mask] P {text_a} P85.11(0.73) 85.73(6.58) 90.05(2.05) 91.95(0.82)variance14.569.616.382.73variance all23.1417.8910.927.59", "figure_id": "tab_11", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Cloze continuous prompt on SST-2. The italic row indicates the reference prompt position.", "figure_data": "K=16K=32K=64K=128mean(std)mean(std) mean(std) mean(std)One prompt sequenceQuestion: Is this sentence positive or85.73(6.94)91.93(0.45) 93.05(0.60) 93.39(0.39)negative? Answer: {text_a}{text_a} Question: Is this sentence posi-89.45(4.16)91.97(0.78) 92.41(0.94) 93.17(0.45)tive or negative? Answer:variance3.720.040.640.22Two prompt sequencesQuestion: Is this sentence positive or85.94(10.00) 91.06(0.96) 92.41(0.94) 92.84(0.98)negative? {text_a} Answer:variance all3.720.910.640.55", "figure_id": "tab_12", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Prefix manual prompt on SST-2. 
The italic row indicates the reference prompt position.", "figure_data": "K=16K=32K=64K=128mean(std)mean(std)mean(std) mean(std)One prompt sequenceP {text_a}57.20(6.23)67.11(15.21) 83.76(4.90) 88.53(3.67){text_a} P70.64(16.85) 75.48(14.73) 89.11(2.46) 86.81(8.46)variance13.448.375.351.72Two prompt sequencesP {text_a} P56.90(3.14)62.71(7.22)75.05(9.16) 80.80(3.13)variance all13.7412.7714.067.73", "figure_id": "tab_13", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Prefix continuous prompt on SST-2. The italic row indicates the reference prompt position.", "figure_data": "D.2 CRK=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)One prompt sequenceIt was. {text_a} [mask]89.86(1.63) 89.54(0.74) 92.25(0.48) 92.10(0.91){text_a} [mask] It was.90.64(2.22) 90.76(0.35) 91.54(0.62) 92.19(0.83){text_a}. It was [mask]90.93(1.74) 91.98(0.69) 91.90(0.86) 92.06(0.88)It was. [mask] {text_a}90.24(1.65) 91.25(1.65) 91.91(0.84) 91.36(0.82)[mask] {text_a} It was.89.29(4.11) 89.96(2.39) 91.13(1.43) 91.78(0.42)[mask]. It was {text_a}91.07(1.97) 91.41(0.96) 91.55(1.11) 92.05(0.39)variance1.782.441.120.83Two prompt sequencesIt was {text_a}. [mask]89.29(1.76) 90.59(1.07) 91.73(0.43) 91.85(1.08)It was {text_a} [mask].90.11(1.51) 91.33(0.80) 91.95(0.18) 91.76(0.74){text_a} It was [mask].91.17(0.81) 91.13(0.79) 92.12(0.74) 92.55(0.75)It was [mask]. {text_a}91.17(0.61) 90.93(1.78) 92.03(0.44) 91.93(0.74)It was [mask] {text_a}.88.06(4.32) 90.20(1.00) 90.73(0.86) 91.55(0.40)[mask] It was {text_a}.91.22(0.53) 90.48(1.16) 91.32(0.47) 92.29(0.85)variance3.111.131.391Three prompt sequencesIt {text_a} was [mask].87.86(4.30) 90.31(0.74) 92.08(0.98) 91.94(0.49)It [mask] was {text_a}.88.86(1.23) 90.85(1.25) 91.54(0.72) 91.66(0.74)variance10.540.540.28variance all:3.362.441.521.19", "figure_id": "tab_14", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "Cloze manual prompt on CR. The italic row indicates the reference prompt position.", "figure_data": "K=16K=32K=64K=128mean(std)mean(std) mean(std) mean(std)One prompt sequenceP {text_a} [mask]83.81(3.18)88.97(0.99) 89.82(0.60) 90.74(0.94){text_a} [mask] P83.66(4.42)88.92(0.93) 91.10(0.93) 91.72(0.79){text_a} P [mask]73.95(10.58) 83.60(3.08) 85.76(2.82) 88.37(1.10)P [mask] {text_a}80.18(2.87)82.96(0.96) 84.24(2.03) 87.96(0.92)[mask] {text_a} P71.81(4.54)74.41(7.62) 86.29(2.33) 89.79(1.33)[mask] P {text_a}82.37(5.49)89.22(0.95) 90.20(1.41) 91.62(0.40)variance1214.816.863.76Two prompt sequencesP {text_a} P [mask]84.59(5.68)89.16(2.08) 91.02(0.91) 92.08(0.51)P {text_a} [mask] P74.00(4.60)81.96(5.45) 85.82(3.29) 89.61(1.17){text_a} P [mask] P71.84(6.77)82.54(3.23) 86.39(2.09) 89.30(0.74)P [mask] P {text_a}84.45(0.98)86.77(1.06) 89.53(1.04) 90.62(0.84)P [mask] {text_a} P73.89(8.87)79.94(4.03) 83.34(1.15) 87.29(0.63)[mask] P {text_a} P65.85(12.25) 77.06(5.30) 86.42(3.46) 88.66(2.96)variance18.7412.17.684.79Three prompt sequencesP {text_a} P [mask] P71.64(7.78)83.36(2.65) 87.37(2.89) 91.31(0.82)P [mask] P {text_a} P74.16(4.65)81.75(2.94) 86.38(2.01) 89.65(0.71)variance2.521.610.991.66variance all18.7414.817.764.79", "figure_id": "tab_15", "figure_label": "14", "figure_type": "table" }, { "figure_caption": "Cloze continuous prompt on CR. The italic row indicates the reference prompt position.", "figure_data": "K=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)One prompt sequenceQuestion: Is this sentence positive or89.98(0.38) 91.13(1.87) 93.06(1.03) 93.91(0.38)negative? 
Answer: {text_a}{text_a} Question: Is this sentence posi-92.52(0.92) 92.87(0.45) 93.70(0.34) 93.70(0.63)tive or negative? Answer:variance2.541.740.640.21Two prompt sequencesQuestion: Is this sentence positive or92.71(0.53) 92.97(0.58) 93.52(0.67) 93.49(0.63)negative? {text_a} Answer:variance all2.731.840.640.42", "figure_id": "tab_16", "figure_label": "15", "figure_type": "table" }, { "figure_caption": "Prefix manual prompt on CR. The italic row indicates the reference prompt position.", "figure_data": "K=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)One prompt sequenceP {text_a}86.77(1.97) 89.36(1.29) 92.02(0.81) 91.63(1.08){text_a} P77.57(4.27) 86.50(2.71) 90.60(0.96) 91.47(0.85)variance9.22.861.420.16Two prompt sequencesP {text_a} P60.37(6.49) 72.43(4.72) 78.68(5.92) 85.87(4.93)variance all26.416.9313.345.76", "figure_id": "tab_17", "figure_label": "16", "figure_type": "table" }, { "figure_caption": "Prefix continuous prompt on CR. The italic row indicates the reference prompt position.", "figure_data": "D.3 TRECK=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)One prompt sequenceThis question is related to category. {text_a} [mask] 83.64(2.67) 88.80(1.80) 91.68(0.85) 93.98(0.52){text_a} [mask] This question is related to category. 85.68(2.46) 88.19(1.68) 92.29(0.33) 94.55(0.32){text_a} This question is related to category. [mask] 84.57(4.38) 88.66(1.85) 91.69(0.29) 94.07(0.41)This question is related to category. [mask] {text_a} 83.37(2.10) 87.22(1.18) 91.64(1.23) 93.86(0.83)[mask] {text_a} This question is related to category. 82.50(4.32) 87.77(2.42) 91.57(0.75) 93.52(0.83)[mask] This question is related to category. {text_a} 83.93(0.77) 88.32(1.48) 91.88(1.04) 94.34(0.45)variance3.181.580.721.03Two prompt sequencesThis question is {text_a} related to category. [mask] 84.33(2.49) 88.52(1.88) 91.30(1.04) 94.43(0.37)This question is {text_a} [mask] related to category. 84.00(2.69) 88.12(0.44) 91.56(0.78) 94.23(0.34){text_a} This question is related to [mask] category. 82.76(2.16) 87.53(2.47) 91.24(1.85) 93.90(0.57)This question is related to [mask] category. {text_a} 82.40(2.65) 88.29(1.52) 91.78(0.98) 94.02(0.50)This question is [mask] {text_a} related to category. 82.52(3.74) 88.10(2.01) 91.47(1.00) 94.24(0.56)[mask] This question is {text_a} related to category. 83.32(1.13) 87.79(0.68) 91.70(0.78) 94.31(0.32)variance1.930.990.540.53Three prompt sequencesThis question is {text_a} related to [mask] category. 83.64(0.76) 87.95(1.87) 91.04(0.63) 94.30(0.11)This question is [mask] related to {text_a} category. 82.72(1.46) 87.53(1.82) 91.33(0.72) 93.95(0.47)variance0.920.420.290.35variance all3.281.581.251.03", "figure_id": "tab_18", "figure_label": "17", "figure_type": "table" }, { "figure_caption": "Cloze manual prompt on TREC. 
The italic row indicates the reference prompt position.", "figure_data": "K=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)One prompt sequenceP {text_a} [mask]65.06(3.84) 71.89(1.63) 78.25(2.26) 86.76(2.18){text_a} [mask] P56.29(3.20) 69.63(3.14) 78.39(4.26) 87.21(1.67){text_a} P [mask]49.28(8.77) 66.09(4.44) 75.69(5.29) 84.76(0.87)P [mask] {text_a}58.26(3.28) 73.10(2.08) 81.76(2.65) 87.44(1.36)[mask] {text_a} P53.05(3.70) 64.84(2.61) 75.37(3.27) 83.96(4.43)[mask] P {text_a}56.02(4.26) 66.88(4.61) 77.01(1.33) 83.79(3.10)variance15.788.266.393.65Two prompt sequencesP {text_a} P [mask]51.02(9.15) 69.77(2.46) 81.05(1.89) 87.09(1.63)P {text_a} [mask] P51.78(6.78) 70.13(3.97) 78.10(5.3)88.05(2.59){text_a} P [mask] P48.28(9.31) 63.35(6.52) 77.27(3.78) 85.39(1.32)P [mask] P {text_a}49.40(4.58) 66.53(2.97) 74.16(0.63) 81.92(3.23)P [mask] {text_a} P57.57(6.47) 74.75(3.11) 79.78(2.18) 87.97(0.99)[mask] P {text_a} P47.71(7.40) 63.58(4.77) 71.71(4.37) 84.24(1.16)variance9.8611.49.346.13Three prompt sequencesP {text_a} P [mask] P48.84(2.11) 61.94(4.26) 72.97(5.01) 84.25(1.17)P [mask] P {text_a} P53.02(5.07) 69.27(3.30) 74.46(3.37) 84.55(1.06)variance4.187.331.490.3variance all17.3512.8110.056.13", "figure_id": "tab_19", "figure_label": "18", "figure_type": "table" }, { "figure_caption": "Cloze continuous prompt on TREC. The italic row indicates the reference prompt position.", "figure_data": "K=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)", "figure_id": "tab_20", "figure_label": "19", "figure_type": "table" }, { "figure_caption": "Prefix manual prompt on TREC. The italic row indicates the reference prompt position.", "figure_data": "K=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)One prompt sequenceP {text_a}69.43(1.71) 74.79(2.28) 81.98(0.99) 87.75(1.06){text_a} P67.88(5.22) 77.84(2.24) 85.51(1.09) 89.16(1.4)variance1.553.053.531.41Two prompt sequencesP {text_a} P71.78(2.34) 80.44(1.73) 85.46(1.53) 89.41(1.24)variance all3.95.653.531.66", "figure_id": "tab_21", "figure_label": "20", "figure_type": "table" }, { "figure_caption": "Prefix continuous prompt on TREC. The italic row indicates the reference prompt position.", "figure_data": "D.4 RTEK=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)", "figure_id": "tab_22", "figure_label": "21", "figure_type": "table" }, { "figure_caption": "Cloze manual prompt on RTE. 
The italic row indicates the reference prompt position.", "figure_data": "K=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)One prompt sequenceP {text_a} {text_b} [mask]52.13(4.78) 54.15(2.71) 50.04(3.32) 57.40(2.23){text_a} P {text_b} [mask]55.31(4.48) 57.69(1.54) 57.98(3.29) 58.12(2.27){text_a} {text_b} P [mask]51.12(2.38) 49.17(3.12) 51.05(2.40) 51.34(1.97){text_a} {text_b} [mask] P49.39(2.69) 51.19(2.52) 51.62(2.99) 55.02(1.07)P {text_a} [mask] {text_b}55.60(4.18) 59.13(1.00) 59.21(1.89) 60.14(1.01){text_a} P [mask] {text_b}53.86(1.81) 55.38(2.96) 54.15(1.37) 55.45(2.13){text_a} [mask] P {text_b}54.01(0.83) 53.57(2.38) 52.42(3.22) 55.96(2.78){text_a} [mask] {text_b} P54.08(4.76) 57.62(2.28) 63.47(1.56) 61.73(2.81)P [mask] {text_a} {text_b}52.20(1.16) 51.26(4.86) 51.05(4.52) 55.74(1.70)[mask] P {text_a} {text_b}48.45(3.87) 50.11(2.55) 50.97(2.47) 53.14(0.86)[mask] {text_a} P {text_b}51.91(2.20) 53.07(2.55) 51.19(1.64) 54.87(2.68)[mask] {text_a} {text_b} P53.36(1.72) 54.30(2.96) 54.44(1.83) 55.02(4.21)variance7.159.9613.4310.39Two prompt sequencesP {text_a} P {text_b} [mask]52.85(4.28) 55.31(4.85) 52.49(4.34) 58.77(1.06)P {text_a} {text_b} P [mask]49.60(3.98) 51.55(2.98) 49.10(1.17) 52.78(3.44)P {text_a} {text_b} [mask] P50.25(5.46) 53.72(2.81) 51.48(3.29) 55.38(3.29){text_a} P {text_b} P [mask]49.75(2.26) 51.84(2.92) 52.27(3.16) 54.87(2.18){text_a} P {text_b} [mask] P54.95(1.68) 53.29(0.98) 51.12(2.00) 55.81(1.61){text_a} {text_b} P [mask] P48.66(1.92) 50.25(3.05) 53.14(2.47) 53.50(2.27)P {text_a} P [mask] {text_b}55.02(1.69) 56.90(3.05) 54.37(3.30) 58.34(1.52)P {text_a} [mask] P {text_b}51.91(4.40) 53.79(3.32) 54.44(2.95) 54.95(5.15)P {text_a} [mask] {text_b} P56.90(4.31) 59.57(1.71) 59.42(1.87) 63.10(1.31){text_a} P [mask] P {text_b}50.69(3.22) 53.00(2.68) 51.91(3.91) 54.22(2.25){text_a} P [mask] {text_b} P54.66(2.92) 57.62(1.63) 56.10(1.57) 57.11(1.34){text_a} [mask] P {text_b} P50.18(2.33) 53.72(3.00) 52.20(2.80) 54.30(2.16)P [mask] P {text_a} {text_b}50.54(3.14) 52.35(1.44) 50.61(2.51) 53.07(2.94)P [mask] {text_a} P {text_b}49.89(3.26) 50.18(3.58) 51.55(3.63) 53.14(1.36)P [mask] {text_a} {text_b} P48.81(5.45) 51.05(3.51) 50.76(2.48) 51.70(3.32)[mask] P {text_a} P {text_b}51.19(1.93) 50.54(2.14) 49.82(1.42) 53.50(1.88)[mask] P {text_a} {text_b} P48.52(4.23) 50.40(1.70) 49.46(1.98) 51.55(3.44)[mask] {text_a} P {text_b} P52.35(5.07) 54.44(3.08) 50.83(3.54) 55.23(3.07)variance8.389.3910.3211.55Three prompt sequencesP {text_a} P {text_b} P [mask]46.79(1.61) 50.61(2.77) 49.75(2.48) 52.49(2.48)P {text_a} P {text_b} [mask] P51.41(6.85) 55.52(2.43) 54.51(3.44) 56.25(1.64)P {text_a} {text_b} P [mask] P49.17(4.59) 51.91(2.02) 52.56(1.83) 53.72(1.18){text_a} P {text_b} P [mask] P49.10(3.75) 51.99(1.28) 52.71(1.86) 54.44(2.33)P {text_a} P [mask] P {text_b}50.97(2.06) 53.14(2.16) 51.70(3.86) 50.97(2.55)P {text_a} P [mask] {text_b} P57.04(4.73) 57.69(3.97) 58.19(3.12) 59.57(1.42)P {text_a} [mask] P {text_b} P54.37(2.70) 50.11(2.00) 50.97(3.49) 53.65(4.29){text_a} P [mask] P {text_b} P52.64(5.38) 54.37(2.92) 54.95(3.51) 53.86(2.39)P [mask] P {text_a} P {text_b}52.64(3.41) 51.84(1.27) 51.62(2.41) 54.51(2.35)P [mask] P {text_a} {text_b} P50.47(4.61) 51.12(3.27) 53.29(3.16) 50.90(2.42)P [mask] {text_a} P {text_b} P52.78(2.79) 49.39(3.70) 54.30(2.76) 55.52(4.47)[mask] P {text_a} P {text_b} P48.74(3.23) 49.46(1.20) 50.40(1.37) 53.65(2.13)variance10.258.38.448.67Four prompt sequencesP {text_a} P {text_b} P [mask] P 50.97(5.24) 53.07(4.01) 51.41(2.76) 52.06(3.12)P {text_a} P [mask] P {text_b} P 
48.52(5.16) 52.13(2.05) 54.58(1.21) 51.55(3.17)P [mask] P {text_a} P {text_b} P 48.30(3.37) 49.10(2.30) 50.54(2.74) 50.97(4.97)variance2.673.974.041.09variance all10.2510.4714.3712.2", "figure_id": "tab_25", "figure_label": "22", "figure_type": "table" }, { "figure_caption": "Cloze continuous prompt on RTE. The italic row indicates the reference prompt position.", "figure_data": "K=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)", "figure_id": "tab_26", "figure_label": "23", "figure_type": "table" }, { "figure_caption": "Prefix manual prompt on RTE. The italic row indicates the reference prompt position.", "figure_data": "K=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)One prompt sequenceP {text_a} {text_b}52.06(4.80) 52.35(3.22) 57.33(2.88) 59.13(1.18){text_a} P {text_b}48.16(1.95) 50.25(3.19) 50.47(3.02) 54.66(3.92){text_a} {text_b} P48.45(2.44) 52.13(4.53) 54.44(4.13) 61.23(3.40)variance3.92.16.866.57Two prompt sequencesP {text_a} P {text_b}47.73(2.95) 52.64(3.15) 53.50(4.07) 55.45(2.86)P {text_a} {text_b} P48.52(3.09) 51.26(2.57) 52.71(1.53) 51.55(2.29){text_a} P {text_b} P50.69(2.43) 51.19(4.50) 53.36(4.14) 58.48(4.90)variance2.961.450.796.93Three prompt sequencesP {text_a} P {text_b} P50.32(2.67) 50.11(3.96) 53.94(2.86) 56.61(2.75)variance all4.332.536.869.68", "figure_id": "tab_27", "figure_label": "24", "figure_type": "table" }, { "figure_caption": "Prefix continuous prompt on RTE. The italic row indicates the reference prompt position.", "figure_data": "Prompt", "figure_id": "tab_28", "figure_label": "25", "figure_type": "table" }, { "figure_caption": "Zero-shot performance on RTE. The italic row indicates the reference prompt position.Three prompt sequences. Question: {text_a} . {text_b} ? Answer:[mask] 66.70(4.72) 69.93(3.68) 73.34(1.96) 74.36(1.24) Answer: {text_a} . Question: ? {text_b}[mask] .64.91(2.59) 69.96(2.55) 72.42(1.74) 74.50(1.68) . Question: {text_a} {text_b} ? Answer:[mask] .65.73(2.93) 67.97(8.36) 72.70(2.41) 74.87(1.45) {text_a} . Question: {text_b} ? Answer: [mask] . 67.49(2.46) 68.28(4.74) 73.23(2.12) 75.65(1.08) Answer: {text_a} . [mask] . Question: ? {text_b} 57.60(3.55) 64.86(4.36) 69.87(2.13) 73.76(2.05) Answer: . {text_a} . [mask] {text_b} Question: ? 68.92(1.94) 69.66(2.05) 71.95(1.44) 73.98(1.57) Answer: . {text_a} [mask] . Question: {text_b} ? 56.65(4.96) 64.43(3.70) 69.98(2.70) 73.24(1.18) {text_a} . Answer: [mask] . Question: {text_b} ? 66.60(5.04) 67.90(3.47) 71.39(1.68) 75.18(2.31) Answer: [mask] . {text_a} . Question: ? {text_b} 66.21(1.31) 67.47(2.37) 70.48(0.59) 73.25(2.04) Answer: . [mask] . Question: {text_a} {text_b} ? 65.77(1.74) 62.34(5.68) 69.94(2.24) 70.63(1.41) Answer: {text_a} . Question: {text_b} ? [mask] . 69.46(1.42) 71.42(2.22) 72.19(2.20) 75.49(2.23) Answer: {text_a} . [mask] . Question: {text_b} ? 54.68(3.22) 55.12(9.41) 68.04(3.41) 71.11(0.89) Answer:[mask] . {text_a} .Question: {text_b} ? 68.99(1.46) 69.44(1.15) 72.39(2.01) 73.03(2.16) ", "figure_data": "D.5 BoolqK=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)One prompt sequence. Question: ? Answer: . {text_a} {text_b} [mask] 64.14(4.13) 65.99(6.23) 71.93(2.04) 73.48(0.68){text_a} . Question: ? Answer: . {text_b} [mask] 69.37(2.11) 68.48(3.12) 72.61(1.23) 74.07(1.13){text_a} {text_b} . Question: ? Answer: . [mask] 67.30(3.53) 66.90(4.11) 71.31(3.61) 74.06(1.50){text_a} {text_b} [mask] . Question: ? Answer: . 62.56(5.09) 64.80(4.51) 69.76(3.61) 72.98(2.08). Question: ? Answer: . 
{text_a} [mask] {text_b} 64.86(5.11) 68.75(3.15) 71.43(1.90) 74.97(0.36){text_a} . Question: ? Answer: . [mask] {text_b} 65.39(5.79) 72.12(1.40) 72.42(3.03) 74.16(2.49){text_a} [mask] . Question: ? Answer: . {text_b} 64.26(4.09) 67.94(4.23) 71.62(2.26) 74.29(1.77){text_a} [mask] {text_b} . Question: ? Answer: . 63.52(2.83) 69.79(2.10) 71.76(2.29) 74.84(1.03). Question: ? Answer: . [mask] {text_a} {text_b} 56.67(5.35) 59.06(4.48) 63.20(4.61) 68.89(2.50)[mask]. Question: ? Answer: . {text_a} {text_b} 59.82(3.17) 59.47(9.03) 65.98(4.82) 70.39(2.29)[mask] {text_a} . Question: ? Answer: . {text_b} 58.37(3.86) 56.96(3.00) 62.09(4.14) 67.78(1.94)[mask] {text_a} {text_b} . Question: ? Answer: . 56.59(2.26) 57.32(4.75) 61.56(3.89) 62.83(2.69)variance12.7815.1611.0512.14Two prompt sequencesAnswer: . {text_a} . Question: ? {text_b} [mask] 66.61(2.46) 70.02(1.81) 71.09(3.09) 74.42(1.28). Question: {text_a} {text_b} ? Answer: . [mask] 66.26(3.62) 68.76(1.33) 71.92(2.56) 73.88(1.99). Question: ? {text_a} {text_b} [mask] Answer: . 64.43(3.58) 67.76(3.21) 71.30(2.33) 74.04(1.44){text_a} . Question: {text_b} ? Answer: . [mask] 67.68(1.95) 68.93(4.44) 71.54(2.95) 75.03(2.41){text_a} . Question: ? {text_b} [mask] Answer: . 68.18(2.56) 68.54(2.59) 73.84(1.06) 73.72(1.06){text_a} {text_b} . Question: ? Answer: [mask] . 63.47(7.07) 70.29(1.97) 71.39(2.52) 74.59(2.40)Answer: . {text_a} . Question: ? [mask] {text_b} 66.84(4.91) 71.11(1.05) 71.74(2.30) 73.99(2.69)Answer: . {text_a} [mask] . Question: ? {text_b} 62.07(5.23) 65.19(4.27) 71.58(2.14) 73.87(1.74)Answer: . {text_a} [mask] {text_b} . Question: ? 65.32(4.20) 69.06(3.19) 71.55(2.87) 74.86(1.00){text_a} . Answer: [mask] . Question: ? {text_b} 65.27(4.81) 69.63(3.25) 72.77(1.29) 75.48(2.06){text_a} . Answer: . [mask] {text_b} Question: ? 64.50(6.13) 67.54(5.74) 72.49(2.85) 74.95(2.00){text_a} [mask] . Answer: . Question: {text_b} ? 57.57(4.51) 59.29(5.66) 66.83(1.51) 70.25(1.39)Answer: . [mask] . Question: ? {text_a} {text_b} 59.82(4.13) 64.00(6.26) 66.42(6.23) 69.41(3.76)Answer: . [mask] {text_a} . Question: ? {text_b} 59.79(5.01) 61.61(3.10) 64.56(3.22) 69.17(1.91)Answer: . [mask] {text_a} {text_b} . Question: ? 58.06(3.52) 56.52(4.00) 61.82(4.59) 66.37(3.78)[mask] . Answer: {text_a} . Question: ? {text_b} 61.56(4.01) 61.05(7.08) 68.43(2.61) 71.81(1.61)[mask] . Answer: {text_a} {text_b} . Question: ? 57.58(4.58) 61.47(4.39) 64.89(5.08) 70.18(2.23)[mask] {text_a} . Answer: . Question: {text_b} ? 56.76(4.13) 57.11(3.27) 59.77(3.33) 62.35(5.54)variance11.4214.5914.0713.13Answer: . [mask] {text_a} . Question: {text_b} ? 57.08(4.94) 58.40(1.47) 62.88(3.48) 66.24(3.75)[mask] . Answer: {text_a} . Question: {text_b} ? 57.28(6.89) 61.52(4.14) 66.12(4.08) 71.95(2.99)variance12.2711.5610.469.41Four prompt sequencesvariance14.7816.34.354.38variance all14.781714.0713.3", "figure_id": "tab_29", "figure_label": "26", "figure_type": "table" }, { "figure_caption": "Cloze manual prompt on Boolq. 
The italic row indicates the reference prompt position.", "figure_data": "variance16.3411.858.689.78Three prompt sequencesP {text_a} P {text_b} P [mask]44.91(8.33)52.66(8.46)54.88(2.38) 57.62(0.95)P {text_a} P {text_b} [mask] P52.89(7.92)57.20(1.44)57.41(0.88) 57.03(2.61)P {text_a} {text_b} P [mask] P49.19(10.53) 57.72(1.61)53.82(3.63) 57.88(2.28){text_a} P {text_b} P [mask] P46.28(9.74)51.93(8.34)56.25(2.97) 53.09(5.21)P {text_a} P [mask] P {text_b}49.65(10.02) 56.10(3.56)56.29(1.91) 57.11(1.57)P {text_a} P [mask] {text_b} P53.44(8.11)54.65(2.56)58.20(2.15) 56.97(2.87)P {text_a} [mask] P {text_b} P54.61(3.56)53.43(9.14)54.72(4.42) 57.48(2.53){text_a} P [mask] P {text_b} P51.54(8.84)48.56(9.84)54.60(3.88) 54.53(6.72)P [mask] P {text_a} P {text_b}50.64(7.92)53.40(8.94)51.20(3.49) 56.51(1.61)P [mask] P {text_a} {text_b} P56.37(3.10)49.08(7.66)54.18(5.17) 52.57(4.13)P [mask] {text_a} P {text_b} P54.62(4.49)55.46(4.34)56.02(5.96) 56.97(3.54)[mask] P {text_a} P {text_b} P53.72(9.03)57.06(1.81)54.49(4.46) 56.18(5.20)variance11.469.1675.31Four prompt sequencesP {text_a} P {text_b} P [mask] P 46.72(6.13)58.24(2.64)55.65(3.42) 57.34(2.63)P {text_a} P [mask] P {text_b} P 49.65(7.86)54.62(2.99)53.91(3.55) 57.90(1.74)P [mask] P {text_a} P {text_b} P 54.38(4.40)54.55(7.92)56.24(2.81) 56.04(2.06)variance7.663.692.331.86variance all17.1314.9211.4213.93", "figure_id": "tab_30", "figure_label": "27", "figure_type": "table" }, { "figure_caption": "Cloze continuous prompt on Boolq. The italic row indicates the reference prompt position. Question: ? Answer: {text_a} {text_b} 57.54(7.27) 67.91(1.42) 72.61(0.69) 75.59(1.02) {text_a} Question: ? Answer: {text_b} 60.91(2.83) 65.65(2.54) 72.29(0.66) 76.54(1.30) {text_a} {text_b} Question: ? Answer: 61.33(4.71) 64.77(3.08) 70.73(1.78) 73.80(1.52)", "figure_data": "K=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)One prompt sequencevariance3.793.141.882.74Two prompt sequencesAnswer: {text_a} Question: ? {text_b} 60.76(2.21) 65.27(2.51) 72.29(0.72) 76.18(0.97)Question: {text_a} {text_b} ? Answer: 64.23(4.43) 69.84(1.76) 72.42(3.81) 75.18(1.18){text_a} Question: {text_b} ? Answer: 61.17(5.10) 71.13(2.33) 73.83(1.58) 75.51(1.69)variance3.475.861.541Three prompt sequencesAnswer: {text_a} Question: {text_b} ? 57.49(4.07) 70.12(2.31) 71.99(4.09) 75.82(1.13)variance all6.746.363.12.74", "figure_id": "tab_31", "figure_label": "28", "figure_type": "table" }, { "figure_caption": "Prefix manual prompt on Boolq. The italic row indicates the reference prompt position.", "figure_data": "K=16K=32K=64K=128mean(std) mean(std) mean(std) mean(std)One prompt sequenceP {text_a} {text_b}49.21(8.54) 52.98(4.21) 55.00(4.42) 51.44(6.17){text_a} P {text_b}42.43(4.78) 49.25(5.79) 52.86(2.27) 52.75(1.39){text_a} {text_b} P50.53(4.48) 56.28(2.38) 52.01(4.15) 48.92(5.37)variance8.17.032.993.83Two prompt sequencesP {text_a} P {text_b}50.78(5.18) 55.25(4.69) 50.08(4.38) 53.77(5.17)P {text_a} {text_b} P51.48(3.98) 53.71(2.52) 53.85(4.60) 53.82(5.49){text_a} P {text_b} P49.87(2.49) 50.71(1.03) 52.13(0.58) 52.80(3.46)variance1.614.543.771.02Three prompt sequencesP {text_a} P {text_b} P50.50(6.68) 55.14(4.47) 52.92(3.29) 54.84(5.67)variance all9.057.034.925.92", "figure_id": "tab_32", "figure_label": "29", "figure_type": "table" }, { "figure_caption": "Prefix continuous prompt on Boolq. The italic row indicates the reference prompt position.", "figure_data": "PromptAccQuestion: ? Answer: {text_a}\\n {text_b}\\n 61.90{text_a}\\n Question: ? 
Answer: {text_b}\\n 62.11{text_a}\\n {text_b}\\n Question: ? Answer: 64.46Answer: {text_a}\\n Question: ? {text_b}\\n 62.17Question: {text_a}\\n {text_b}?\\n Answer: 63.64{text_a}\\n Question: {text_b}?\\n Answer:66.67Answer: {text_a}\\n Question: {text_b}?\\n 62.17variance4.77", "figure_id": "tab_33", "figure_label": "30", "figure_type": "table" }, { "figure_caption": "Zero-shot performance on Boolq. The italic row indicates the reference prompt position.", "figure_data": "", "figure_id": "tab_34", "figure_label": "31", "figure_type": "table" } ]
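For readers parsing the result tables above: each row is a prompt pattern in which P marks a prompt segment, {text_a}/{text_b} are the two input texts, and [mask] is the slot to be predicted. The sketch below shows how such patterns can be instantiated mechanically; the helper name, the placeholder prompt string, and the example sentences are illustrative assumptions, not code from the paper.

```python
# Illustrative only: turn a pattern such as "P {text_a} [mask] {text_b}" into a
# concrete model input. "P" stands for the (manual or continuous) prompt segment.
def fill_pattern(pattern: str, text_a: str, text_b: str,
                 prompt: str = "<PROMPT>", mask: str = "[MASK]") -> str:
    mapping = {"P": prompt, "{text_a}": text_a, "{text_b}": text_b, "[mask]": mask}
    return " ".join(mapping.get(token, token) for token in pattern.split())

for pattern in ["P {text_a} {text_b} [mask]",
                "{text_a} P [mask] {text_b}",
                "P {text_a} [mask] {text_b} P"]:
    print(fill_pattern(pattern,
                       text_a="A man inspects the uniform of a figure.",
                       text_b="The man is sleeping."))
```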
Junyu Mao; Stuart E Middleton; Mahesan Niranjan
[ { "authors": "Ning Ding; Shengding Hu; Weilin Zhao; Yulin Chen; Zhiyuan Liu; Hai-Tao Zheng; Maosong Sun", "journal": "", "ref_id": "b0", "title": "Openprompt: An open-source framework for prompt-learning", "year": "2021" }, { "authors": "Leo Gao; Jonathan Tow; Stella Biderman; Sid Black; Anthony Dipofi; Charles Foster; Laurence Golding; Jeffrey Hsu; Kyle Mcdonell; Niklas Muennighoff; Jason Phang; Laria Reynolds; Eric Tang; Anish Thite; Ben Wang; Kevin Wang; Andy Zou ; Tianyu; Adam Gao; Danqi Fisch; Chen", "journal": "", "ref_id": "b1", "title": "A framework for few-shot language model evaluation", "year": "2020" }, { "authors": "Yuxian Gu; Xu Han; Zhiyuan Liu; Minlie Huang", "journal": "", "ref_id": "b2", "title": "Ppt: Pre-trained prompt tuning for few-shot learning", "year": "2022" }, { "authors": "Minqing Hu; Bing Liu", "journal": "", "ref_id": "b3", "title": "Mining and summarizing customer reviews", "year": "2004" }, { "authors": "Abdullatif Köksal; Timo Schick; Hinrich Schütze", "journal": "", "ref_id": "b4", "title": "Meal: Stable and active learning for few-shot prompting", "year": "2023" }, { "authors": "Brian Lester; Rami Al-Rfou; Noah Constant", "journal": "", "ref_id": "b5", "title": "The power of scale for parameter-efficient prompt tuning", "year": "2021" }, { "authors": "Lisa Xiang; Percy Li; Liang", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Prefix-tuning: Optimizing continuous prompts for generation", "year": "2021" }, { "authors": "Pengfei Liu; Weizhe Yuan; Jinlan Fu; Zhengbao Jiang; Hiroaki Hayashi; Graham Neubig", "journal": "", "ref_id": "b7", "title": "Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing", "year": "2021" }, { "authors": "Xiangyang Liu; Tianxiang Sun; Xuanjing Huang; Xipeng Qiu", "journal": "", "ref_id": "b8", "title": "Late prompt tuning: A late prompt could be better than many prompts", "year": "2022" }, { "authors": "Xiao Liu; Kaixuan Ji; Yicheng Fu; Zhengxiao Du; Zhilin Yang; Jie Tang", "journal": "", "ref_id": "b9", "title": "P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks", "year": "2021" }, { "authors": "Xiao Liu; Yanan Zheng; Zhengxiao Du; Ming Ding; Yujie Qian; Zhilin Yang; Jie Tang", "journal": "", "ref_id": "b10", "title": "GPT understands, too", "year": "2021" }, { "authors": "Tianxiang Sun; Yunfan Shao; Hong Qian; Xuanjing Huang; Xipeng Qiu", "journal": "", "ref_id": "b11", "title": "Black-box tuning for language-model-as-a-service", "year": "2022" }, { "authors": "Mirac Suzgun; Nathan Scales; Nathanael Schärli; Sebastian Gehrmann; Yi Tay; Hyung Won Chung; Aakanksha Chowdhery; Quoc V Le; Ed H Chi; Denny Zhou; Jason Wei", "journal": "", "ref_id": "b12", "title": "Challenging big-bench tasks and whether chain-of-thought can solve them", "year": "2022" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b13", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "M Ellen; Dawn M Voorhees; Tice", "journal": "", "ref_id": "b14", "title": "Building a question answering test collection", "year": "2000" }, { "authors": "Alex Wang; Yada Pruksachatkun; Nikita Nangia; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel Bowman", "journal": 
"Advances in neural information processing systems", "ref_id": "b15", "title": "Superglue: A stickier benchmark for general-purpose language understanding systems", "year": "2019" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel R Bowman", "journal": "", "ref_id": "b16", "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "year": "2018" }, { "authors": "Yizhong Wang; Hamish Ivison; Pradeep Dasigi; Jack Hessel; Tushar Khot; Raghavi Khyathi; David Chandu; Kelsey Wadden; Noah A Macmillan; Iz Smith; Hannaneh Beltagy; Hajishirzi", "journal": "", "ref_id": "b17", "title": "How far can camels go? exploring the state of instruction tuning on open resources", "year": "2023" }, { "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b18", "title": "Finetuned language models are zero-shot learners", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Fei Xia; Ed Chi; V Quoc; Denny Le; Zhou", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b19", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Zhuofeng Wu; Sinong Wang; Jiatao Gu; Rui Hou; Yuxiao Dong; Hao Vg Vydiswaran; Ma", "journal": "", "ref_id": "b20", "title": "Idpg: An instance-dependent prompt generation method", "year": "2022" }, { "authors": "Xianjun Yang; Wei Cheng; Xujiang Zhao; Linda Petzold; Haifeng Chen", "journal": "", "ref_id": "b21", "title": "Dynamic prompting: A unified framework for prompt tuning", "year": "2023" }, { "authors": "Shunyu Yao; Dian Yu; Jeffrey Zhao; Izhak Shafran; Thomas L Griffiths; Yuan Cao; Karthik Narasimhan", "journal": "", "ref_id": "b22", "title": "Tree of thoughts: Deliberate problem solving with large language models", "year": "2023" }, { "authors": "Yongchao Zhou; Andrei Ioan Muresanu; Ziwen Han; Keiran Paster; Silviu Pitis; Harris Chan; Jimmy Ba", "journal": "", "ref_id": "b23", "title": "Large language models are human-level prompt engineers", "year": "2022" }, { "authors": "", "journal": "P {text_a} {text_b} P", "ref_id": "b24", "title": "", "year": "" }, { "authors": "", "journal": "P {text_a} {text_b}", "ref_id": "b25", "title": "", "year": "" }, { "authors": "", "journal": "text_a} P {text_b} P", "ref_id": "b26", "title": "", "year": "" }, { "authors": "P ", "journal": "{text_b}", "ref_id": "b27", "title": "", "year": "" }, { "authors": "", "journal": "P {text_a}", "ref_id": "b28", "title": "", "year": "" }, { "authors": "", "journal": "P {text_a}", "ref_id": "b29", "title": "", "year": "" }, { "authors": "", "journal": "mask", "ref_id": "b30", "title": "", "year": "" }, { "authors": "", "journal": "P {text_b} P", "ref_id": "b31", "title": "", "year": "" }, { "authors": "P ", "journal": "P {text_a} {text_b}", "ref_id": "b32", "title": "", "year": "" }, { "authors": "P ", "journal": "{text_a} P {text_b}", "ref_id": "b33", "title": "", "year": "" }, { "authors": "P ", "journal": "{text_a} {text_b} P", "ref_id": "b34", "title": "", "year": "" }, { "authors": "", "journal": "P {text_a} P {text_b}", "ref_id": "b35", "title": "", "year": "" }, { "authors": "", "journal": "P {text_a} {text_b} P", "ref_id": "b36", "title": "", "year": "" }, { "authors": "", "journal": "text_a} P {text_b} P", "ref_id": "b37", "title": "", "year": "" } ]
[]
2023-05-23
[ { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b17", "b5", "b9", "b1", "b12", "b11", "b15", "b19", "b2", "b18", "b3", "b14", "b4", "b13", "b8", "b6" ], "table_ref": [], "text": "Over the last several years, Transformer models have played a significant role in shaping the field of Natural Language Processing (NLP) (Vaswani et al., 2017;Devlin et al., 2018;Liu et al., 2019;Brown et al., 2020;Ouyang et al., 2022;OpenAI, 2023). Their exceptional ability to reason across a broad range of NLP tasks (Shi et al., 2022;Zhou et al., 2022;Bubeck et al., 2023) has been a key factor contributing to their success. The success of LLMs on challenging datasets like HellaSwag (Zellers et al., 2019), AI2 Reasoning Challenge (ARC) (Clark et al., 2018), WinoGrande (Sakaguchi et al., 2021), and GSM-8K (Cobbe et al., 2021) is a testament to their advanced reasoning skills and their potential to address challenging NLP tasks.\nIn this paper, we investigate the reasoning abilities of LLMs models under a novel paradigm we dub Deduction under Perturbed Evidence (DUPE for short). By testing LLMs' capacity to reason with flawed or perturbed evidence, we aim to determine whether LLMs can generate logically sound yet erroneous conclusions when presented with misleading information. Strong DUPE skills are critical in NLP applications like student simulations (Piech et al., 2015;Liu et al., 2022), where models simulate student responses to understand how they may respond in certain scenarios. As student responses often contain inaccuracies and misconceptions, it is important for a model to analyze and utilize these inaccuracies and misconceptions as evidence to arrive at the same conclusion as the student. For instance, a student may have the misconception that the heavier an object is, the faster it falls, leading them to conclude that a bowling ball will fall faster than a ball bearing. If we provide LLMs with evidence that a heavier object falls faster, would LLMs also arrive at the conclusion that a bowling ball will fall faster than a ball bearing? We introduce DUPE as our approach to investigate this question.\nContributions: This paper develops a novel reasoning paradigm -Deduction under Perturbed Evidence (DUPE) -to examine whether LLMs arrive at different conclusions when presented with distorted initial facts. To test the DUPE capabilities of LLMs, we create a DUPEd version of StrategyQA dataset (Figures 1,2). StrategyQA (Geva et al., 2021) is an open-domain QA dataset that is characterized by its explicit provision of the necessary facts required to answer each yes-no question. In the DUPEd version of the dataset, we manipulate the facts provided in a way that results in a different answer to the original question.\nOur findings reveal that state-of-the-art LLMs, , including GPT3.5 and GPT4, struggle significantly on the newly introduced DUPEd-StrategyQA dataset. The accuracy of these models dropped drastically by approximately 45%, falling from an impressive 91.9% on the original dataset to only 46.7% on the DUPEd-StrategyQA dataset. In addition, we conduct an ablation study on the DUPEd-StrategyQA dataset by categorizing it into two distinct parts based on the type of manipulation usedone involving language perturbations and the other involving mathematical manipulations. Furthermore, our results demonstrate that the accuracy drop can be mitigated by using prompt settings inspired by student simulation models. 
This approach reduced the accuracy drop to 29%, with the models achieving an accuracy of 62.7% on the DUPEd-StrategyQA dataset. Our findings carry crucial implications for practical LLMs applications, particularly in the realm of student simulation models that demand reasoning over erroneous information." }, { "figure_ref": [], "heading": "Methodology, Dataset, and Prompting", "publication_ref": [], "table_ref": [], "text": "In this section, we overview the DUPE reasoning framework, provide details on the DUPEd version of AllenAI's StrategyQA dataset, and then explore customized prompt settings designed to assess the DUPE skills of LLMs." }, { "figure_ref": [ "fig_1", "fig_0" ], "heading": "DUPE", "publication_ref": [], "table_ref": [], "text": "Given a true-false question q, the correct response r q ∈ {true, f alse} and facts F q that determine the truth or falsehood of Q (r q ), we change F q to F ′ q s.t. the correct response to q flips to ¬r q under altered facts\nF ′ q , DUPE (q, F q , r) = (q, F ′ q , r ′ ) s.t. r ′ = ¬r , edit dist (F q , F ′ q ) < τ,(1)\nwhere edit dist ensures that the edit distance between the fact strings F q and F ′ q is less than a threshold τ . The threshold τ is generally set to two to three words to ensure minimal changes to underlying facts (examples in figure 2). The new DUPEd-tuple (q, F ′ q , r ′ ) can be used to probe the DUPE capabilities of LLMs as shown in Figure 1." }, { "figure_ref": [ "fig_1" ], "heading": "DUPEd-StrategyQA", "publication_ref": [ "b6" ], "table_ref": [], "text": "We use AllenAI's StrategyQA dataset (Geva et al., 2021) to assess the DUPE skills of LLMs. Strate-gyQA dataset provides explicit facts for answering open-domain questions. We create a DUPEd version of StrategyQA dataset composed of a total of 325 examples, of which 173 introduce natural language perturbations, while the remainder introduce mathematical errors (refer to examples in figure 2).\nWhile designing the DUPEd version, we were careful to modify the facts in the most minimal way possible As a result, we made a conscious effort to only alter one or two words in the original facts whenever possible, in order to preserve the overall meaning and context of the original question. Additionally, we refrained from using explicit negation, such as the word not, to modify the facts, since our intent is not to evaluate the reasoning proficiency of LLMs in handling negation." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Student Simulation and Prompt Design", "publication_ref": [ "b13", "b16", "b8", "b19", "b0" ], "table_ref": [], "text": "DUPE is highly relevant to student simulation models (Piech et al., 2015;Sonkar et al., 2020;Liu et al., 2022), which are widely used in education and cognitive psychology research. These models help in predicting and understanding student responses to various tasks, and thus their ability to reason over false information is critical to their success. Given this strong connection between simulation models and DUPE, these models can inspire innovative approaches to prompt design, which can be used to probe DUPE skills of LLMs (Zhou et al., 2022;Bommarito II and Katz, 2022). An example of such a prompt is illustrated in figure 1 and section 3.\nDUPE and Counterfactual Reasoning: Counterfactual reasoning and student simulation models require different types of reasoning. In counterfactual reasoning, the focus is on exploring hypothetical scenarios that may or may not correspond to actual reality. 
The fact that the information being considered is hypothetical or counterfactual is usually known beforehand.
In contrast, a student simulation model needs to reason about both true and false information, and may not know beforehand whether the information being considered is true or false. For example, in figure 2, the model lacks prior knowledge about which facts are true and which ones are perturbed. The model must identify incorrect answers from the student to make inferences about future questions, which requires robust and nuanced reasoning capabilities beyond those needed for counterfactual reasoning." }, { "figure_ref": [ "fig_0" ], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We evaluate the DUPE capabilities of the two largest GPT models -GPT3.5 (version gpt-3.
Table 1: We evaluate the DUPE capabilities of the two largest GPT models under two different prompt settings using the DUPEd-StrategyQA dataset. Prompt P1 asks GPT models to answer a question based on provided evidence. Under the Prompt P1 setting, both GPT3.5 and GPT4 perform poorly on the DUPEd version of the dataset, with around a 45% accuracy drop. We also find that both models are more robust to mathematical perturbations than to natural language perturbations. Prompt P2 is inspired by student simulation settings. P2 primes the models that the evidence provided may be incorrect. We find that prompt P2 achieves better accuracy than Prompt P1 by 16.0 points for GPT4, but we still see a substantial 29.2% drop in accuracy compared to GPT4's accuracy on the original dataset.
to answer a YES or NO question", and P2) "You are a student simulation model. Your task is reason on student's responses to accurately measure the student's current knowledge state and predict the student's response to a YES or NO question based on the student's current knowledge state" from section 2.3. An example is illustrated in Figure 1." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [], "text": "In the prompt setting P1, both GPT3.5 and GPT4 performed poorly on the DUPEd version of the dataset, with decreases in accuracy of 46.0% and 45.2%, respectively. As expected, the latest GPT4 model demonstrates superior performance to GPT3.5 on both the original and the DUPEd StrategyQA dataset." }, { "figure_ref": [], "heading": "Student Simulation Prompt", "publication_ref": [], "table_ref": [], "text": "Prompt P2, inspired by the student simulation setting, informs/primes the models that the provided evidence may be incorrect, since the evidence reflects the erroneous nature of students' responses. We found that prompt setting P2 performs significantly better than P1 by a margin of 16.0% for the GPT4 model. However, there was still a significant 29.2% drop in accuracy compared to GPT4's performance on the original dataset." }, { "figure_ref": [ "fig_1" ], "heading": "Language vs. Math Perturbations", "publication_ref": [], "table_ref": [], "text": "While curating the DUPEd-StrategyQA dataset, we divided the perturbations introduced into two distinct categories -one involving language perturbations, the other manipulating mathematical information (see figure 2). Our findings suggest that both GPT models are more resilient to math perturbations than to language perturbations. For example, for GPT3.5 the accuracy drops were 58.7% and 32.4% for language and math perturbations, respectively, while for GPT4 the accuracy drops were 50.3% and 39.4%."
}, { "figure_ref": [], "heading": "Root Cause of Poor DUPE Skills", "publication_ref": [ "b7", "b10" ], "table_ref": [], "text": "To explain the GPT models' poor performance on the DUPEd dataset, we need to identify the main factor influencing their reasoning process, i.e., whether it is the encoded information in parameters or the manipulated evidence in prompts. Recent studies have shed light on this issue, suggesting that factual information encoded in the parameters of LLMs plays a dominant role in governing the generated output. For instance, the feed-forward layers in transformer models function as key-value memories, which implies that they encode factual information, as noted by Geva et al. (2020). Moreover, Meng et al. (2022) demonstrated that localized computations, such as Rank-One Model Editing (ROME), can modify these factual associations, leading to alternative conclusions. These findings suggest that the encoded information in parameters has a significant impact on LLMs' reasoning process; further investigation is left for future work." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we have introduced a new reasoning paradigm we call Deduction under Perturbed Evidence (DUPE for short). Through DUPE , we have assessed the ability of LLMs models to arrive at logically sound yet erroneous conclusions when faced with distorted initial facts. Our study, which used a carefully curated dataset to evaluate DUPE abilities, has revealed that even the most advanced GPT models struggle with logical reasoning in the presence of falsified information. Moving forward, we plan to investigate into the performance of different LLMs with our dataset in varied prompt settings.\nDue to limitations in both financial and computational resources, we had to limit our testing to only the most advanced LLMs -the GPT models. Consequently, we directed our attention towards developing a dataset for evaluating proposed reasoning scenarios. As a result of these limitations, we chose to focus specifically on the evaluation of the two largest models offered by OpenAI. While we recognize that other LLMs may produce different outcomes, we believe that our dataset could serve as a valuable resource for further research into the capabilities and limitations of LLMs ." } ]
We explore whether Large Language Models (LLMs ) are capable of logical reasoning with distorted facts, which we call Deduction under Perturbed Evidence (DUPE). DUPE presents a unique challenge to LLMs since they typically rely on their parameters, which encode mostly accurate information, to reason and make inferences. However, in DUPE, LLMs must reason over manipulated or falsified evidence present in their prompts, which can result in false conclusions that are valid only under the manipulated evidence. Our goal with DUPE is to determine whether LLMs can arrive at these false conclusions and identify whether the dominant factor influencing the deduction process is the encoded data in the parameters or the manipulated evidence in the prompts. To evaluate the DUPE capabilities of LLMs, we create a DUPEd version of the StrategyQA dataset, where facts are manipulated to reverse the answer to the question. Our findings show that even the most advanced GPT models struggle to reason on manipulated facts -showcasing poor DUPE skills -with accuracy dropping by 45% compared to the original dataset. We also investigate prompt settings inspired from student simulation models, which mitigate the accuracy drop to some extent. Our findings have practical implications for understanding the performance of LLMs in real-world applications such as student simulation models that involve reasoning over inaccurate information.
Deduction under Perturbed Evidence: Probing Student Simulation Capabilities of Large Language Models
[ { "figure_caption": "Figure 1 :1Figure 1: Setup of the Deduction under Perturbed Evidence (DUPE) reasoning framework. On the left is a questionfact pair in StrategyQA dataset. To test DUPE skills of a model, we change facts provided with each question such that the response to the question flips. On the right is a prompting setup to probe DUPE skills of LLMs. We use a custom prompt tailored to student simulation setting that takes in the input question, perturbed (DUPEd) facts, and requests a yes/no response from LLMs. Perturbed facts represent a realistic student simulation setting since they mirror the inaccurate nature/ misconceptions of students' responses.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Six examples from our DUPEd-StrategyQA dataset.We flip the answer to a yes-no question by altering facts provided with each question. First three questions on the top are examples of natural language perturbations, while the bottom three questions involves manipulating numerical digits. The DUPEd version was designed with minimal modifications to the facts, usually involving only one to two word changes in the original facts. Additionally, we refrained from using explicit negation words like not.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" } ]
Shashank Sonkar; Richard G Baraniuk
[ { "authors": "Michael Bommarito; I I ; Daniel Martin Katz", "journal": "", "ref_id": "b0", "title": "GPT takes the Bar Exam", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances In Neural Information Processing Systems", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Sébastien Bubeck; Varun Chandrasekaran; Ronen Eldan; Johannes Gehrke; Eric Horvitz; Ece Kamar; Peter Lee; Yin Tat Lee; Yuanzhi Li; Scott Lundberg", "journal": "", "ref_id": "b2", "title": "Sparks of Artificial General Intelligence: Early experiments with GPT-4", "year": "2023" }, { "authors": "Peter Clark; Isaac Cowhey; Oren Etzioni; Tushar Khot; Ashish Sabharwal; Carissa Schoenick; Oyvind Tafjord", "journal": "", "ref_id": "b3", "title": "Think you have solved Question Answering? Try ARC", "year": "2018" }, { "authors": "Karl Cobbe; Vineet Kosaraju; Mohammad Bavarian; Mark Chen; Heewoo Jun; Lukasz Kaiser; Matthias Plappert; Jerry Tworek; Jacob Hilton; Reiichiro Nakano", "journal": "", "ref_id": "b4", "title": "Training verifiers to solve math word problems", "year": "2021" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b5", "title": "Bert: Pre-training of Deep Bidirectional Transformers for language understanding", "year": "2018" }, { "authors": "Mor Geva; Daniel Khashabi; Elad Segal; Tushar Khot; Dan Roth; Jonathan Berant", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b6", "title": "Did Aristotle use a laptop? A Question Answering benchmark with implicit reasoning strategies", "year": "2021" }, { "authors": "Mor Geva; Roei Schuster; Jonathan Berant; Omer Levy", "journal": "", "ref_id": "b7", "title": "Transformer feed-forward layers are keyvalue memories", "year": "2020" }, { "authors": "Naiming Liu; Zichao Wang; Richard Baraniuk; Andrew Lan", "journal": "", "ref_id": "b8", "title": "Open-ended knowledge tracing for computer science education", "year": "2022" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b9", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Kevin Meng; David Bau; Alex Andonian; Yonatan Belinkov", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b10", "title": "Locating and editing factual associations in gpt", "year": "2022" }, { "authors": " Openai", "journal": "", "ref_id": "b11", "title": "", "year": "2023" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Chris Piech; Jonathan Bassen; Jonathan Huang; Surya Ganguli; Mehran Sahami; Leonidas J Guibas; Jascha Sohl-Dickstein", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b13", "title": "Deep Knowledge Tracing", "year": "2015" }, { "authors": "Keisuke Sakaguchi; Le Ronan; Chandra Bras; Yejin Bhagavatula; Choi", "journal": "Communications of the ACM", "ref_id": "b14", "title": "Winogrande: An adversarial winograd schema 
challenge at scale", "year": "2021" }, { "authors": "Freda Shi; Mirac Suzgun; Markus Freitag; Xuezhi Wang; Suraj Srivats; Soroush Vosoughi; Hyung Won Chung; Yi Tay; Sebastian Ruder; Denny Zhou", "journal": "", "ref_id": "b15", "title": "Language models are multilingual chain-of-thought reasoners", "year": "2022" }, { "authors": "Shashank Sonkar; Andrew E Waters; Andrew S Lan; Phillip J Grimaldi; Richard G Baraniuk", "journal": "", "ref_id": "b16", "title": "qdkt: Question-centric Deep Knowledge Tracing", "year": "2020" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b17", "title": "Attention is all you need", "year": "2017" }, { "authors": "Rowan Zellers; Ari Holtzman; Yonatan Bisk; Ali Farhadi; Yejin Choi", "journal": "", "ref_id": "b18", "title": "Hellaswag: Can a machine really finish your sentence?", "year": "2019" }, { "authors": "Denny Zhou; Nathanael Schärli; Le Hou; Jason Wei; Nathan Scales; Xuezhi Wang; Dale Schuurmans; Olivier Bousquet; Quoc Le; Ed Chi", "journal": "", "ref_id": "b19", "title": "Least-to-most prompting enables complex reasoning in Large Language Models", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 329.36, 387.47, 195.78, 55.26 ], "formula_id": "formula_0", "formula_text": "F ′ q , DUPE (q, F q , r) = (q, F ′ q , r ′ ) s.t. r ′ = ¬r , edit dist (F q , F ′ q ) < τ,(1)" } ]
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b16", "b29", "b6", "b13", "b11", "b23", "b3", "b5", "b30", "b22", "b24", "b10", "b26" ], "table_ref": [], "text": "Every car has a windscreen. The number of newly produced windscreens therefore ranges in the millions every year. Following quality processes for automotive mass production established since the 1960ies -like the outdated ISO/TS 16949 [18] or the more recent VDA6.3 [31] -these windscreens are tested end-of-line (EOL) at the suppliers (Tier 1) production line using well-defined optical measurements. Importantly, the windscreen quality is measured at the production site alone, independent of any production tolerances that may arise during assembly of the whole car at the site of the car manufacturer (original equipment manufacturer, OEM). Economically, this is mandatory, as a thorough testing of the whole windscreen after assembly by the OEM is prohibitively expensive.\nFor several decades the optical quality of these windscreens has been judged acceptable if humans could look through it with low impact on the perception of the driver. With the rise of advanced driver assistance systems (ADAS) and the future promise of autonomous driving (AD) many cars nowadays are equipped with several camera systems, many of which are situated behind the windscreen. A camera is not a human observer, and it is now not enough to qualify a windscreen using human perception, especially as the quality and resolution of the cameras are steadily increasing. The influence of the optical quality on the image quality and further on the computer vision algorithms evaluating these images has to be precisely determined.\nIn theory, the working limits of the computer vision algorithms are determined, and production tolerance limits are derived from these algorithmic working limits through a number of processes defined in the above mentioned quality norms. Opto-mechanical tolerance calculations, numerical simulations and test campaigns in the real world form three important pillars of these studies accompanied by environmental stress tests and aging simulations [7], [15].\nIn practice, though, modern camera-based ADAS applications are based on artificial intelligence (AI) and employ deep convolutional neural networks, due to their superior performance in comparison to traditional, rule-based computer vision algorithms. The difference in performance is such that currently there is no alternative to using AI algorithms. As these AI algorithms are 'black boxes' in nature [13], i.e. the output cannot be predicted, the link between optical quality and AI algorithm performance cannot be easily established [25]. And due to the lack of quantitative working limits w.r.t. the AI algorithms, production tolerance limits for the windscreens can not be straightforwardly deduced [4].\nIn this article we are evaluating the two main measurement processes that are currently used in the automotive industry to qualify windscreen optical quality: refractive power and the modulation transfer function (MTF). While refractive power is the established measurement method and has been standardized already in the 1990ies [6], [9], the MTF -or equivalently the spatial frequency response (SFR) -has gained recent attention as automotive researchers [32] look for alternatives to refractive power because of the increasing ADAS camera performances in terms of the number of pixels per field angle. 
Novel startups are even forming around the promise of using MTF to characterize windscreens.\nWe find and mathematically demonstrate in this work that both refractive power and MTF are not sufficient to quantify windscreen quality for AI algorithm performance. This is a fundamental finding in that our results are derived from first principles of optics, and apply very generally. First, we recapitulate the optical basics in Sec. II. Importantly, the optical quality is described in terms of wavefront aberrations, using the Zernike formalism to mathematically decompose the nature of the optical perturbations. Then in Sec. III, using these basics we show how refractive power is fundamentally not capable of accounting for a distinct number of wavefront aberrations, while at the same time these aberrations have a demonstrable effect on AI algorithm performance [24], [26].\nIn Sec. IV we then show how the windscreen and the camera system form a joint optical system, that -again fundamentally -cannot be separated into two distinct optical systems. This separation, though, is a necessary requirement in linear system theory for the multiplicativity of the system MTF w.r.t. the individual optical elements [12]. Therefore, this prohibits any MTF measurement on the windscreen alone, and thus from using MTF as a qualifying measurement at the production site of the Tier 1.\nOptical quality has many different aspects. For this article, we concentrate solely on the 'sharpness' of the camera image, which is deteriorated by optical path variations across the windshield plane and is typically quantified by the MTF in optical linear system theory. In general, lens distortions, which describe the failure of a lens to map lines into lines and represent a curvilinear mapping [28], might also deteriorate the performance of ADAS functionalities. Effects of optical distortions will not be considered in the following.\nIn summary, we will show how the two only current measurement techniques in the automotive industry are not sufficient to measure the sharpness of the windscreen alone. These results have far reaching implications for the automotive industry, which needs to focus more effort on finding alternatives. We finally propose a concept on how to find a novel measurement process, combining optical modeling, numerical simulation and AI algorithms to link the optical quality of windscreens to the performance of AI algorithms." }, { "figure_ref": [], "heading": "II. OPTICAL QUALITY AND MATHEMATICAL MODELS", "publication_ref": [ "b4", "b18", "b8" ], "table_ref": [], "text": "Maxwell's equations are the fundamental physical model of electromagnetic radiation, and the wave equation forms the basis for the technological application of light. If all elements in the optical system are large compared to the wavelength of the light, geometrical optics may be used. It plays an important role in the development of optical systems as well, in the form of raytracing simulations. A windscreen is large in mechanical dimensions, both laterally as well as axially, but previous work has shown that the aberrations originating inside the windscreen cannot be neglected [5], [20]. Thus, it is not sufficient to take only the geometry of the windscreen into account -which would allow for a raytracing approachbut a comprehensive optical model needs to be based on the wave description of light. 
This is why in the following we use the fundamental Zernike approach [10] to model wavefront aberrations, where the optical path difference mathematically models the aberrations present in the windscreen." }, { "figure_ref": [], "heading": "A. Wavefront Modelling with Zernike Polynomials", "publication_ref": [ "b2", "b15" ], "table_ref": [], "text": "The optical path difference W, defined on the principal plane, is usually expressed as a decomposition into Zernike polynomials Z_n with corresponding Zernike coefficients c_n (in units of meters) as [3]:
W(\rho, \phi) = \sum_{n=0}^{\infty} c_n Z_n(\rho, \phi), \qquad c_n \overset{(2)}{=} \langle W, Z_n \rangle . \tag{1}
Here, the domain of the principal plane of the optical element is parameterized by normalized polar coordinates with radius ρ and polar angle ϕ. There are different numbering schemes for Zernike polynomials, among others a linear numbering scheme according to the American National Standards Institute (ANSI), which has been adopted within this work. The Zernike polynomials reproduce the aberration pattern on the unit circle and correspond to different, independent optical perturbations like defocus or astigmatism. The independence of the perturbations is mathematically reflected by the orthogonality relation of the scalar product:
\langle Z_i, Z_j \rangle := \int_0^{2\pi} \int_0^1 Z_i(\rho, \phi) \cdot Z_j(\rho, \phi) \cdot \rho \, \mathrm{d}\rho \, \mathrm{d}\phi = \pi \cdot \delta_{ij} . \tag{2}
This is important, because we will demonstrate how certain Zernike polynomials are simply not present in the refractive power measurement, and the orthogonality fundamentally implies that this information cannot be recovered. Table I indicates the normalized Zernike polynomials defined by ISO 24157 [17] up to the third order.
TABLE I: Zernike polynomials up to the third order (polar form | Cartesian form | harmonic).
Z_0: 1 | 1 | ✓
Z_1: 2ρ sin ϕ | 2y | ✓
Z_2: 2ρ cos ϕ | 2x | ✓
Z_3: √6 ρ² sin 2ϕ | 2√6 xy | ✓
Z_4: √3 (2ρ² - 1) | √3 (2x² + 2y² - 1) | ×
Z_5: √6 ρ² cos 2ϕ | √6 (x² - y²) | ✓
Z_6: √8 ρ³ sin 3ϕ | √8 (3x²y - y³) | ✓
Z_7: √8 (3ρ³ - 2ρ) sin ϕ | √8 (3x²y + 3y³ - 2y) | ×
Z_8: √8 (3ρ³ - 2ρ) cos ϕ | √8 (3x³ + 3xy² - 2x) | ×
Z_9: √8 ρ³ cos 3ϕ | √8 (x³ - 3xy²) | ✓" }, { "figure_ref": [], "heading": "B. Refractive Power", "publication_ref": [ "b31", "b5", "b21", "b20", "b31" ], "table_ref": [], "text": "Refractive power measures how much focusing power a lens has. It is given in units of diopters, i.e. in inverse distance of the focal length of the lens. A comprehensible way to visualize refractive power is to consider two parallel light rays entering the optical system -here: the windscreen -that upon exit are not parallel anymore, but either divergent or convergent. In the convergent case, the focal length is the distance from the refractive element to the intersection of the two rays, and its inverse is the numerical value of the refractive power. For concave lenses, the diverging rays are extended in the negative direction until these two rays intersect, and the negative distance now forms the focal length.
For windscreens, the refractive power is not a single number for the whole glass, but the measurement has a spatial resolution as depicted by Fig. 1.
Fig. 1: Refractive power measurement of the ADAS camera area of a VW series production windshield under an inclination angle of ϵ = 63°. The difference in magnitude between horizontal and vertical direction originates from the inclination angle, which amplifies the refractive power according to the Kerkhof model [33].
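Before turning to the measurement systems in detail, a numerical illustration of the decomposition in Eqs. (1)–(2) may be helpful: given a sampled optical path difference map on the unit disk, each coefficient is obtained by projecting onto the corresponding polynomial from Table I. The snippet below is a simplified sketch (Cartesian grid, plain Riemann sum, only Z_0–Z_5, and a factor 1/π taken from the normalization in Eq. (2)); it is not the wavefront software used for the measurements reported here.

```python
import numpy as np

# Sketch: estimate Zernike coefficients from a sampled optical path difference
# map W on the unit disk, via c_n = <W, Z_n> / pi (using <Z_i, Z_j> = pi * delta_ij).

def zernike(n, x, y):
    r2 = x**2 + y**2
    table = {
        0: np.ones_like(x),             # piston
        1: 2 * y,                       # y-tilt
        2: 2 * x,                       # x-tilt
        3: np.sqrt(6) * 2 * x * y,      # oblique astigmatism
        4: np.sqrt(3) * (2 * r2 - 1),   # defocus
        5: np.sqrt(6) * (x**2 - y**2),  # vertical astigmatism
    }
    return table[n]

N = 512
x, y = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
inside = x**2 + y**2 <= 1.0
dA = (2.0 / N) ** 2

# synthetic wavefront: 0.2 units of defocus plus 0.05 units of oblique astigmatism
W = 0.2 * zernike(4, x, y) + 0.05 * zernike(3, x, y)

for n in range(6):
    c_n = np.sum(W[inside] * zernike(n, x, y)[inside]) * dA / np.pi
    print(f"c_{n} ~ {c_n:+.3f}")
```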
In the early days, two actual parallel laser beams were deflected, and the whole setup was laterally moved to achieve a certain spatial resolution [6]. More modern systems such as the one produced by ISRA use the Moiré effect to spatially resolve the refractive power over a limited area by observing the location dependency of the perturbed grid spacing between Moiré interferences [23]. In addition, new refractive power measurement systems like the one produced by LaVision [22] use the Background Oriented Schlieren (BOS) imaging method to overcome the resolution limitation of the Moiré approach [33].\nImportantly, the refractive power depends on the direction, as the two parallel lines form a plane together with the principal plane of the optical element. In principle, this direction can be rotated full circle by 360 • , but in practice, the refractive power is determined and specified only in the horizontal and vertical direction." }, { "figure_ref": [], "heading": "C. Modulation Transfer Function", "publication_ref": [ "b1", "b8", "b14", "b12", "b8", "b0", "b17", "b25" ], "table_ref": [], "text": "The modulation transfer function (MTF) -and its nonharmonic equivalent, the spatial frequency response (SFR)are established metrics to characterize optical systems, based on linear system theory and scalar diffraction theory [2], [10]. In image space, the transfer function of the system under test is called the point spread function (PSF), and in frequency space, it is denoted as the optical transfer function (OTF). The MTF is given by the absolute value of the OTF and is of particular importance if the intensity distribution is the matter of interest. The PSF and the MTF are highly non-linear functions over the image field (radius, azimuth), and they also depend on the defocus ∆z, due to the refractions on different lens element surfaces. Hence, the input space of the PSF is in general three dimensional.\nThe MTF is measured by using either harmonic input signals (MTF, e.g. sinusoidal Siemens star) or a step function type input (SFR, e.g. slanted edge). ISO12233 defines a norm to measure the MTF [16], and IEEE P2020 is currently finalizing an automotive extension of this norm [14]. In this article, we will use slanted edge measurements.\nAccording to scalar diffraction theory, the MTF is proportional to the absolute value of the Fourier transform of the wavefront in the aperture plane of the lens (more general: the optical element). The wavefront is transformed, normalized, and the absolute value is taken to yield the MTF. This allows for an analytical relationship between the MTF and the wavefront aberrations, which can be parameterized by Zernike coefficients c n :\nMTF( ⃗ k|λ) = ‚ P+∩P- exp 2πi λ ∞ n=0 c n Z n ( ⃗ ξ + ⃗ ∆) -Z n ( ⃗ ξ -⃗ ∆) dξ 2 ˜R2 | P ( ⃗ ξ) | 2 dξ 2 .(3)\nEq. ( 3) is motivated by Goodman [10] and the domain of integration is determined by the aperture stop of the camera.\nIn detail, P describes the 2D aperture stop function, which is given by a circular top hat function with magnitude one and baseline zero. The displaced aperture stop function P + is shifted by:\n⃗ ∆ := λz a →o ⃗ k 2 ≈ λf ⃗ k 2 . (4\n)\nThe intersection of P + and P -(shift by -⃗ ∆) determines the domain of integration in Eq. ( 3). In addition, λ characterizes the wavelength under consideration and z a →o quantifies the distance on the optical axis from the aperture stop to the observer plane, which roughly equals the focal length f of the camera lens if the Gaussian lens equation is approximated. 
This simplification holds if the working distance is by magnitudes larger than the image distance z a →o . Finally, ⃗ k denotes the spatial frequency vector.\nAs a side note, the MTF depends on the wavelength λ according to Eq. ( 3). The polychromatic MTF can be retrieved by integrating the conditioned, monochromatic MTF over the normalized power spectral density PSD of the light source. The area under the PSD curve quantifies the likelihood of emitting a photon in the wavelength range [λ, λ + ∆λ] by the light source. As a result, the polychromatic MTF is given by:\nMTF( ⃗ k) = ∞ 0 MTF( ⃗ k|λ) • PSD(λ) dλ .(5)\nConsequently, chromatic aberrations will potentially also influence the performance of AI-based algorithms for autonomous driving but they are not discussed in detail in this paper.\nAt this point, it has to be emphasized, that the first three Zernike coefficients for piston and tilt do not represent optical aberrations in the classical sense. Even though there is a wavefront perturbation of the light beam, the image quality is not influenced by those terms because the curvature of the wavefront is not affected. Instead, tilts induce image distortions and generate a non-conformal mapping. As we focus in this article on the sharpness of the optical system we will not further investigate this influence.\nIf the Zernike polynomials of Table I are transformed into Cartesian coordinates it becomes obvious that the difference in Eq. ( 3) vanishes for Zernike polynomials of zeroth order. For the y-tilt Z 1 and the x-tilt Z 2 , the integrand evaluates to a constant phasor. Hence, it holds that:\nMTF( ⃗ k|λ) (3) = e (2πi•c1,2•za →o • ⃗ k) • ‚ P+∩P- 1 dξ 2 ˜R2 | P ( ⃗ ξ) | 2 dξ 2 ⇔ MTF( ⃗ k|λ) = e (2πi•c1,2•za →o• ⃗ k) • ‚ P+∩P- 1 dξ 2 ˜R2 | P ( ⃗ ξ) | 2 dξ 2 ⇔ MTF( ⃗ k|λ) := e (2πi•c1,2•za →o • ⃗ k) • MTF diff ( ⃗ k|λ) ⇔ MTF( ⃗ k|λ) = MTF diff ( ⃗ k|λ) . (6\n)\nAs a consequence, the diffraction limited MTF is not modulated by the Zernike coefficients c 0 up to c 2 . Therefore, the second order Zernike coefficients are those of main interest for studying the effect of optical aberrations in terms of sharpness degradation on convolutional neural networks for autonomous driving. Nonetheless, the optical distortion does influence the rectilinear mapping, but as stated before this is an independent effect we do not study in this work. Finally, note that the MTF is unfortunately currently not traceable to fundamental physical quantities, and therefore a calibration chain to national metrological institutes like the PTB in Germany or the NIST in the US can not be established at the moment. For the automotive industry, this is a major source of discussion, as the implementations of ISO12233 are very sensitive to many diverse influences: stable lighting conditions (spectrum, intensity, direction, homogeneity), a reproducible mechanical setup (target distance, field-of-view) and well-defined camera settings (gain, exposure, HDR, ISP . . . ) are the goal, which are not met in practice. Instead, comparability of two camera systems is only possible to a (relatively) good accuracy within measurements from the very same experimental setup. Comparison between two different measurement sites -even with the same nominal setup -is quite difficult and error-prone [1], [19], [27]." }, { "figure_ref": [], "heading": "III. 
REFRACTIVE POWER", "publication_ref": [ "b27", "b32" ], "table_ref": [], "text": "In this section, we will demonstrate how relevant Zernike polynomials are not captured by refractive power measurements. Our argument is based on the theory that the refractive power is given by the second derivative of the wavefront modulation of a plane wave passing a refractive element [29]. Using this relationship, we demonstrate how several Zernike polynomials are simply not covered by a refractive power measurement, as zeroth and first order polynomials in x and y vanish if the second derivative is considered. We start by introducing the measurement principle of a Shack-Hartmann sensor in Sec. III-A to motivate how the Zernike coefficients c i are retrieved. In Sec. III-C, we then mathematically derive Fig. 2: Sketch of the measurement principle of a Shack-Hartmann sensor [34].\nwhich Zernike polynomials are not present in the refractive power measurement. As the Zernike polynomials form an orthogonal function basis, this proves that there are optical aberrations that cannot be captured by the refractive power. Finally, we experimentally demonstrate the validity of our assumption in Sec. III-D by using a Shack-Hartmann sensor to measure the wavefront modulation induced by a high-quality convex lens of well-known refractive power." }, { "figure_ref": [], "heading": "A. From Shack-Hartmann Measurements to the Wavefront Aberration Map", "publication_ref": [], "table_ref": [], "text": "Fig. 2 demonstrates the method of operation of a Shack-Hartmann wavefront sensor. If a collimated light beam is transmitting a refractive optical element then the wavefront gets modulated. A Shack-Hartmann sensor consists of a microlens array, which resolves the local wavefront perturbations by focusing a wavefront snippet on a CCD or CMOS sensor. Without any aberrations, the wavefront sensor will capture the light in the center of each pixel. If aberrations are present, then the focusing spot will be displaced locally by d x and d y , respectively. The resulting local gradient of the optical path difference (W ) is given by:\n⃗ β := β x ⃗ x1 . . . β x ⃗ xm β y ⃗ x1 . . . β y ⃗ xm T ⇔ ⃗ β = dx 1 f 2 sh + d 2 x1 . . . dx m f 2 sh + d 2 xm dy 1 f 2 sh + d 2 y1 . . . dy m f 2 sh + d 2 ym T .(7)\nHere, f sh denotes the focal length of the microlenses and m specifies the number of microlenses within the array. With the Shack-Hartmann measurement of the local wavefront gradients β i , the Zernike coefficients c i of the wavefront aberration map are determined by:\n⃗ β = 1 ρ a •              ∂Z 4 ∂ x ⃗ x1 . . . ∂Z n ∂ x ⃗ x1 . . . . . . . . . ∂Z 4 ∂ x ⃗ xm . . . ∂Z n ∂ x ⃗ xm ∂Z 4 ∂ ỹ ⃗ x1 . . . ∂Z n ∂ ỹ ⃗ x1 . . . . . . . . . ∂Z 4 ∂ ỹ ⃗ xm . . . ∂Z n ∂ ỹ ⃗ xm              •    c 4 . . . c n    =: 1 ρ a • M • ⃗ c .(8)\nThe Zernike decomposition coefficients c i are uniquely determined if |M T M| ̸ = 0. In other words, the Gramian matrix M T M has to be invertible, wherefore M T M needs to have full rank. If this condition is fulfilled, then the Zernike coefficient vector ⃗ c can be retrieved from the measured local wavefront gradient vector ⃗ β by: ⃗ c\n(8) = ρ a • M T M -1 • M T • ⃗ β .(9)" }, { "figure_ref": [], "heading": "B. From Wavefront Aberration Maps to local Refractive Power", "publication_ref": [ "b27", "b28" ], "table_ref": [], "text": "From Sec. III-A we know how to determine the Zernike coefficients c i , wherefore we can reconstruct the wavefront aberration map according to Eq. ( 1). 
If the reference wavefront has been characterized by a plane wave, then the local refractive power of an optical element is given by the second derivative of the wavefront aberration map W with respect to the axis of interest [29], [30]. Hence, the refractive power D xi along the axis x i is given by:\nD xi (⃗ x a ) = ∂ 2 ∂x 2 i W (⃗ x a ) .(10)\nHere, the input vector ⃗ x a ∈ R 2 is restricted to the principal plane of the refractive element. The validity of Equation ( 10) can be proven for the special case of a spherical thin lens:\nf 2 xa 1 = x 2 a1 + f xa 1 -W (x a1 ) 2 , w.l.o.g.: x a2 ! = 0 ⇒ W (x a1 ) = f xa 1   1 -1 - x a1 f xa 1 2   ⇔ W (x a1 ) = f xa 1   1 -   1 - 1 2 x a1 f xa 1 2 + O    x a1 f xa 1 4        ⇒ W (x a1 ) ≈ x 2 a1 2f xa 1 =: D xa 1 2 • x 2 a1 ⇒ D xa 1 (10) = ∂ 2 ∂x 2 a1 W (x a1 ) = D xa 1 . ■(11)" }, { "figure_ref": [], "heading": "C. Information Content of Refractive Power Measurements", "publication_ref": [ "b19", "b7", "b27", "b9", "b22", "b24" ], "table_ref": [], "text": "Eq. ( 10) determines the relationship between refractive power measurements D and wavefront aberration measurements W via the curvature of the optical path difference map. In Sec. II-C we introduced the concept of the PSF as a Fourier optical merit function, which serves as the impulse response function or the Green's function of an optical system [21]. In addition to the Fourier optical approach there is also a ray optics approximation to describe the PSF in terms of the area of a blurring ellipse, which encloses a certain amount of light around the focusing spot in relation to the total amount of energy entering the system through the aperture stop. The area of this blurring ellipse is proportional to the Gaussian curvature [8] of the wavefront aberration map or equivalently speaking, proportional to the determinant of the Hessian matrix of the wavefront aberration function [29]:\n‹ C PSF(⃗ x o ) ẑo d 2 x o ∝     ∂ 2 ∂x 2 1 W (⃗ x a ) ∂ ∂x 1 ∂ ∂x 2 W (⃗ x a ) ∂ ∂x 1 ∂ ∂x 2 W (⃗ x a ) ∂ 2 ∂x 2 2 W (⃗ x a )     . (12)\nHere, C denotes the contour confining the domain of integration, which is given by the blurring ellipse. Due to the relationship presented in Eq. ( 10), this matrix is also known as the dioptric power matrix D [11]. The determinant can be rewritten in terms of the traces of the dioptric power matrix:\n‹ C PSF(⃗ x o ) ẑo d 2 x o (12) ∝ 1 2 (tr (D)) 2 -tr D 2 . (13\n)\nSo far, the automotive industry exclusively specifies requirements in terms of the refractive power w.r.t. the horizontal and vertical directions. Consequently, only the trace of D is measured and off-diagonal elements in the Hessian matrix are not investigated. This demonstrates that there is a blind spot in the quality assurance chain at the moment. This conclusion can be further underpinned by an mathematical argument. The trace of D is given by:\ntr (D) = d i=1 D xi (⃗ x a ) = △W (⃗ x a ) .(14)\nConsequently, the trace of D is unaffected by wavefront aberration fields, which fulfill the Laplace equation:\n△Γ(⃗ x a ) ! = 0 . (15\n)\nAs a result, the trace of D is Gauge invariant under aberration fields Γ(⃗ x a ) that are composed of harmonic functions. Hence, Zernike polynomials in Table I that are harmonic functions (like astigmatism or trefoil) will not alter the trace of D.\nIn a nutshell, refractive power measurements are not sensitive for optical distortions quantified by c 1 and c 2 . 
Furthermore, the refractive power is invariant under oblique astigmatism given by c 3 if the refractive power requirements are specified exclusively along the horizontal and vertical axis, as it is the current governing standard in the automotive industry. Finally, those quality standards are insufficient for extracting more fundamental information about the optical system in terms of the PSF. Nonetheless, the aberrations associated with these polynomials have been proven to have an influence on the performance of AI algorithms [24], [26]." }, { "figure_ref": [ "fig_0" ], "heading": "D. Experimental Verification", "publication_ref": [ "b8", "b22", "b24" ], "table_ref": [], "text": "Since Eq. ( 10) is not well established in the automotive industry, we experimentally demonstrate the validity of the relationship by a Shack-Hartmann wavefront measurement of a calibration lens. The lens under test was produced by Zeiss and is traced back to national standards by an accredited calibration authority. The local wavefront gradients β i are measured by a Shack-Hartmann sensor and the refractive power is retrieved by utilizing Eq. (10). As demonstrated by Eq. 8 the Shack-Hartmanm measurement yields the first derivative of the wavefront. Here, we measure the lens and numerically determine the second derivative by a simple central difference scheme, which should result in the specified refractive power. Fig. 3 illustrates the outcome w.r.t. the refractive power map over the lens aperture. From the frequency distribution of the local refractive power across the entire principal plane, the expectation value for the global refractive power of the optical element in the xand y-plane can be deduced. The expectation values meet the certified refractive power values of the calibration lens within the uncertainty intervals. Hence, the validity of Eq. ( 10) has also been experimentally confirmed.\nSummarizing, we have demonstrated that fundamentally several optical aberrations are not captured by a refractive power measurement. The image quality can be deteriorated even though the refractive power measurement indicates a compliant windscreen sample. From previous studies on the effect of oblique astigmatism (c 3 ) on road sign classification [24], [26] it becomes evident, that refractive power measurements are insufficient for specifying the quality of a windshield in order to ensure reliable computer vision for autonomous driving vehicles." }, { "figure_ref": [], "heading": "IV. MODULATION TRANSFER FUNCTION", "publication_ref": [], "table_ref": [], "text": "In this section, we will demonstrate why the windscreen and the camera form a joint optical system that cannot be separated into two independent constituents, such that the MTF cannot be determined for the two systems separately. First, we argue how the refractive power of the windscreen interacts with the focal length of the camera system. In a second step, this is experimentally verified using a MTF measurement with and without a windscreen. A discussion elaborates on several implications for the production and testing process." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "A. Field Curvature", "publication_ref": [], "table_ref": [], "text": "The focal length of an imaging system varies over the field of view, with the so-called field curvature being a prominent optimization goal for any lens designer. 
The semiconductor production processes produce completely flat image sensors, which for the imaging optics is a challenge, as the field curvature needs to be flat as well to minimize aberrations. This field curvature, as a design property of the lens, is given by the offset ∆z fc over field in units of length, typically on the micrometer range. A symbolic field curvature is visualized in Fig. 4. As explained above, the refractive power of the windscreen leads to parallel rays converging or diverging. Taking the two elements windscreen and lens together yields a second focus offset ∆z ws for the camera system, as the converging (diverging) rays will shorten (prolong) the focal length of the camera system. Fig. 4 depicts this situation. The two offsets are added for the system offset, such that:\n∆z = ∆z ws + ∆z fc .(16)\nImportantly, both ∆z ws and ∆z fc can have positive or negative values, and thus the overall offset may vanish when these terms cancel. A vanishing offset value implies a sharpening of the system. Here, a MTF measurement of the camera alone would yield a certain number, while putting a windscreen in front of the camera would act like glasses and the image would become sharper. That this is indeed the case in practice is presented in the following section." }, { "figure_ref": [], "heading": "B. Experimental Validation", "publication_ref": [ "b14", "b14", "b10" ], "table_ref": [], "text": "Fig. 5 depicts two slanted edge measurements, one without a windscreen (5a), and one with a windscreen placed in front of the camera system (5b). The insets indicate the MTF values derived from the numerical evaluation for all four edges, using an ISO12233-compliant algorithm [16]. There are two horizontal and two vertical values. The two vertical values (top and bottom) distinctly decrease from 52 ± 1.5 % [95 %] to 39 ± 1.7 % [95 %] when a windscreen is placed in front of the camera. However, for the horizontal direction (left and right) the MTF values both significantly increase from 45/47 ± 1.5 % [95 %] to 52/54 ± 1.6 % [95 %] when the windscreen is placed in front of the camera. The results experimentally confirm that the defocus ∆z ws and ∆z fc may Fig. 5: MTF measurement for an ADAS system based on the Slanted Edge method according to ISO 12233 [16].\ncancel to a certain degree, increasing the sharpness like glasses would do for a myopic person. This conclusion is well established in physics [12] for decades but the implications for the quality assurance testing procedure of ADAS systems in the automotive industry are not well prevalent." }, { "figure_ref": [], "heading": "C. Discussion", "publication_ref": [ "b31" ], "table_ref": [], "text": "Both the field curvature of the lens and the refractive power of the windscreen are spatially variant. The field curvature not only varies over field (radially) as the name implies, but due to production tolerances the rotation symmetry of the lens is usually broken to a certain degree. The field of view of the lens projected on the windscreen yields a trapezoidal cutout (cf. Fig. 1). I.e., the (almost) rotational symmetry of the lens projected on the windscreen combines with the local refractive power variation of the windscreen in this cutout. 
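The sharpening effect reported in the measurement above can also be reproduced with a toy Fourier-optics model: a 1-D pupil with one quadratic wavefront error for the field-curvature defocus, a second one of opposite sign for the windscreen, and the incoherent MTF obtained from the pupil autocorrelation. The defocus amplitudes (in waves), the pupil geometry and the evaluated frequency bin are arbitrary illustrative values; this is a sketch of the principle, not a model of the actual camera.

```python
import numpy as np

def mtf_1d(defocus_waves, n=2048, pupil_frac=0.25):
    """Incoherent MTF of a 1-D pupil with a quadratic wavefront error whose
    amplitude at the pupil edge is `defocus_waves` (in waves)."""
    xi = np.linspace(-1.0, 1.0, n)
    inside = np.abs(xi) <= pupil_frac
    u = np.where(inside, xi / pupil_frac, 0.0)        # normalised pupil coordinate
    pupil = inside * np.exp(2j * np.pi * defocus_waves * u**2)
    psf = np.abs(np.fft.fft(pupil))**2                # incoherent point spread function
    otf = np.fft.fft(psf)                             # proportional to the pupil autocorrelation
    return np.abs(otf) / np.abs(otf[0])

w_fc = +0.6      # defocus from the lens field curvature (Delta z_fc), in waves
w_ws = -0.4      # defocus added by the windscreen (Delta z_ws), opposite sign

mtf_cam = mtf_1d(w_fc)              # camera alone
mtf_ws  = mtf_1d(w_ws)              # windscreen alone
mtf_sys = mtf_1d(w_fc + w_ws)       # joint system: wavefront errors add, cf. Eq. (16)

k = 120                              # a mid-frequency bin (the cutoff is near bin 512 here)
print("camera alone        :", round(float(mtf_cam[k]), 3))
print("product of both MTFs:", round(float((mtf_cam * mtf_ws)[k]), 3))
print("combined system     :", round(float(mtf_sys[k]), 3))
# The combined system is sharper than the camera alone (the windscreen acts like
# glasses), while the naive product of the individual MTFs is lower than either --
# the two elements cannot be treated as independent linear subsystems.
```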
Not only that, but the windscreen also has distinctly different refractive power for the horizontal and the vertical direction as given by the Kerkhof model [33].\nTaken together, it is apparent that a windscreen cannot be qualified by a MTF measurement if both the windscreen and the camera are measured separately. The experimental sharpening unambiguously demonstrates a non-linear process, proving that the two elements cannot be separated using linear system theory (read: the individual MTFs cannot be multiplied). The way individual production tolerances will add is not predictable. In brief, it is not possible to determine individual MTF limits, as the combination of individual tolerances may hold both good and bad surprises, either sending a good system to scrap or a bad system to the field.\nTherefore, any solution using MTF would have to measure the MTF on the combined system of produced windscreen and camera system with their individual production tolerances. This could be either at the production site of the Tier 1 or the OEM. But there are still several important open questions that make this an unattractive proposal: if an assembly is non-compliant, is it worth finding a compliant combination, does it make economically sense? How big are the assembly tolerances fitting the windscreen into the car body? If the OEM wants the measurement system at the site of the Tier 1, one should be aware that the assembly of the windscreen into the car produces distinct mechanical tolerances, changing the shape and internal tension of the windscreen. As we are looking for subtle differences in optical quality, this may affect the pre-assembled camera system as well. Finally, from an automotive process view it is clear that an independent measurement of the camera and the windscreen is much preferred.\nSummarizing, the MTF is a measure of 'sharpness' based on linear system theory. The windscreen and the camera form a combined optical system that cannot be separated, which prohibits its use for windscreen characterization without the actual, produced camera system in place. Taken together with the possibility of finding a better metric we are skeptical that the MTF should be prioritized for windscreen characterization going forward." }, { "figure_ref": [], "heading": "V. SIMULATING OPTICAL PROPERTIES", "publication_ref": [ "b18" ], "table_ref": [], "text": "Having shown that basically no current measurement system in the automotive windscreen industry is capable of a meaningful characterization of the windscreen optical quality for downstream AI algorithm consumption, what could be a way forward? A comprehensive experimental study using thousands of actual cameras and windscreens is out of the question. Therefore, the AI performance needs to be linked to the windscreen optical quality by simulation, using physicalrealistic optic models. These simulations need to model the production tolerances of both windscreens and cameras. Then, the performance requirements of the AI-based ADAS functionalities can be translated to optical quality specifications for windscreen production.\nThe waveform description is fundamental and includes all optical effects and aberrations, and can be measured by a Shack-Hartmann sensor. Currently, this is not a viable approach to windscreen characterization at the site of the Tier 1, as it is too expensive and more importantly, too slow for a 100 % part check. 
Nonetheless, we believe it is possible to use special laboratory-grade equipment to create the physicalrealistic optical models necessary for the simulations, and then derive from these simulations an understanding of the optical properties that are really necessary for the AI performance. Finally, from this we can deduce a simplified form of measurement that captures this newfound knowledge of the required optical properties. A first example of this process is published in [20].\nTherefore, the challenge is to understand those optical properties that are really necessary for a robust AI algorithm performance. We believe that this is a necessary step, and without it, the move from ADAS to AD will be prohibitively difficult, as production tolerances combined with the complexity of the world create an unmanageable number of combinations." }, { "figure_ref": [], "heading": "VI. SUMMARY", "publication_ref": [], "table_ref": [], "text": "In automotive mass production, the inspection systems at the suppliers need to measure the quality of windscreens in a meaningful way for the final device performance. Modern ADAS and future AD camera systems are based on AI algorithms, wherefore the windscreen quality needs to be linked to the performance of these algorithms. Currently, there are two types of measurements established in the industry to measure the optical quality of a windscreen: refractive power and MTF. In this article, we demonstrated how both these measurements are fundamentally not capable of capturing relevant optical properties of the windscreen.\nThe refractive power measurement does not include several aberrations -given by Zernike polynomials, e.g. oblique astigmatism -while these aberrations obviously affect the performance of the AI algorithms: oblique astigmatism causes a directional blurring of the scene, and blurring causes a degradation of the performance. Because of the orthogonality of the Zernike polynomials, it is clear that this information is simply lacking in refractive power measurements.\nMTF is based on linear system theory, where independent optical systems might be multiplied in frequency space to yield the system MTF. This is, for example, the case for the lens and the imager. Here, we demonstrated mathematically and experimentally that the windscreen forms a novel optical system together with the lens of the camera system, which cannot be separated into individual components. Therefore, measuring the MTF on the windscreen alone will not yield the performance of the combined system. Thus, the final assembly of the windscreen and camera system in the car may be both better or worse than the EOL measurement at the windscreen production site, either sending good parts to scrap or bad parts into the field.\nEvery car has a windscreen. Using the knowledge presented in this article we believe that the automotive industry needs to focus their efforts on finding novel measurement methods that qualify the optical quality of windscreens in a meaningful way for the downstream AI algorithms. We propose a concept using fundamental wave (and Fourier) optics to characterize the windscreens and combine wavefront measurements and physical-realistic simulations to reach an understanding of what optical properties are really important for AI computer vision algorithms. 
We believe it cannot be said in general that the optical quality of windscreens is too low -what is currently lacking is not optical quality, but an understanding of how robust the algorithms are against optical aberrations. We simply do not know exactly what optical quality is needed. Taking these elements together, we believe that a novel metric can be found that contains the relevant information while remaining practical enough to be used stand-alone at the windscreen production site. This is the great windscreen challenge the automotive industry currently faces." } ]
Windscreen optical quality is an important aspect of any advanced driver assistance system, and also for future autonomous driving, as today at least some cameras of the sensor suite are situated behind the windscreen. Automotive mass production processes require measurement systems that characterize the optical quality of the windscreens in a meaningful way, which for modern perception stacks implies meaningful for artificial intelligence (AI) algorithms. The measured optical quality needs to be linked to the performance of these algorithms, such that performance limits -and thus production tolerance limits -can be defined. In this article we demonstrate that the main metric established in the industry -refractive power -is fundamentally not capable of capturing relevant optical properties of windscreens. Further, as the industry is moving towards the modulation transfer function (MTF) as an alternative, we mathematically show that this metric cannot be used on windscreens alone, but that the windscreen forms a novel optical system together with the optics of the camera system. Hence, the required goal of a qualification system that is installed at the windscreen supplier and independently measures the optical quality cannot be achieved using MTF. We propose a novel concept to determine the optical quality of windscreens and to use simulation to link this optical quality to the performance of AI algorithms, which can hopefully lead to novel inspection systems.
Windscreen Optical Quality for AI Algorithms: Refractive Power and MTF not Sufficient
[ { "figure_caption": "Fig. 3 :3Fig. 3: Wavefront measurement performed on a ⟨D⟩ = 100.3 mdpt reference lens. In order to cover the entire aperture of the lens, several Shack-Hartmann measurements have been stitched together. This procedure has introduced artifacts, which are visible in the measurement data by strongly pronounced vertical and horizontal lines. In total, 15 measurements have been performed over the calibration lens aperture of d = 10 cm.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Windscreen and lens form a joint optical system. H and H ′ are the principle planes of the lens, f is the nominal focal length. The blue line visualizes the field curvature (not to scale). Normally, parallel rays are focused onto the field curvature (yellow line). Windscreen refractive power shortens or prolongs the effective focal length of the lens (red line). There are two different focus offsets ∆z fc and ∆z ws which may add or even cancel at different fields of view.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" } ]
Dominik Werner Wolf; Markus Ulrich; Alexander Braun
[ { "authors": "Uwe Artmann", "journal": "Electronic Imaging", "ref_id": "b0", "title": "Quantify aliasing a new approach to make resolution measurement more robust", "year": "2019" }, { "authors": "Glenn D Boreman", "journal": "SPIE Press", "ref_id": "b1", "title": "Modulation Transfer Function in Optical and Electro-Optical Systems", "year": "2001" }, { "authors": "Max Born", "journal": "Cambridge University Press", "ref_id": "b2", "title": "Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light", "year": "1999" }, { "authors": "Alexander Braun", "journal": "tm -Technisches Messen", "ref_id": "b3", "title": "Automotive mass production of camera systems: Linking image quality to AI performance", "year": "2022" }, { "authors": "David Compertore", "journal": "", "ref_id": "b4", "title": "Adas windshield measurements -white paper", "year": "2021" }, { "authors": "", "journal": "", "ref_id": "b5", "title": "Determination of the optical deviation and refractive power of safety glass for vehicle glazing", "year": "1995" }, { "authors": "", "journal": "", "ref_id": "b6", "title": "Prüfung der abriebfestigkeit von fahrzeugverglasung mit dem wischer-test", "year": "2019" }, { "authors": "Manfredo Perdigão; Carmo", "journal": "Prentice-Hall", "ref_id": "b7", "title": "Differential geometry of curves and surfaces", "year": "1976" }, { "authors": "Joseph W Goodman", "journal": "Stanford University / McGraw-Hill", "ref_id": "b8", "title": "Introduction to Fourier optics", "year": "1968" }, { "authors": "William Harris", "journal": "South African Optometrist", "ref_id": "b9", "title": "The matrix representation of dioptric power. part 1: An introduction", "year": "1988-01" }, { "authors": "Eugene Hecht; Optik", "journal": "De Gruyter", "ref_id": "b10", "title": "", "year": "2018" }, { "authors": " Michael Heizmann", "journal": "at -Automatisierungstechnik", "ref_id": "b11", "title": "Implementing machine learning: chances and challenges", "year": "2022" }, { "authors": "", "journal": "IEEE Standards Association", "ref_id": "b12", "title": "Automotive imaging -white paper", "year": "2018" }, { "authors": "M J Irland", "journal": "Applied Optics", "ref_id": "b13", "title": "Windshield optics", "year": "1969" }, { "authors": "", "journal": "International Organization for Standardization", "ref_id": "b14", "title": "Photography -electronic still picture imaging -resolution and spatial frequency responses", "year": "2023" }, { "authors": "", "journal": "International Organization for Standardization", "ref_id": "b15", "title": "Ophthalmic optics and instruments -reporting aberrations of the human eye", "year": "2008" }, { "authors": "", "journal": "International Organization for Standardization", "ref_id": "b16", "title": "Quality management systems -particular requirements for the application of iso 9001:2008 for automotive production and relevant service part organizations", "year": "2009" }, { "authors": "Norman Koren", "journal": "Electronic Imaging", "ref_id": "b17", "title": "Measuring mtf with wedges: pitfalls and best practices", "year": "2017" }, { "authors": "Christian Krebs", "journal": "", "ref_id": "b18", "title": "Impact of windshield optical aberrations on visual range camera based classification tasks performed by cnns", "year": "2021" }, { "authors": "K Prem; Kythe", "journal": "CRC Press", "ref_id": "b19", "title": "Green's Functions and Linear Differential Equations: Theory, Applications, and Computation", "year": "2011" }, { "authors": " 
Lavision", "journal": "LaVision", "ref_id": "b20", "title": "Automotive imaging for safer driving", "year": "2023" }, { "authors": "Thomas Mitra", "journal": "ISRA Vision", "ref_id": "b21", "title": "Benefits of optical distortion measurement", "year": "2022" }, { "authors": "Patrick Müller", "journal": "", "ref_id": "b22", "title": "Simulating optical properties to access novel metrological parameter ranges and the impact of different model approximations", "year": "2022" }, { "authors": "Patrick Müller", "journal": "Electronic Imaging", "ref_id": "b23", "title": "Mtf as a performance indicator for ai algorithms?", "year": "2023" }, { "authors": "Patrick Müller", "journal": "", "ref_id": "b24", "title": "Impact of realistic properties of the point spread function on classification tasks to reveal a possible distribution shift", "year": "2022" }, { "authors": "J Roland", "journal": "", "ref_id": "b25", "title": "A study of slanted-edge mtf stability and repeatability", "year": "2015" }, { "authors": "Carsten Steger", "journal": "Wiley-VCH", "ref_id": "b26", "title": "Machine Vision Algorithms and Applications", "year": "2018" }, { "authors": "Larry N Thibos", "journal": "Ophthalmic and Physiological Optics", "ref_id": "b27", "title": "Calculation of the geometrical point-spread function from wavefront aberrations", "year": "2019-06" }, { "authors": "L N Thibos", "journal": "Journal of Vision", "ref_id": "b28", "title": "Accuracy and precision of objective refraction from wavefront aberrations", "year": "2004-04" }, { "authors": "", "journal": "Verband der Deutschen Automobilindustrie (VDA", "ref_id": "b29", "title": "Qualitätsmanagement in der automobilindustrie -prozessaudit", "year": "2023" }, { "authors": "Korbinian Weik", "journal": "", "ref_id": "b30", "title": "Imaging through curved glass: windshield optical impact on automotive cameras", "year": "2022" }, { "authors": "Dominik W Wolf", "journal": "Currently under review at Metrologia", "ref_id": "b31", "title": "Optical power measurement in the automotive world", "year": "2023" }, { "authors": "Z Yang", "journal": "Nature Commun", "ref_id": "b32", "title": "Generalized hartmann-shack array of dielectric metalens sub-arrays for polarimetric beam profiling", "year": "2018" } ]
[ { "formula_coordinates": [ 2, 328.9, 148.6, 234.13, 30.2 ], "formula_id": "formula_0", "formula_text": "W (ρ, ϕ) = ∞ n=0 c n Z n (ρ, ϕ) , c n : (2) = ⟨W, Z n ⟩ .(1)" }, { "formula_coordinates": [ 2, 316.96, 320.63, 246.08, 23.88 ], "formula_id": "formula_1", "formula_text": "⟨Z i , Z j ⟩ := 2π 0 1 0 Z i (ρ, ϕ)•Z j (ρ, ϕ)•ρ dρ dϕ = π •δ ij .(2)" }, { "formula_coordinates": [ 2, 321.12, 570.54, 235.68, 121.45 ], "formula_id": "formula_2", "formula_text": "Z i Zernike polynomial Harmonic Polar coordinates Cartesian coordinates Z 0 1 1 ✓ Z 1 2ρ sin ϕ 2y ✓ Z 2 2ρ cos ϕ 2x ✓ Z 3 √ 6ρ 2 sin 2ϕ 2 √ 6xy ✓ Z 4 √ 3(2ρ 2 -1) √ 3(2x 2 + 2y 2 -1) × Z 5 √ 6ρ 2 cos 2ϕ √ 6(x 2 -y 2 ) ✓ Z 6 √ 8ρ 3 sin 3ϕ √ 8(3x 2 y -y 3 ) ✓ Z 7 √ 8(3ρ 3 -2ρ) sin ϕ √ 8(3x 2 y + 3y 3 -2y) × Z 8 √ 8(3ρ 3 -2ρ) cos ϕ √ 8(3x 3 + 3xy 2 -2x) × Z 9 √ 8ρ 3 cos 3ϕ √ 8(x 3 + 3xy 2 ) ✓" }, { "formula_coordinates": [ 3, 318.72, 181.58, 244.32, 38.75 ], "formula_id": "formula_3", "formula_text": "MTF( ⃗ k|λ) = ‚ P+∩P- exp 2πi λ ∞ n=0 c n Z n ( ⃗ ξ + ⃗ ∆) -Z n ( ⃗ ξ -⃗ ∆) dξ 2 ˜R2 | P ( ⃗ ξ) | 2 dξ 2 .(3)" }, { "formula_coordinates": [ 3, 388.91, 308.73, 170.25, 24.94 ], "formula_id": "formula_4", "formula_text": "⃗ ∆ := λz a →o ⃗ k 2 ≈ λf ⃗ k 2 . (4" }, { "formula_coordinates": [ 3, 559.16, 318.42, 3.87, 8.64 ], "formula_id": "formula_5", "formula_text": ")" }, { "formula_coordinates": [ 3, 354.52, 534.72, 208.52, 23.18 ], "formula_id": "formula_6", "formula_text": "MTF( ⃗ k) = ∞ 0 MTF( ⃗ k|λ) • PSD(λ) dλ .(5)" }, { "formula_coordinates": [ 4, 57.16, 108.9, 238.99, 149.3 ], "formula_id": "formula_7", "formula_text": "MTF( ⃗ k|λ) (3) = e (2πi•c1,2•za →o • ⃗ k) • ‚ P+∩P- 1 dξ 2 ˜R2 | P ( ⃗ ξ) | 2 dξ 2 ⇔ MTF( ⃗ k|λ) = e (2πi•c1,2•za →o• ⃗ k) • ‚ P+∩P- 1 dξ 2 ˜R2 | P ( ⃗ ξ) | 2 dξ 2 ⇔ MTF( ⃗ k|λ) := e (2πi•c1,2•za →o • ⃗ k) • MTF diff ( ⃗ k|λ) ⇔ MTF( ⃗ k|λ) = MTF diff ( ⃗ k|λ) . (6" }, { "formula_coordinates": [ 4, 296.15, 249.56, 3.87, 8.64 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 4, 312.19, 469.8, 250.85, 64.94 ], "formula_id": "formula_9", "formula_text": "⃗ β := β x ⃗ x1 . . . β x ⃗ xm β y ⃗ x1 . . . β y ⃗ xm T ⇔ ⃗ β = dx 1 f 2 sh + d 2 x1 . . . dx m f 2 sh + d 2 xm dy 1 f 2 sh + d 2 y1 . . . dy m f 2 sh + d 2 ym T .(7)" }, { "formula_coordinates": [ 4, 311.98, 605.94, 251.58, 112.66 ], "formula_id": "formula_10", "formula_text": "⃗ β = 1 ρ a •              ∂Z 4 ∂ x ⃗ x1 . . . ∂Z n ∂ x ⃗ x1 . . . . . . . . . ∂Z 4 ∂ x ⃗ xm . . . ∂Z n ∂ x ⃗ xm ∂Z 4 ∂ ỹ ⃗ x1 . . . ∂Z n ∂ ỹ ⃗ x1 . . . . . . . . . ∂Z 4 ∂ ỹ ⃗ xm . . . ∂Z n ∂ ỹ ⃗ xm              •    c 4 . . . c n    =: 1 ρ a • M • ⃗ c .(8)" }, { "formula_coordinates": [ 5, 113.01, 130.85, 187.02, 16.61 ], "formula_id": "formula_11", "formula_text": "(8) = ρ a • M T M -1 • M T • ⃗ β .(9)" }, { "formula_coordinates": [ 5, 124.02, 282.72, 176.01, 26.08 ], "formula_id": "formula_12", "formula_text": "D xi (⃗ x a ) = ∂ 2 ∂x 2 i W (⃗ x a ) .(10)" }, { "formula_coordinates": [ 5, 48.96, 362.91, 253.09, 172.85 ], "formula_id": "formula_13", "formula_text": "f 2 xa 1 = x 2 a1 + f xa 1 -W (x a1 ) 2 , w.l.o.g.: x a2 ! = 0 ⇒ W (x a1 ) = f xa 1   1 -1 - x a1 f xa 1 2   ⇔ W (x a1 ) = f xa 1   1 -   1 - 1 2 x a1 f xa 1 2 + O    x a1 f xa 1 4        ⇒ W (x a1 ) ≈ x 2 a1 2f xa 1 =: D xa 1 2 • x 2 a1 ⇒ D xa 1 (10) = ∂ 2 ∂x 2 a1 W (x a1 ) = D xa 1 . 
■(11)" }, { "formula_coordinates": [ 5, 317.6, 77.7, 245.44, 38.35 ], "formula_id": "formula_14", "formula_text": "‹ C PSF(⃗ x o ) ẑo d 2 x o ∝     ∂ 2 ∂x 2 1 W (⃗ x a ) ∂ ∂x 1 ∂ ∂x 2 W (⃗ x a ) ∂ ∂x 1 ∂ ∂x 2 W (⃗ x a ) ∂ 2 ∂x 2 2 W (⃗ x a )     . (12)" }, { "formula_coordinates": [ 5, 322.82, 177.43, 236.07, 31.46 ], "formula_id": "formula_15", "formula_text": "‹ C PSF(⃗ x o ) ẑo d 2 x o (12) ∝ 1 2 (tr (D)) 2 -tr D 2 . (13" }, { "formula_coordinates": [ 5, 558.89, 192.23, 4.15, 8.64 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 5, 363.94, 314.25, 199.1, 30.32 ], "formula_id": "formula_17", "formula_text": "tr (D) = d i=1 D xi (⃗ x a ) = △W (⃗ x a ) .(14)" }, { "formula_coordinates": [ 5, 408.94, 379.24, 149.95, 13.26 ], "formula_id": "formula_18", "formula_text": "△Γ(⃗ x a ) ! = 0 . (15" }, { "formula_coordinates": [ 5, 558.89, 383.17, 4.15, 8.64 ], "formula_id": "formula_19", "formula_text": ")" }, { "formula_coordinates": [ 6, 395.31, 167.61, 167.73, 9.81 ], "formula_id": "formula_20", "formula_text": "∆z = ∆z ws + ∆z fc .(16)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b8", "b8" ], "table_ref": [], "text": "With the increasing need for portable meals as people had to leave their homes for hours, the bento box has become a popular and celebrated food culture that emphasizes flavor, visual aesthetics, nutritional balance, and convenience. Despite advances in robotics applications in food production, creating visually appealing box lunch presentations and organizing multiple food items in an orderly manner still remain a challenge for robotics. This raises a question: What kind of composite image can provide comprehensive robotics guidance, standard placement compliance, and visually appealing presentation?\nTo resolve the above issue, (1) We propose a cyclic generative adversarial network [8] for text-to-image generation and image captioning. (2) We introduce Bento800 [9], the first manually annotated synthetic box lunch dataset. (3) We propose an aesthetic box lunch design model [9] with pre-trained placement ordering recovery and a generative adversarial network (GAN) [2] for layout generation & ingredients composition. This paper effectively combines our prior works and additionally expands upon the following aspects (Details are shown in Sec. 2.3 and Sec. 3): (4) We rephrase text descriptions in Bento800 dataset to increase text diversity for text-to-image generation. (5) We conduct experiments to demonstrate the effectiveness of our method in synthesizing and designing novel lunchbox images." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Text-to-Image Synthesis", "publication_ref": [ "b0" ], "table_ref": [], "text": "Based on our previous dual attention architecture [7], we propose a cyclic GAN (as in Fig. 1) and adopt a pre-trained BERT base model [1] for text embedding. For image captioning, we employ the Inception-v3 model [6] as a CNN image encoder for extracting image features and the BERT model to generate contextualized word vectors for each caption. The full objective function as below:\nL G = M m=1 (L Gm + L id ) + L cycle(1)\nL D = M m=1 L Dm (2)\nwhere L Gm and L Dm denote the adversarial loss at mth stage (m=0, 1, 2, 3). The adversarial loss consists of two components: unconditional loss, which evaluates the realism of the generated image, and conditional loss, which measures the compatibility between the generated image and the input text description. L id denotes the proposed identity-preserving loss which quantifies the dissimilarity between the generated image and the ground-truth image. L cycle denotes the cycle consistency loss between predicted captions and target captions. " }, { "figure_ref": [], "heading": "Layout Generation and Image Composition", "publication_ref": [], "table_ref": [], "text": "Text-guided generation models often have imprecise specifications for producing the intended target image, necessitating the use of additional controls for generating images. To achieve a precise depiction of scenes with multiple objects, we propose the following method, as shown in Fig. 2. The pre-processing model in Fig. 2a Conditional GAN loss functions (L layout and L image ) are applied for Layout Generation and Ingredient composition network. We also adopt a differentiable grid generator and bilinear sampling to transform food items to proper positions and compute the L ST N (L1 loss) between input single-item food and transformed single-item food." 
}, { "figure_ref": [ "fig_5", "fig_6" ], "heading": "Extend Information: Paraphrasing", "publication_ref": [ "b8", "b8" ], "table_ref": [], "text": "We propose a novel dataset Bento800 [9], which comprises 800 lunchbox images with three different types of food presentations: (1) Place fried chicken on rice; (2) Place salt-grilled salmon and tamagoyaki on rice; (3) Place croquette and fried shrimp on rice, fried shrimp is on croquette. To produce parallel and diverse data, we use OpenAI chatGPT 1 to rephrase the above sentences with the template \"Rephrase: [Input] with 25 examples\" and \"Rephrase: [Input]\"+\"Please try again\"×24 and also perform data cleaning. Then, we randomly select 8 sentences from the rephrased results for each lunchbox image and merge them with the original sentences to expand our dataset for text-to-image generation. 4 demonstrates more accurate and high-quality box lunch presentations organized by our proposed method [9] and human-designed. Three groups of results show different types of food presentations, which are also present in Bento800. Figure 5 shows various lunchboxes from single food items by randomly sampling noise. Experimental results show that our model produces diverse lunchboxes that conform to popular aesthetic preferences." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper presents a cyclic GAN for text-to-image synthesis and a layout generation & food composition network for innovative lunchbox presentation design. We also introduce Bento800, the first manually annotated dataset with extensive annotations. For future work, we plan to create a metric that aligns with human aesthetic judgment. This exploration may encourage the potential of more varied applications for bento box presentation design in robotics." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Bento800 dataset is available at https://github. com/" } ]
We propose a cyclic generative adversarial network with spatial-wise and channel-wise attention modules for text-to-image synthesis [8]. To accurately depict and design scenes with multiple occluded objects, we design a pre-trained ordering recovery model and a generative adversarial network to predict layouts and composite novel box lunch presentations [9]. In the experiments, we devise the Bento800 dataset to evaluate the performance of the text-to-image synthesis model and the layout generation & image composition model. This paper is a continuation of our previous works [8] and [9]. We also present additional experiments and qualitative performance comparisons to verify the effectiveness of our proposed method.
Design a Delicious Lunchbox in Style
[ { "figure_caption": "Figure 1 .1Figure 1. An overview of our proposed approach in [8].", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure2. An overview of our proposed approach in[9].", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "takes synthetic or real images as input and builds a placement ordering list. During training in Fig. 2a, the layout generation model predicts layout bounding boxes while the ingredients composition model transforms food items to their corresponding transposed targets. During the testing in Fig. 2b, the weights of the layout generation model, Spatial Transformer Network (STN) [3] model and ingredients composition model are frozen. The training objective function is computed as: L = L layout (G, D) + L image (G, D) + L ST N (3)", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "1https://chat.openai.com/", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Qualitative comparison: Ours [8] (left), Composable-Diffusion [4] (middle) and Stable Diffusion [5] (right).", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Our synthesized box lunch presentations [9] (\"Ours\") and human-designed ground-truth (\"GT\").", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Diverse box lunch presentations [9].", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 33Figure 3 illustrates a qualitative comparison between box lunch presentations with state-of-the-art methods for textto-image. However, they can only synthesize poor image quality results [8] or images corresponding to part of the text prompt [4,5].Figure4demonstrates more accurate and high-quality box lunch presentations organized by our proposed method[9] and human-designed. Three groups of results show different types of food presentations, which are also present in Bento800. Figure5shows various lunchboxes from single food items by randomly sampling noise. Experimental results show that our model produces diverse lunchboxes that conform to popular aesthetic preferences.", "figure_data": "", "figure_id": "fig_7", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 3 illustrates a qualitative comparison between box lunch presentations with state-of-the-art methods for textto-image. However, they can only synthesize poor image quality results [8] or images corresponding to part of the text prompt [4,5].Figure4demonstrates more accurate and high-quality box lunch presentations organized by our proposed method[9] and human-designed. Three groups of results show different types of food presentations, which are also present in Bento800. Figure5shows various lunchboxes from single food items by randomly sampling noise. Experimental results show that our model produces diverse lunchboxes that conform to popular aesthetic preferences.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" } ]
Yutong Zhou
[ { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b0", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Generative adversarial nets", "year": "2014" }, { "authors": "Max Jaderberg; Karen Simonyan; Andrew Zisserman", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Spatial transformer networks", "year": "2015" }, { "authors": "Nan Liu; Shuang Li; Yilun Du; Antonio Torralba; Joshua B Tenenbaum", "journal": "Springer", "ref_id": "b3", "title": "Compositional visual generation with composable diffusion models", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b4", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Christian Szegedy; Vincent Vanhoucke; Sergey Ioffe; Jon Shlens; Zbigniew Wojna", "journal": "", "ref_id": "b5", "title": "Rethinking the inception architecture for computer vision", "year": "2016-06" }, { "authors": "Yutong Zhou", "journal": "", "ref_id": "b6", "title": "Generative adversarial network for text-to-face synthesis and manipulation", "year": "2021" }, { "authors": "Yutong Zhou; Nobutaka Shimada", "journal": "IEEE", "ref_id": "b7", "title": "Generative adversarial network for text-to-face synthesis and manipulation with pretrained bert model", "year": "2021" }, { "authors": "Yutong Zhou; Nobutaka Shimada", "journal": "", "ref_id": "b8", "title": "Able: Aesthetic box lunch editing", "year": "2022" } ]
[ { "formula_coordinates": [ 1, 359.29, 530.77, 185.82, 30.2 ], "formula_id": "formula_0", "formula_text": "L G = M m=1 (L Gm + L id ) + L cycle(1)" }, { "formula_coordinates": [ 1, 393.73, 561.76, 151.38, 30.2 ], "formula_id": "formula_1", "formula_text": "L D = M m=1 L Dm (2)" } ]
2024-03-12
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b2", "b11", "b0" ], "table_ref": [], "text": "While the contemporary world is developing rapidly with the advent of high-end technologies, some parts of society are out of their scope. One such part is the hard-of-hearing community, which still struggles in many situations and can be misunderstood in some extreme cases. For example, some hospitals still do not have a sign language interpreter on staff. Therefore the interaction between hardof-hearing people and healthcare providers is complex, which prevents timely assistance. A similar problem exists in structures such as banks, government institutions, airports, public places, and others, significantly complicating their everyday life. Moreover, many consequences of deaf as social isolation, an education gap with the hearing population, and difficulties in finding employment, also negatively affect the life of this community. Sign Language Recognition (SLR) systems have the potential to simplify these processes by, for example, developing a sign language learning app [3] or embedding a feature in video conferencing apps. Also, such technology can accomplish more transparent communication between people with different hearing and speaking abilities and be integrated into human-computer interaction systems [12], allowing hard-of-hearing individuals access to information and services easier and helping to overcome barriers in education [1] and employment. SLR is a field of study that should accurately convert sign language gestures from video footage into textual representation. This task is indispensable but daunting due to the tangle and rapid nature of sign language, which entails intricate hand gestures, body postures, and facial expressions. The complexity of data collection is the major problem of SLR due to a gap between hard-of-hearing and hearing communities. Adding to this the need for a different sign language for each country and significant language differences within even one country, Russian SLR system developers face the challenge of data absence. Furthermore, existing RSL datasets have few samples or must be sufficiently diverse across subjects, which is necessary to train a robust model. This paper presents two main contributions to simplify the solution of sign language recognition:\n-We provide a pipeline for creating a video dataset consisting of three main steps: video collection, validation, and time interval annotation. Crowdsourcing platforms were used throughout the pipeline to increase the number of signers and improve the dataset's quality. We apply some exam tasks to signers for the most correctly executed gestures and add a quality check to the validation step to extract inappropriate videos. In the third step, all videos were marked by the start and end time of the gesture. -We release the Russian Sign Language dataset, Slovo, which can become the basis for this area. It consists of 20,000 FullHD videos from 194 signers and is divided into 1,000 classes of glosses from the RSL without words shown by dactyl (finger-spelling) or compound gestures. Figure 1 shows the examples of gestures in our dataset. All signers recorded videos mostly in their homes or office in front of a laptop or smartphone camera. Each video, whose length varies from 1 second to 4, was cropped using two timestamps (start and end), contributing to the production of the \"no event\" class. 
Added class corresponds to the video's parts, where the signer is preparing to perform the gesture or has already performed it." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "This paper solely focuses on comparing our findings with the Russian Sign Language. It would be inappropriate to compare RSL datasets with SL datasets in foreign languages as they all differ significantly in structure. However, we provide an overview of datasets in other languages to show the specifics and features of this domain. Table 1 encompasses the prevalent datasets having significant data volumes. Since the collection step is the main problem of SL dataset creation and one of our contributions, we describe the main ways to collect it. " }, { "figure_ref": [], "heading": "Sign Language Datasets in Russian Domain.", "publication_ref": [ "b10", "b8", "b6" ], "table_ref": [], "text": "There are four more widespread RSL datasets. The first, TheRuSLan [11], is composed of a total of 164 gestures, primarily related to the supermarket theme. A group of 13 signers was involved in the video collection, where each signer produced a unique recording with an average duration is 36 minutes. All signers come from different parts of the country, which generates variability within a class due to various dialects. The authors also proposed subtitles for each sample, which indicate the specific signs class. The second, FluentSigners-50 [19], were created with the help of 6 natives, who chose frequently used signs, produced the templates for them, and wrote the instruction for signers. All signers came from different Kazakhstan regions, making the dataset a high degree of linguistics. Heterogeneity in the signer's age, skin color, clothes, variable background, and lighting make the dataset immensely diverse. The videos are in a total of 43 hours of labeled trimmed materials. The third, K-RSL [9], contains 4 subsets of phrases from a linguistic point of view: question-statement pairs, signs of emotion, emotional question-statement pairs, and phonologically similar signs. It was divided into 600 glosses with 28,250 examples in total. Ten signers recorded K-RSL, the first 5 are professional SL interpreters, and the other 5 are deaf native signers. The last, RSL [7], consists of two sets of gestures obtained from an online dictionary. Each gesture was repeated by the signer at least 5 times.\nAll signers are dressed in black suits against a clean background, which makes the dataset visually monotonous. All videos are marked with additional classes named \"start gesture\" and \"end gesture\"; suggestions include an additional class named \"transition\".\nThe TheRusLan and the FluentSigners-50 datasets are unsuitable for us due to the disuse of Kazakh-Russian Sign Language in Russia since the gestures are outdated. Besides, the TheRusLan dataset was created for only the supermarket domain and cannot be used for everyday life. Also, the two described datasets are not diverse in classes of signs and subjects. The RSL dataset can be used only in limited situations by us because we cannot influence the dataset's update by adding new sign classes. Furthermore, the RSL dataset was recorded by only 5 signers, which do not differ in clothing and background, complicating the training of a stable model. These reasons prompted us to create our dataset with 1,000 frequently used RSL signs received from 194 signers. 
We plan to extend it with new classes and increase the diversity of subjects." }, { "figure_ref": [], "heading": "Others Sign Language Datasets.", "publication_ref": [ "b3", "b7", "b19", "b9", "b13", "b20", "b12", "b20" ], "table_ref": [], "text": "Since RSL differs from other sign languages, we describe only notable SL datasets, comparing them according to the main specific features of the domain. Many of reviewed datasets are not diverse in signers: RWTH-BOSTON-400 [4] were recorded by only 5 speakers, LSE-Sign [8] -by two sign language natives, LSA64 [20] -by 10 non-expert signers, and GSLC -by 6 signers. Besides, LSE-Sign was recorded within one week to minimize the diversity of the signer's appearance. Others tried to make more heterogeneous datasets. MS-ASL [10] and WLASL2000 [14] are the most extensive publicly available ASL datasets, and their videos were produced by 222 and 119 signers, respectively. The AUTSL dataset [21] was recorded with 20 backgrounds, including indoor and outdoor scenes and with different angles. The reviewed datasets differ in the goal of creating and choosing the domain of signs. For example, The RWTH-PHOENIXWeather corpus [13] contains SL recordings from the German TV station PHOENIX. The more typical variant to choose a sign basket is to collect it from the frequently daily-used signs like in AUTSL [21]." }, { "figure_ref": [], "heading": "Sign Language Dataset Collection.", "publication_ref": [ "b10", "b20", "b19", "b13", "b9" ], "table_ref": [], "text": "The main problem of dataset creation for SLR is video collection because it is challenging to find sign language experts. The need for diversity in signers makes this task even more problematic. The choice of sign basket is the other significant problem because natural and sign languages are highly different. We reviewed ways to collect sign language videos and divided this overview into three groups by collection methods for convenience.\nManually recorded videos. One of the main ways to collect videos for sign language recognition is to produce them manually with a camera. Kagirov et al. [11] used the MS Kinect 2.0 device to record video in 3D with a depth map to create the TheRusLan dataset. The Turkish Sign Language dataset, AUTSL [21], was collected for real-life scenarios by the same camera. To make the model robust to scenes, 20 different backgrounds, including dynamic, various lighting conditions, from artificial light to sunlight, and different field-of-views were used to create AUTSL. The authors choose the frequently used signs; some are compound signs formed by simultaneously making two consecutive signs. The videos were performed by 43 different signers, where 60% are students of the TSL course, 18% are persons who know TSL (instructors and translators), 15% are trained signers by the AUTSL dataset, and others related. The Argentinian Sign Language dataset, LSA64 [20], was recorded by a Sony HDR-CX240 camera in two different scene conditions: outdoor and indoor. The authors simplified the hand segmentation task with fluorescent-colored gloves. Signers wore black clothes against a white wall's backdrop for more accurate hand extraction. Downloaded videos. Another way to collect samples is to download from news or educational video sources. It has the advantage of correctly matching video and signs since sign language experts checked the content. For the WLASL2000 dataset, the authors [14] chose multiple education SL websites as suitable video sources. 
They filtered samples by signs and leaved videos containing words only. Annotators of sign dialects were not native sign languages: they received training to understand SL specifics and, with a designed interface, compare signs from two videos displayed simultaneously. Samples for the MS-ASL dataset [10] were downloaded from video-sharing platforms to communicate and study ASL. Since videos are recorded and uploaded by ASL students and teachers, they differ by background, lighting, positioning, and dialect. Such platforms accompany the video with subtitles, which authors processed by OCR. Face detection and recognition are integral parts of sample preprocessing in cases where videos were taken from websites, and the authors included them in their dataset creation pipeline." }, { "figure_ref": [], "heading": "Kind of crowdsourcing. Mukushev, Medet, et al.", "publication_ref": [], "table_ref": [], "text": "[19] choose a more complicated but effective way to collect videos for SLR. Their dataset, FluentSigners-50, was created with six professional SL interpreters. They developed a sign basket including commonly useful phrases and sentences in the hard-of-hearing community. Other signers were invited by interpreters and use SL daily, and the subsequent distribution of signers by SL use was obtained: 32 deaf, 6 hard of hearing, 3 hearing SODA (Sibling of a Deaf Adult), and 9 hearing CODA (Child of Deaf Adults). They used instructions and templates from interpreters to repeat the KRSL sentences." }, { "figure_ref": [ "fig_1" ], "heading": "Dataset Creation", "publication_ref": [], "table_ref": [], "text": "The following part provides details about our data collection pipeline. It consists of 3 main stages: (1) video collection, (2) video validation, and (3) and gesture time interval annotation. We used two crowdsourcing platforms: Yandex Toloka3 for data mining and ABC Elementary4 for the validation and the annotation so that different users are involved in recording and verifying videos. In addition, before each stage, crowdworkers must pass a mandatory RSL exam5 with a score of at least 80% before being granted access to assignments. These two nuances allow us to get a better and unbiased assessment of the correctness of the videos. Figure 2 shows the main part of the dataset creation process. 1. Video Collection. The essential part of dataset creation started with designing a sign basket. We paid attention to choosing words frequently used in everyday life, and in the end, it turned out to be 1,000 glosses. We chose words related to commonly used topics such as food, animals, emotions, colors. After, we asked the crowdworkers from Yandex Toloka to record a short video of themselves singing a specific word in RSL based on the provided template. Video templates were taken from SpreadTheSign website6 , a project of the European Sign Language Center association. Participants provided informed consent for data processing, ensuring compliance with legal requirements. No discrimination or bias was present in the dataset, promoting fairness and inclusivity.\n2. Video Validation. Correctly signing the gesture can be challenging for people not fluent in sign language, so we added the validation stage on the ABC Elementary platform in our dataset-creating pipeline. Workers were asked to check if the gesture was performed correctly. Each video was checked at least by three different workers. 
If they disagree, another marker participated in the validation of such a video, so up to 5 markers on the video could be repeated. If most workers mark the video as invalid, it is rejected; otherwise, it is accepted and passed to the next stage. After the validation, we left videos with a short edge of at least 720 pixels and converted them to a 30 fps rate." }, { "figure_ref": [], "heading": "Time Interval Annotation.", "publication_ref": [], "table_ref": [], "text": "Collected videos may contain uninformative frames at the beginning and the end of the video, where workers turn the camera on and off and prepare to show the gesture. Therefore, annotating the gesture's start and end time on the video is necessary. The crowdworkers from ABC Elementary were asked to indicate the time interval with a gesture. Since our dataset contains glosses and phrases, some videos may have several gestures. In this case, workers should tag them together as a full gloss. Each video was annotated by three different crowdworkers.\nFigure 3 shows the developed aggregation algorithm to get the average over the responses time interval. After cutting off the gestures, we had the cuts at the beginning and the end of the video where no gesture is shown, and we decided to use them as zero-class objects in training to predict the absence of action. Fig. 3. Time intervals aggregation pipeline. First, we split the beginning and end timestamps into different groups and then independently calculated distances between all points in each group. Then, if the maximum distance is less than 30 frames, we find the average value of each group and assume them to be the final pair (begin, end). Otherwise, video with such annotations was not taken into the dataset.\nDataset Post-processing. While reviewing the collected videos, we noticed that some users show gestures significantly slower than others. This circumstance leads to inhomogeneous video length within the same class: the duration of the same gesture varied by more than five times, complicating the classification of such data. To make our dataset more homogenous, we decided to calculate the distribution of video lengths for each class and speed up those videos that are slower than the average value by more than 30 frames. As a result, 347 videos from 270 classes were sped up by an average of 1.7 times. In addition, we compared two variants -with and without this processing -and ensured that homogenous speed increases the accuracy of RSL recognition." }, { "figure_ref": [ "fig_2", "fig_2", "fig_2" ], "heading": "Dataset Description", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Dataset Content. Our dataset is approximately 16 GB -it contains 20,000 videos of 1,000 classes representing frequently used glosses and short phrases in Russian Sign Language, including alphabet and numerals. The dataset does not include fingerspelling words, i.e., words spelled letter by letter using dactylology. In addition, we expanded the Slovo by 400 extra samples of a special \"no event\" class where the subject is not signing any gestures. To the best of our knowledge, 194 crowdworkers participated in the video recording for our dataset, making it the most subject-diverse RSL dataset and the second among all sign language datasets (see Table 1 for more details). The dataset was collected mainly indoors and varied in scenes and lighting conditions.\nVideo Quality. The videos were recorded primarily in HD and FullHD formats (see Figure 4d). 
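The aggregation rule of Fig. 3 can be written down compactly. The sketch below is our reading of that rule (begin and end timestamps grouped separately, accepted only if every pairwise distance stays under 30 frames, then averaged); it is not the platform's actual implementation, and the handling of the optional fourth and fifth annotators is omitted.

```python
from itertools import combinations

def aggregate_interval(annotations, max_spread=30):
    """annotations: list of (begin_frame, end_frame) pairs from different workers.
    Returns the averaged (begin, end) pair, or None if the video is rejected."""
    begins = [b for b, _ in annotations]
    ends = [e for _, e in annotations]
    for group in (begins, ends):
        if any(abs(a - b) >= max_spread for a, b in combinations(group, 2)):
            return None                      # annotators disagree too much
    return (round(sum(begins) / len(begins)),
            round(sum(ends) / len(ends)))

print(aggregate_interval([(12, 63), (15, 70), (10, 66)]))   # -> (12, 66)
print(aggregate_interval([(12, 63), (80, 130), (10, 66)]))  # -> None (discarded)
```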
About 86% of the videos are oriented vertically, 13% are oriented horizontally, and 1% are in square format. The number of frames distribution is also shown in Figure 4a. The average video length is 1.67 seconds, and the overall duration of the dataset is about 19.81 hours.\nData Splitting. The data was split into training (75%) and test (25%) sets, containing 15 and 5 video samples for each class, respectively. The numbers of subjects in training and test sets are equal 112 and 174, respectively. Note that groups of subjects in these two sets intersect; however, we tried to minimize it by filling out the test set with inactive signers (see Figure 4b-c -the test set consists mainly of signers who have uploaded a small number of videos). This approach minimizes the intersection of signers in the training and test sets, reducing the risk of model overfitting. " }, { "figure_ref": [ "fig_3" ], "heading": "Experiments", "publication_ref": [ "b5", "b14", "b15", "b1", "b15", "b16" ], "table_ref": [], "text": "Models. Addressing the challenge of recognizing sign language necessitates the utilization of formidable and lightweight models endowed with the capacity to analyze video data. Multiscale Vision Transformer (MViT) [6] model was specifically designed for video recognition tasks and provides a significant performance gain over concurrent video transformers that rely on large-scale external pretraining and are several times more costly in computation and parameters. Creating a multiscale pyramid of features, MViT models effectively connect the principles of transformers with multiscale feature hierarchies. An Improved MViT architecture (MViTv2) [15] proves to be a robust general backbone for computer vision tasks in the video domain. The MViTv2 demonstrates state-of-the-art performance in various video recognition benchmarks and can accurately analyze video input. Therefore its small version was chosen as the baseline, we utilized Swin-large [16] and ResNet3D-50 [2], all pre-trained on the Kinetics dataset.\nData Pre-processing. The samples were resized on the maximum side to 300 pixels. MViTv2-small and Swin-large trained with a horizontal flip and sharpness augmentations, whereas ResNet3D-50 -with the same horizontal flip, salt random noise, and color jitter. Horizontal flip augmentation is used to bring the data distribution to the real because RSL signs do not change the meaning of mirror reflection. Finally, the videos were padded to (300, 300) and randomly cropped to 224 pixels.\nImplementation Details. Several sampling strategies were tested with a different number of frames from [16,32,48] and a frame interval from 1 to 4. We also checked models trained on 64 frames, which generated poor results. Frame intervals are limited to 4 because skipping more frames in the SLR task can miss important information about the sign. We trained all 36 models over 120 epochs with a learning rate 0.001, AdamW [18] optimizer employing betas (0.9, 0.999), and weight decay 0.05. Two schedulers -LinearLR and CosineAnnealingLR [17] -were used to optimize the Swin-large and ResNet3D-50 training processes. Only LinearLR was used for MViTv2-small. The information about their parameters is in the repository.\nResults. Since each gesture corresponds to 20 video samples, we validated the models on the test set. Figure 5 shows the results of each chosen model separately with a different number of frames and frame interval. 
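Before turning to the observations, the frame-sampling strategies compared above (clip lengths of 16, 32 or 48 frames with frame intervals of 1 to 4) can be sketched as follows; this is a plausible index-sampling routine, not the authors' exact implementation, and the handling of clips shorter than the sampling span is an assumption.

```python
import numpy as np

def sample_frame_indices(num_frames, clip_len=32, frame_interval=2,
                         train=True, seed=0):
    """Return clip_len frame indices spaced by frame_interval for one video."""
    rng = np.random.default_rng(seed)
    span = clip_len * frame_interval
    if num_frames >= span:
        # random temporal crop at training time, centred crop at test time
        start = int(rng.integers(0, num_frames - span + 1)) if train \
                else (num_frames - span) // 2
        idx = start + np.arange(clip_len) * frame_interval
    else:
        # video shorter than the sampling span: spread indices uniformly
        idx = np.linspace(0, num_frames - 1, clip_len).round().astype(int)
    return np.clip(idx, 0, num_frames - 1)

print(sample_frame_indices(50, clip_len=32, frame_interval=2, train=False))
```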
We can observe that MViTv2-small with 32 frames and a frame interval of 2 substantially outperforms the other models, which is in line with the architecture being designed specifically for video recognition. We attribute the notably lower metrics of ResNet3D-50 to vision transformers generally outperforming convolutional architectures on video tasks." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we proposed Slovo, a new Russian Sign Language dataset, together with a pipeline for creating diverse video data in a specialized domain. Slovo is divided into 1,000 classes, each corresponding to 20 videos from 194 signers. It can favorably influence the development of sign language recognition in the Russian domain. In addition, several models were trained and evaluated on Slovo to demonstrate its suitability for training. In the near future, we plan to expand the dataset: increase the number of classes and the number of examples per class, and collect not only words but also phrases, n-grams, and sentences. Furthermore, since sign languages are more expressive than spoken ones (words are indicated not only by hand gestures but also by facial expressions, articulation, and body posture), multimodal models can potentially improve SLR results. The current dataset, pre-trained models, and demo are publicly available in the repository." } ]
One of the main challenges of the sign language recognition task is the difficulty of collecting a suitable dataset due to the gap between hard-of-hearing and hearing societies. In addition, the sign language of each country differs significantly, which requires the creation of new data for each of them. This paper presents the Russian Sign Language (RSL) video dataset Slovo, produced using crowdsourcing platforms. The dataset contains 20,000 FullHD recordings, divided into 1,000 classes of isolated RSL gestures recorded by 194 signers. We also provide the entire dataset creation pipeline, from data collection to video annotation, together with a demo application. Several neural networks are trained and evaluated on Slovo to demonstrate its suitability for training recognition models. The proposed data and pre-trained models are publicly available.
[ { "figure_caption": "Fig. 1 .1Fig. 1. RSL signs \"at eight fifteen\" (left top), \"appetite\" (left bottom), \"yellow\" (right top), and \"this\" (right bottom).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Crowdsourcing pipeline: collection, validation, and annotation. Each stage used its own rules, but the exam was the same.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Video length, resolution and user's splitting analysis. (a) Videos' number of frames distribution divided into sets, (b) distribution of recorded video by users in train, and (c) test, (d) video resolution ratio.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Mean accuracy is achieved by each model on the Slovo with different sampling strategies. Note that the graphs have various scales depending on the order of the metrics.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "The main characteristics of the reviewed SL datasets.", "figure_data": "Datasets are divided", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Kapitanov Alexander; Kvanchiani Karina; Nagaev Alexander; Petrova Elizaveta
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "The Importance of Sign Language for Deaf Education and Sign Technology", "year": "2012" }, { "authors": "J Carreira; A Zisserman", "journal": "", "ref_id": "b1", "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "year": "2017" }, { "authors": "A Chow; G Cameron; M Sherwood; P Culliton; S D Sam Sepah; T Starner", "journal": "", "ref_id": "b2", "title": "Google -isolated sign language recognition", "year": "2023" }, { "authors": "P Dreuw; C Neidle; V Athitsos; S Sclaroff; H Ney", "journal": "", "ref_id": "b3", "title": "Benchmark databases for video-based automatic sign language recognition", "year": "2008" }, { "authors": "E Efthimiou; S E Fotinea", "journal": "Springer", "ref_id": "b4", "title": "Gslc: creation and annotation of a greek sign language corpus for hci", "year": "2007" }, { "authors": "H Fan; B Xiong; K Mangalam; Y Li; Z Yan; J Malik; C Feichtenhofer", "journal": "", "ref_id": "b5", "title": "Multiscale vision transformers", "year": "2021" }, { "authors": "M Grif; R Elakkiya; A Prikhodko; M Bakaev; E Rajalakshmi", "journal": "Analysis and Data Processing Systems", "ref_id": "b6", "title": "Raspoznavanie recognition of russian and indian sign languages based on machine learning", "year": "2021" }, { "authors": "E Gutierrez-Sigut; B Costello; C Baus; M Carreiras", "journal": "Behavior Research Methods", "ref_id": "b7", "title": "Lse-sign: A lexical database for spanish sign language", "year": "2016" }, { "authors": "A Imashev; M Mukushev; V Kimmelman; A Sandygulova", "journal": "", "ref_id": "b8", "title": "K-rsl: a corpus for linguistic understanding, visual evaluation, and recognition of sign languages", "year": "2020" }, { "authors": "H R V Joze; O Koller", "journal": "", "ref_id": "b9", "title": "Ms-asl: A large-scale data set and benchmark for understanding american sign language", "year": "2018" }, { "authors": "I Kagirov; D Ivanko; D Ryumin; A Axyonov; A Karpov", "journal": "", "ref_id": "b10", "title": "Theruslan: Database of russian sign language", "year": "2020" }, { "authors": "C Kenshimov; Z Buribayev; Y Amirgaliyev; A Ataniyazova; A Aitimov", "journal": "Eastern-European Journal of Enterprise Technologies", "ref_id": "b11", "title": "Sign language dactyl recognition based on machine learning algorithms", "year": "2021" }, { "authors": "O Koller; J Forster; H Ney", "journal": "Computer Vision and Image Understanding", "ref_id": "b12", "title": "Continuous sign language recognition: Towards large vocabulary statistical recognition systems handling multiple signers", "year": "2015" }, { "authors": "D Li; C Rodriguez; X Yu; H Li", "journal": "", "ref_id": "b13", "title": "Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison", "year": "2020" }, { "authors": "Y Li; C Y Wu; H Fan; K Mangalam; B Xiong; J Malik; C Feichtenhofer", "journal": "", "ref_id": "b14", "title": "Mvitv2: Improved multiscale vision transformers for classification and detection", "year": "2022" }, { "authors": "Z Liu; J Ning; Y Cao; Y Wei; Z Zhang; S Lin; H Hu", "journal": "", "ref_id": "b15", "title": "Video swin transformer", "year": "2022" }, { "authors": "I Loshchilov; F Hutter", "journal": "", "ref_id": "b16", "title": "Sgdr: Stochastic gradient descent with warm restarts", "year": "2017" }, { "authors": "I Loshchilov; F Hutter", "journal": "", "ref_id": "b17", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "M Mukushev; A 
Ubingazhibov; A Kydyrbekova; A Imashev; V Kimmelman; A Sandygulova", "journal": "Plos one", "ref_id": "b18", "title": "Fluentsigners-50: A signer independent benchmark dataset for sign language processing", "year": "2022" }, { "authors": "F Ronchetti; F Quiroga; C A Estrebou; L C Lanzarini; A Rosete", "journal": "", "ref_id": "b19", "title": "Lsa64: an argentinian sign language dataset", "year": "2016" }, { "authors": "O M Sincan; H Y Keles", "journal": "IEEE Access", "ref_id": "b20", "title": "Autsl: A large scale multi-modal turkish sign language dataset and baseline methods", "year": "2020" }, { "authors": "C Vogler; C Neidle", "journal": "", "ref_id": "b21", "title": "A new web interface to facilitate access to corpora: development of the asllrp data access interface", "year": "2012" } ]
[]
10.1162/coli_a_00418
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b20", "b1", "b13", "b29", "b27", "b14", "b26", "b28", "b3", "b31", "b12", "b7", "b28", "b25" ], "table_ref": [], "text": "Automatically evaluating social conversational agents (a.k.a. social dialogue systems or chatbots) is a challenging task that, if solved, would save time and money by making it easier to tune or evaluate such agents. There are three prevailing methods for evaluation: reference-based metrics f (û t | {r t }), reference-free metrics f (û t | u t-1 . . . , u 0 ), and perplexity f (û t ), where ût is the model generated response, {r t } are a set of references, and u t-1 is the previous utterance in the conversation. Evaluation metrics such as BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE (Lin, 2004), BERTScore (Zhang et al., 2019), and BARTScore (Yuan et al., 2021) are reported in the evaluation of open-domain chatbots models despite evidence of weak statistically significant correlation with human judgments (Liu et al., 2016;Yeh et al., 2021;Zhang et al., 2021). There is some evidence attributing the low correlation between reference-based metrics and human ‡ Work begun while at Johns Hopkins University.\n1 Pronounced like mesmerize.\njudgments to the \"one-to-many\" problem in conversational dialogue (Galley et al., 2015;Zhao et al., 2017;Gangal et al., 2021a), whereby there can be multiple appropriate responses to a given input, and only a single 'ground-truth' reference response is used. Prior work demonstrated a higher correlation between automatic metrics and human judgments when utilizing multiple references on the DailyDialog (Li et al., 2017) dataset (Gupta et al., 2019). Building upon this work, we extend the investigation to other datasets and employ a distinct methodology for gathering human annotations. A limitation of prior datasets is that the number of systems evaluated is extremely sparse (Zhang et al., 2021).\nIn order to address these limitations, we release MMSMR, a Massively Multi-System MultiReference dataset. MMSMR consists of a new conversational model evaluation dataset from a subset of the teaching English as a second language website (TESL) which includes 1000 two and three-turn conversational prompts. We also generate multiple 'ground truth' references for each prompt. Additionally, we collect multiple 'ground-truth' responses for the one-turn handcrafted dataset (NCM) made by Vinyals and Le (2015). MMSMR is designed to test the robustness of dialog evaluation metrics in a statistically robust manner.\nOur core contributions are • We create and release a new conversational evaluation dataset based on hand-crafted conversations from material for teaching English as a second language2 (ESL).3 • We collect and release multiple diverse 'ground-truth' human-generated reference responses for the ESL and NCM datasets.\n• We train and release outputs of over one thousand models on these data sets to understand how metrics perform on a wide variety of quality differences. • We release the parameters to enable research on metrics without having to train new models. • We demonstrate the utility of the above contributions through analysis." }, { "figure_ref": [], "heading": "Background & Related Work", "publication_ref": [ "b7", "b3", "b26", "b2", "b30" ], "table_ref": [], "text": "Our work uses MMSMR to analyze automatic dialog metrics. We are far from the first to evaluate metrics using multiple annotations. 
Both multiple human-generate references, as well as multiple automatic references, have been explored (Gupta et al., 2019;Galley et al., 2015;Gangal et al., 2021a). In particular, Gangal et al. (2021a) demonstrate that automatically expanded reference sets improve correlations between human ratings and automated metrics.\nOther related prior work explores the relationships between metrics. In Yeh et al. (2021), 23 automatic evaluation metrics are evaluated on 10 datasets which are assessed to compare their shortcomings and strengths. In contrast to our work, these datasets rarely contained multiple references and also had very few dialog systems. Similarly, Deriu et al. (2021) surveys new evaluation methods that reduce human interaction.\nWhile to the best of our knowledge large multisystem datasets do not exist for dialog evaluation, Zhang and Duh (2020) did a grid search on Machine Translation and released it for research in hyper parameter optimization." }, { "figure_ref": [], "heading": "Metrics", "publication_ref": [ "b20", "b1", "b13", "b14", "b15", "b7", "b3", "b21", "b29", "b23", "b27", "b11", "b6", "b22" ], "table_ref": [], "text": "Automatic dialog evaluation metrics are mainly divided into two types: model based and rule based. The model based metrics measure the quality of responses that are generally trained. Rule-based metrics analyze the system response using heuristic rules based on human references and conversation context.\nSeveral string overlap metrics are borrowed from other NLP tasks. In these metrics, the model output is compared to a human reference response. Bleu (Papineni et al., 2002), and Meteor (Banerjee and Lavie, 2005) come from Machine translation, and Rouge (Lin, 2004) comes from summarization. Bleu is based on string matches using n-gram pre-cision of the responses Meteor includes synonyms and stems for computing the score. Rouge on the other hand uses n-gram recall. The effectiveness of these word overlap metrics has been a source of great debate (Liu et al., 2016;Lowe et al., 2017;Gupta et al., 2019;Galley et al., 2015).\nThe first model based metrics compute similarity between context and reference word embeddings (Mikolov et al., 2013b;Pennington et al., 2014;Mikolov et al., 2013a). BERTScore (Zhang et al., 2019) uses contextual embeddings for computing token similarity.\nPrism (Thompson and Post, 2020) and BARTScore (Yuan et al., 2021) use sequence-level model scores. sequence-to-sequence paraphraser to score the output conditioned on human references, while BARTScore uses BART (Lewis et al., 2020), a denoising model. DialoRPT (Gao et al., 2020) is based on a set of GPT-2 models which are fine-tuned on a Reddit human feedback dataset.\nUSL-H (Phy et al., 2020) is a metric that is flexible to a task where a method is proposed to compound metrics called USL-H, which is Understandability, Sensibleness, and Likability in Hierarchy which is a single metric. USL-H combines three different models valid utterance prediction (VUP), next sentence prediction (NSP), and masked language model (MLM) where each model is trained on different tasks." }, { "figure_ref": [], "heading": "Data Collection", "publication_ref": [], "table_ref": [], "text": "Here we describe our methods for collecting 3500 new multiturn conversations, collecting multiple references for each multiturn dataset, and collecting ratings for model generated responses." 
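These references are consumed by the string-overlap metrics from the previous section in a multi-reference fashion, i.e., a system response is scored against the full reference set rather than a single gold response. A simplified illustration assuming the sacrebleu package is shown below; it is not the exact evaluation script used in our experiments.

```python
from sacrebleu.metrics import BLEU

bleu = BLEU(effective_order=True)  # sentence-level BLEU with effective n-gram order

def multi_reference_bleu(hypothesis, references):
    """Score one generated response against all collected references."""
    return bleu.sentence_score(hypothesis, references).score

references = [
    "I'm great, thanks for asking.",
    "Doing well, how about you?",
    "Pretty good, thanks.",
]
print(multi_reference_bleu("I am doing well, thanks for asking!", references))
```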
}, { "figure_ref": [], "heading": "Reference collection", "publication_ref": [], "table_ref": [], "text": "We created a HIT (human intelligence task) for Amazon's Mechanical Turk (AMT) to collect multiple references. Each worker was shown 10 one-, two-, or three-turn conversations and asked to provide 2 to 5 responses to the last turn in each conversation. 4 Further details of the data collection are available in Appendix D." }, { "figure_ref": [], "heading": "Reference quality", "publication_ref": [], "table_ref": [], "text": "Beyond our quality control filtering, we analzyed the following: the average Jaccard distance of responses both for workers against themselves and against all of the provided responses for a prompt, the average number of responses provided by workers, and the fatigue factor for each of the prompt datasets. Across each of our datasets the average Jaccard distance between each reference is high (at or near .9 across the board). Therefore, we conclude that there is high diversity among the collected references. This fact is key to the success of evaluation using multiple references (Gangal et al., 2021b). If the references are not diverse, using multiple references is barely better than using one reference. Also, we observed that as a worker completed a HIT, they provided fewer responses per prompt. This is a sign of worker fatigue. Consequently, having longer HITs can decrease the quantity and potentially the quality of collected data (Figure 7)." }, { "figure_ref": [], "heading": "Scraping new conversations", "publication_ref": [], "table_ref": [], "text": "rong-chang.com is a website that has over 3500 multiturn conversations (10+ turns) on a variety of topics that are used for instructing ESL speakers. With their explicit permission, we scrape these conversations from their website and we ask AMT workers to create references for 1000 randomly sampled snippets of 2 or 3 turns. Ultimately, we obtain a wide variety of conversation topics and conversations. With dataset we are consistently able to collect more responses per prompt, which we attribute to the naturalness of the conversations." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In order to validate the utility of our dataset, we ask a few basic questions about the popular metrics that we selected. In particular, we aim to validate or challenge relationships between well-established metrics.\nOur approach is to evaluate outputs using multiple references rather than a single reference. For multiple models' responses to the same prompts, we use multiple evaluation metrics to score each of them.\nWe perform three experiments on our data. (1) The Pearson and Spearman correlation between metric evaluations and human evaluations, (2) the Kendall rank correlation coefficient between metric evaluations and human evaluations, and (3) the relationship between output similarity and metric evaluations." }, { "figure_ref": [ "fig_0" ], "heading": "Models", "publication_ref": [ "b9", "b24", "b8", "b10", "b16", "b0" ], "table_ref": [], "text": "In order to understand how different metrics are able to distinguish between quality of different models (as compared to human judgments), and how different parameters affect performance, we train a large number of models. Following Khayrallah and Sedoc (2020), we train Transformer (Vaswani et al., 2017) chatbots in FAIRSEQ using base parameters from the FLORES benchmark for low-resource MT (Guzmán et al., 2019). 
In order to explore the full space of models with a variety of performance levels, we perform a hyperparameter sweep of regularization parameters, including SentencePiece (Kudo and Richardson, 2018) vocabulary size, dropout, attention & ReLU dropout, and label smoothing. We also use 8 different decoding strategies.

6 Analysis

Mathur et al. (2020) showed that correlating a machine translation metric with human judgments is far easier when considering all systems (including very weak ones) than when only considering top systems. Text simplification metrics show similar behavior, where the correlation between metrics and human judgments decreases when filtered by system quality (Alva-Manchego et al., 2021). This is somewhat intuitive: truly terrible systems are easier to differentiate from good ones. Therefore, we consider how well the metrics correlate overall, and when only considering the top systems.
We define top scoring as any system that is in the 99th percentile of systems on any metric. Figure 2 shows that top scoring systems constitute a large percentage of systems overall, which further highlights the disagreement between metrics. 48% of the systems are in the 90th percentile or above on some metric for NCM. If the metrics were in perfect agreement, only 10% of systems would be in the 90th percentile. With so little agreement, it can be particularly hard to know which metrics to trust, highlighting the need for such a dataset for further research on metrics. Figure 1 shows Spearman correlations between the various metrics (also see additional tables in the appendix). The bottom left half of each table shows the correlation between the metrics on all systems. The top right half shows the correlation between the top scoring systems.
Figure 2: The percent of data retained when thresholding on a percentile for any of the metrics. The dotted grey line shows the percentage that would be retained if all metrics were in perfect agreement.
Unsurprisingly, correlations are much stronger overall when comparing all systems rather than only comparing the top systems.
DialogRPT-updown does not correlate well with other metrics, even when comparing all systems. In fact, it has a negative correlation on NCM with the majority of other metrics (even the other DialogRPT metrics). USL-H and nup are the next worst correlated with other metrics, but they have a positive correlation and are far better than DialogRPT-updown.
When considering just the top systems, the same 3 metrics stand out as well. They all have negative correlations on NCM. USL-H also has a negative correlation on ESL." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We release MMSMR, a Massively Multi-System MultiReference dataset to enable future work on metrics and evaluation for dialog. The dataset contains 1000 two- and three-turn prompts with multiple human-generated references. We train 1750 systems and evaluate them on our novel test set and the DailyDialog dataset. Our analysis of the metrics shows that the correlations are lower when considering only the top systems than when considering all systems. Our findings show the utility of this novel test set, as well as of the released model hyperparameters, inference outputs, and metric scores for each system on a variety of datasets."
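For reference, the "top scoring" filter and the metric-metric correlations used in the analysis can be reproduced along the following lines; the sketch uses random numbers in place of the released per-system scores, and the column names are illustrative.

```python
import numpy as np
import pandas as pd

# one row per system, one column per metric (random placeholders)
scores = pd.DataFrame(
    np.random.rand(1750, 4), columns=["bleu", "meteor", "bertscore", "usl_h"]
)

def top_mask(df, pct=99):
    """A system is 'top scoring' if it reaches the percentile on ANY metric."""
    thresholds = df.quantile(pct / 100.0)
    return (df >= thresholds).any(axis=1)

mask = top_mask(scores, pct=90)
print("fraction retained:", mask.mean())        # cf. Figure 2
print(scores[mask].corr(method="spearman"))     # top systems (cf. Figure 1, upper right)
print(scores.corr(method="spearman"))           # all systems (lower left)
```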
}, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "B Dialog Models", "publication_ref": [ "b8", "b12", "b19" ], "table_ref": [], "text": "We train Transformer conditional language models in FAIRSEQ using parameters from the FLORES 6 benchmark for low-resource machine translation (Guzmán et al., 2019).\nAs a baseline, we use a 5-layer encoder and decoder, 512 dimensional embeddings, and 2 encoder and decoder attention heads. We regularize with 0.2 label smoothing, and 0.4 dropout. We optimize using Adam with a learning rate of 10 -3 . We train 100 epochs, and select the best checkpoint based on validation set perplexity. We run inference several ways: greedy search, beam size 10, beam size 100, top p=.5 sampling, top p=.7 sampling, top p=.9 sampling, top k=10, top k=100. We do not use a length penalty.\nWe sweep SentencePiece (Kudo and Richardson, 2018) vocabulary size (1k,2k, 4k,8k,16k), dropout (0.0, 0.1, 0.2, 0.3, 0.4), attention & ReLU dropout (0.0, 0.1, 0.2, 0.3, 0.4), and label smoothing (0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6). Figure 3 shows the train command. We evaluate on the DailyDialog corpus (Li et al., 2017), as released by ParlAI (Miller et al., 2017). 7 We train both a single and multiturn model. We evalute DailyDialog and NCM on the single turn models, and ESL2/3 on the multiturn models. 3 0.34 0.34 0.17 0.17 0.52 -0.11 0.11 0.67 0.77 0.73 0.24 0.84 0.31 0.33 0.28 0.29 0.34 0.34 0.17 0.17 0.52 -0.18 0.15 0.32 0.43 0.52 0.51 0.54 0.66 0.63 0.51 0.66 0.66 0.57 0.55 0.17 -0.5 0.55 0.57 0.03 0.71 0.26 0.2 0.15 0.24 0.2 0.21 0.03 0.05 0.75 0.1 -0.04 0.11 0.93 -0.03-0.01 0 -0.05-0.01 -0 -0.12-0.12 0.31 -0.37 0.05 0.14 0.7 0.71 0.74 0.76 0.73 0.72 0.79 0.79 0.27 -0.43 0.64 0.1 0.1 0.08 0.08 0.1 0.1 -0.04-0.03 0.47 -0.21 0.09 0.95 0.85 0.97 0.91 0.91 0.88 0.9 0.54 -0.15 0.5 0.9 0.93 0.98 0.97 0.91 0.91 0.37 -0.31 0.57 0.85 0.91 0.91 0.85 0.84 0.28 -0.38 0.63 0.92 0.91 0.89 0.91 0.52 -0.18 0.51 1 0.9 0.89 0.34 -0.34 0.58 0.89 0.88 0.34 -0.33 0.58 0.99 0.28 -0.39 0.66 0.35 -0.34 0.65 0.27 0.01 -0.64 " }, { "figure_ref": [], "heading": "C Human Eval Datasheet", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "D Human Annotation Details", "publication_ref": [], "table_ref": [], "text": "For the NCM dataset 3 workers responded to each conversation , and for every other dataset, 4 workers responded to each conversation . Workers were informed that they would receive an extra cent as bonus for each response provided beyond the minimum required two per conversation. The task itself paid thirty cents, which we now realize was too low for the difficulty and time requirement. The maximum a worker could receive was sixty cents(for providing every 'extra' response, thirty cents for the HIT and thirty cents in bonus). A quality control check was not included in the HIT itself but was performed after results were collected and before approving or rejecting assignments. We filtered out and rejected workers who provided responses that either: were not unique, were one character, or punctuation only. This constituted a small fraction of workers." }, { "figure_ref": [], "heading": "D.1 Dissimilarity of References", "publication_ref": [], "table_ref": [], "text": "For every conversation in each of the datasets we have anywhere from 6-20 responses. We noticed an inverse relationship between the prompt number and the average number of responses from workers. 
Using the Jaccard distance to quantify diversity in responses, we found that the ESL dataset had the greatest diversity. However, even single-turn prompts from the NCM received diverse responses. For example, the prompt \"What is two plus two?\" from the NCM dataset received responses such as \"four\", \"same as five plus three\", and \"I'm 3, how would I know?\", with each of these answers coming from a different worker. Figure 7 shows the Jaccard distance scores for each of the datasets." } ]
We release MMSMR, a Massively Multi-System MultiReference dataset to enable future work on metrics and evaluation for dialog. Automatic metrics for dialogue evaluation should be robust proxies for human judgments; however, the verification of robustness is currently far from satisfactory. To quantify the robustness of metric-human correlations and understand what is necessary in a test set, we create and release an 8-reference dialog dataset by extending single-reference evaluation sets and introduce this new language learning conversation dataset. We then train 1750 systems and evaluate them on our novel test set and the DailyDialog dataset. We release the novel test set, model hyperparameters, inference outputs, and metric scores for each system on a variety of datasets (upon publication).
How to Choose How to Choose Your Chatbot: A Massively Multi-System MultiReference Data Set for Dialog Metric Evaluation
[ { "figure_caption": "Figure 1 :1Figure 1: Correlations between various metrics on the ESL3 test set. The bottom left includes all systems, the top right is the top ones.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Training command.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Correlations between various metrics on the ESL2 test set. The bottom left includes all systems, the top right is the top ones.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :Figure 7 :67Figure 6: Correlations between various metrics on the NCM test set. The bottom left includes all systems, the top right is the top ones.", "figure_data": "", "figure_id": "fig_3", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Figure 5: Correlations between various metrics on the DailyDialog test set. The bottom left includes all systems, the top right is the top ones.", "figure_data": "bleu bleu0.88 0.9 0.96 0.75 0.91 0.55 0.27 0.82 -0.01 -0 0.02 -0.03 0.04 0.04 -0.29-0.16-0.05-0.25-0.19 0.5 0.66 0.76 0.54 0.75 0.06 0.42 0.29 0.74 0.59 0.58 0.69 0.56 0.55 0.36 0.48 0.61 0.06 0.281.0meteor bartscore meteor bartscore0.9 0.94 0.94 0.85 0.91 0.890.92 0.94 0.84 0.74 0.73 0.15 0.84 -0.24-0.11-0.07-0.26-0.06-0.05-0.27-0.25-0.28-0.56 -0.5 0.92 0.77 0.79 0.59 0.02 0.79 -0.26-0.17 -0.1 -0.29-0.11 -0.1 -0.41-0.35-0.26-0.42-0.33 0.55 0.79 0.55 0.53 0.51 0.13 0.65 0.19 0.26 0.34 0.16 0.28 0.29 0.24 0.22 0.5 -0.46 0.69 0.74 0.6 0.7 0.21 0.31 0.44 0.58 0.56 0.6 0.56 0.58 0.58 0.38 0.42 0.77 -0.03 0.380.8rogueL rogueL0.97 0.94 0.95 0.93 0.92 0.950.84 0.89 0.53 0.3 0.8 0 0.07 0.08 -0.03 0.12 0.13 -0.19-0.09-0.05-0.31-0.22 0.5 0.87 0.23 0.17 0.52 0.51 0.4 0.49 0.48 0.4 0.41 0.29 0.36 0.77 -0.16 0.51prism prism0.9 0.92 0.89 0.96 0.9 0.9 0.94 0.940.6 0.46 0.37 0.7 0.02 0.24 0.18 0.02 0.29 0.29 0.08 0.12 -0.06-0.47-0.28 0.31 0.11 0.62 0.18 0.6 0.81 0.74 0.6 0.81 0.81 0.68 0.68 0.45 -0.12 0.180.6bertscore bertscore0.93 0.83 0.88 0.95 0.91 0.92 0.83 0.93 0.96 0.910.38 0.3 0.72 0.17 0.12 0.17 0.14 0.15 0.16 -0.18-0.04 0.2 -0.07 0.04 0.14 0.15 0.46 0.58 0.36 0.44 0.56 0.35 0.36 0.26 0.38 0.81 -0.06 0.43USL-H vup USL-H vup0.75 0.85 0.78 0.77 0.79 0.72 0.84 0.73 0.75 0.87 0.91 0.9 0.69 0.65 0.79 0.72 0.71 0.73 0.67 0.86 0.79 0.86 0.86 0.93 0.88 0.72-0.03 0.82 -0.42-0.31-0.22-0.42-0.27-0.27-0.32 -0.4 -0.43-0.72-0.69 0.23 0.67 0.65 0.49 0.69 0.61 0.59 0.58 0.69 0.44 0.26 0.25 0.04 0.87 -0.2 -0.14-0.09-0.26-0.11-0.11-0.17-0.23 0.1 -0.7 0.82 -0.03 0.66 0.75 0.58 0.68 0.73 0.72 0.53 0.57 0.13 0.07 -00.4nup nll nup nll0.91 0.9 0.9 0.93 0.92 0.91 0.9 0.86 0.81 0.63 0.71 0.83 0.84 0.9 0.53 0.94 0.77 0.82 0.87 0.88 0.88 0.85 0.86 0.91 0.78 0.91 0.77 0.91 0.92 0.91 0.95 0.58 0.91 0.77-0.19-0.07-0.04-0.18-0.04-0.04-0.27-0.21 -0.2 -0.44-0.42 0.87 0.77 0.99 0.83 0.83 0.78 0.87 0.89 0.37 0.76 -0.03-0.05 0.02 -0.09-0.03-0.02 -0.1 -0.1 0.4 -0.55 0.91 0.86 0.8 0.97 0.83 0.82 0.64 0.73 0.57 0.28 -0.070.2nce ppl nce ppl0.8 0.71 0.74 0.86 0.9 0.89 0.6 0.96 0.82 0.97 0.7 0.64 0.65 0.75 0.79 0.79 0.57 0.81 0.72 0.83 0.85 0.9 0.84 0.93 0.92 0.97 0.92 0.66 0.94 0.8 0.95 0.8 0.79 0.85 0.84 0.89 0.84 0.62 0.84 0.73 0.87 0.920.8 0.89 0.99 0.98 0.83 0.88 0.78 0.15 0.62 0.76 0.8 0.8 0.7 0.72 0.72 0.07 0.52 0.86 0.88 0.98 0.98 0.76 0.78 0.51 0.16 -0.09 0.8 0.87 0.88 0.71 0.71 0.61 0.05 -0.010.0norm_nll norm_nce 
norm_nll norm_nce0.81 0.65 0.71 0.84 0.86 0.9 0.55 0.96 0.79 0.99 0.98 0.83 0.8 0.72 0.75 0.87 0.91 0.9 0.61 0.95 0.82 0.96 1 0.85 0.97 0.91 0.78 0.91 0.92 0.93 0.96 0.6 0.93 0.78 0.98 0.97 0.87 0.89 0.84 0.93 0.91 0.97 0.91 0.66 0.94 0.8 0.94 1 0.91 0.960.85 0.84 0.81 0.9 0.89 0.37 0.74 1 0.8 0.84 0.74 0.1 0.59 0.85 0.85 0.69 0.77 0.57 0.29 -0.13 1 0.74 0.75 0.52 0.13 -0.070.2norm_ppl norm_ppl0.8 0.72 0.75 0.87 0.91 0.9 0.61 0.94 0.82 0.96 0.99 0.85 0.97 1 0.88 0.84 0.93 0.91 0.97 0.91 0.66 0.93 0.8 0.94 0.99 0.92 0.96 10.79 0.83 0.74 0.09 0.59 0.73 0.74 0.54 0.12 -0.06distinct_1 distinct_10.63 0.65 0.61 0.74 0.85 0.75 0.57 0.85 0.71 0.83 0.91 0.79 0.86 0.91 0.91 0.74 0.79 0.78 0.77 0.88 0.76 0.64 0.8 0.71 0.75 0.87 0.82 0.8 0.87 0.870.95 0.71 0.07 0.46 0.96 0.48 -0.07-0.150.4distinct_2 distinct_20.74 0.67 0.68 0.82 0.89 0.85 0.59 0.94 0.78 0.93 0.97 0.82 0.95 0.97 0.97 0.96 0.87 0.83 0.87 0.88 0.95 0.89 0.67 0.91 0.8 0.88 0.95 0.85 0.92 0.94 0.93 0.940.8 0.2 0.59 0.52 0.01 -0.14DialogRPT-HvM DialogRPT-updown DialogRPT-HvM DialogRPT-updown0.75 0.61 0.68 0.8 0.8 0.89 0.5 0.86 0.74 0.94 0.92 0.8 0.94 0.92 0.92 0.82 0.91 0.46 0.19 0.39 0.44 0.38 0.55 0.05 0.56 0.37 0.68 0.6 0.45 0.67 0.58 0.58 0.42 0.54 0.62 0.89 0.85 0.94 0.95 0.91 0.96 0.67 0.85 0.85 0.92 0.92 0.87 0.93 0.93 0.93 0.8 0.89 -0.07-0.28-0.11-0.13-0.17-0.11-0.45-0.14-0.28-0.03-0.09-0.17-0.04 -0.1 -0.11-0.24-0.17-0.130.29 0.81 0.58 -0.11 0.32 -0.60.6DialogRPT-HvR DialogRPT-HvR0.71 0.48 0.64 0.72 0.69 0.83 0.35 0.81 0.65 0.91 0.86 0.71 0.91 0.85 0.85 0.69 0.82 0.92 0.8 0.83 0.88 0.88 0.88 0.86 0.86 0.87 0.8 0.96 0.78 0.81 0.73 0.79 0.8 0.8 0.71 0.8 0.85 -0.290.8bleu bleumeteor meteorbartscore bartscorerogueL rogueLprism prismbertscore bertscoreUSL-H USL-Hvup vupnup nupnll nllnce nceppl pplnorm_nll norm_nllnorm_nce norm_ncenorm_ppl norm_ppldistinct_1 distinct_1distinct_2 distinct_2DialogRPT-HvM DialogRPT-HvMDialogRPT-updown DialogRPT-updownDialogRPT-HvR DialogRPT-HvR", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
Huda Khayrallah; Zuhaib Akhtar; Edward Cohen; João Sedoc
[ { "authors": "Fernando Alva-Manchego; Carolina Scarton; Lucia Specia", "journal": "Computational Linguistics", "ref_id": "b0", "title": "The (Un)Suitability of Automatic Evaluation Metrics for Text Simplification", "year": "2021" }, { "authors": "Satanjeev Banerjee; Alon Lavie", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "year": "2005" }, { "authors": "Jan Deriu; Alvaro Rodrigo; Arantxa Otegi; Guillermo Echegoyen; Sophie Rosset; Eneko Agirre; Mark Cieliebak", "journal": "Artificial Intelligence Review", "ref_id": "b2", "title": "Survey on evaluation methods for dialogue systems", "year": "2021" }, { "authors": "Michel Galley; Chris Brockett; Alessandro Sordoni; Yangfeng Ji; Michael Auli; Chris Quirk; Margaret Mitchell; Jianfeng Gao; Bill Dolan", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets", "year": "2015" }, { "authors": "Varun Gangal; Harsh Jhamtani; Eduard Hovy; Taylor Berg-Kirkpatrick", "journal": "", "ref_id": "b4", "title": "Improving automated evaluation of open domain dialog via diverse reference augmentation", "year": "2021" }, { "authors": "Varun Gangal; Harsh Jhamtani; Eduard Hovy; Taylor Berg-Kirkpatrick", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Improving automated evaluation of open domain dialog via diverse reference augmentation", "year": "2021" }, { "authors": "Xiang Gao; Yizhe Zhang; Michel Galley; Chris Brockett; Bill Dolan", "journal": "", "ref_id": "b6", "title": "Dialogue response ranking training with large-scale human feedback data", "year": "2020" }, { "authors": "Prakhar Gupta; Shikib Mehri; Tiancheng Zhao; Amy Pavel; Maxine Eskenazi; Jeffrey Bigham", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Investigating evaluation of open-domain dialogue systems with human generated multiple references", "year": "2019" }, { "authors": "Francisco Guzmán; Peng-Jen Chen; Myle Ott; Juan Pino; Guillaume Lample; Philipp Koehn; Vishrav Chaudhary; Marc'aurelio Ranzato", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English", "year": "2019" }, { "authors": "Huda Khayrallah; João Sedoc", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "SMRT chatbots: Improving non-task-oriented dialog with Simulated Multiple Reference Training", "year": "2020" }, { "authors": "Taku Kudo; John Richardson", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Yanran Li; Hui Su; Xiaoyu Shen; Wenjie Li; Ziqiang Cao; Shuzi Niu", "journal": "Asian Federation of Natural Language Processing", "ref_id": "b12", "title": "DailyDialog: A manually labelled multi-turn dialogue dataset", "year": 
"2017" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Chia-Wei Liu; Ryan Lowe; Iulian Serban; Mike Noseworthy; Laurent Charlin; Joelle Pineau", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation", "year": "2016" }, { "authors": "Ryan Lowe; Michael Noseworthy; Iulian Vlad Serban; Nicolas Angelard-Gontier; Yoshua Bengio; Joelle Pineau", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Towards an automatic Turing test: Learning to evaluate dialogue responses", "year": "2017" }, { "authors": "Nitika Mathur; Timothy Baldwin; Trevor Cohn", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics", "year": "2020" }, { "authors": "Tomas Mikolov; Kai Chen; Greg Corrado; Jeffrey Dean", "journal": "", "ref_id": "b17", "title": "Efficient estimation of word representations in vector space", "year": "2013" }, { "authors": "Tomas Mikolov; Ilya Sutskever; Kai Chen; Greg S Corrado; Jeff Dean", "journal": "Advances in neural information processing systems", "ref_id": "b18", "title": "Distributed representations of words and phrases and their compositionality", "year": "2013" }, { "authors": "Alexander Miller; Will Feng; Dhruv Batra; Antoine Bordes; Adam Fisch; Jiasen Lu; Devi Parikh; Jason Weston", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "ParlAI: A dialog research software platform", "year": "2017" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Jeffrey Pennington; Richard Socher; Christopher D Manning", "journal": "", "ref_id": "b21", "title": "Glove: Global vectors for word representation", "year": "2014" }, { "authors": "Vitou Phy; Yang Zhao; Akiko Aizawa", "journal": "", "ref_id": "b22", "title": "Deconstruct to reconstruct a configurable evaluation metric for open-domain dialogue systems", "year": "2020" }, { "authors": "Brian Thompson; Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Automatic machine translation evaluation in many languages via zero-shot paraphrasing", "year": "2020" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b24", "title": "Attention is all you need", "year": "2017" }, { "authors": "Oriol Vinyals; Quoc V Le", "journal": "", "ref_id": "b25", "title": "A neural conversational model", "year": "2015" }, { "authors": "Yi-Ting Yeh; Maxine Eskenazi; Shikib Mehri", "journal": "", "ref_id": "b26", "title": "A comprehensive assessment of dialog evaluation metrics", "year": "2021" }, { "authors": "Weizhe Yuan; Graham Neubig; Pengfei Liu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "Bartscore: Evaluating generated text as text generation", "year": "2021" }, { "authors": "Chen Zhang; João Sedoc; Luis Fernando; D' Haro; Rafael Banchs; Alexander Rudnicky", "journal": "", 
"ref_id": "b28", "title": "Automatic evaluation and moderation of open-domain dialogue systems", "year": "2021" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b29", "title": "Bertscore: Evaluating text generation with bert", "year": "2019" }, { "authors": "Xuan Zhang; Kevin Duh", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b30", "title": "Reproducible and efficient benchmarks for hyperparameter optimization of neural machine translation systems", "year": "2020" }, { "authors": "Tiancheng Zhao; Ran Zhao; Maxine Eskenazi", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Learning discourse-level diversity for neural dialog models using conditional variational autoencoders", "year": "2017" } ]
[]
10.1073/pnas.2218523120
2023-10-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b70", "b29", "b57", "b40", "b19", "b55", "b5", "b59", "b71", "b55", "b9", "b56", "b19", "b14", "b65", "b57", "b40", "b58", "b43", "b24" ], "table_ref": [], "text": "Dialogue tutoring systems have demonstrated significant potential in augmenting learning outcomes across various domains (Wollny et al., 2021;Ji Figure 1: Current models achieve high accuracy in solving MWPs but struggle with teaching since they often give incorrect feedback or reveal directly the solution too early. MATHDIAL mitigates this using scaffolding questions and grounding annotations. et al., 2023). However, the progress of scaling them is considerably hindered by a lack of highquality datasets, which actually provide students with space for exploration by scaffolding their learning (Tack and Piech, 2022;Macina et al., 2023). The current datasets are frequently marred with issues like low pedagogical quality, are too small, or focus on noisy classroom settings. While recording tutoring sessions might be a scalable alternative, it bears strong privacy concerns (Demszky and Hill, 2023). On the other hand, crowdsourcing dialogues is costly, requires synchronizing annotators, and can lead to insufficient quality due to poor annotator training (Stasaski et al., 2020).\nAt the same time, recent advancements in Large Language Models (LLMs) have enabled significant improvements in generative dialogue systems (Budzianowski and Vulić, 2019;Thoppilan et al., 2022;Xu et al., 2023) (Stasaski et al., 2020) Language 391 3 315 1:1 role-playing image, answer 5 3.12 0.83 13.0 TSCC (Caines et al., 2020) Language 102 2 013 1:1 tutoring ✗ 5 3.55 0.66 12.3 TalkMoves (Suresh et al., 2022) Science 567 9 280 classroom ✗ 10 2.93 0.67 9.6 NCTE (Demszky and Hill, 2023) great success in reasoning over educational domains, such as math problems (Cobbe et al., 2021;Wei et al., 2022;Wang et al., 2023b;OpenAI, 2023). However, this has not yet translated to improvements in dialogue tutoring systems, as showcased by the lack of pedagogical understanding and factually incorrect behaviour of GPT-3 (Tack and Piech, 2022) and open-source LLMs (Macina et al., 2023).\nFigure 1 shows examples of generations that reveal information to students too early and misunderstand their solutions. This is also confirmed in our human evaluation: when asked ChatGPT to tutor a student as a teacher, it directly reveals the solution 66% of times and provides incorrect feedback 59% of times (cf. Section 6.3).\nTo address these issues, we collect and present a dialogue tutoring dataset called MATHDIAL . The dataset has rich tutoring quality which we measure by equitable tutoring (Tanner, 2013): providing opportunities for the student to learn, think and explore potential solutions. For this, we take inspiration from human tutoring strategies (Nye et al., 2014) and active learning approaches in classrooms (Freeman et al., 2014) that show a positive impact on student learning gains.\nWe collect our dataset using a novel data collection approach. This approach pairs human teachers with an LLM that simulates students and their errors, which the same teachers rate as representative of real students in our study.\nMATH-DIAL is grounded in math word problems and student confusions and therefore provides a challenging testbed for creating faithful and equitable dialogue tutoring models that can reason over complex data. 
Figure 1 shows one dialogue from MATH-DIAL , where a teacher scaffolds student learning by asking an interactive scaffolding question instead of leaking the solution.\nWe benchmark various models on the task of generating tutor responses for MATHDIAL , using both finetuning and prompting. We find that finetuning smaller open-source LLMs on our dataset can make them significantly more equitable and faithful to the teaching material than prompting larger LLMs (Section 6.3). Moreover, we propose an interactive, end-to-end tutoring simulation between a teacher and student model where we measure a trade-off between student solving success and teachers directly revealing answers in (Section 6.4). Open-source LLMs that are finetuned on our dataset achieve similar student-solving success as ChatGPT while telling solutions less often. Finally, we highlight open challenges on this dataset, such as generalization to new problems." }, { "figure_ref": [], "heading": "Background & Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dialogue Datasets & Collection Methodologies", "publication_ref": [ "b27", "b26", "b67", "b30", "b6", "b33", "b77", "b7", "b21", "b52", "b48", "b41", "b36", "b22", "b23", "b10", "b57", "b40" ], "table_ref": [], "text": "Research on task-oriented dialogue systems has mainly focused on customer service, for instance, restaurant reservations (Henderson et al., 2014;Gašic et al., 2014). Notably, Wen et al. (2017) collect such dialogues with the Wizard-of-Oz (WoZ) paradigm (Kelley, 1984), where crowdworkers are connected to roleplay interlocutors. One plays the user who interacts with the system, and the other roleplays the system and is often exclusively given access to domain knowledge. WoZ has been used to collect many popular datasets, such as Multi-WoZ (Budzianowski et al., 2018) and extensions (Kim et al., 2020;Zhu et al., 2020), Taskmaster (Byrne et al., 2019), and open-domain datasets like Wizard-of-Wikipedia (Dinan et al., 2019). Other collection methods include crowdworkers filling dialogue outlines (Shah et al., 2018;Rastogi et al., 2020;Majewska et al., 2023), or scraping from the web (Li et al., 2017;Dziri et al., 2019).\nMultiple works have shown shortcomings in using non-expert crowdworkers. For instance, document-grounded corpora often contain hallucinations in ground-truth data (Dziri et al., 2022), and task-oriented corpora tend to suffer from annotation errors and low lexical diversity (Casanueva et al., 2022). More closely related to this work, current tutoring corpora lack sufficient tutoring quality Figure 2: Overview of the data collection pipeline: First, student confusions are oversampled from an LLM and sorted by frequency. Then, a human teacher synchronously interacts with a student simulated by an LLM that is instructed with a student profile and incorrect solution. (Tack and Piech, 2022;Macina et al., 2023).\nMATHDIAL mitigates these issues by adapting the WoZ paradigm to using human teachers as experts in collaboration with an LLM." }, { "figure_ref": [], "heading": "Dialogue Tutoring Corpora & Teacher Moves", "publication_ref": [ "b50", "b53", "b54", "b28", "b49", "b0", "b51", "b68", "b8", "b40", "b55", "b9", "b56", "b19" ], "table_ref": [ "tab_1" ], "text": "Theoretical and empirical studies have shown the importance of questioning in human learning (Roscoe and Chi, 2008;Shahriar and Matsuda, 2021;Shridhar et al., 2022). 
Therefore, prior research has explored which types of questions in tutoring conversations improve student learning. Nye et al. ( 2014), for instance, show the effectiveness of deep reasoning questions, and (Howe et al., 2019) find that elaboration and challenging of previous contributions can benefit student learning. This has led to a series of human-authored dialogue tutoring systems, like AutoTutor (Nye et al., 2014), which guide students in problem-solving using natural language explanations. Assisting students to succeed in complex tasks commonly referred to as scaffolding (Reiser, 2004;Anghileri, 2006). More recently, several rule-based dialogue systems with predefined goals have been proposed (Ruan et al., 2019;Winkler et al., 2020;Cai et al., 2021), but scaling them requires extensive human authoring and quickly becomes complex. As a consequence, building effective automatic tutors at scale remains an open problem. While data-driven approaches seem like a promising direction (Macina et al., 2023;Wang et al., 2023a), only a limited number of tutoring corpora are publicly available to our knowledge: CIMA (Stasaski et al., 2020), TSCC (Caines et al., 2020), TalkMoves (Suresh et al., 2022), and NCTE (Demszky and Hill, 2023). All of them suffer from several limitations, such as missing grounding information (TSCC, TalkMoves, NCTE), low tutoring quality (CIMA), small dataset sizes (all), or a focus on noisy classroom scenarios (see Table 1)." }, { "figure_ref": [], "heading": "Synthetic Dialogue Data Creation", "publication_ref": [ "b16", "b32", "b11", "b18", "b1", "b78", "b42", "b57", "b40" ], "table_ref": [], "text": "LLMs have recently found their way as synthetic dialogue dataset generators due to their increasingly human-like behaviour. Both methods using finetuning (Dai et al., 2022) and prompting (Kim et al., 2022;Chen et al., 2023) haven been proposed. The human-like behaviour also manifests in them showing similar biases in logical reasoning as humans (Dasgupta et al., 2022;Binz and Schulz, 2023), and can be comparable to gold-human annotations for generation tasks (Ziems et al., 2023). Consequently, they have been used to simulate students for teacher training (Markel et al., 2023), suggesting that one might also rely upon them to create meaningful tutors. However, Tack and Piech (2022); Macina et al. (2023) show that they can not yet perform well as teachers out-of-the-box, because they often incorrectly assess student solutions and reveal answers too quickly." }, { "figure_ref": [ "fig_8" ], "heading": "MATHDIAL Collection Pipeline", "publication_ref": [ "b14", "b2" ], "table_ref": [ "tab_9" ], "text": "This section introduces a framework for collecting high-quality tutoring conversations, highlighted in Figure 2. The core idea behind it is to connect Table 2: Teacher moves with examples of utterances and their intents from the MATHDIAL dataset.\nan expert annotator, who roleplays a teacher, with an LLM that simulates the student. 1 We use this methodology to collect dialogues based on GSM8k (Cobbe et al., 2021), a diverse collection of grade school multi-step math word problems (MWPs). First, we estimate student confusion for a given MWP by using temperature sampling to obtain diverse solutions from an LLM. We then select the most frequent incorrect solution. Therefore, each tutoring dialogue deals with the solution of exactly one MWP and one confusion. As a next step, we pair a human teacher with the LLM to create a dialogue that should resolve the confusion. 
We ground the LLM in one of six student profiles. These student profiles consist of common misconceptions of students learning algebra, such as struggling to recognize the problem type, and are taken from Booth et al. (2017). A detailed description of these profiles is found in Section C.\nThe teacher has access to the MWP and its correct step-by-step solution, as well as the initial student confusion (cf. Figure 7). Then, the teacher is tasked to guide the student to solve the problem by employing a sequence of scaffolding moves, which we refer to as a teaching strategy. The teachers themselves can use their expertise to determine the strategy but are required to select the current move before writing a response, as we have found this to lead to more diverse pedagogical patterns. We describe these moves in Section 3.4. The dialogue ends when the teacher marks the problem as solved or a certain time limit is reached.\nIn addition to the collected dialogues, we obtain metadata that future work can explore for building more effective tutor models. In particular, for each dialogue MATHDIAL contains the MWP, step-by-step solution, the exact step that led to student confusion, and annotations indicating if it was resolved over the course of the dialogue. Stepby-step and student solutions are also provided as equations." }, { "figure_ref": [], "heading": "Teacher Selection", "publication_ref": [ "b72" ], "table_ref": [], "text": "We recruit professionals with teaching experience through Prolific2 . We only select teachers who have completed at least 500 submissions and achieved a 100% completion rate. Annotators read guidelines for the task in an initial training phase (cf. Section D.3) and then complete a test on an example conversation to assess their understanding of the task. We only select annotators with 100% test scores for further rounds of data collection, similar to Zhang et al. (2023). We employ 91 expert annotators, of which 71 identify as female and 18 as male. The majority of annotators are nationals of the UK, followed by the USA, Canada, Australia, India, and Germany, with a median age of 39 years." }, { "figure_ref": [], "heading": "Problem & Confusion Selection", "publication_ref": [ "b44" ], "table_ref": [], "text": "We employ an LLM to generate plausible student confusions and base the dialogues on them. We pick the most frequent incorrect solution sampled from ChatGPT (gpt-3.5-turbo) (Ouyang et al., 2022) using chain-of-thought prompting. To be precise, we first use temperature sampling to obtain N = 50 reasoning paths for every MWP in GSM8k, with T = 0.7 and no top-k truncation Wang et al. (2023b). Then, we group incorrect solutions according to their final numeric answer and pick one from the set with the largest cardinality. More details can be found in Appendix B. As we will show in Section 4.1, teachers think that the majority of sampled confusions are plausible and could also have been made by a real student." }, { "figure_ref": [], "heading": "Student Turn Generation", "publication_ref": [ "b44" ], "table_ref": [], "text": "We use InstructGPT (text-davinci-003) (Ouyang et al., 2022) to generate student turns. We prompt the model with the previous dialogue history and additional information that grounds the next turn. The prompt contains the MWP, the initial student confusion, as well as the student profile which explains the type of confusion and persona of the student." 
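To make the confusion-selection step described above concrete, the following sketch summarizes it. Here sample_fn, the solution objects, and their final_answer field are placeholders introduced purely for illustration; the actual prompts are given in Appendix B.

```python
from collections import Counter

def select_confusion(problem, gold_answer, sample_fn, n=50, temperature=0.7):
    """Pick the most frequent *incorrect* solution sampled for one MWP."""
    solutions = sample_fn(problem, n=n, temperature=temperature)  # n CoT samples
    wrong = [s for s in solutions if s.final_answer != gold_answer]
    if not wrong:
        return None  # the model never errs on this problem; no confusion to use
    counts = Counter(s.final_answer for s in wrong)
    most_common_answer, _ = counts.most_common(1)[0]
    # one representative solution from the largest group of incorrect answers
    return next(s for s in wrong if s.final_answer == most_common_answer)
```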
}, { "figure_ref": [], "heading": "Taxonomy of Teacher Moves", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "This section defines the taxonomy of all teacher moves that are used in MATHDIAL . We base the first two on the work of Reiser ( 2004), who suggest that scaffolding strategies can be split into two main categories: structure and problematize. These form the basis for the Focus and Probing moves employed in our study. Focus is used to constrain the student to make direct progress towards solving the problem. Probing is used to generalize certain aspects of the problem which allows the student to explore its underlying concepts. More concretely, a teacher might construct a new, related problem that targets only one specific concept that is needed to solve the original MWP. However, scaffolding might also fail, for example when a student gets stuck. Then, teachers may need to reveal parts of the answer. This is called Telling. Finally, turns that just serve as conversational elements and have limited pedagogical value are classed as Generic. Table 2 lists finer-grained intents for each of these four categories along with a set of accompanying examples." }, { "figure_ref": [], "heading": "MATHDIAL Analysis", "publication_ref": [ "b40", "b75", "b10", "b20" ], "table_ref": [ "tab_1" ], "text": "We quantitatively evaluate the collected tutoring dialogues to assess their quality. For this, we outline descriptive statistics in Table 1. First of all, we can see that our dataset is significantly larger in terms of the number of dialogues and utterances than all related datasets that are listed. By opensourcing such a large dataset, we fill a crucial gap of sufficiently-sized open-source tutoring corpora which has so far hindered research in the area (Macina et al., 2023). Furthermore, MATHDIAL exhibits a higher diversity, measured in bigram entropy (Zhang et al., 2018), than CIMA and TalkMoves. The diversity is similar to NCTE and TSCC which consist of transcripts of classroom and one-to-one tutoring sessions, respectively. This supports the observation that expert annotators tend to create more diverse utterances than untrained crowdworkers (Casanueva et al., 2022), and also that LLMs can be used to generate diverse tutoring dialogues. Finally, we measure the Uptake (Demszky et al., 2021) of annotated teacher utterances. Uptake indicates how coherent the teacher's utterance is with respect to the previous student's turn. We find that MATH-DIAL and CIMA have similar uptake. Both surpass the other datasets in our comparison." }, { "figure_ref": [ "fig_0" ], "heading": "How well can LLMs simulate students?", "publication_ref": [], "table_ref": [], "text": "Our collection methodology relies on LLMs for simulating students. Therefore, it is crucial to ensure that the turns simulated by the LLM also match what a teacher would expect of a real student, who in our case is a sixth grader. In this section, we evaluate this quantitatively.\nFigure 3 shows that annotators rate the majority of generations by the model positively along two dimensions. The first one says that the confusion of the student is typical confusion of a sixth grader. The second one says that the interaction with the student as a whole is as expected of a sixth grader. We release these annotations with our final dataset which allows users of MATHDIAL to filter out utterances that are of a lower quality.\nMoreover, LLMs can be prone to incorrect arithmetic calculations. 
Therefore, we asked annotators to distinguish conceptual errors from such simple calculation mistakes. Arithmetic errors may be easily resolved through calculators but conceptual errors are likely to require tutors to resolve them, for example by scaffolding. Annotators identified around 80% of the confusions as conceptual, leaving around a fifth containing arithmetic errors. Again, we include these annotations to allow for data filtering." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Which teaching strategies do annotators choose?", "publication_ref": [ "b31", "b60" ], "table_ref": [], "text": "In this Section, we evaluate when teachers use which teacher moves in the conversations. Figure 4 shows that teachers most frequently use Focus questions which are found in 37% of utterances. Focus is followed by Generic and Probing. Telling is the rarest move. To validate these annotations, we sampled 17 conversations consisting of 102 teacher utterances and asked two independent annotators to annotate their moves. We obtain an agreement of κ = 0.60 between the two annotators and κ = 0.49 and κ = 0.34, respectively, between either of the annotators and the teacher. We note that Probing and Focus appear to be particularly challenging to distinguish and acknowledge that the boundary between them may be subjective. Merging these two categories into one larger 'scaffolding' category improves agreements to κ = 0.67, κ = 0.75 and κ = 0.55. Our observations are in line with related works that have shown low inter-annotator agreement between experts for detailed teacher moves in classroom settings (Kelly et al., 2020).\nThe sequence of moves employed by the teachers constitutes their teaching strategy which we analyze in the following. Figure 4 shows the distribution of teacher moves for different stages of the conversations. We find that the initial utterance by the teacher is usually generic and serves as a conversation opener, oftentimes by asking the student to repeat the question or solution attempt. During the conversation, teachers mainly use scaffolding to either probe the student or focus the conversation on a specific part of the problem. The more the conversations progress the more likely teachers are to resort to Telling because students often get stuck at a specific subproblem and are unable to resolve it themselves. As a consequence, less Probing is used. This has been shown to keep students engaged in the conversation who otherwise become frustrated by being stuck (VanLehn, 2011). " }, { "figure_ref": [], "heading": "How often can student confusion be resolved?", "publication_ref": [], "table_ref": [], "text": "The goal of MATHDIAL is to enable building tutors that can help students resolve their confusion. Therefore, we would like to know how often teachers can do so in our collected data. This is annotated by the teachers themselves, who assessed that they were successful in almost 89% of the conversations. In ca. 75% of the conversations by using mainly scaffolding questions, and only in around 14% by revealing the majority of the answer. The conversations in which confusions could not be resolved can still be useful, as they, for instance, can be used to train classifiers to determine when human intervention in such tutoring sessions is required." }, { "figure_ref": [], "heading": "Modeling Tutors with MATHDIAL", "publication_ref": [], "table_ref": [], "text": "We focus our initial studies on MATHDIAL on the task of tutor response generation. 
Tutor response generation aims to model the teacher in a dialogue by generating follow-up turns that guide the student towards learning and solving the problem.\nIn the following subsections, we compare different finetuned and prompted language models on the task and evaluate how much the detailed information given to the model, such as the step-by-step solution of the MWP, influences performance." }, { "figure_ref": [], "heading": "Training details", "publication_ref": [ "b39", "b61", "b69", "b35", "b47", "b38", "b73", "b3", "b46", "b45", "b74", "b23", "b15", "b20" ], "table_ref": [], "text": "Table 3: Results of finetuned and zero-shot prompted models on the tutor response generation task (columns report sBLEU and BERTScore between $u_{T+1}$ and $\hat{u}_{T+1}$, KF1 and BERTScore between $u_{T+1}$ and the MWP, and Uptake, for the full, seen, and unseen MATHDIAL splits). We find that i) models finetuned on our dataset can outperform much larger prompted models, ii) there is still a gap in terms of generalization, iii) simply scaling the same pretrained model does not immediately improve results.\nWe use neural conditional language models: given a tutoring dialogue history $u_1^T$, grounding information $K$, and a teacher move $A$, we wish to generate a continuation of the dialogue $u_{T+1} \subset V^*$.\nHere $V^*$ denotes all strings that can be constructed from the model vocabulary $V$ using Kleene's closure. $K$ is a string composed of information annotated in MATHDIAL, namely the MWP, the step-by-step solution, and the student's solution attempt.\nWe study locally-normalized models of the form\n$$p_\theta(u_{T+1} \mid u_1^T, K, A) = \prod_{n=1}^{N_{T+1}} p_\theta\big([u_{T+1}]_n \mid [u_{T+1}]_1^{n-1}, u_1^T, K, A\big),$$\nwhere $\theta$ denotes the parameters of the model and $T$ is moved throughout the dialogue to evaluate each intermediate teacher turn. We either optimize these parameters by finetuning for 10 epochs or we zero-shot prompt an LLM. When finetuning, we use an initial learning rate of 6.25e-5 and linear learning rate decay without warm-up, and optimize the negative log-likelihood of the ground-truth response using the AdamW optimizer (Loshchilov and Hutter, 2019). We experiment with state-of-the-art pretrained Transformer (Vaswani et al., 2017) models and make use of the checkpoints provided by the transformers library (Wolf et al., 2020). In particular, we finetune BART (Lewis et al., 2020), Flan-T5 (Chung et al., 2022), which is based on T5 (Raffel et al., 2020) and was finetuned on the instruction-following flan collection (Longpre et al., 2023), as well as OPT (Zhang et al., 2022). Finally, we zero-shot prompt ChatGPT (Brown et al., 2020).\nData split We split our data into a training split containing 80% of the conversations and a test set containing the remaining 20%. Around 60% of the problems in the test set also appear in the training data, where at least one conversation was based on them, and therefore constitute our 'seen' split.\nThe remaining 40% are unseen during training and test the ability of the model to generalize to new problems. The dataset split is published with the dataset.\nMetrics We assess our models using the sacrebleu (Post, 2018) implementation of BLEU (sBLEU) (Papineni et al., 2002), as well as BERTScore (Zhang et al., 2020) between the generated response ($u_{T+1}$) and the annotated response ($\hat{u}_{T+1}$) for each teacher response in the conversation.
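As an illustration, the following minimal sketch shows how such utterance-level scores might be computed with the sacrebleu and bert-score Python packages; it is a sketch under these assumptions and not the evaluation code behind the reported numbers.

```python
# Minimal sketch: corpus sBLEU and mean BERTScore F1 between generated and
# annotated teacher responses (one entry per evaluated turn).
from sacrebleu.metrics import BLEU
from bert_score import score as bert_score


def response_metrics(generated, references):
    # sacrebleu expects a list of hypotheses and a list of reference lists.
    sbleu = BLEU().corpus_score(generated, [references]).score
    # bert_score returns per-pair precision/recall/F1 tensors; we report mean F1.
    _, _, f1 = bert_score(generated, references, lang="en")
    return {"sBLEU": sbleu, "BERTScore": f1.mean().item()}
```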
Furthermore, in line with previous works (Dziri et al., 2022; Daheim et al., 2023), we report BERTScore and the token-level F1 (KF1) between the generated utterance and the math word problem as a proxy for faithfulness. However, we note that an increase in these metrics can be caused by an increase in overlap, which may also indicate more telling and can be undesirable. More generally, finding good evaluation metrics for assessing the faithfulness of dialogue tutors remains an open problem. Finally, we measure the Uptake of the generated response (Demszky et al., 2021).\nWe propose two evaluation metrics for end-to-end tutoring, where a tutor model is evaluated interactively by using it to teach an LLM that simulates a student. Success@k measures the percentage of conversations where the student reaches the correct final answer at least once within the first k turns (equivalent to the % solve rate in prior work). Telling@k measures the percentage of conversations where the teacher explicitly tells the final answer before the student has reached it on their own within the first k turns.\n6 Results" }, { "figure_ref": [], "heading": "Tutor Response Generation", "publication_ref": [], "table_ref": [], "text": "Table 3 shows our main results for the task of tutor response generation on MATHDIAL. A first general observation is that automatic metrics appear low when compared to state-of-the-art models on other dialogue data. This might be explained by two main challenges that tutoring models face: a high level of ambiguity when it comes to sound teaching strategies, and complex problems that the models need to be able to correctly assess. In contrast, the data that ground responses in other dialogue tasks often need less interpretation. Scaling models in terms of their parameter size is not directly reflected in improved metrics. This indicates that just using larger models might not be enough to build meaningful tutors on MATHDIAL. Still, as shown by BERTScore and the lexical overlap between response and grounding information, smaller models appear to rely more on the grounding information and might paraphrase less, which might make teaching less engaging for students. Instruction tuning seems to have a largely positive effect in tutoring as well. This is exhibited by the improvements that Flan-T5 yields over T5.\nIn order to be used in real-world settings, dialogue tutoring models need to be able to generalize to new problems. However, we find that there is still a large gap in the performance of all finetuned models between seen and unseen problems. This indicates a clear need to build models that can generalize better. Uptake, on the other hand, is generally high and for different models even higher than the ground-truth annotations. Finally, finetuned models tend to outperform zero-shot prompted GPT in terms of automatic metrics, but their validity for evaluating such models may be questioned." }, { "figure_ref": [], "heading": "Influence of grounding information", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "MATHDIAL provides a large set of annotations that can be used to ground the responses of dialogue tutors trained on it. Table 4 shows results obtained with Flan-T5 780M when giving different information. The results show that the step-by-step solution is crucial for the model. Question and incorrect solution are not as crucial, but they are also often repeated by student or teacher throughout the dialogue.
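To illustrate how such grounding ablations can be realized in practice, the sketch below shows one plausible way to serialize the dialogue history, teacher move, and grounding fields into a single model input; the field tags and their order are assumptions, not the exact input format used to produce Table 4.

```python
# Illustrative serialization of the grounding information K, the teacher move A,
# and the dialogue history into one input string for a seq2seq tutor model.
# Dropping optional arguments mirrors the ablations over grounding fields.
def build_tutor_input(history, move, question=None, incorrect_solution=None,
                      ground_truth_solution=None):
    parts = [f"[move] {move}"]
    if question is not None:
        parts.append(f"[question] {question}")
    if incorrect_solution is not None:
        parts.append(f"[student solution] {incorrect_solution}")
    if ground_truth_solution is not None:
        parts.append(f"[solution] {ground_truth_solution}")
    parts.append("[dialogue] " + " ".join(f"{spk}: {utt}" for spk, utt in history))
    return " ".join(parts)
```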
Future work can explore this information in more detail to improve tutoring models. Finally, we conduct a human evaluation according to three criteria: 1) Coherence: how coherent the teacher's response is with respect to the preceding dialogue, 2) Correctness: whether it is in itself correct, and 3) Equitable tutoring. Equitable tutoring describes how well the model provides the student with room for exploring the problem and solution space. We use three expert annotators that each annotate n = 50 responses. We obtain agreements of κ = 0.29, κ = 0.69, and κ = 0.34 for the three categories. We find that the ground-truth data that we have collected shows high scores in all three criteria which confirms its quality. Then, we find that small fine-tuned models perform much better in terms of correctness and equitable tutoring than a prompted large language model (ChatGPT), even though the latter is pretrained on much more data and has a significantly larger parameter count. This shows the importance of high-quality data for training meaningful tutors. The automatic metrics are only partially confirmed. For instance, Flan-T5 3B is rated slightly better than Flan-T5 780M in correctness despite lower automatic scores." }, { "figure_ref": [], "heading": "Human Evaluation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3", "fig_5" ], "heading": "Interactive Evaluation of Dialogue Tutors", "publication_ref": [], "table_ref": [], "text": "Good tutoring models need to maintain high quality not only when viewed per-utterance but especially over an entire conversation. In order to assess this, we use them to tutor an InstructGPT student and measure their success (Success@k), as well as the rate of telling (Telling@k). The tutor models are used as outlined in the previous subsections and the student model uses the same settings as during data collection. We compare our Flan-T5 780M model with a simple baseline that repeatedly asks \"What is the next step?\" (NEXTSTEP), ChatGPT, and the ground-truth conversations.\nFigure 5 shows that NEXTSTEP has the lowest success rate, but never tells solutions by construction. ChatGPT, on the other hand, has a high success rate but also the highest rate of telling. This is a crucial shortcoming because high telling is counterproductive to effectively teach students. Flan-T5 780M achieves a balance between the two and shows a similar amount of telling as the ground truth.\nWe note that the gap in success rate between Flan-T5 780M and ChatGPT, at least in the initial steps, stems mostly from longer problems, as is evident from Figure 6. Overall, no model can match the success rate of the ground-truth annotations. This indicates a large room for future improvements and research." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We introduce a new framework for semi-synthetic dialogue dataset collection. We use it to collect a pedagogically rich dataset for tutoring math word problems that follow equitable tutoring practices and learning sciences research on scaffolding student understanding, called MATHDIAL . Our dataset consists of ca. 3k tutoring conversations grounded in math word problems from GSM8k. We benchmark open-source models on the task of tutor response generation and show that smaller models finetuned on our MATHDIAL can significantly surpass the performance of much larger prompted LLMs. 
Moreover, in our proposed interactive tutoring simulation, the finetuned model achieves similar student-solving success as prompted LLM while keeping the direct telling rate lower. Nevertheless, models still require better reasoning over student solutions and better generalization to unseen problems.\nOur dataset fills a crucial gap towards studying effective dialogue tutors at scale by providing a significantly larger amount of dialogues than other available corpora in one-on-one tutoring and provides a tough testbed towards better tutoring models. We hope that it can spark more research in this meaningful but understudied area of NLP." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b76" ], "table_ref": [], "text": "In this work, we used an LLM to simulate student confusion. However, we acknowledge that these models have a limited understanding of human learning and this is a key limitation in our dataset -certain kinds of student confusions may be under-or over-represented in our dataset. Future work can focus on addressing this limitation. Furthermore, in our setup, teachers were interacting with an LLM role-playing as a student. However, it is possible that some teachers might have learned to interact with the student model in a different way than they would do in the classroom. Moreover, it is also possible that some teachers may have lost motivation when found out they are not interacting with real students, leading to lower data quality. In the future, we would like to explore solutions to build better LLM-based student models (Zhou et al., 2023).\nThe methodology to collect the dataset was instantiated just for the domain of math reasoning. The collection of additional domain-specific datasets is necessary to further generalize the effectiveness of our methodology.\nInspired by previous work in scaffolding, we acknowledge our focus is on a subset of common teaching moves. However, this does not cover all the goals of human tutors, such as meta-cognitive support or building rapport with a student. Moreover, text tutoring limits teachers' use of additional instructional practices such as drawings.\nFinally, measuring a student's immediate success in solving a problem does not capture all the aspects of student learning. From a learning perspective, focusing on and measuring long-term learning is desired. Therefore, even if students struggle to answer a specific problem correctly, teachers asking scaffolding questions requiring conceptual understanding offer even better promise for deeper, wider, and more long-term learning." }, { "figure_ref": [], "heading": "A Dataset statistics", "publication_ref": [], "table_ref": [], "text": "For NCTE, uptake is calculated on the teacherstudent dialogue pairs while bigram entropy is calculated on all teacher utterances. For TalkMoves and TSCC, bigram entropy is calculated on all teacher utterances having more than three words, while uptake is calculated on teacher utterances immediately following student utterances if both have more than three words." }, { "figure_ref": [], "heading": "B Problem and confusion selection", "publication_ref": [ "b25" ], "table_ref": [], "text": "While the problems in GSM8k are simple enough to be understood quickly by teachers, they remain challenging for students, who among others have to deal with equations or percentages. We follow the GSM8k reasoning format and prompt ChatGPT (gpt-3.5-turbo) with a 2-shot prompt. 
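As a concrete illustration of the selection idea described in Section 3.3 (sample reasoning paths, group them by final answer, keep the most frequent incorrect one, and drop near-misses), a minimal sketch is given below before the exact filtering steps are detailed; function names are illustrative and the sampling of reasoning paths from gpt-3.5-turbo is assumed to happen elsewhere.

```python
# Minimal sketch of confusion selection over sampled GSM8k-style reasoning paths.
from collections import Counter


def parse_final_answer(reasoning: str):
    # The GSM8k format places the final numeric answer after "####".
    if "####" not in reasoning:
        return None  # discard generations that do not follow the format
    tail = reasoning.split("####", 1)[1]
    for token in tail.replace(",", "").split():
        try:
            return float(token)
        except ValueError:
            continue
    return None


def most_frequent_wrong_answer(reasoning_paths, gold_answer, tol=0.1):
    answers = [parse_final_answer(r) for r in reasoning_paths]
    # Keep only incorrect answers; discard near-misses within 0.1 of the solution,
    # which mostly correspond to rounding errors.
    wrong = [a for a in answers if a is not None and abs(a - gold_answer) > tol]
    if not wrong:
        return None
    return Counter(wrong).most_common(1)[0][0]
```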
Given a prompt and a math word problem, we sample $n$ reasoning paths $r_i$ from the model. We parse the first numerical answer $a_i$ after the generated "####" marker, which represents the final result. Most of the generated outputs have this format, and we discard all generations not following it. We sample $N = 50$ reasoning path candidates using the same settings as suggested by (Wang et al., 2023b). After sampling multiple reasoning paths and corresponding answers $(r_i, a_i)$, we use a majority vote over the $a_i$ that do not lead to the ground-truth answer $a$: $\arg\max_{a} \sum_{i=1}^{n} \mathbb{1}(a_i \neq a)$.\nWe select problems with at most four solution steps. Since our initial experiments show the occurrence of rounding errors, which related work finds to be more common in LLMs than in humans (Frieder et al., 2023), we limit them by discarding confusions that are within 0.1 of the original solution. Moreover, to filter out other simple calculation errors which are not interesting from a learning standpoint, we parse all the intermediate equations, which are in the format << a × b = c >>, and use a calculator to check for inconsistencies.\nThe full prompt used is:\nOf the problems in the GSM8k dataset, 5,684 problems were queried after eliminating problems with more than 5 steps in the solution. This yielded 2,313 problems with at least one wrong solution. We then eliminated student solutions having fewer than 300 characters (having too few characters makes it harder to pinpoint where exactly the error occurred) or more than 500 characters (longer solutions require annotators to spend more time understanding the error), leaving us with 1,379 wrong solutions. Finally, we eliminate problems where all 50, or 49 out of 50, proposed solutions have the same (wrong) final answer, leaving us with our final set of 1,131 problems. " }, { "figure_ref": [], "heading": "C.2 Student characteristics", "publication_ref": [ "b2" ], "table_ref": [], "text": "To build a dataset that would reflect students of various backgrounds, we use numerous student names associated with their given pronouns. The list of all student characteristics, based on prior work studying misconceptions in learning algebra (Booth et al., 2017), is:\n• has a problem with understanding what steps or procedures are required to solve a problem.\n• has a problem with understanding underlying ideas and principles and a recognition of when to apply them.\n• struggles most with understanding what the problem is asking them to do.\n• has difficulty determining which pieces of information are relevant and which are irrelevant to solving the problem.\n• struggles to put the numbers in the correct order in the equation or determine the correct operation to use.\n• struggles to recognize the problem type and therefore does not know what strategy to use to solve it." }, { "figure_ref": [], "heading": "C.3 Common error cases", "publication_ref": [ "b34" ], "table_ref": [], "text": "We manually screened some conversations and teacher feedback to understand common error cases of the student model. The most common problems among them were the occurrence of simple arithmetic errors (e.g. 7-2=9) and inconsistent student behaviour (e.g. the student returning to the incorrect answer after figuring out the correct one in the previous utterance). These errors are captured in the teacher quality Likert scale rating of student behaviour. We acknowledge further analysis is needed to better understand the fine-grained student model behavior on problems with different numbers of steps, e.g. 
by cognitive task analysis (Koedinger and McLaughlin, 2016)." }, { "figure_ref": [ "fig_8" ], "heading": "D Data collection interface", "publication_ref": [], "table_ref": [], "text": "We use Prolific for data collection and hire annotators with teaching experience. To ensure the data quality we filter only annotators with 100% completion rate with more than 500 total submissions. All the payments to the annotators exceeded the US federal minimum wage and the final batch of annotators were paid the equivalent of $12/hour. The data collection interface is shown in Figure 7. Annotators were restricted to having a maximum of five conversations in one annotation session. One conversation takes ca. 6 minutes. Data collection took place over a period of 2 months." }, { "figure_ref": [ "fig_9", "fig_8", "fig_10" ], "heading": "D.1 Annotation pipeline", "publication_ref": [], "table_ref": [], "text": "For each annotator, we randomly assign a student and math word problem. Teachers were instructed to first analyze the student homework solution and then start the conversation to scaffold student problem understanding. Post-conversation questionnaire is filled out by teachers to rate the conversation and get feedback on the type of student error.\nComparing solutions As shown in Figure 8, the teacher first analyzes and compares the correct solution with the incorrect student solution (student confusion). The teacher marks the exact line of a first student error and categorizes the problem into the following categories:\n• Reached correct solution but proceeded further\n• Extra quantity or Missing quantity\n• Unit conversion error Tutoring conversation Next, the teacher has a conversation (see Figure 7) with a student and uses scaffolding moves to help the student understand the problem. The conversation ends when the student correctly solves the problem or if the total conversation time exceeds 10 minutes.\nPost conversation questionnaire Teacher fills the post conversation questionnaire as shown in Figure 9." }, { "figure_ref": [], "heading": "D.2 Annotators training phase", "publication_ref": [], "table_ref": [], "text": "We let annotators read best practices on how to have a productive conversation with students (cf. Section D.3 and D.4) and tested them on their understanding of our task afterwards. We started the data annotation with all the annotators able to successfully pass the test. Moreover, to improve the training phase we manually checked several conversations by each annotator in terms of the quality and usage of diverse scaffolding questions. " }, { "figure_ref": [], "heading": "D.3 Annotation Guidelines", "publication_ref": [], "table_ref": [], "text": "Teachers were instructed to have a one-on-one tutoring session with different 6th-grade students. They were told that students received a math word problem for homework and submitted their solutions beforehand. In a tutoring conversation, teachers were asked to go through the student's solution and try to let the student understand using a series of sensemaking questions to support student reasoning and learning. Specifically, they were instructed to not just correct student solutions by telling what's correct/incorrect, but to give students the opportunity to explore the problem with a focus on core aspects, such as their chosen strategy. However, as the goal is to focus on conceptual errors, they were allowed to let students use calculators or correct their arithmetic mistakes. 
giving out the partial or full answer to the student and should be mostly used when a student is stuck." }, { "figure_ref": [], "heading": "D.4 Teacher moves taxonomy", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "D.5 Background for teacher moves", "publication_ref": [ "b49", "b0", "b37", "b43", "b60", "b43", "b37" ], "table_ref": [], "text": "Scaffolding (Reiser, 2004;Anghileri, 2006) assists students to succeed in tasks that would otherwise be complex and differentiates between guidance (e.g. decomposing problem, clarifying) from cognitive activation (e.g. causing cognitive conflicts, activating prior knowledge (Limón, 2001)). The effective teacher moves to scaffold students' understanding have been studied extensively by analyzing and annotating real human tutoring conversations (Nye et al., 2014;VanLehn, 2011). Experienced teachers can through natural language guide students' focus and uncover misconceptions (Nye et al., 2014). The teacher moves in the form of scaffolding to support student understanding by asking open-ended questions, activating their prior knowledge, or causing cognitive conflicts (Limón, 2001). A teacher asking scaffolding questions provides learning opportunities for students to actively construct their knowledge. However, at the same time asking only difficult questions could lead to a loss of learner motivation and potentially the end of the dialogue. On the other hand, only constantly revealing answers does not lead to long-term learning." }, { "figure_ref": [], "heading": "D.6 Postprocessing", "publication_ref": [], "table_ref": [], "text": "As we are interested in real educational use cases for our tutoring system, we apply a safety filter to filter out conversations with any sensitive content. In particular, we use the Perspective API 4 to filter out conversations containing toxic content (<1%)." }, { "figure_ref": [], "heading": "D.7 Initial pilots", "publication_ref": [ "b12" ], "table_ref": [], "text": "We initially explored two additional approaches of data collection: i) human-human conversations, and ii) synthetic generation by LLMs. The framework we used in the final data collection enables us to scalably create data since we are only reliant on one user who can quickly create entire conversations with the LLM, taking ca. 6 minutes per 7+ turn conversation. We found this more efficient and performant than both human-human conversations and synthetic data generation. Specifically, the human-to-human collection is too time-consuming (on average 15 minutes per conversation in our pilot experiments) and requires waiting times to synchronously connect participants (Choi et al., 2018), and synthetic generation has proven to be error-prone (see example in Figure 10); for example, models fail to understand student solutions and themselves make arithmetic errors that are not expected from teachers." }, { "figure_ref": [], "heading": "E Interactive evaluation of tutoring", "publication_ref": [], "table_ref": [], "text": "The student model in all 3 cases is an InstructGPT model (text-davinci-003) as defined in Section C.1, with the student name fixed to \"Kayla\". The first utterance of the teacher is hardcoded to \"Hi Kayla, could you walk me through your solution?\". For Flan-T5 780M teacher model decoding, we used sampling without a beam search. 
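A minimal sketch of such sampling-based decoding with the Hugging Face transformers library is given below; the checkpoint name and generation hyperparameters are illustrative (the 780M variant corresponds in size to the public flan-t5-large checkpoint), since the paper only specifies that sampling without beam search is used.

```python
# Illustrative sketch of sampling-based decoding (no beam search) for a Flan-T5 tutor.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")


def generate_teacher_turn(model_input: str) -> str:
    inputs = tokenizer(model_input, return_tensors="pt", truncation=True)
    output_ids = model.generate(
        **inputs,
        do_sample=True,      # sampling instead of beam search
        num_beams=1,
        max_new_tokens=64,   # assumed length limit for a single teacher turn
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```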
For the ChatGPT teacher model (gpt-3.5-turbo), the following prompt is used: A tutor and a student work together to solve the following math word problem.\\n Math problem: (MATH PROBLEM)\\n The correct solution is as follows: (CORRECT SOLUTION)\\n Your role is tutor.\nThe tutor is a soft-spoken empathetic person who dislikes giving out direct answers to students and instead likes to answer with other questions that would help the student understand the concepts so students can solve the problem themselves.\n4 https://perspectiveapi.com" }, { "figure_ref": [], "heading": "F Human Evaluation Protocol", "publication_ref": [], "table_ref": [], "text": "The following dimensions were rated by annotators:\n• Coherence -"The response naturally follows up on the previous utterance and context and has no logical conflicts with the context."\n• Correctness -"The response is factually and mathematically correct and respects the learning concepts being taught."\n• Equitable tutoring -"The response gives a learning opportunity for the student by providing space for reflection, explanation, pointing to a follow-up challenge, or engaging the student in other ways."\nWe use a 3-point Likert scale ranging from 1 (poor) to 3 (very good) for coherence and equitable tutoring, and a binary scale for correctness.\nThe ChatGPT prompt is the same as in the interactive tutoring scenario (Section E), with an additional section containing the student solution." }, { "figure_ref": [], "heading": "Context", "publication_ref": [], "table_ref": [], "text": "Conversation Q: Sam memorized six more digits of pi than Carlos memorized. Mina memorized six times as many digits of pi as Carlos memorized. If Mina memorized 24 digits of pi, how many digits did Sam memorize?" }, { "figure_ref": [], "heading": "Ground truth:", "publication_ref": [], "table_ref": [], "text": "Carlos memorized 24/6=4 digits of pi. Sam memorized 4+6=10 digits of pi. Final Answer: 10" }, { "figure_ref": [], "heading": "Student solution:", "publication_ref": [], "table_ref": [], "text": "Step 1: Carlos memorized 18 digits of pi. Step 2: Mina memorized 24 digits of pi. Step 3: Sam memorized 6 more digits of pi than Carlos, so he memorized 24 digits of pi (18 + 6 = 24). Final Answer: 24\nT: (probing) You outlined that Carlos memorized 18 digits of pi -where did you get this figure from? S: I got this figure by taking the number of digits that Mina memorized (24) and subtracting the number of digits that Carlos memorized more than Mina (6). So 24 -6 = 18.\nT: (probing) If the question says that Mina memorised six times as many digits of pi as Carlos memorised, and she memorised 24, what do you think that means for the digits of pi that Carlos memorised? S: It means that Carlos memorized 4 digits of pi (24/6 = 4).\nT: (focus) That's brilliant -really well done. So, if that means that Carlos memorised 4 digits of pi, and the question says that Sam memorised six more digits of pi than Carlos, how many digits of pi has Sam memorised? S: Sam has memorized 10 digits of pi (4 + 6 = 10).\nT: (focus) That's absolutely perfect! So, based on that, how many digits of pi did Carlos, Mina and Sam each memorise? S: Carlos memorized 4 digits of pi, Mina memorized 24 digits of pi, and Sam memorized 10 digits of pi. 
Figure 10: In our initial pilot study we observed that synthetic data generation by InstructGPT strictly followed the same structure of only asking next-step questions (highlighted in yellow) and was prone to inconsistencies in factual correctness and order of steps (highlighted in red)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "9 Acknowledgements This project was made possible by an ETH AI Center Doctoral Fellowship to Jakub Macina with further support from the Asuera Stiftung and the ETH Zurich Foundation. Nico Daheim has received funding by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE. Mrinmaya Sachan acknowledges support from the Swiss National Science Foundation (Project No. 197155), a Responsible AI grant by the Haslerstiftung; and an ETH Grant (ETH-19 21-1)." } ]
While automatic dialogue tutors hold great potential in making education personalized and more accessible, research on such systems has been hampered by a lack of sufficiently large and high-quality datasets. Collecting such datasets remains challenging, as recording tutoring sessions raises privacy concerns and crowdsourcing leads to insufficient data quality. To address this, we propose a framework to generate such dialogues by pairing human teachers with a Large Language Model (LLM) prompted to represent common student errors. We describe how we use this framework to collect MATHDIAL , a dataset of 3k one-to-one teacher-student tutoring dialogues grounded in multi-step math reasoning problems. While models like GPT-3 are good problem solvers, they fail at tutoring because they generate factually incorrect feedback or are prone to revealing solutions to students too early. To overcome this, we let teachers provide learning opportunities to students by guiding them using various scaffolding questions according to a taxonomy of teacher moves. We demonstrate MATH-DIAL and its extensive annotations can be used to finetune models to be more effective tutors (and not just solvers). We confirm this by automatic and human evaluation, notably in an interactive setting that measures the trade-off between student solving success and telling solutions.
MATHDIAL: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems
[ { "figure_caption": "Figure 3 :3Figure 3: Teacher judgments on the ability of Instruct-GPT to simulate students. Teachers rate the simulated behaviour as largely plausible. Lighter regions on top account for questions where the confusion was not resolved.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Overall distribution of teacher moves (left) and their distribution at each dialogue step (right). Teachers tend to start with Focus and Probing and then increasingly use Telling as the conversation progresses.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Performance of our tutor model and 3 baselines on interactive tutoring of the student model. We find the model trained on MATHDIAL to have a similar success@5 rate with less telling.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Performance of our tutor model and ChatGPT on interactive tutoring of the student model on problems with solutions of different lengths (n is the number of steps in the ground truth solution). The performance of all models drops for problems with more than 2 step solutions.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Q:Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May? A: Natalia sold 48/2 = «48/2=24»24 clips in May. Natalia sold 48+24 = «48+24=72»72 clips altogether in April and May. #### 72 Q: Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting. How much did she earn? A: Weng earns 12/60 = «12/60=0.2»0.2 per minute. Working 50 minutes, she earned 0.2 x 50 = «0.2*50=10»10. #### 10", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "We use InstructGPT (text-davinci-003) with the following prompt using temperature sampling with T = 0.4 and no top-k truncation: Student Persona: (STUDENT PERSONA)\\n\\n Math problem: (MATH PROBLEM)\\n\\n Student solution: (STUDENT SOLUTION)\\n\\n Context: (STUDENT NAME) thinks their answer is correct.Only when the teacher provides several good reasoning questions, (STUDENT NAME) understands the problem and corrects the solution. (STUDENT NAME) can use a calculator and thus makes no calculation errors. Send EOM tag at the end of the student message.\\n\\n (DIALOGUE HISTORY)", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Web interface of the tool for collecting dialogue tutoring conversations. The left panel shows math word problem, correct solution, and student solution.The right panel contains conversation history, a panel for selecting the category of response, and a text area to send a response to the student. 
After clicking Send, the student model is immediately invoked using an internal API call.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Teacher first compares student solution with the correct solution and marks the exact step of the error.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Post questionnaire.", "figure_data": "", "figure_id": "fig_10", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "T: (focus) Well done Luca! You got it right! Table 6: Examples of MATHDIAL conversations. T refers to a teacher utterance, S refers to a student utterance. Each conversation is grounded in the correct solution and student solution. Bold text is information for the reader indicating error categories.", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "and simultaneously shown", "figure_data": "DatasetDomainDialogues Dialogic SettingsGroundingTeacher Bigram UptakeAvg. wordsPairsInformationMoves Entropyper utteranceMATHDIAL (ours)Math2 86114 197 1:1 semi-synthetic confusion, answers43.540.8317.3CIMA", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of dialogue tutoring datasets. MATHDIAL has grounding annotations, and is significantly larger while keeping high diversity and utterance lengths.", "figure_data": "Math1 6602 348 classroom✗✗3.570.7629.2", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation on the influence of grounding information, which shows that the ground-truth solution gives the model the most valuable information.", "figure_data": "sBLEU (↑) BERTScore(↑)(uT +1, ûT +1)Flan-T5780M8.053.0+ question8.653.2+ incorrect solution8.353.5+ ground-truth9.555.0+ all9.755.0", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Human evaluation shows that finetuning models on MATHDIAL increases their performance in terms of correctness and equitable tutoring.", "figure_data": "ModelCoherence (↑) Correctness (↑) Equitable (↑)3-point0/13-pointFlan-T5 780M2.850.892.19Flan-T5 3B2.840.912.18OPT 1.3B2.610.721.95ChatGPT2.890.431.43Ground-truth2.940.982.42", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "", "figure_data": "refers to the details of teacher moves usedduring annotation. In summary, Focus comprisesof all conversation elements that direct the studenttowards the solution without actually giving out anyof the solution, while Probing attempts to developreasoning skills and world knowledge relevant tothe problem, but not necessarily specific to thegiven problem. Telling is giving out parts of thesolution, either calculations or strategy or both. Allother conversational elements, including trying tounderstand what the student has already tried, fallunder Generic.Most importantly, scaffolding questions that areproductive for long-term learning are Focus andProbing. On the other hand, Telling represents", "figure_id": "tab_9", "figure_label": "2", "figure_type": "table" } ]
Jakub Macina; Nico Daheim; Sankalan Pal Chowdhury; Tanmay Sinha; Manu Kapur; Iryna Gurevych; Mrinmaya Sachan
[ { "authors": "Julia Anghileri", "journal": "Journal of Mathematics Teacher Education", "ref_id": "b0", "title": "Scaffolding practices that enhance mathematics", "year": "2006" }, { "authors": "Marcel Binz; Eric Schulz", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b1", "title": "Using cognitive psychology to understand gpt-3", "year": "2023" }, { "authors": "Julie L Booth; Kelly M Mcginn; Christina Barbieri; Laura K Young", "journal": "", "ref_id": "b2", "title": "Misconceptions and learning algebra. And the rest is just algebra", "year": "2017" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b4", "title": "", "year": "" }, { "authors": "Paweł Budzianowski; Ivan Vulić", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Hello, it's GPT-2 -how can I help you? towards the use of pretrained language models for task-oriented dialogue systems", "year": "2019" }, { "authors": "Paweł Budzianowski; Tsung-Hsien Wen; Bo-Hsiang Tseng; Iñigo Casanueva; Stefan Ultes; Milica Osman Ramadan; Gašić", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "MultiWOZ -a largescale multi-domain Wizard-of-Oz dataset for taskoriented dialogue modelling", "year": "2018" }, { "authors": "Bill Byrne; Karthik Krishnamoorthi; Chinnadhurai Sankar; Arvind Neelakantan; Ben Goodrich; Daniel Duckworth; Semih Yavuz; Amit Dubey; Kyu-Young Kim; Andy Cedilnik", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Taskmaster-1: Toward a realistic and diverse dialog dataset", "year": "2019" }, { "authors": "William Cai; Josh Grossman; Jerry Zhiyuan; Hao Lin; Johnny Tian-Zheng Sheng; Joseph Jay Wei; Sharad Williams; Goel", "journal": "Machine Learning", "ref_id": "b8", "title": "Bandit algorithms to personalize educational chatbots", "year": "2021" }, { "authors": "Andrew Caines; Helen Yannakoudakis; Helena Edmondson; Helen Allen; Pascual Pérez-Paredes; Bill Byrne; Paula Buttery", "journal": "", "ref_id": "b9", "title": "The teacher-student chatroom corpus", "year": "2020" }, { "authors": "Inigo Casanueva; Ivan Vulić; Georgios Spithourakis; Paweł Budzianowski", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "NLU++: A multilabel, slot-rich, generalisable dataset for natural language understanding in task-oriented dialogue", "year": "2022" }, { "authors": "Maximillian Chen; Alexandros Papangelis; Chenyang Tao; Seokhwan Kim; Andy Rosenbaum; Zhou Liu; Dilek Yu; Hakkani-Tur", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "PLACES: Prompting language models for social conversation synthesis", "year": "2023" }, { "authors": "Eunsol Choi; He He; Mohit Iyyer; Mark Yatskar; Wentau Yih; Yejin Choi; Percy Liang; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "QuAC: Question answering in context", "year": "2018" 
}, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Yunxuan Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Alex Chowdhery; Marie Castro-Ros; Kevin Pellat; Dasha Robinson; Sharan Valter; Gaurav Narang; Adams Mishra; Vincent Yu; Yanping Zhao; Andrew Huang; Hongkun Dai; Slav Yu; Ed H Petrov; Jeff Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b13", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Karl Cobbe; Vineet Kosaraju; Mohammad Bavarian; Mark Chen; Heewoo Jun; Lukasz Kaiser; Matthias Plappert; Jerry Tworek; Jacob Hilton; Reiichiro Nakano; Christopher Hesse; John Schulman", "journal": "", "ref_id": "b14", "title": "Training verifiers to solve math word problems", "year": "2021" }, { "authors": "Nico Daheim; Nouha Dziri; Mrinmaya Sachan; Iryna Gurevych; Edoardo M Ponti", "journal": "", "ref_id": "b15", "title": "Elastic weight removal for faithful and abstractive dialogue generation", "year": "2023" }, { "authors": "Zhuyun Dai; Arun Tejasvi Chaganty; Y Vincent; Aida Zhao; Qazi Mamunur Amini; Mike Rashid; Kelvin Green; Guu", "journal": "", "ref_id": "b16", "title": "Dialog inpainting: Turning documents into dialogs", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b17", "title": "", "year": "" }, { "authors": "Ishita Dasgupta; Andrew K Lampinen; C Y Stephanie; Antonia Chan; Dharshan Creswell; James L Kumaran; Felix Mcclelland; Hill", "journal": "", "ref_id": "b18", "title": "Language models show human-like content effects on reasoning", "year": "2022" }, { "authors": "Dorottya Demszky; Heather Hill", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "The NCTE transcripts: A dataset of elementary math classroom transcripts", "year": "2023" }, { "authors": "Dorottya Demszky; Jing Liu; Zid Mancenido; Julie Cohen; Heather Hill; Dan Jurafsky; Tatsunori Hashimoto", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Measuring conversational uptake: A case study on student-teacher interactions", "year": "2021" }, { "authors": "Emily Dinan; Stephen Roller; Kurt Shuster; Angela Fan; Michael Auli; Jason Weston", "journal": "", "ref_id": "b21", "title": "Wizard of wikipedia: Knowledge-powered conversational agents", "year": "2019" }, { "authors": "Nouha Dziri; Ehsan Kamalloo; Kory Mathewson; Osmar Zaiane", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Augmenting neural response generation with context-aware topical attention", "year": "2019" }, { "authors": "Nouha Dziri; Ehsan Kamalloo; Sivan Milton; Osmar Zaiane; Mo Yu; Edoardo M Ponti; Siva Reddy", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b23", "title": "Faithdial: A faithful benchmark for informationseeking dialogue", "year": "2022" }, { "authors": "Scott Freeman; Sarah L Eddy; Miles Mcdonough; Michelle K Smith; Nnadozie Okoroafor; Hannah Jordt; Mary Pat Wenderoth", "journal": "Proceedings of the national academy of sciences", "ref_id": "b24", "title": "Active learning increases student performance in science, engineering, and mathematics", "year": "2014" }, { "authors": "Simon Frieder; Luca Pinchetti; Ryan-Rhys Griffiths; Tommaso Salvatori; Thomas Lukasiewicz; Philipp Christian Petersen; Alexis Chevalier; Julius Berner", "journal": "", "ref_id": "b25", "title": 
"Mathematical capabilities of chatgpt", "year": "2023" }, { "authors": "Milica Gašic; Dongho Kim; Pirros Tsiakoulis; Catherine Breslin; Matthew Henderson; Martin Szummer; Blaise Thomson; Steve Young", "journal": "", "ref_id": "b26", "title": "Incremental on-line adaptation of pomdp-based dialogue managers to extended domains", "year": "2014" }, { "authors": "Matthew Henderson; Blaise Thomson; Jason D Williams", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "The second dialog state tracking challenge", "year": "2014" }, { "authors": "Christine Howe; Sara Hennessy; Neil Mercer; Maria Vrikki; Lisa Wheatley", "journal": "Journal of the learning sciences", "ref_id": "b28", "title": "Teacher-student dialogue during classroom teaching: Does it really impact on student outcomes", "year": "2019" }, { "authors": "Hyangeun Ji; Insook Han; Yujung Ko", "journal": "Journal of Research on Technology in Education", "ref_id": "b29", "title": "A systematic review of conversational ai in language education: focusing on the collaboration with human teachers", "year": "2023" }, { "authors": "John F Kelley", "journal": "ACM Transactions on Information Systems (TOIS)", "ref_id": "b30", "title": "An iterative design methodology for user-friendly natural language office information applications", "year": "1984" }, { "authors": "Sean Kelly; Robert Bringe; Esteban Aucejo; Jane Cooley Fruehwirth", "journal": "Education Policy Analysis Archives", "ref_id": "b31", "title": "Using global observation protocols to inform research on teaching effectiveness and school improvement: Strengths and emerging limitations", "year": "2020" }, { "authors": "Hyunwoo Kim; Jack Hessel; Liwei Jiang; Ximing Lu; Youngjae Yu; Pei Zhou; Ronan Le Bras; Malihe Alikhani; Gunhee Kim; Maarten Sap", "journal": "", "ref_id": "b32", "title": "Soda: Million-scale dialogue distillation with social commonsense contextualization", "year": "2022" }, { "authors": "Seokhwan Kim; Mihail Eric; Karthik Gopalakrishnan; Behnam Hedayatnia; Yang Liu; Dilek Hakkani-Tur", "journal": "", "ref_id": "b33", "title": "Beyond domain APIs: Task-oriented conversational modeling with unstructured knowledge access", "year": "2020" }, { "authors": "R Kenneth; Elizabeth A Koedinger; Mclaughlin", "journal": "International Educational Data Mining Society", "ref_id": "b34", "title": "Closing the loop with quantitative cognitive task analysis", "year": "2016" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Yanran Li; Hui Su; Xiaoyu Shen; Wenjie Li; Ziqiang Cao; Shuzi Niu", "journal": "Asian Federation of Natural Language Processing", "ref_id": "b36", "title": "DailyDialog: A manually labelled multi-turn dialogue dataset", "year": "2017" }, { "authors": "Margarita Limón", "journal": "Learning and instruction", "ref_id": "b37", "title": "On the cognitive conflict as an instructional strategy for conceptual change: A critical appraisal", "year": "2001" }, { "authors": "Shayne Longpre; Le Hou; Tu Vu; Albert Webson; Hyung Won Chung; Yi Tay; Denny Zhou; Quoc V Le; Barret Zoph; Jason Wei; Adam Roberts", "journal": "", "ref_id": "b38", "title": "The flan collection: Designing data and methods for effective instruction tuning", 
"year": "2023" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b39", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "Jakub Macina; Nico Daheim; Lingzhi Wang; Tanmay Sinha; Manu Kapur; Iryna Gurevych; Mrinmaya Sachan", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Opportunities and challenges in neural dialog tutoring", "year": "2023" }, { "authors": "Olga Majewska; Evgeniia Razumovskaia; M Edoardo; Ivan Ponti; Anna Vulić; Korhonen", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b41", "title": "Crosslingual dialogue dataset creation via outline-based generation", "year": "2023" }, { "authors": "Julia M Markel; Steven G Opferman; James A Landay; Chris Piech", "journal": "Association for Computing Machinery", "ref_id": "b42", "title": "Gpteach: Interactive ta training with gpt-based students", "year": "2023" }, { "authors": "Arthur C Benjamin D Nye; Xiangen Graesser; Hu", "journal": "International Journal of Artificial Intelligence in Education", "ref_id": "b43", "title": "Autotutor and family: A review of 17 years of natural language tutoring", "year": "2014" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b44", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "A call for clarity in reporting BLEU scores", "year": "2018" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b47", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Abhinav Rastogi; Xiaoxue Zang; Srinivas Sunkara; Raghav Gupta; Pranav Khaitan", "journal": "", "ref_id": "b48", "title": "Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset", "year": "2020" }, { "authors": "Brian J Reiser", "journal": "Journal of the Learning Sciences", "ref_id": "b49", "title": "Scaffolding complex learning: The mechanisms of structuring and problematizing student work", "year": "2004" }, { "authors": "D Rod; Michelene Th Roscoe; Chi", "journal": "Instructional science", "ref_id": "b50", "title": "Tutor learning: The role of explaining and responding to questions", "year": "2008" }, { "authors": "Liwei Sherry Ruan; Justin Jiang; Bryce Xu; Joe-Kun; Zhengneng Tham; Yeshuang Qiu; Elizabeth L Zhu; Emma Murnane; James A Brunskill; Landay", "journal": "Association for Computing Machinery", "ref_id": "b51", "title": "Quizbot: A dialogue-based adaptive learning system for factual knowledge", "year": "2019" }, { "authors": "Pararth Shah; Dilek Hakkani-Tür; Gokhan Tür; Abhinav Rastogi; Ankur Bapna; Neha Nayak; Larry Heck", "journal": "", "ref_id": "b52", "title": "Building a conversational agent overnight with dialogue self-play", "year": "2018" }, { "authors": "Tasmia Shahriar; Noboru Matsuda", 
"journal": "Springer-Verlag", "ref_id": "b53", "title": "Can you clarify what you said?: Studying the impact of tutee agents' follow-up questions on tutors' learning", "year": "2021-06-14" }, { "authors": "Jakub Kumar Shridhar; Mennatallah Macina; Tanmay El-Assady; Manu Sinha; Mrinmaya Kapur; Sachan", "journal": "Association for Computational Linguistics", "ref_id": "b54", "title": "Automatic generation of socratic subquestions for teaching math word problems", "year": "2022" }, { "authors": "Katherine Stasaski; Kimberly Kao; Marti A Hearst", "journal": "Association for Computational Linguistics", "ref_id": "b55", "title": "CIMA: A large open access dialogue dataset for tutoring", "year": "2020" }, { "authors": "Abhijit Suresh; Jennifer Jacobs; Margaret Perkoff; James H Martin; Tamara Sumner", "journal": "Association for Computational Linguistics", "ref_id": "b56", "title": "Finetuning transformers with additional context to classify discursive moves in mathematics classrooms", "year": "2022" }, { "authors": "Anaïs Tack; Chris Piech", "journal": "International Educational Data Mining Society", "ref_id": "b57", "title": "The AI teacher test: Measuring the pedagogical ability of blender and GPT-3 in educational dialogues", "year": "2022" }, { "authors": "Kimberly D Tanner", "journal": "CBE-Life Sciences Education", "ref_id": "b58", "title": "Structure matters: Twentyone teaching strategies to promote student engagement and cultivate classroom equity", "year": "2013" }, { "authors": "Romal Thoppilan; Daniel De Freitas; Jamie Hall; Noam Shazeer; Apoorv Kulshreshtha; Heng-Tze; Alicia Cheng; Taylor Jin; Leslie Bos; Yu Baker; Yaguang Du; Hongrae Li; Huaixiu Lee; Amin Steven Zheng; Marcelo Ghafouri; Yanping Menegali; Maxim Huang; Dmitry Krikun; James Lepikhin; Dehao Qin; Yuanzhong Chen; Zhifeng Xu; Adam Chen; Maarten Roberts; Vincent Bosma; Yanqi Zhao; Chung-Ching Zhou; Igor Chang; Will Krivokon; Marc Rusch; Pranesh Pickett; Laichee Srinivasan; Kathleen Man; Meredith Ringel Meier-Hellstern; Tulsee Morris; Renelito Delos Doshi; Toju Santos; Johnny Duke; Ben Soraker; Vinodkumar Zevenbergen; Mark Prabhakaran; Ben Diaz; Kristen Hutchinson; Alejandra Olson; Erin Molina; Josh Hoffman-John; Lora Lee; Ravi Aroyo; Alena Rajakumar; Matthew Butryna; Viktoriya Lamm; Joe Kuzmina; Aaron Fenton; Rachel Cohen; Ray Bernstein; Blaise Kurzweil; Claire Aguera-Arcas; Marian Cui; Ed Croak; Quoc Chi; Le", "journal": "", "ref_id": "b59", "title": "Lamda: Language models for dialog applications", "year": "2022" }, { "authors": "Kurt Vanlehn", "journal": "Educational Psychologist", "ref_id": "b60", "title": "The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems", "year": "2011" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b61", "title": "Attention is all you need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b62", "title": "", "year": "" }, { "authors": "Lingzhi Wang; Mrinmaya Sachan; Xingshan Zeng; Kam-Fai Wong", "journal": "Association for Computational Linguistics", "ref_id": "b63", "title": "Strategize before teaching: A conversational tutoring system with pedagogy self-distillation", "year": "2023" }, { "authors": "Xuezhi Wang; Jason Wei; Dale Schuurmans; V Quoc; Ed H Le; Sharan Chi; Aakanksha Narang; Denny Chowdhery; Zhou", "journal": "", "ref_id": "b64", 
"title": "Self-consistency improves chain of thought reasoning in language models", "year": "2023" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Fei Xia; Ed Chi; Denny Quoc V Le; Zhou", "journal": "", "ref_id": "b65", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b66", "title": "", "year": "" }, { "authors": "Tsung-Hsien Wen; David Vandyke; Nikola Mrkšić; Milica Gašić; Lina M Rojas-Barahona; Pei-Hao Su; Stefan Ultes; Steve Young", "journal": "Association for Computational Linguistics", "ref_id": "b67", "title": "A network-based end-to-end trainable task-oriented dialogue system", "year": "2017" }, { "authors": "Rainer Winkler; Sebastian Hobert; Antti Salovaara; Matthias Söllner; Jan Marco Leimeister", "journal": "Association for Computing Machinery", "ref_id": "b68", "title": "Sara, the lecturer: Improving learning in online education with a scaffolding-based conversational agent", "year": "2020" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Remi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b69", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Sebastian Wollny; Jan Schneider; Daniele Di Mitri; Joshua Weidlich; Marc Rittberger; Hendrik Drachsler", "journal": "Frontiers in Artificial Intelligence", "ref_id": "b70", "title": "Are we there yet? -a systematic literature review on chatbots in education", "year": "2021" }, { "authors": "Jing Xu; Da Ju; Joshua Lane; Mojtaba Komeili; Eric Michael Smith; Megan Ung; Morteza Behrooz; William Ngan; Rashel Moritz; Sainbayar Sukhbaatar; Y-Lan Boureau; Jason Weston; Kurt Shuster", "journal": "", "ref_id": "b71", "title": "Improving open language models by learning from organic interactions", "year": "2023" }, { "authors": "Lining Zhang; Simon Mille; Yufang Hou; Daniel Deutsch; Elizabeth Clark; Yixin Liu; Saad Mahamood; Sebastian Gehrmann; Miruna Clinciu; Khyathi Raghavi Chandu; João Sedoc", "journal": "Association for Computational Linguistics", "ref_id": "b72", "title": "A needle in a haystack: An analysis of high-agreement workers on MTurk for summarization", "year": "2023" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin; Todor Mihaylov; Myle Ott; Sam Shleifer; Kurt Shuster; Daniel Simig; Punit Singh Koura; Anjali Sridhar; Tianlu Wang; Luke Zettlemoyer", "journal": "", "ref_id": "b73", "title": "Opt: Open pretrained transformer language models", "year": "2022" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b74", "title": "Bertscore: Evaluating text generation with bert", "year": "2020" }, { "authors": "Yizhe Zhang; Michel Galley; Jianfeng Gao; Zhe Gan; Xiujun Li; Chris Brockett; Bill Dolan", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b75", "title": "Generating informative and diverse conversational responses via adversarial information maximization", "year": "2018" }, { "authors": "Wangchunshu Zhou; Yuchen ; Eleanor Jiang; Long Li; Jialong Wu; 
Tiannan Wang; Shi Qiu; Jintian Zhang; Jing Chen; Ruipu Wu; Shuai Wang", "journal": "", "ref_id": "b76", "title": "Agents: An open-source framework for autonomous language agents", "year": "2023" }, { "authors": "Qi Zhu; Kaili Huang; Zheng Zhang; Xiaoyan Zhu; Minlie Huang", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b77", "title": "Crosswoz: A large-scale chinese cross-domain task-oriented dialogue dataset", "year": "2020" }, { "authors": "Caleb Ziems; William Held; Omar Shaikh; Jiaao Chen; Zhehao Zhang; Diyi Yang", "journal": "", "ref_id": "b78", "title": "Can large language models transform computational social science?", "year": "2023" } ]
[ { "formula_coordinates": [ 7, 75.42, 72.55, 444.44, 27.05 ], "formula_id": "formula_0", "formula_text": "MATHDIAL MATHDIAL seen MATHDIAL unseen sBLEU (↑) BERTScore (↑) KF1 (↑) BERTScore (↑) Uptake (↑) sBLEU (↑) KF1 (↑) sBLEU (↑) KF1 (↑) Model (u T +1 , ûT +1 ) (u T +1 , MWP) (u T , u" }, { "formula_coordinates": [ 7, 77.66, 381.72, 204.67, 53.87 ], "formula_id": "formula_1", "formula_text": "p θ (u T +1 | u T 1 , K, A) = N T +1 n=1 p θ ([u T +1 ] n | [u T +1 ] n-1 1 , u T 1 , K, A)," } ]
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b17", "b3", "b19", "b28", "b20", "b8", "b12", "b38", "b25", "b11", "b26", "b1", "b42", "b38" ], "table_ref": [], "text": "With recent progress in generation capabilities of LLMs, automatic summarization is making its appearance in practical information consumption situations such as summarizing work meetings (Arabzadeh et al., 2022), health records (Jain et al., 2022), or scientific documents (Cachola et al., 2020). To ensure the safe and effective implementation of these applications, it is essential to limit the reach of factually inconsistent summaries, a known issue with generated summaries (Kryściński et al., 2019;Maynez et al., 2020).\nPrior work (Kryściński et al., 2020;Fabbri et al., 2021;Gao and Wan, 2022) has annotated corpora of model summaries with labels of factual consistency, finding that most abstractive summarization systems produce a non-negligible portion of inconsistent summaries. In turn, such corpora are used to instantiate tasks such as inconsistency detection (ID) (Laban et al., 2022a;Tang et al., 2022), in which models are given (document, summary) pairs, and must identify whether the summary is consistent with the document. Recent investigations of using LLMs for evaluation have shown promising results across different NLP tasks (Liu et al., 2023;Fu et al., 2023), including factual consistency (Luo et al., 2023). In this work we continue this line of research and explore applying LLMs as factuality evaluators in the context of text summarization. We first establish baseline performance for a suite of LLMs on three existing consistency benchmarks. Accuracy results confirm that some LLMs perform competitively with state-of-the-art specialized methods such as QAFactEval (Fabbri et al., 2022). However, a manual analysis of free-text explanations that LLMs generate reveals two key limitations of the accuracy-only analysis. Ideally, a model correctly predicting the consistency label of a sum-mary should be capable of generating an explanation for its binary prediction. Yet, we find that most LLMs generate explanations that do not accurately pinpoint factual inaccuracies, with only three models -GPT4 (OpenAI, 2023), Claude V1.3 (Bai et al., 2022), and Bard (Thoppilan et al., 2022) generating correct explanations for more than 50% of cases we annotated. Second, the manual analysis in the AggreFact consistency benchmark (Tang et al., 2022) of conflict cases -in which GPT4 predictions disagree with the dataset label -reveals a significant number of mislabeled samples (7+%) of factual inconsistencies undetected by annotators during dataset creation that the model explanation reveals. This lack of quality of benchmarks limits the precise evaluation of model performance at factual inconsistency detection.\nTo address this issue, we introduce a protocol designed to create challenging benchmarks while ensuring the reproducibility of the labels. The protocol involves manually verifying the consistency of a small set of seed summaries and subsequently generating numerous edited versions of these summaries. 
We discover that assessing the consistency of edited summaries is relatively straightforward and easy to scale for human annotators, thus guaranteeing low cost and high agreement among annotators, yet keeping the task challenging for models.\nWe create the SUMMEDITS benchmark by implementing the protocol in ten diverse textual domains, including the legal, dialogue, academic, financial, and sales domains. Figure 1 summarizes experimental results on the benchmark, which indicate that SUMMEDITS presents a challenge for both specialized models and current LLMs, with only four models -GPT3-Davinci003, ChatGPT, PaLM2-Bison, and GPT4 -outperforming the specialized model QAFactEval. Our estimate of human performance of 90%+ is largely above all model performance, suggesting most current LLMs are not yet proficient at complex factual reasoning, and still cannot assess the factual validity of summaries with precision.\nWe believe SUMMEDITS can serve as a tool for evaluating LLMs' abilities to detect when factual inconsistencies (inevitably) occur and encourage LLM developers to report their performance on the benchmark. For practitioners requiring specific domain expertise, the protocol can be adapted to generate low-cost, in-domain benchmarks that can probe for model capabilities prior to production use.\nWe release the code, protocol steps, and benchmark data publicly1 ." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b19", "b28", "b20", "b8", "b16", "b34", "b4", "b12", "b39", "b10", "b8", "b44", "b36", "b14" ], "table_ref": [], "text": "Annotating Factuality of Summaries. With advances in language models and the increase in fluency and abstractiveness of summarizers, prior work showed that one of the key challenges in summarization was enforcing factual consistency (Kryściński et al., 2019), particularly with models trained on datasets with unfaithful references (Maynez et al., 2020). Several efforts -such as FactCC (Kryściński et al., 2020), SummEval (Fabbri et al., 2021), Polytope (Huang et al., 2020), FRANK (Pagnoni et al., 2021), and CLIFF (Cao and Wang, 2021) -annotated the generated summaries of tens of model, finding that most models produce a non-negligible portion of inconsistent summaries. Although most annotation effort has focused on the summarization of news, some prior work also looked at dialogue summarization (Gao and Wan, 2022), or the medical domain (Tang et al., 2023). In most work, scalable high-quality annotation is challenging, due to low inter-annotator agreement when relying on crowd-workers, with some work showing that 10+ annotators are required to achieve some level of consensus (Falke et al., 2019), and some work recommending solely relying on experts (Fabbri et al., 2021). At the heart of the issue, annotating the factual consistency of a summary is challenging: it requires careful reading of long documents and the detection and interpretation of nuanced facts. In this work, we propose a new protocol to annotate factual consistency resources and show that it lowers the cost and increases reproducibility by minimizing the amount of reasoning required for each annotation.\nDetecting Factual Errors. Some work has taken an automated approach to the detection of inconsistencies, with approaches falling into two main categories: question and entailment-based. In questionbased approaches, questions are generated with the expectation that paired documents and summaries should provide consistent answers. 
QAFactEval (Fabbri et al., 2022) unified prior work (Wang et al., 2020;Scialom et al., 2021) by systematically evaluating each element of the pipeline and proposing a best-performing combination. Entailmentbased methods either rely on entailment on depen-dency parses, such as with the DAE method (Goyal and Durrett, 2020), or directly leverage naturallanguage entailment models, such as SummaC (Laban et al., 2022a). We include these three representative models in our experiments and find that although they require several orders of magnitudes fewer parameters than LLMs, they can reach similar performances on challenging benchmarks." }, { "figure_ref": [], "heading": "LLM Aptitude In Controlled Setting", "publication_ref": [ "b20", "b32" ], "table_ref": [], "text": "In this section, we present the initial set of experiments that were conducted on the FactCC benchmark (Kryściński et al., 2020). FactCC was created based on the XSum news summarization dataset (Narayan et al., 2018) and consists of news article-summary sentence pairs manually labeled based on their factuality. While simple in nature, the benchmark can serve as a test bed for exploring the basic understanding LLMs have of the task at hand. Furthermore, the data points come with manually annotated error types, allowing for experiments in fine-grained error detection.\nIn the following subsections, we define the experimental setup, i.e. prompts, models, and data, and present the experiment results along with a discussion." }, { "figure_ref": [], "heading": "Prompt Selection", "publication_ref": [ "b35", "b2", "b45", "b24", "b46" ], "table_ref": [], "text": "As part of this initial study, we explore a wide range of prompts that have been shown to unlock some of the emergent abilities of LLMs. These prompts can be organized into four groups as follows:\nZero-Shot Prompts (Radford et al., 2019) Evaluate the zero-shot transfer abilities of models. These prompts are limited to a short task description and the input data based on which the models generate their output. In our study, we included three different zero-shot prompts offering varying levels of detail in the task description. The bestperforming prompt was selected by a majority vote across models and used as the base for the prompts described in the following paragraphs.\nFew-Shot Prompts (Brown et al., 2020) Enable the in-context learning abilities of LLMs. These prompts include a task description and one or more demonstrations of the task. The provided demonstrations condition the model for the actual input data that the model is expected to process. In this study we experiment with one-, two-, and threeshot prompts which build upon each other.\nChain-of-Thought Prompts (Wei et al., 2022) Explore LLM models' ability to generate step-bystep reasoning for answers and have been shown to improve performance on complex reasoning tasks. The models are given a task description and input data and are asked to generate a series of intermediate reasoning steps necessary to solve the task alongside the answer. We explore chain-of-thought prompts both in zero-and few-shot settings.\nGenerate-with-Evidence Prompts (Lampinen et al., 2022) Explore the models' ability to present evidence for the generated answers and has also been shown to improve performance on reasoning-intense tasks. Similar to chain-ofthought prompts, the models are given a task description and input data and are asked to answer the task, and then generate evidence for the chosen answer. 
In this work we explore generate-withevidence prompts in zero-and few-shot settings.\nPersona-based Prompts (White et al., 2023) Extract certain points of view from LLMs or focus them on a set of abilities. Shown to work best with chat-tuned LLMs, models are assigned a role, or \"persona\", and next prompted to complete a given task. The assigned personas condition the models on the task at hand. In this work, models were assigned the persona of a \"journalist\" who is factchecking a text before publication. Three prompts were tested, where the persona-based prompt was used in zero-and few-shot settings, and in combination with a chain-of-thought prompt.\nAll prompt templates described in this section and used in the study are presented in the associated code repository." }, { "figure_ref": [], "heading": "Model Selection", "publication_ref": [ "b14", "b43", "b40", "b6" ], "table_ref": [], "text": "Similar to the prompt selection, we begin with evaluating a wide range of methods that can be applied to the task of factual consistency evaluation. Selected models span different architectures and training procedures, and can be categorized into the following groups:\nNon-LLM Models that were designed and trained specifically for the task of factual consistency evaluation in text summarization. Those models include two NLI-based approaches, DAE (Goyal and Durrett, 2020) and SummaC (Laban et al., 2022a), and a QA-based method QAFactEval (Fabbri et al., 2022). In this work, we treat the Non-LLM models as baselines and points of comparison with LLM-based factuality evaluators.\nFoundation Models Large-scale models that have been pre-trained on web-scale corpora, but have not been fine-tuned on task-specific or instruction-following datasets. Such models have shown emergent abilities, such as zero-and fewshot in-context learning. Models in this group include Meta's LLaMa-13b (Touvron et al., 2023), and OpenAI's Ada001, Babbage001, Curie001, and DaVinci-001.\nInstruction-tuned LLMs Foundation models which were further tuned on instruction-following data either through supervised learning or RLbased methods. Such models show enhanced capabilities of following natural language instructions, including zero-and few-shot prompts as well as chain-of-thought approaches. Models in this group include Databrick's Dolly, Stanford's Alpaca (Taori et al., 2023), Anthropic's Claude V1.3, Cohere's Command-XL, Google's PaLM2-bison, and Ope-nAI's DaVinci-002, and DaVinci-003 models.\nChat-based LLMs Foundation models tuned on conversational and instruction-following datasets. The fine-tuning procedure aims to enhance the model capabilities of engaging in multi-turn dialog with end-users while being able to solve a wide range of complex tasks. This group includes Google's Bard, Mosaic's MPT-7b-chat (Team, 2023), Vicuna-13b (Chiang et al., 2023), and Ope-nAI's GPT3.5-turbo (ChatGPT), and GPT-4.\nFor each model, model cards and method of access are provided in Appendix A, model architecture and training details are described in the associated literature." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "Experiments described in the following subsections were conducted on the synthetic part of the FactCC dataset. We select a total of 150 samples to conduct experiments, by including 25 examples for each of the 5 error types in the dataset, i.e. date-, entity-, negation-, number-, and pronoun-related errors, and 25 factually correct samples. 
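A balanced candidate pool of this kind can be drawn programmatically before the manual vetting described next; the sketch below assumes a FactCC-style JSONL file with hypothetical field names (label, error_type) rather than the dataset's exact schema:

import json
import random

ERROR_TYPES = ["date", "entity", "negation", "number", "pronoun"]

def draw_candidate_pool(path, per_bucket=25, seed=0):
    # Stratified draw: 25 consistent samples plus 25 candidates per error type,
    # which are then manually vetted as described below.
    rng = random.Random(seed)
    records = [json.loads(line) for line in open(path, encoding="utf-8")]
    pool = rng.sample([r for r in records if r["label"] == "CORRECT"], per_bucket)
    for error_type in ERROR_TYPES:
        bucket = [r for r in records
                  if r["label"] == "INCORRECT" and r["error_type"] == error_type]
        pool += rng.sample(bucket, per_bucket)
    rng.shuffle(pool)
    return pool  # 150 candidate (document, summary sentence) pairs
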
Considering that the original data was generated using heuristics, all examples were selected manually to ensure highquality data. " }, { "figure_ref": [], "heading": "Inconsistency Detection", "publication_ref": [], "table_ref": [], "text": "We first evaluate the models' ability to detect factual inconsistencies in a binary classification setting with Yes-No labels. Non-LLM models return a continuous score attributed to a label class using a tuned threshold, while LLM-based approaches generate free-form text where the final output is retrieved using regular expressions. Due to input length restrictions, certain models could not be tested on more complex (and longer) prompts. Table 1 presents the balanced accuracy scores averaged across three prompts within each prompt category. The results provide the following insights:\nTwo non-LLM models achieve near-perfect accuracy and substantially outperform LLM-based evaluators. We speculate that this might be caused by non-LLM models being over-optimized to the idiosyncrasies of this simple error detection task and might not hold for more involved detection examples. We investigate this question further in later sections of the paper.\nRegarding the prompt design, we notice that for most models (8/12), providing a few examples of the task (zero-→ few-shot) improves the per- formance by an average of 2.7 percentage points. However, for two models, GPT4 and PaLM2, the performance in the same setting dropped substantially (-6.2 pp). Considering those the two models achieve strong performance across all prompt types, we conclude that few-shot examples can help models but are not necessary for top-performing models.\nIn the majority of cases (8/12) Generate-with-Evidence prompts outperform Chain-of-Thought prompts corroborating prior findings of Ye and Durrett (2022) that models perform better when generating an answer followed by the evidence, rather than generating reasoning followed by an answer as in CoT. An in-depth evaluation of the factual reasoning capabilities of models is presented in the following section.\nPersona-based prompts improve the performance of GPT3.5-Turbo; however, they lower the performance of all other models, including the remaining chat-based LLMs. This finding suggests that conditioning the generation on a specific persona might be a feature exclusive to ChatGPT, possibly linked to the data used for tuning, rather than a broader trait of chat-tuned LLMs." }, { "figure_ref": [], "heading": "Input Document", "publication_ref": [], "table_ref": [], "text": "Prue Leith, who takes over from Mary Berry on the show, insists \"baking is the best way to get people into cooking\". The latest series returns this autumn on Channel 4 after it outbid the BBC. The previous hosts announced they \"were not going with the dough\" when the show switched channels. The pair had presented all seven series of the show alongside judges Berry and Hollywood. Similarly, Berry also announced she would not remain with the show on Channel 4, citing \"loyalty\" to the BBC. [...]" }, { "figure_ref": [], "heading": "Inconsistent Summary", "publication_ref": [], "table_ref": [], "text": "Berry also announced that he would not be staying on the show on Channel 4, citing \"loyalty\" to BBC." }, { "figure_ref": [], "heading": "• Correct Explanation", "publication_ref": [], "table_ref": [], "text": "No. The summary incorrectly refers to Berry as \"he\" instead of \"she\". 
-GPT-4" }, { "figure_ref": [], "heading": "• Partially Correct Explanation", "publication_ref": [], "table_ref": [], "text": "The summary incorrectly states that Mary Berry is a he and that Berry cited loyalty to the BBC as a reason for not remaining on the show. -Claude V1.3" }, { "figure_ref": [], "heading": "• No Explanation", "publication_ref": [], "table_ref": [], "text": "No -Bab001" }, { "figure_ref": [], "heading": "• Unrelated Explanation", "publication_ref": [], "table_ref": [], "text": "For me the answer is No! The document says \"The new BBC series returns this autumn on Channel 4 after it outbid the BBC\". -LlaMa-13b" }, { "figure_ref": [], "heading": "• Incorrect Explanation", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "The document does not mention Mary Berry announcing that she would not be staying on the show on Channel 4. -Alpaca-13b\nTable 3: Examples of explanation types used to evaluate model ability at explaining inconsistencies they detect.\nWe study model performance for each error type, averaging the accuracy score across all prompts for LLM models. Results are presented in Table 2. We notice that the majority of LLM-based evaluators achieve satisfactory accuracy (> 80%) in classifying positive (factually correct) examples. The results also highlight that with a few exceptions (Claude, ChatGPT, and GPT4) LLMs struggle to consistently detect factual inconsistencies, in many cases achieving scores below random chance. Pronoun swap detection seems to be the most complicated of error types averaging 38.87% across all LLM-based evaluators and also scoring lowest for specialized, non-LLM models." }, { "figure_ref": [], "heading": "Factual Reasoning", "publication_ref": [ "b29" ], "table_ref": [], "text": "To gain further insights into LLM ability to reason about factual consistency, we performed a manual analysis of more than 3,600 explanations generated for sixteen of the seventeen LLMs included in our experiments2 .\nIn our analysis, we focus on cases the model classifies as inconsistent, as there is a clear explanation a model should provide to pinpoint elements in the summary it identifies as inconsistent.\nFor each known inconsistent (document, summary) sample in FactCC, and each model output explaining why the sample is inconsistent, we hired an annotator to label the explanation with one of five labels: • entirely correct: the model's full explanation must accurately describe a factual inaccuracy in the summary, • partially correct: the model correctly describes at least one factual inconsistency in the summary, but also describes an incorrect element or facts unrelated to factual consistency, • no explanation: the model provides a classification label (Yes/No) without providing the requested explanation, • unrelated: the model's output addresses aspects other than factual consistency or is not an explanation (e.g., the model writing a new summary), and • incorrect: the model's explanation is invalid and does not correctly identify an element in the summary which is factually incorrect. Table 3 gives an example of each explanation type from the annotation, and Appendix B provides further information on the hiring and onboarding of the two annotators that completed the task. 
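(A note on scoring, for context on the classification results above and the explanation analysis that follows: the binary scores in Table 1 are obtained by regex-matching the Yes/No decision in each model's free-form output and reporting balanced accuracy. The sketch below is an illustrative version of that post-processing; the exact regular expressions are in the code release, and the fallback of treating unmatched outputs as inconsistent is an assumption made here for completeness.)

import re
from sklearn.metrics import balanced_accuracy_score

def extract_binary_label(generated_text):
    # Map a free-form answer to 1 (consistent / "yes") or 0 (inconsistent / "no").
    match = re.search(r"\b(yes|no)\b", generated_text.lower())
    if match is None:
        return 0  # assumed fallback when no decision can be matched
    return 1 if match.group(1) == "yes" else 0

def score_binary_predictions(gold_labels, generations):
    predictions = [extract_binary_label(text) for text in generations]
    return balanced_accuracy_score(gold_labels, predictions)
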
During annotation, the annotator samples were presented in a shuffled order, and the annotator was not aware of the models that had generated any particular sample.\nWe analyze annotation reproducibility by collecting multiple annotations for roughly 200 annotations and computing Cohen's Kappa. We find a moderate agreement amongst annotators of 0.72 on the five-way annotation.\nFigure 2 summarizes the results by providing the distribution of types of explanations generated by each LLM.\nFirst, we find that all models struggle to provide correct explanations pinpointing the inconsistencies in summaries, with nine of the sixteen models providing correct explanations less than 10% of the time and only three models -Bard, Claude V1.3, and GPT4 -providing correct explanations more than 50% of the time.\nWe also notice that better performance at the binary classification task (Table 1) does not necessarily lead to more accurate model explanations. For example, GPT3.5-turbo performs 5-10% better in terms of binary accuracy than Claude V1.3, yet it generates almost half as many correct explanations. This finding suggests accuracy metrics sometimes overlook models that are right for the wrong reasons (McCoy et al., 2019), which might negatively affect user trust.\nAnalyzing the failure behavior of models reveals differences amongst models. The first group of models -including Dav001, Dav002, and Cur001 -fails by not providing an explanation for the inconsistency, even though they were explicitly prompted to accompany their answer with an explanation. A second group -including Ada001, LLaMa-13B, and Cohere-cmd-XL -most often generates \"unrelated explanations\", which do not explain the nature of factual inconsistency, but might instead quote other facts omitted from the summary, propose a new summary, or other tangential text. A third group -with models such as MPT-7B-Chat and Dolly-v2-12B -generates plausible explanations that are factually incorrect, or present invalid logic. We argue that although all models should strive to produce only correct explanations, some failure cases are preferable to others. For example, it might be preferable for a model to provide no explanation than a plausible-looking but incorrect one that might mislead a user. For example, MPT-7B-Chat and Dav003 both generate roughly the same proportion of correct explanations (21 vs. 24%), yet when they fail, Dav003 is much more likely to abstain from providing an explanation, whereas MPT-7B-Chat is more likely to provide an explanation with incorrect reasoning, which could prove more harmful." }, { "figure_ref": [], "heading": "Fine-grained Inconsistency Detection", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "To explore the LLMs' fine-grained understanding of factual evaluation we designed a set of experiments prompting the models to evaluate each (document, sentence) pair with respect to individual error types. For each of the error types present in the data models were expected to overlook any other factual error, thus evaluating model ability to selectively focus on one error type while ignoring others. In the experiments, we filter out 4.\nThe results show a consistent pattern across all considered LLMs and error types, where the models achieve a low precision and high recall score. This indicates that the models are not able to follow the fine-grained instructions and distinguish different error types. 
Instead, they simply detect factual inconsistencies (to the best of their abilities) on a general level and assign a negative label. Providing the models with examples of the error to be detected (few-shot setting) does improve the performance of a subset of models; however, no general performance improvement patterns emerged.\nIn short, none of the models we experiment with can perform the task of fine-grained factual inconsistency detection, in which they are tasked with focusing on a single type of factual error.\nAdditionally, we carried out an experiment where the per-error-type prompts were combined into a single instruction with multiple tasks that the model was expected to complete in a sequence.\nQualitative analysis of the results showed that most of the models could not follow the instructions and consistently provide answers in the expected format, thus the results were not included in the final results table." }, { "figure_ref": [], "heading": "Limits of Crowd-Based Benchmarks", "publication_ref": [ "b38", "b12" ], "table_ref": [], "text": "In this section we analyze two popular benchmarks for factual consistency detection in summarization: AggreFact (Tang et al., 2022) and DialSummEval (Gao and Wan, 2022) and uncover limitations that guide the design principles of the SUMMEDITS benchmark we build." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "In the subsequent sections of the paper, we incorporate the insights gained from the experiments on FactCC to inform our experiment design in terms of model and prompt selection.\nFirst, we filter out all models that did not achieve a balanced accuracy above 60% on FactCC, as such models are unlikely to significantly outperform random chance on more challenging benchmarks. Checkmarks (✓) in Table 1 indicate models that are retained in the experiments of Sections 4.1-6.\nSecond, to minimize the computational cost of experiments, we select a single Zero-Shot (ZS) prompt that is used for all LLM models. We make this choice instead of using multiple prompts per model or selecting each model's best-performing prompt on FactCC results for three reasons: (1) there's no guarantee that prompt quality will transfer across benchmarks, and using a single common prompt removes variance from prompt optimization that does not measure underlying model ability, (2) top-performing LLMs such as GPT4 achieve their best performance on FactCC with ZS prompts, indicating that high performance with a simple ZS prompt is achievable, and (3) more complex prompts would require adaptation to each domain (e.g., domain-specific few-shot examples), and restrict the evaluation of models with shorter maximum sequence lengths due to longer prompts." }, { "figure_ref": [], "heading": "AggreFact", "publication_ref": [ "b38", "b34" ], "table_ref": [], "text": "The AggreFact-SOTA (Tang et al., 2022) benchmark is a factual consistency benchmark focused on the news domain, modified from the SummaC benchmark (Laban et al., 2022a) and restricted to summaries produced by recent state-of-the-art summarizers, as analysis showed that summaries from older models were less relevant to the field of consistency detection. Table 1 reports the balanced accuracy of specialized models and LLMs on AggreFact. At first glance, the specialized models still outperform LLMs, even though increasing LLM size leads to performance improvements and helps close the gap, with GPT-4 performing within 2.4 percentage points of the specialized DAE. 
However, all models perform relatively poorly, with no model reaching a balanced accuracy of 80% on a binary classification task.\nTo inspect performance on the AggreFact benchmark further, we conducted a manual annotation similar to the one conducted in FactCC Section 3.5 but focused on cases where GPT4 disagrees with the label of AggreFact. More precisely, we manually inspected the explanations provided by GPT4\nfor the 101 summaries it judged were inconsistent but labeled as consistent in the dataset.\nOf the 101 samples, 80 were labeled by the annotator as correct or partially correct explanations that identify and explain a factual inconsistency in the summary. In other words, this manual analysis of a subset of AggreFact reveals that a minimum of 6% of the samples in AggreFact are mislabeled. The low reliability of labels in crowdsourced benchmarks like AggreFact is a known issue (Pagnoni et al., 2021) stemming from task complexity that requires the annotator to carefully read and understand an entire document and accompanying summary, leading to low repeatability and inter-annotator agreement. This methodology reveals the potential for LLMs as part of dataset creation. In some cases, an LLM explanation that is verifiable -such as an explanation for an identified factual inconsistencycan accelerate and improve the quality of annotation. We note however that LLM explanations are only valuable for a subset of the samples. For example, in cases where the model asserts a summary is consistent, manual verification is still required to assure quality. In Section 6, we explore a new protocol for factual consistency benchmark creation which can involve an LLM.\nBased on the low reliability of labels in Ag-greFact, we note that a key requirement for future benchmarks is to improve label reliability, which can be demonstrated with high annotator agreement when multiple annotators are involved. " }, { "figure_ref": [], "heading": "DialSummEval", "publication_ref": [ "b8" ], "table_ref": [ "tab_4" ], "text": "The DialSummEval (Gao and Wan, 2022) benchmark is a summarization evaluation benchmark created following the format of SummEval (Fabbri et al., 2021) for the domain of dialogue summarization. In DialSummEval, each (dialogue, summary) tuple is evaluated by three annotators, each assigning a Likert score (1-5) assessing the consistency of the summary. The authors of the benchmark report an agreement level of 0.67 Krippendorff's alpha on the labels, indicating a moderate amount of agreement among annotators.\nWe evaluate model performance in two ways: (1) direct correlation between model predictions and the average annotator score, and (2) we follow Laban et al. (2022a)'s procedure to transform the benchmark into a binary classification task, amenable to the balanced accuracy metric. Results are summarized in Table 5.\nEchoing results on AggreFact, increasing model size leads to a minor improvement in performance both in balanced accuracy and correlation, but most LLMs still underperform specialized methods. In absolute terms, all methods struggle to achieve strong performance on the benchmark, with accuracies all below 70%.\nIn Figure 6, we aggregate model predictions into 0.5-width buckets on the Likert scale. We find that most models achieve strong performance on nonborderline buckets ([1.0, 1.5), [1.5, 2.0], [4.0, 4.5], [4.5, 5.0]), assigning a vast majority of samples to the correct class (inconsistent for low buckets, consistent for high buckets). 
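(The bucketing used in Figure 6 can be sketched as follows; the threshold of 4.0 used here to binarize the average Likert score is an illustrative assumption, not necessarily the exact conversion rule of Laban et al. (2022a).)

import math
from collections import defaultdict

def per_bucket_accuracy(avg_likert_scores, binary_predictions, threshold=4.0):
    # Group samples into 0.5-wide Likert buckets and report, per bucket, the share
    # of samples that the model assigns to the class implied by the annotator score.
    buckets = defaultdict(list)
    for score, prediction in zip(avg_likert_scores, binary_predictions):
        bucket = math.floor(score * 2) / 2  # e.g. 3.7 -> 3.5
        implied_label = 1 if score >= threshold else 0
        buckets[bucket].append(int(prediction == implied_label))
    return {bucket: sum(hits) / len(hits) for bucket, hits in sorted(buckets.items())}
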
The borderline buckets ([2.0, 4.0]) however are less clear-cut: most models assign large proportions of samples from each bucket into consistent and inconsistent classes.\nWe argue that annotating the consistency of summaries using a Likert scale limits the quality and interpretability of the benchmark, as it is not evident how to interpret the differences between scores, limiting reproducibility, which is reflected in the moderate Krippendorff's alpha. Instead, we favor framing factual consistency benchmarks as a detection task. In the detection task, identifying any factual inconsistency between the document and summary leads to an overall assessment of the summary being inconsistent. If no inconsistency is detected, the summary is consistent. The detection framing also allows models to provide natural language explanations when identifying a summary as inconsistent, which can be manually verified to confirm model reasoning ability and failure modes, as done in Section 3.5.\nIn the next section, we propose a novel protocol to create factual consistency benchmarks, incorporating lessons learned from existing benchmarks." }, { "figure_ref": [], "heading": "SummEdits Benchmark", "publication_ref": [], "table_ref": [], "text": "[Figure 3: overview of the SUMMEDITS protocol. A human annotator verifies a seed summary, many minor edits of it are generated, and the annotator judges each edit: does the edit introduce a factual inconsistency? ✔ Consistent / ✘ Inconsistent. See Table 7 for example samples produced by the protocol.]" }, { "figure_ref": [], "heading": "Edited Summary Labeled As Consistent", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Edited Summary Labeled As Inconsistent", "publication_ref": [], "table_ref": [], "text": "The characters discuss ponder the consequences of banishing Marcius, with Cominius warning that his alliance collaboration with the Volscians will bring great danger to Rome.\nThe characters discuss the consequences of banishing Marcius, with Cominius warning that his alliance with the Volscians Romans will bring great danger to Rome the Volscians. -Entity Manipulation\nWe introduced a novel new, simple, and efficient data augmentation method that boosts improves the performances of existing GANs when training data is limited and diverse.\nWe introduced a novel, simple, and efficient data augmentation method that boosts the performances of existing GANs when training data is limited abundant and diverse. -Antonym Swap\nEmployees of the European Commission are now forced instructed to delete remove TikTok from their work devices, and delete get rid of it from their personal devices too if they have work-related apps applications installed.\nEmployees of the European Commission are now forced not required to delete TikTok from their work devices, and delete but should still remove it from their personal devices too if they have work-related apps installed. -Hallucinated Fact\nA conversation between a sales agent and a potential client possible customer. The sales agent provides information on different home insurance plans options and pricing, as well as available discounts for clients with good credit scores and other factors.\nA conversation between a sales agent and a potential client. The sales agent provides information on different home insurance plans and, but not on pricing, as well as or available discounts for clients with good credit scores and other factors. -Negation Insertion\nTable 7: Example edit summaries (deletions and insertions) for four domains of SUMMEDITS (top-to-bottom: Shakespeare Plays, SciTLDR, News, Sales Call). 
Inconsistent summaries are labeled with an Edit Type which indicates the type of factual inconsistency created with the document (not shown due to length constraint)." }, { "figure_ref": [], "heading": "SUMMEDITS Protocol", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Design Principles", "publication_ref": [], "table_ref": [], "text": "Based on the analysis of previous benchmarks, we set several design principles that can help create higher quality factual consistency benchmark: P1. Binary Classification Task: In the benchmark, a summary should either be labeled as inconsistent if any factual inconsistency is identified with the document or consistent otherwise, to improve label interpretability.\nP2. Focus on Factual Consistency: Summaries in the benchmark should be flawless on aspects unrelated to consistency, such as fluency, coherence, and formatting, to avoid confounding effects on the quality of the benchmark.\nP3. Reproducibility: Benchmark labels should not depend on annotator identity, and high annotator agreement should confirm the validity of the benchmark, as well as estimate human performance on the benchmark.\nP4. Benchmark Diversity: Inconsistency errors in the benchmark should represent a wide range of errors in realistic textual domains, to increase understanding of model strengths and weaknesses, and better establish gaps in performance between models and human annotators at factual reasoning, if there are any." }, { "figure_ref": [ "fig_2" ], "heading": "Creation Procedure", "publication_ref": [ "b18", "b7" ], "table_ref": [], "text": "We now describe the creation procedure we design for the SUMMEDITS benchmark with an objective to satisfy the design principles stated above, the procedure is visually introduced in Figure 3. At a high level, the procedure consists of three steps: (1) seed summary verification, (2) generation of summary edits, and (3) annotation of edited summaries.\nSeed Summary Verification. Benchmark creators select a small collection of documents in a domain of choice, and a seed summary is collected for each document, which can either be humanwritten or model generated. An annotator answers two questions about each (document, seed summary) tuple: (a) \"Are there any flaws with the summary? (fluency, format, etc.)\", (b) \"Is the summary factually consistent with the document?\". If the annotator identifies a flaw in the summary (e.g., an incomplete or disfluent sentence), or any inconsistency, the tuple is filtered out (P2), otherwise, it proceeds to Step 2.\nGeneration of Edited Summaries. Once a seed summary has been verified, the second step consists in generating multiple minor edits to the summary, which might or might not affect the consistency of the summary. This procedure can be carried out manually, or automatically with an LLM. Proposed edits should be atomic and localized, not entirely rewriting a novel summary. Example edits of summaries are shown in Table 7.\nAnnotation of Edited Summaries. 
The annotator who completed the seed verification task (Step 1) is tasked with reviewing each edited summary and assigning it with one of three labels: (a) consistent if an edit does not lead to an inconsistency in the summary, (b) inconsistent if the edit modifies the seed summary in a way that introduces a factual inconsistency, (c) borderline for any other case such as the edit making the summary unclear, or the edit requiring subjective interpretation.\nCrucially, we note that a single annotator should complete both Steps 1 and 3, as once they have invested the time in reading the (document, summary seed) tuple, the time required to judge the consistency of edits is greatly reduced. We also recommend including a large number of edits (e.g., 30 edits) to maximize edit diversity (P4), and encouraging annotators to assign the borderline label if they are unsure about any aspect of an edit, in order to maximize reproducibility (P3).\nA benchmark can be formed by retaining edited summaries that are labeled as consistent and inconsistent and filtering out borderline cases.\nWe note that this procedure only requires a small number of documents and seed summaries, as each seed summary is derived into many edited summaries. This flexibility facilitates the creation of factual consistency benchmarks in application domains that lack such resources, such as legal (Kornilova and Eidelman, 2019) or podcast summarization (Clifton et al., 2020)." }, { "figure_ref": [], "heading": "SUMMEDITS Benchmark", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Benchmark Creation", "publication_ref": [], "table_ref": [], "text": "We implemented the SUMMEDITS protocol on ten realistic summarization domains to explore the reliability of the protocol. For five domains, seed summaries are automatically generated due to the lack or low quality of existing reference summaries. In such cases, we used ChatGPT and domain-specific prompts to generate seed summaries. We note that for all domains, the quality of seed summaries is ultimately manually confirmed in step 1 of the protocol, which consists of ensuring seed summaries are factually consistent and flawless in terms of fluency, formatting, etc. For all domains, we use GPT3.5-turbo as the LLM to produce edited summaries 3 . The model chosen to produce summary edits has an important impact on the benchmark. We experimented with integrating multiple LLMs in the edit generation process, but preliminary results indicated that many LLMs were not successful at generating minorly edited summaries and often attempted to write entirely novel summaries, which led us to use ChatGPT as the single model to generate edited summaries. This choice is discussed further in Section 7." }, { "figure_ref": [], "heading": "Domain", "publication_ref": [ "b15", "b7", "b18", "b13" ], "table_ref": [], "text": "We hired two professional annotators, who were compensated at a rate of $20/hour to perform steps 1 and 3 of the protocol. Three authors of the paper also participated in the annotation for quality control purposes. Appendix C has further detail on annotation protocol, and an overview of the annotation interface, which ensured that each annotator completed Task 1 and 3 sequentially for any sample in the benchmark. 
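To make Step 2 of the protocol concrete, a minimal sketch of the edit-generation call is shown below. The prompt wording is illustrative rather than our exact released prompt, and call_llm is a hypothetical wrapper around whichever chat model is used (GPT3.5-turbo in our implementation):

EDIT_GENERATION_PROMPT = (
    "Below is a document and a factually consistent summary of it.\n"
    "Write {n} edited versions of the summary. Each edit must be minor and "
    "localized (change a word or a short phrase), not a rewrite. Some edits may "
    "keep the summary consistent with the document, others may make it "
    "inconsistent. Return one edited summary per line.\n\n"
    "Document:\n{document}\n\nSummary:\n{summary}\n\nEdited summaries:"
)

def generate_edited_summaries(document, seed_summary, call_llm, n_edits=30):
    # Step 2 of the protocol: produce many minor edits of a verified seed summary.
    # Each returned candidate then goes through Step 3, where the annotator labels
    # it consistent, inconsistent, or borderline.
    prompt = EDIT_GENERATION_PROMPT.format(n=n_edits, document=document, summary=seed_summary)
    response = call_llm(prompt)
    return [line.strip() for line in response.splitlines() if line.strip()]
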
We next introduce the ten domains included in the SUMMEDITS benchmark.\nNews To avoid selecting documents and summaries that are in the training corpora of evaluated models, we follow prior work (Goyal et al., 2022) and select (document, summary) tuples from recent news articles. We obtained news articles from the Google News top events feed in February 2023, selecting at most one sample per news source to increase coverage diversity (Laban 3 The prompts we use are listed in our open-source release. Table 9: Balanced accuracy of models on the SUMMEDITS benchmark. The top three models are non-LLM specialized models, the middle section are LLMs. We also report a GPT4 oracle performance and an estimate of human performance. et al., 2023). Seed summaries are extracted from the article's metadata.\nPodcast (Clifton et al., 2020) We collected 40 podcast transcripts from the unreleased test set of Spotify's podcast summarization dataset. Due to low reference summary quality, we generated seed summaries automatically.\nBillSum (Kornilova and Eidelman, 2019) We collected 40 US bills and their accompanying summaries as seeds from the training portion of Bill-Sum, a challenging dataset for summarization in the legal domain.\nSamSum (Gliwa et al., 2019) We collected 40 dialogues and their accompanying summaries from the training portion of SamSum, a common dialogue summarization dataset for messenger-like conversations." }, { "figure_ref": [], "heading": "Shakespeare (Karpathy, 2015)", "publication_ref": [ "b3", "b48", "b30" ], "table_ref": [], "text": "We collected 40 scenes from Shakespeare plays from the Tiny Shakespeare corpus, each roughly 700 words long. We generated seed summaries automatically.\nSciTLDR (Cachola et al., 2020) We collected 40 research paper abstracts and their corresponding TLDRs from the training portion of SciTLDR, a dataset for scientific paper summarization.\nQMSum (Zhong et al., 2021) We collected 40 document and seed summaries from QMSum, a dataset for query-based meeting summarization.\nECTSum (Mukherjee et al., 2022) We collected 40 documents from the ECTSum dataset, a summarization dataset for the financial earnings call transcripts. Due to low reference summary quality, we generated seed summaries automatically." }, { "figure_ref": [], "heading": "Sales Call & Email", "publication_ref": [], "table_ref": [], "text": "We generated fictional sales call transcripts and sales emails -40 for eachand corresponding seed summaries using ChatGPT. This subset of the benchmark evaluates the protocol's validity with textual data entirely machinegenerated in targeted domains that lack pre-existing summarization datasets." }, { "figure_ref": [], "heading": "SUMMEDITS Statistics", "publication_ref": [ "b27", "b34" ], "table_ref": [ "tab_6" ], "text": "Table 8 provides statistics of the finalized SUMMEDITS benchmark. Each domain yielded between 400-900 edited summaries, depending on the fraction of seed summaries that pass the first step validation (58% overall pass rate) and the percentage of edited summaries that are annotated as borderline and filtered out (around 6%). In the five domains where seed summaries were generated by ChatGPT, 17.8% of the seed summaries were labeled as factually inconsistent, indicating that modern LLMs like ChatGPT still struggle to remain consistent when summarizing documents. At least 20% of each domain's samples were annotated by multiple annotators, allowing us to measure the agreement level when completing the annotation. 
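Agreement on these plurally annotated samples can be measured with an off-the-shelf implementation of Cohen's Kappa, once over all three labels and once after discarding anything either annotator marked borderline; a minimal sketch:

from sklearn.metrics import cohen_kappa_score

def annotation_agreement(labels_a, labels_b):
    # labels_a / labels_b: parallel lists over the plurally annotated samples,
    # with values in {"consistent", "inconsistent", "borderline"}.
    kappa_three_way = cohen_kappa_score(labels_a, labels_b)
    kept = [(a, b) for a, b in zip(labels_a, labels_b)
            if a != "borderline" and b != "borderline"]
    kappa_filtered = cohen_kappa_score([a for a, _ in kept], [b for _, b in kept])
    return kappa_three_way, kappa_filtered
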
When considering all three labels (consistent, inconsistent, borderline), Cohen's Kappa in each domain varies between 0.72-0.90, averag-ing 0.82. When removing samples annotated as borderline by any annotator, the average Cohen's Kappa rises to 0.92, empirically validating the importance of labeling and filtering out borderline samples to create a reproducible benchmark.\nIn the final benchmark, 37% of summaries are consistent, and the rest are inconsistent, approaching our objective of a balanced benchmark to facilitate robust evaluation and minimize metric fluctuations (Luque et al., 2019).\nThe total annotation cost of SUMMEDITS is around USD 3,000, representing around 150 hours of annotator work. The average cost of adding a domain to SUMMEDITS is therefore around USD 300, within reach for NLP practitioners looking to evaluate the model's ability to detect factual errors in their domain of choice. Authors of the FRANK benchmark (Pagnoni et al., 2021) -samples of which are in AggreFact -estimate that each sample in their benchmark required 30 minutes of annotator time. At similar annotator pay, annotating a benchmark for a new domain similar to the ones in SummEdits would cost an estimated USD 6,000: twenty times more. This cost analysis reveals the dual advantage of our protocol: by focusing the annotation task on atomic edits, costs can be drastically reduced while maintaining high reproducibility." }, { "figure_ref": [], "heading": "SUMMEDITS Results", "publication_ref": [ "b14", "b38", "b5", "b37" ], "table_ref": [], "text": "Table 9 reports the average performance of specialized models, LLMs with a zero-shot prompt, an oracle version for the LLM in which it has access to additional information and an estimate of human performance computed on the subset of the benchmark which was plurally annotated.\nOverall, model performance on the benchmark is low, with a single model -GPT4 -getting within 10% of human performance. Larger or more recent LLMs perform better on the benchmark, illustrated by the performance of models in the OpenAI family, with each model generation leading to an improvement in performance and confirming that the SUMMEDITS benchmark assesses model ability at factual reasoning.\nPaLM2-Bison, Dav003, ChatGPT, and GPT4 are the only four LLMs that outperform the best non-LLM approach QAFactEval, providing evidence that most LLMs are not yet capable to reason out-of-the-box about the consistency of facts.\nAll three specialized models achieve their high-est performance in the news domain, unlike LLM models. The specialized models are likely calibrated to the news domain, which they are most frequently tested on (Goyal and Durrett, 2020;Laban et al., 2022a;Tang et al., 2022;Fabbri et al., 2022;?). This finding confirms the importance of creating multi-domain benchmarks to measure model ability in diverse and realistic scenarios. Some domains such as Shakespeare's plays or the legal BillSum are more challenging to the majority of models, with the latter seeing no model score higher than 71.1%. Yet, factual reasoning in the legal domain is an important application area of NLP (Chalkidis et al., 2020;Shen et al., 2022).\nTo assess the feasibility of the benchmark, we experiment with an oracle setting of the benchmark, in which the model is provided the seed summary in addition to the input document and the seed summary. 
The seed summary serves as an information scaffold, enabling the model to directly assess the modifications between the seed and edited summaries when assessing factual consistency. The oracle setting leads to a large boost in performance for the GPT4 model across domains, with the model performing within 2% of human performance. The GPT4 oracle experiment confirms that high model performance on the benchmark is attainable and that the challenge of SUMMEDITS lies in aligning the facts of the edited summary with the document, without knowing that it has been edited." }, { "figure_ref": [], "heading": "Edit Type Analysis", "publication_ref": [], "table_ref": [], "text": "To gain more specific insights into the types of edits present in SUMMEDITS, we annotated each inconsistent sample in the benchmark with tags of edit types that lead to factual inconsistency.\nThe four types are:\n(1) Entity Modification in which an entity or phrase in the summary has been changed in a way that alters the meaning, (2) Antonym Swap is when a word or phrase is replaced by a word of opposite meaning (e.g., increasing vs. decreasing), (3) hallucinated fact insertion is a novel fact is introduced in the summary which is not supported by the document, and (4) negation insertion is the use of any negator word (e.g., not, neither) which modifies summary meaning. Figure 7 provides an example of each edit type in SUMMEDITS.\nTo annotate the entire benchmark, one author of the paper first manually annotated 200 samples Table 11: Relationship between the number of edits types in the summary and balanced accuracy of models on SUMMEDITS. Models generally perform better as the number of introduced edits in a summary increases. due to such edits modifying an existing consistent fact in a more nuanced way." }, { "figure_ref": [], "heading": "Number of Edits Effect", "publication_ref": [], "table_ref": [], "text": "It is common for the LLM to introduce multiple edits in each of its candidate summaries, as can be seen in the examples in Table 7, in which each edited summary contains multiple inserted and deleted words. We group inconsistent summaries by the number of distinct edit types they contain (1 to 4) and compute model performance on each group, with results summarized in Table 11.\nAs the number of edit types in a summary increases, most models see sizable performance improvements, with average performance increasing from 59.2 to 74.1 between summaries with 1 or 4 edit types represented.\nThis analysis confirms the perspective the task in the SUMMEDITS benchmark corresponds to a detection task: as the number of introduced errors increases, model performance increases as there is generally more evidence of inconsistencies for the models to detect. This also points in the direction of a more challenging explanation analysis, in which one could annotate whether a model can detect all inconsistencies in a summary.\nIn turn, future work looking to create more challenging versions of benchmarks using the SummEdits protocol can focus on editing summaries with a single edit type, as such inconsistent summaries are more challenging to detect." }, { "figure_ref": [], "heading": "Limitations and Discussion", "publication_ref": [ "b10" ], "table_ref": [], "text": "Why not fix existing benchmarks? In Section 5, analysis reveals limitations with existing benchmarks that in theory can be fixed to yield improved versions of known benchmarks. 
The analysis we performed however only helps us invalidate a subset of samples in an opportunistic way, by looking at samples where benchmark labels and GPT4 disagree. However, this methodology cannot help us efficiently correct or confirm all samples, and improving existing benchmarks would require reannotating a large portion of the benchmarks, and we do not have a guarantee that new annotations would improve on previous ones. By designing a new protocol for sample annotation that relies on clear, atomic edits, we simplify the annotation process, improving reproducibility.\nEffect of LLM in benchmark creation.\nStep 2 of the protocol described in Section 5 relies on an LLM to generate many edits of the seed summary, which are subsequently manually annotated and included in the benchmark. The choice of LLM likely has an effect on the benchmark which could favor a subset of LLMs most similar to the one used for benchmark creation. Initial attempts to use a pool of LLMs to produce edits were unsuccessful as we found that only ChatGPT and GPT4 were currently capable of following editing instructions that do not fully rewrite summaries. Future iterations on similar benchmarks should consider including diverse pools of LLMs in benchmark creation processes to avoid model-specific bias.\nEvalutating Summarizers. Previous benchmarks were in part collected to evaluate which summarization models are least likely to generate factual inconsistencies (Falke et al., 2019). Since the summaries in SUMMEDITS are synthetic modifications of summaries, the benchmark cannot directly provide insights on summarizers and their ability to remain consistent. Future work can explore using methods such as Near-Negative Distinction (NND) (Laban et al., 2022b) to adapt SUMMEDITS into a set of tests to evaluate summarizer performance, and model ability to avoid generating inconsistent samples in the first place.\nBuild Your Own Benchmark. By implementing the protocol in ten diverse domains for an average cost of around USD300 per domain, we've demonstrated that the protocol can be adapted to widely different textual domains -from US legal bills to Shakespeare plays -and produce domain-specific benchmarks. Although we hope that the domains we've selected span a range of practical use cases, we hope that others will adopt and adapt the protocol to new domains, languages, and NLP tasks." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we explore the capabilities of LLMs to act as factual reasoners through the lens of factual evaluation in text summarization. We show that on a surface level, LLMs perform on par with specialized non-LLM evaluators, but the performance substantially degrades in more advanced evaluation settings. As part of this analysis, we also uncover and discuss shortcomings of existing benchmarks for factual evaluation. Using those insights we develop a new protocol for creating inconsistency detection benchmarks, which we implement in a 10-domain benchmark called SUMMEDITS. The SUMMEDITS benchmark is highly reproducible and more cost-effective per sample than previous benchmarks. Our experiments show that the benchmark is challenging for most current LLMs, with the best-performing model, GPT-4, still 8% below estimated human performance. We believe that SUMMEDITS can serve as a valuable tool for evaluating LLMs' abilities to reason about facts, detect factual errors and promote more reliable NLG systems. 
We encourage LLM developers to report their performance on the benchmark, and practitioners to adapt the protocol to generate in-domain benchmarks for model evaluation.\nAn example for each type of explanation was provided during onboarding, similar to the ones given in Table 3. In order to obtain impartial results that do not benefit or disadvantage any model, for cases where multiple explanations were annotated for the same (document, summary) sample, the explanations' order was shuffled, and annotators were not aware of the model origin of any explanation.\nAnnotation was performed in batches, and the first two batches of annotation of each annotator were reviewed by the authors of the paper. Incorrect annotations were discussed, allowing annotators to better understand edge cases of the task, and modify their annotation in the first batches. The annotators were added to a Slack channel with one of the authors and regularly discussed edge cases to maintain a common understanding of the task. For example, both annotators raised the question of how to deal with cut-off explanations, in which the last sentence is incomplete (due to the max-length of generation). Annotators were both instructed to disregard any incomplete sentence and only consider full sentences in their assessment." }, { "figure_ref": [ "fig_3" ], "heading": "C SUMMEDITS Annotation Guidelines", "publication_ref": [], "table_ref": [], "text": "We hired two professional annotators to complete the annotation of Steps 1 and 3 of the SUMMEDITS protocol (see Section 5). The annotators were compensated at $20/hour. They received onboarding documentation that introduced them to the task and used the interface shown in Figure 4.\nAnnotators were first assigned 10 warm-up seed summaries, each with roughly 30 edited summaries, which had been pre-annotated by the authors of the paper. The authors reviewed the completed warmup exercises, and a strong agreement level on the warm-up task with both annotators was observed. We discussed disagreement cases with the annotators and added both annotators to a Slack channel with one of the authors of the paper to allow them to discuss any edge case or domain-specific question. For example, since the QMSumm domain is the more specific query-focused summarization, the annotators were given updated instructions on Slack on how to deal with the \"query\" component when evaluating summaries. Namely, during Step 1 of the protocol, participants were asked to additionally judge whether the summary accurately responded to the query, and otherwise mark summaries as inadequate." }, { "figure_ref": [], "heading": "Document:", "publication_ref": [], "table_ref": [], "text": "Simulation is a useful tool in situations where training data for machine learning models is costly to annotate or even hard to acquire. In this work, we propose a reinforcement learning-based method for automatically adjusting the parameters of any (nondifferentiable) simulator, thereby controlling the distribution of synthesized data in order to maximize the accuracy of a model trained on that data. In contrast to prior art that hand-crafts these simulation parameters or adjusts only parts of the available parameters, our approach fully controls the simulator with the actual underlying goal of maximizing accuracy, rather than mimicking the real data distribution or randomly generating a large volume of data. 
We find that our approach (i) quickly converges to the optimal simulation parameters in controlled experiments and (ii) can indeed discover good sets of parameters for an image rendering simulator in actual computer vision applications."
}, { "figure_ref": [], "heading": "Original Summary:", "publication_ref": [], "table_ref": [], "text": "We propose an algorithm that automatically adjusts parameters of a simulation engine to generate training data for a neural network such that validation accuracy is maximized."
}, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "The remainder of the Figure 4 annotation example shows the two tasks: Task 1 asks whether any information in the summary is not present in the document and whether the summary has other issues (incomplete sentences, formatting, etc.), and Task 2 asks the annotator to label each edited version of the summary (e.g., \"training\" data changed to \"testing\" data, or \"maximized\" weakened to \"only slightly improved\") as Inconsistent, Consistent, or Borderline."
}, { "figure_ref": [], "heading": "A Model Access Detail", "publication_ref": [ "b43", "b40", "b6", "b42" ], "table_ref": [], "text": "In Section 3, we experiment with a wide range of models. For each model, we specify its model card, and how it was accessed.\nNon-LLM models. 
The three specialized models -SummaC 5 , DAE 6 , and QAFactEval 7 -were implemented through their online public repositories, and run locally on a multi-GPU machine (with 2 V-100 GPUs).\nOpen-source Models. We experimented with five open-source LLM models: LLaMA-13b (Touvron et al., 2023), Alpaca-13b (Taori et al., 2023), Dolly-V2-12b (databricks/dolly-v2-12b), Vicuna-13b (Chiang et al., 2023), and MosaicML's MPT-7b-chat (Team, 2023). All models were accessed through the public, online demonstration of LMSys.org 8 . Model responses were collected between April 15th, 2023, and May 15th, 2023.\nGoogle Models. We experiment with two Google models, Bard (Thoppilan et al., 2022) and PaLM2-Bison.\nOpenAI Models. We also include GPT3.5-turbo (gpt-3.5-turbo) and GPT-4. All OpenAI models were accessed through OpenAI's official API 12 ."
}, { "figure_ref": [], "heading": "B Explanation Annotation Guidelines", "publication_ref": [], "table_ref": [], "text": "We hired two professional annotators to complete the annotation of model-generated explanations on the FactCC and AggreFact domains. The annotators were compensated at $20/hour. They received onboarding documentation that introduced them to the task, and provided the following definition for each type of explanation:\n• No Explanation: if the model did not provide any explanation (for example, just saying: \"The summary is inconsistent\"),\n• Entirely Correct: if the explanation correctly identifies and explains one or more factual inconsistencies in the summary,\n• Partially Correct: if the explanation provided contains several elements and at least one of them correctly identifies and explains a factual inconsistency in the summary,\n• Unrelated: if the explanation given does not directly relate to a factual inconsistency between the summary and the document,\n• Incorrect: if the explanation given does not correctly identify a factual inconsistency in the summary, for example, making a logical error." } ]
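The appendices above describe how each model was accessed and how its explanations were judged; the underlying evaluation task itself is a binary consistency classification scored with balanced accuracy. The snippet below is a minimal, illustrative sketch of that loop; the prompt wording, the `ask_llm` placeholder, and the sample field names are assumptions rather than the paper's exact implementation.

```python
# Sketch of scoring a binary consistency classifier with balanced accuracy.
# `ask_llm` stands in for any function that sends a prompt to a model and returns text.
from typing import Callable, Dict, List
from sklearn.metrics import balanced_accuracy_score


def build_prompt(document: str, summary: str) -> str:
    # Hypothetical zero-shot wording; the exact prompts used in the paper differ.
    return (
        "Decide if the summary is factually consistent with the document.\n"
        f"Document: {document}\n"
        f"Summary: {summary}\n"
        "Answer with a single word: consistent or inconsistent."
    )


def parse_label(response: str) -> int:
    # 1 = consistent, 0 = inconsistent; default to inconsistent if the answer is unclear.
    return 1 if "inconsistent" not in response.lower() else 0


def evaluate(samples: List[Dict], ask_llm: Callable[[str], str]) -> float:
    gold, pred = [], []
    for sample in samples:
        pred.append(parse_label(ask_llm(build_prompt(sample["document"], sample["summary"]))))
        gold.append(1 if sample["label"] == "consistent" else 0)
    # Balanced accuracy averages recall over the two classes, so a majority-class
    # guesser scores around 50% even on imbalanced benchmarks.
    return balanced_accuracy_score(gold, pred)
```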
With the recent appearance of LLMs in practical settings, having methods that can effectively detect factual inconsistencies is crucial to reduce the propagation of misinformation and improve trust in model outputs. When testing on existing factual consistency benchmarks, we find that a few large language models (LLMs) perform competitively on classification benchmarks for factual inconsistency detection compared to traditional non-LLM methods. However, a closer analysis reveals that most LLMs fail on more complex formulations of the task and exposes issues with existing evaluation benchmarks, affecting evaluation precision. To address this, we propose a new protocol for inconsistency detection benchmark creation and implement it in a 10-domain benchmark called SUMMEDITS. This new benchmark is 20 times more cost-effective per sample than previous benchmarks and highly reproducible, as we estimate inter-annotator agreement at about 0.9. Most LLMs struggle on SUMMEDITS, with performance close to random chance. The best-performing model, GPT-4, is still 8% below estimated human performance, highlighting the gaps in LLMs' ability to reason about facts and detect inconsistencies when they occur.
LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond
[ { "figure_caption": "Figure 1 :1Figure1: SUMMEDITS is a benchmark to evaluate the factual reasoning abilities of LLMs, measuring if models detect factual inconsistencies when they occur in summaries. Capable detection models can help build more reliable NLG systems.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: SUMMEDITS protocol diagram, a three-step protocol to create summarization ID benchmarks. See Table7for example samples produced by the protocol.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Two-column annotation interface used to annotate samples in the SUMMEDITS benchmark. Participants could read the document on the left-hand column. Once they completed Task 1 in the right-hand column, the second annotation task became visible.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Error TypeModel (↓)POSDSES NSentNSPSDAE96.012.044.028.052.044.0SummaC96.0 100.0 100.0 100.0 100.080.0QAFactEval96.084.092.096.096.084.0LLaMa-13B88.810.413.614.412.812.8Alpaca-13B80.030.420.036.025.628.0Dolly-v2-12B93.63.211.210.47.25.6MPT-7B72.036.041.652.838.440.0Vicuna-13B68.859.263.274.465.648.8Cohere-CMD-XL85.132.031.536.326.117.1Claude v1.371.782.480.389.189.978.1Bard80.068.869.377.383.759.2Palm296.547.745.365.152.038.9Ada00158.736.540.345.939.236.3Bab00170.133.929.641.630.734.7Cur00188.012.017.345.116.512.3Dav00188.021.928.550.427.513.1Dav00280.366.461.374.971.555.5Dav00393.166.158.471.269.939.7GPT3.5-turbo87.282.463.587.589.166.7GPT486.174.977.381.684.374.1LLM Avg.81.64 44.95 44.25 56.10 48.81 38.87", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "focused on summaries generated by SOTA models (i.e., models based on pre-trained Transformers), as analysis Precision (P) and Recall (R) scores of error detection with fine-grained prompts for individual error types. Experiments run in Zero-and Few-shot settings for each of the error types: Date Swap (DS), Entity Swap (ES), Negated Sentences (NSent), Number Swap (NS), Pronoun Swap (PS).", "figure_data": "Zero-ShotFew-ShotDSESNSentNSPSDSESNSentNSPSModel (↓)PRPRPRPRPRPRPRPRPRPRLLaMa-13B12.012.018.8 12.0 29.440.020.820.012.58.0----------Alpaca-13B20.532.012.9 16.0 36.660.020.936.023.136.0----------Dolly-v2-12B26.716.018.8 12.0 14.316.022.28.033.312.0----------MPT-7B-Chat19.152.019.7 60.0 23.668.018.152.018.848.0----------Vicuna-13B20.8 100.0 19.1 84.0 23.4 100.0 21.496.018.380.0----------Cohere-CMD-XL 22.1 AggreFact DialSummEvalModel Name%BAcc.%BAcc. Corr.DAE76.056.20.44SummaC71.662.70.35QAFactEval73.964.40.59Cohere-cmd-XL63.156.60.36Claude V1.350.656.80.30Bard62.759.50.26PaLM2-Bison57.055.60.57Dav00153.352.90.11Dav00254.359.20.49Vicuna-13b60.358.60.36Dav00364.860.90.51GPT3.5-turbo70.262.00.56GPT-473.668.40.58", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Performance of models on the AggreFact, Di-alSummEval consistency benchmarks reported in balanced accuracy (%Bacc.) and correlation (corr.).", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Percent of summaries classified as consistent in DialSummEval, bucketed by average Likert consistency score. 
Models are more uncertain in mid-range borderline buckets ([2.0, 4.0]).", "figure_data": "", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Statistics of the ten domains included in the SUMMEDITS benchmark, including the number of samples (N), the percentage of consistent summaries (%Balance), and the inter-annotator agreement (IAA).", "figure_data": "N%Balance IAANews81939.2%0.91Podcast50032.6%0.91Billsum85342.3%0.90Samsum66436.4%0.90Shakespeare 81446.4%0.96SciTLDR46631.1%0.93QMSum43142.5%0.92ECTSum66838.0%0.96Sales Email61329.2%0.87Sales Call52033.3%0.93Overall6,34837.10%0.92", "figure_id": "tab_6", "figure_label": "8", "figure_type": "table" } ]
Philippe Laban; Wojciech Kryściński; Divyansh Agarwal; Alexander R Fabbri; Caiming Xiong; Shafiq Joty; Chien-Sheng Wu
[ { "authors": "Negar Arabzadeh; Ali Ahmadvand; Julia Kiseleva; Yang Liu; Ahmed Hassan Awadallah; Ming Zhong; Milad Shokouhi", "journal": "", "ref_id": "b0", "title": "Preme: Preference-based meeting exploration through an interactive questionnaire", "year": "2022" }, { "authors": "Yuntao Bai; Saurav Kadavath; Sandipan Kundu; Amanda Askell; Jackson Kernion; Andy Jones; Anna Chen; Anna Goldie; Azalia Mirhoseini; Cameron Mckinnon", "journal": "", "ref_id": "b1", "title": "Constitutional ai: Harmlessness from ai feedback", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Isabel Cachola; Kyle Lo; Arman Cohan; Daniel S Weld", "journal": "", "ref_id": "b3", "title": "Tldr: Extreme summarization of scientific documents", "year": "2020" }, { "authors": "Shuyang Cao; Lu Wang", "journal": "", "ref_id": "b4", "title": "Cliff: Contrastive learning for improving faithfulness and factuality in abstractive summarization", "year": "2021" }, { "authors": "Ilias Chalkidis; Manos Fergadiotis; Prodromos Malakasiotis; Nikolaos Aletras; Ion Androutsopoulos", "journal": "", "ref_id": "b5", "title": "Legal-bert: The muppets straight out of law school", "year": "2020" }, { "authors": "Wei-Lin Chiang; Zhuohan Li; Zi Lin; Ying Sheng; Zhanghao Wu; Hao Zhang; Lianmin Zheng; Siyuan Zhuang; Yonghao Zhuang; Joseph E Gonzalez", "journal": "", "ref_id": "b6", "title": "Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2023" }, { "authors": "Ann Clifton; Aasish Pappu; Sravana Reddy; Yongze Yu; Jussi Karlgren; Ben Carterette; Rosie Jones", "journal": "", "ref_id": "b7", "title": "The spotify podcast dataset", "year": "2020" }, { "authors": "Wojciech Alexander R Fabbri; Bryan Kryściński; Caiming Mc-Cann; Richard Xiong; Dragomir Socher; Radev", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b8", "title": "Summeval: Re-evaluating summarization evaluation", "year": "2021" }, { "authors": "Alexander Richard Fabbri; Chien-Sheng Wu; Wenhao Liu; Caiming Xiong", "journal": "", "ref_id": "b9", "title": "Qafacteval: Improved qa-based factual consistency evaluation for summarization", "year": "2022" }, { "authors": "Tobias Falke; Leonardo Fr Ribeiro; Prasetya Ajie Utama; Ido Dagan; Iryna Gurevych", "journal": "", "ref_id": "b10", "title": "Ranking generated summaries by correctness: An interesting but challenging application for natural language inference", "year": "2019" }, { "authors": "Jinlan Fu; See-Kiong Ng; Zhengbao Jiang; Pengfei Liu", "journal": "", "ref_id": "b11", "title": "Gptscore: Evaluate as you desire", "year": "2023" }, { "authors": "Mingqi Gao; Xiaojun Wan", "journal": "", "ref_id": "b12", "title": "Dialsummeval: Revisiting summarization evaluation for dialogues", "year": "2022" }, { "authors": "Bogdan Gliwa; Iwona Mochol; Maciej Biesek; Aleksander Wawer", "journal": "", "ref_id": "b13", "title": "Samsum corpus: A humanannotated dialogue dataset for abstractive summarization", "year": "2019" }, { "authors": "Tanya Goyal; Greg Durrett", "journal": "", "ref_id": "b14", "title": "Evaluating factuality in generation with dependency-level entailment", "year": "2020" }, { "authors": "Tanya Goyal; Junyi ; Jessy Li; Greg Durrett", "journal": "", "ref_id": 
"b15", "title": "News summarization and evaluation in the era of gpt-3", "year": "2022" }, { "authors": "Dandan Huang; Leyang Cui; Sen Yang; Guangsheng Bao; Kun Wang; Jun Xie; Yue Zhang", "journal": "", "ref_id": "b16", "title": "What have we achieved on text summarization?", "year": "2020" }, { "authors": "Raghav Jain; Anubhav Jangra; Sriparna Saha; Adam Jatowt", "journal": "", "ref_id": "b17", "title": "A survey on medical document summarization", "year": "2022" }, { "authors": "Anastassia Kornilova; Vladimir Eidelman", "journal": "", "ref_id": "b18", "title": "Billsum: A corpus for automatic summarization of us legislation", "year": "2019" }, { "authors": "Wojciech Kryściński; Nitish Shirish Keskar; Bryan Mc-Cann; Caiming Xiong; Richard Socher", "journal": "", "ref_id": "b19", "title": "Neural text summarization: A critical evaluation", "year": "2019" }, { "authors": "Wojciech Kryściński; Bryan Mccann; Caiming Xiong; Richard Socher", "journal": "", "ref_id": "b20", "title": "Evaluating the factual consistency of abstractive text summarization", "year": "2020" }, { "authors": "Philippe Laban; Tobias Schnabel; Paul N Bennett; Marti A Hearst ; A", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b21", "title": "Summac: Re-visiting nlibased models for inconsistency detection in summarization", "year": "2022" }, { "authors": "Philippe Laban; Chien-Sheng Wu; Wenhao Liu; Caiming Xiong", "journal": "", "ref_id": "b22", "title": "Near-negative distinction: Giving a second life to human evaluation datasets", "year": "2022" }, { "authors": "Philippe Laban; Chien-Sheng Wu; Lidiya Murakhovs' Ka; Caiming Xiang 'anthony' Chen; Xiong", "journal": "", "ref_id": "b23", "title": "Designing and evaluating interfaces that highlight news coverage diversity using discord questions", "year": "2023" }, { "authors": "Andrew K Lampinen; Ishita Dasgupta; C Y Stephanie; Kory Chan; Michael Henry Matthewson; Antonia Tessler; James L Creswell; Jane X Mcclelland; Felix Wang; Hill", "journal": "", "ref_id": "b24", "title": "Can language models learn from explanations in context?", "year": "2022" }, { "authors": "Yang Liu; Dan Iter; Yichong Xu; Shuohang Wang; Ruochen Xu; Chenguang Zhu", "journal": "", "ref_id": "b25", "title": "Gpteval: Nlg evaluation using gpt-4 with better human alignment", "year": "2023" }, { "authors": "Zheheng Luo; Qianqian Xie; Sophia Ananiadou", "journal": "", "ref_id": "b26", "title": "Chatgpt as a factual inconsistency evaluator for text summarization", "year": "2023" }, { "authors": "Amalia Luque; Alejandro Carrasco; Alejandro Martín; Ana De; Las Heras", "journal": "Pattern Recognition", "ref_id": "b27", "title": "The impact of class imbalance in classification performance metrics based on the binary confusion matrix", "year": "2019" }, { "authors": "Joshua Maynez; Shashi Narayan; Bernd Bohnet; Ryan Mcdonald", "journal": "", "ref_id": "b28", "title": "On faithfulness and factuality in abstractive summarization", "year": "2020" }, { "authors": "Tom Mccoy; Ellie Pavlick; Tal Linzen", "journal": "", "ref_id": "b29", "title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference", "year": "2019" }, { "authors": "Rajdeep Mukherjee; Abhinav Bohra; Akash Banerjee; Soumya Sharma; Manjunath Hegde; Afreen Shaikh; Shivani Shrivastava; Koustuv Dasgupta; Niloy Ganguly; Saptarshi Ghosh", "journal": "", "ref_id": "b30", "title": "Ectsum: A new benchmark dataset for bullet point summarization of long earnings call transcripts", "year": 
"2022" }, { "authors": "Sharan Narang; Aakanksha Chowdhery", "journal": "", "ref_id": "b31", "title": "Pathways language model (palm): Scaling to 540 billion parameters for breakthrough performance", "year": "2022" }, { "authors": "Shashi Narayan; Shay B Cohen; Mirella Lapata", "journal": "", "ref_id": "b32", "title": "Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization", "year": "2018" }, { "authors": " Openai", "journal": "", "ref_id": "b33", "title": "Gpt-4 technical report", "year": "2023" }, { "authors": "Artidoro Pagnoni; Vidhisha Balachandran; Yulia Tsvetkov", "journal": "", "ref_id": "b34", "title": "Understanding factuality in abstractive summarization with frank: A benchmark for factuality metrics", "year": "2021" }, { "authors": "Alec Radford; Jeff Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b35", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Thomas Scialom; Paul-Alexis Dray; Sylvain Lamprier; Benjamin Piwowarski; Jacopo Staiano; Alex Wang; Patrick Gallinari", "journal": "", "ref_id": "b36", "title": "Questeval: Summarization asks for fact-based evaluation", "year": "2021" }, { "authors": "Zejiang Shen; Kyle Lo; Lauren Yu; Nathan Dahlberg; Margo Schlanger; Doug Downey", "journal": "", "ref_id": "b37", "title": "Multilexsum: Real-world summaries of civil rights lawsuits at multiple granularities", "year": "2022" }, { "authors": "Liyan Tang; Tanya Goyal; Alexander R Fabbri; Philippe Laban; Jiacheng Xu; Semih Yahvuz; Wojciech Kryściński; Justin F Rousseau; Greg Durrett", "journal": "", "ref_id": "b38", "title": "Understanding factual errors in summarization: Errors, summarizers, datasets, error detectors", "year": "2022" }, { "authors": "Liyan Tang; Zhaoyi Sun; Betina Idnay; Jordan G Nestor; Ali Soroush; Pierre A Elias; Ziyang Xu; Ying Ding; Greg Durrett; Justin Rousseau", "journal": "medRxiv", "ref_id": "b39", "title": "Evaluating large language models on medical evidence summarization", "year": "2023" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "Stanford Center for Research on Foundation Models", "ref_id": "b40", "title": "Alpaca: A strong, replicable instruction-following model", "year": "2023" }, { "authors": "Nlp Mosaicml; Team", "journal": "Accessed", "ref_id": "b41", "title": "Introducing mpt-7b: A new standard for open-source, ly usable llms", "year": "2023" }, { "authors": "Romal Thoppilan; Daniel De Freitas; Jamie Hall; Noam Shazeer; Apoorv Kulshreshtha; Heng-Tze; Alicia Cheng; Taylor Jin; Leslie Bos; Yu Baker; Du", "journal": "", "ref_id": "b42", "title": "Lamda: Language models for dialog applications", "year": "2022" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b43", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Alex Wang; Kyunghyun Cho; Mike Lewis", "journal": "", "ref_id": "b44", "title": "Asking and answering questions to evaluate the factual consistency of summaries", "year": "2020" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b45", "title": "Chain of thought prompting elicits reasoning in large language models", "year": 
"2022" }, { "authors": "Jules White; Quchen Fu; Sam Hays; Michael Sandborn; Carlos Olea; Henry Gilbert; Ashraf Elnashar; Jesse Spencer-Smith; Douglas C Schmidt", "journal": "", "ref_id": "b46", "title": "A prompt pattern catalog to enhance prompt engineering with chatgpt", "year": "2023" }, { "authors": "Xi Ye; Greg Durrett", "journal": "", "ref_id": "b47", "title": "The unreliability of explanations in few-shot in-context learning", "year": "2022" }, { "authors": "Ming Zhong; Da Yin; Tao Yu; Ahmad Zaidi; Mutethia Mutuma; Rahul Jha; Ahmed Hassan; Asli Celikyilmaz; Yang Liu; Xipeng Qiu", "journal": "", "ref_id": "b48", "title": "Qmsum: A new benchmark for query-based multi-domain meeting summarization", "year": "2021" } ]
[]
10.18653/v1/2021.emnlp-main.532
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b7", "b13", "b19", "b25", "b12", "b25" ], "table_ref": [], "text": "Prior work (Fabbri et al., 2022;Goyal and Durrett, 2020;Laban et al., 2022) formulates the problem of factual inconsistency detection as a binary classification task, which predicts whether a summary is consistent with the source document. However, these approaches have two drawbacks. First, they cannot predict the types of factual errors made by a summary and thus provide limited insights into the weakness of summarization systems. Although recent studies (Pagnoni et al., 2021;Tang et al., 2022;Goyal and Durrett, 2021a) have manually inspected the types of factual errors in summaries, there is no existing work on automatic detection of fine-grained factual inconsistency.\nSecond, existing models typically cannot explain which portions of the document are used to detect the inconsistency in the input summary. In order to verify and correct an inconsistent summary, humans still need to read the entire source document to find the supporting evidence. Kryscinski et al. (2020) introduce an auxiliary task to extract the supporting spans in the document for inconsistency detection, which requires expensive ground-truth labels of supporting spans.\nTo address the first limitation, we propose the fine-grained factual inconsistency detection task. The goal is to predict the types of factual inconsistency in a summary. We show examples of different factual error types in Table 1.\nTo solve the second challenge, we further introduce an interpretable fine-grained inconsistency detection model (FINEGRAINFACT) that does not require any label of supporting text spans, inspired by how humans verify the consistency of a summary. When humans annotate the factual error types of a summary, they first identify facts in the document that are relevant to the summary and then determine the factual error types in the summary. Following this intuition, our model first extracts facts from the document and summary using Semantic Role Labeling (SRL). We consider each extracted semantic frame as a fact since a semantic frame captures a predicate and its associated arguments to answer the question of \"who did what to whom\". After fact extraction, a document fact attention module enables the classifier to focus on the facts in the document that are most related to the facts in the summary. By highlighting the facts in the document with the highest attention scores, our model can explain which facts in the document are most pertinent to inconsistency detection.\nExperiment results show that our model outperforms strong baselines in detecting factual error types. 
Moreover, the document facts highlighted by our model can provide evidence to support or refute the input summary, which can potentially help users to verify the predicted error types and correct an inconsistent summary.\nIntrinsic noun phrase error: Errors that misrepresent object(s), subject(s), or prepositional object(s) from the source article. Example: David was using FaceTime with Marcy Smith and saw the flames.\nExtrinsic predicate error: Errors that add new main verb(s) or adverb(s) that cannot be inferred from the source article. Example: David was eating and saw the flames.\nIntrinsic predicate error: Errors that misrepresent main verb(s) or adverb(s) from the source article. Example: David was engulfed and saw the flames.\nTable 1: A text document and example summaries with different factual error types according to the typology defined by Tang et al. (2022). The errors in the sample summaries are in red color and italicized. We bold the text spans from the document that refute the sample summaries."
}, { "figure_ref": [], "heading": "Task Definition", "publication_ref": [ "b25" ], "table_ref": [], "text": "The goal of the fine-grained inconsistency detection task is to predict the types of factual errors in a summary. We frame it as a multi-label classification problem as follows. Given a pre-defined set of $l$ factual error types $\{e_1, \ldots, e_l\}$, a document $d$, and a summary $s$, the goal is to predict a binary vector $y \in \{0, 1\}^l$ where each element $y_i$ indicates the presence of one type of factual error.\nWe follow the typology of factual error types proposed by (Tang et al., 2022), which includes intrinsic noun phrase error, extrinsic noun phrase error, intrinsic predicate error, and extrinsic predicate error. The definitions and examples of these error types are presented in Table 1."
}, { "figure_ref": [], "heading": "Our FINEGRAINFACT Model", "publication_ref": [ "b24", "b10" ], "table_ref": [], "text": "The model architecture is illustrated in Figure 1.\nFact extraction. To represent facts from the input document and summary, we extract semantic frames with a BERT-based semantic role labeling (SRL) tool (Shi and Lin, 2019). A semantic frame contains a predicate and its arguments, e.g., [ARG0 David] [V saw] [ARG1 the flame]. We use $f^{doc}_i$ and $f^{sum}_i$ to denote the i-th fact in the document and summary, respectively.\nFact encoder. We first represent tokens in the concatenated sequence of the input document and summary by fusing hidden states across all layers in Adapter-BERT (Houlsby et al., 2019) with max pooling. To represent facts, we apply attentive pooling to all tokens in the semantic frame, under the assumption that different tokens in a fact should contribute differently to the fact representation. Given the token representations $t_j$, we calculate the attention scores $\alpha_j = \exp(\phi(t_j)) / \sum_{j'=1}^{m} \exp(\phi(t_{j'}))$ and represent each document or summary fact as $f_i = \sum_{j=1}^{m} \alpha_j \phi(t_j)$, where $m$ is the number of tokens in the fact and $\phi$ is a two-layer fully-connected network.\nDocument Fact Attention module. This module aims to retrieve the facts in the document that are related to the facts in the summary. We first concatenate the document fact representations into a document fact matrix $F^{doc}$. We attend each summary fact $f^{sum}_i$ to the document fact matrix to compute a document context vector $c_i = \mathrm{MultiHeadAtt}(f^{sum}_i, F^{doc}, F^{doc})$, where $f^{sum}_i$ acts as the query and $F^{doc}$ is used as the key and value. 
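To make the fact encoder and the document fact attention module concrete, the sketch below shows one way the two steps could be realized in PyTorch. The hidden size, number of heads, the scalar scoring head inside the attentive pooling, and all names are illustrative assumptions, not the authors' implementation; one convenient side effect of using nn.MultiheadAttention is that it returns the per-query attention weights, from which document-fact importance scores can later be read off.

```python
# Minimal sketch: attentive pooling of a semantic frame into a fact vector, and
# multi-head attention from summary facts (queries) to document facts (keys/values).
import torch
import torch.nn as nn


class FactEncoder(nn.Module):
    """Attentive pooling over the token vectors of one semantic frame."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        # phi: the two-layer fully-connected network from the text.
        self.phi = nn.Sequential(
            nn.Linear(hidden_size, hidden_size), nn.Tanh(),
            nn.Linear(hidden_size, hidden_size),
        )
        # The text softmaxes over phi(t_j); a scalar scoring head is one concrete reading.
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (m, hidden_size) for the m tokens of one fact.
        projected = self.phi(token_states)                   # (m, d)
        alpha = torch.softmax(self.score(projected), dim=0)  # (m, 1)
        return (alpha * projected).sum(dim=0)                # (d,) fact vector f_i


class DocumentFactAttention(nn.Module):
    """Attend each summary fact to the document fact matrix."""

    def __init__(self, hidden_size: int = 768, num_heads: int = 16):
        super().__init__()
        self.mha = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)

    def forward(self, summary_facts: torch.Tensor, document_facts: torch.Tensor):
        # summary_facts: (n, d), document_facts: (m, d); add a batch dimension of 1.
        q, kv = summary_facts.unsqueeze(0), document_facts.unsqueeze(0)
        context, attn = self.mha(q, kv, kv)  # context: (1, n, d), attn: (1, n, m)
        return context.squeeze(0), attn.squeeze(0)


# Tiny smoke test with random fact vectors.
encoder, attention = FactEncoder(), DocumentFactAttention()
doc_facts = torch.stack([encoder(torch.randn(5, 768)) for _ in range(8)])  # 8 document facts
sum_facts = torch.stack([encoder(torch.randn(4, 768)) for _ in range(3)])  # 3 summary facts
context_vectors, attn_weights = attention(sum_facts, doc_facts)
# Summing attention over the summary facts (rows) gives one score per document fact.
importance = attn_weights.sum(dim=0)  # shape (8,)
```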
The document context vector $c_i$ captures the information of the facts in the document that are related to the summary fact $f^{sum}_i$.\nFor each document fact, we sum up its attention scores received from all summary facts as its importance score. Concretely, we use $\alpha_{j \to i}$ to denote the sum of attention scores injected from the $j$-th summary fact to the $i$-th document fact over all attention heads. The importance score of a document fact $f^{doc}_i$ is defined as $\sum_{j=1}^{n} \alpha_{j \to i}$, where $n$ is the total number of facts in the summary. Then, we return the top $k$ document facts with the highest importance scores as the document fact highlights, where $k$ is a hyper-parameter.\nClassification module. A linear classifier predicts the probability of each factual error type based on the concatenation of the representations of summary facts and document context vectors. Specifically, we first use mean pooling to fuse all summary fact representation vectors and all document context vectors into two fixed-size vectors: $\bar{f}^{sum} = \frac{1}{n}\sum_{i=1}^{n} f^{sum}_i$ and $\bar{c} = \frac{1}{n}\sum_{i=1}^{n} c_i$. These two vectors contain the information of all facts in the summary and the information of all document facts that are related to the summary. Next, we feed the concatenation of $\bar{f}^{sum}$ and $\bar{c}$ to a linear classification layer to predict the probability of each factual error type: $p(y) = \sigma(W[\bar{f}^{sum}; \bar{c}] + b)$, where $W \in \mathbb{R}^{d \times l}$, $b \in \mathbb{R}$, $d$ is the hidden size of Adapter-BERT, and $\sigma$ denotes the sigmoid function.\nTraining objective. We train our model with a weighted binary cross-entropy (BCE) loss; the technical details are in Appendix A."
}, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Setup", "publication_ref": [ "b25", "b15", "b19", "b0", "b25", "b12", "b13", "b22", "b2", "b10" ], "table_ref": [], "text": "Dataset. We conduct experiments on the Aggrefact-Unified dataset (Tang et al., 2022), which collects samples and unifies factual error types from four manually annotated datasets (Maynez et al., 2020;Pagnoni et al., 2021;Goyal and Durrett, 2021b;Cao and Wang, 2021). We remove the duplicated samples (i.e., duplicated document-summary pairs) in the Aggrefact-Unified dataset (Tang et al., 2022) and obtain 4,489 samples. We randomly split data samples into train/validation/test sets of size 3,689/300/500. The statistics of the error type labels are in Appendix B.1.\nEvaluation metrics. We adopt the macro-averaged F1 score and balanced accuracy (BACC) as the evaluation metrics. BACC is an extension of accuracy for class-imbalanced datasets and is widely adopted by previous literature on inconsistency detection (Kryscinski et al., 2020;Laban et al., 2022). All experiment results are averaged across four random runs.\nBaselines. We adapt the following baselines2 for the new task. FACTCC-MULTI: FactCC (Kryscinski et al., 2020) is originally trained on synthetic data for binary inconsistency detection. We replace the binary classifier with a multi-label classifier and finetune the model on Aggrefact. FACTGRAPH-MULTI: FactGraph (Ribeiro et al., 2022) parses each sentence into an AMR graph and uses a graph neural network to encode the document. We replace the binary classifier with a multi-label classifier. We also fine-tune the BERT (Devlin et al., 2019) and ADAPTERBERT (Houlsby et al., 2019)."
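Since the setup above reports macro-averaged F1 and balanced accuracy for a multi-label prediction task, a short sketch of one plausible way to compute these metrics follows; the per-type aggregation of BACC, the 0.5 threshold (taken from Appendix B.3), and the synthetic example data are assumptions rather than the authors' evaluation script.

```python
# Sketch: macro-F1 and per-type balanced accuracy for multi-label error-type predictions.
import numpy as np
from sklearn.metrics import balanced_accuracy_score, f1_score


def score(y_true: np.ndarray, y_prob: np.ndarray, threshold: float = 0.5):
    """y_true, y_prob: arrays of shape (num_samples, num_error_types)."""
    y_pred = (y_prob >= threshold).astype(int)
    macro_f1 = f1_score(y_true, y_pred, average="macro", zero_division=0)
    bacc = np.mean([
        balanced_accuracy_score(y_true[:, j], y_pred[:, j])
        for j in range(y_true.shape[1])
    ])
    return macro_f1, bacc


# Example with 4 error types (intrinsic/extrinsic noun phrase and predicate errors).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, 4))
y_prob = rng.random(size=(100, 4))
print(score(y_true, y_prob))
```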
}, { "figure_ref": [], "heading": "Performance of Error Type Detection", "publication_ref": [ "b25" ], "table_ref": [ "tab_1" ], "text": "Following (Tang et al., 2022), we detect error types in summaries from different models: SOTA includes the pre-trained language models published in or after 2020. XFORMER contains the Transformer-based models published before 2020. OLD includes earlier RNN-or CNN-based models. REF represents reference summaries. From Table 2, we observe that: (1) Representing facts with semantic frames improves factual error type prediction.. We observe that in most of the cases, our model outperforms other baselines that do not use semantic frames to represent facts. (2) The performance of our model drops after we remove the document fact attention module. The results show that our document fact attention module not only improves the interpretability, but also boost the performance of factual error type detection.\n(3) All detection models perform better in summaries generated by OLD systems. It suggests that the factual errors made by OLD systems are relatively easier to recognize than the errors made by more advanced systems." }, { "figure_ref": [], "heading": "Evaluation of Document Fact Highlights", "publication_ref": [ "b26" ], "table_ref": [], "text": "Since ground-truth document fact highlights are not available, we apply a fact verification dataset to evaluate whether the predicted document fact highlights provide evidence for inconsistency detection. Specifically, we adopt the FEVER 2.0 dataset (Thorne et al., 2018), which consists of claims written by humans and evidence sentences from Wikipedia that can support or refute the claims. We first extract facts from the evidence sentences via SRL and use them as the groundtruth document fact highlights. We then consider each claim as the input summary and the section of a Wikipedia article that contains the evidence sentences as the input document.\nWe devise the following method to compute document fact highlights for the baseline models. Since all baselines utilize the CLS token to predict the factual error types, we use the attention scores received from the CLS token to compute an importance score for each document fact. We then return the facts that obtain the highest importance scores as the document fact highlights for each baseline. More details are in Appendix B.2. fact highlights predicted by different models. We observe that our model obtains substantially higher recall scores, which demonstrates that our model provides more evidence to support the inconsistency prediction. Thus, compared with the baselines, our model allows users to verify the predicted error types and correct inconsistent summaries." }, { "figure_ref": [], "heading": "Case Study", "publication_ref": [ "b25" ], "table_ref": [ "tab_4", "tab_6" ], "text": "Table 4 shows a sample summary generated by an OLD model with an intrinsic noun phrase error, where the \"a school in northern ireland\" in the summary contradicts with \"Northern Ireland charity\" in the document. Our model accurately predicts the error type with evidence in the form of document fact highlight, which helps users verify the error and correct the summary.\nIn Table 5, we present an error analysis on a sample summary generated by a SOTA model. According to the source text, the word \"West\" in the summary is incorrect and should be removed since the statement in the summary is made by \"Sussex PPC\" instead of \"West Sussex PCC\". In order to (Tang et al., 2022). 
The error in the sample summary is in red color and italicized, and we bold the text spans from the document that refute the sample summary.\nTo detect this error, a model needs to understand that the expressions \"Sussex PCC Katy Bourne\", \"Ms Bourne\", and \"she\" in the document refer to the same entity. This sample illustrates that the errors generated by a SOTA model are more subtle and more difficult to detect. Our model fails to predict the correct error type for this sample. Since the top five document fact highlights returned by our model do not contain the entity \"Sussex PCC Katy Bourne\", we suspect that our model fails to recognize the co-referential relations among \"Sussex PCC Katy Bourne\", \"Ms Bourne\", and \"she\" for this sample. Thus, improving the co-reference resolution ability of fine-grained inconsistency detection models is a potential future direction."
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b3", "b23", "b4", "b13", "b12", "b22", "b7", "b18", "b28", "b5" ], "table_ref": [], "text": "Factual consistency metrics. QA-based consistency metrics (Durmus et al., 2020;Scialom et al., 2021;Fabbri et al., 2022) involve generating questions from the given document and its summary, and then comparing the corresponding answers to compute a factual consistency score. Entailment-based consistency metrics (Laban et al., 2022;Kryscinski et al., 2020;Ribeiro et al., 2022;Goyal and Durrett, 2020) utilize a binary classifier to determine whether the contents in a system summary are entailed by the source article. In contrast, our model is a multi-label classifier that detects the types of factual errors in a summary. Moreover, our model leverages SRL to encode the facts in the input document and summary, enabling users to interpret which facts in the document are most relevant to the inconsistency detection.\nFact-based evaluation methods. To evaluate the informativeness of a summary, the Pyramid human evaluation protocol (Nenkova and Passonneau, 2004) asks annotators to extract semantic content units (SCUs) from the system summary and reference summary, respectively, and then compute their overlap. Each SCU contains a single fact. Xu et al. (2020) approximate the Pyramid method by using SRL to extract facts. They then compute the embedding similarity between the facts extracted from the system summary and those from the reference summary. Fischer et al. (2022) also use SRL to extract facts, but they measure the similarity between the facts extracted from the system summary and those from the source document to compute a faithfulness score. On the other hand, our model integrates SRL with a multi-label classifier to predict the factual error types of a summary."
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b12", "b1" ], "table_ref": [ "tab_4" ], "text": "In this paper, we present a new task of fine-grained inconsistency detection, which aims to predict the types of factual inconsistencies in a summary. Compared to the previous binary inconsistency detection task, our new task can provide more insights into the weakness of summarization systems. Moreover, we propose an interpretable fine-grained inconsistency detection model, which represents facts from documents and summaries with semantic frames and highlights highly relevant document facts. Experiments on the Aggrefact-Unified dataset show that our model can better identify factual error types than strong baselines. 
Furthermore, results on the FEVER 2.0 dataset validate that the highlighted document facts provide evidence to support the inconsistency prediction.\nAlthough our model allows users to interpret which parts of the input document are most relevant to the model's prediction, it does not allow users to interpret which text spans of the input summary contain errors. We use the summary in Table 4 as an example. If the model could indicate that the text span \"a school in northern ireland\" contains errors, it would be easier for the user to correct the summary. Kryscinski et al. (2020) introduced an auxiliary task to extract erroneous text spans in summaries, but their method requires expensive span-level ground-truth labels. Locating incorrect text spans in summaries without requiring span-level training labels remains unexplored. Another limitation of our model is that it does not allow users to interpret the uncertainty of the prediction results (Deutsch et al., 2021)."
}, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "The factual error types and document fact highlights predicted by our model can help users correct factually inconsistent summaries. Since factually inconsistent summaries often convey misinformation, our model can potentially help users combat misinformation. However, the factual error types predicted by our model may be incorrect. For example, it is possible that an input summary contains extrinsic noun phrase errors, but our model predicts the error type of intrinsic predicate error. Hence, users still need to be cautious when using our model to detect and correct inconsistent summaries. The Aggrefact-Unified dataset contains public news articles from CNN, DailyMail, and BBC. Hence, the data that we used does not have privacy issues."
}, { "figure_ref": [], "heading": "A Details of Training Objective", "publication_ref": [], "table_ref": [], "text": "Since some error types may have an imbalanced distribution of positive and negative samples, we apply sample weighting to the training objective. We first weigh the loss for the positive samples according to their proportion in the training set. Then we sum up the binary cross-entropy loss of each error type as the training objective. The weighted binary cross-entropy (BCE) loss of our model is formally defined as follows:\n$L_i = \beta_i y^*_i \log p(y_i) + (1 - y^*_i) \log(1 - p(y_i))$, (1)\n$L = \sum_{i=1}^{K} L_i$, (2)\nwhere $\beta_i$ is the weight for positive samples of the $i$-th error type. We set $\beta_i$ to be the ratio of the number of positive samples to the number of negative samples of the $i$-th error type in the training data."
}, { "figure_ref": [], "heading": "B Experiment Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B.1 Aggrefact-Unified Dataset", "publication_ref": [ "b16", "b17" ], "table_ref": [ "tab_7" ], "text": "This dataset contains news documents from CNN/DM (Nallapati et al., 2016) and XSum (Narayan et al., 2018). In addition to the four factual error types presented in Table 1, the Aggrefact-Unified dataset also provides the labels of intrinsic entire-sentence error, extrinsic entire-sentence error, and entire-sentence error.\nWe map intrinsic (extrinsic) entire-sentence errors to intrinsic (extrinsic) noun phrase and intrinsic (extrinsic) predicate errors. We also map the entire-sentence error to all four types of factual errors. Statistics of the factual error type labels are shown in Table 6. 
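Appendix A above defines the weighted objective only in prose and equations (1)-(2); a small illustrative sketch is given below. The leading minus sign (so the quantity is minimized), the clamping for numerical stability, and all tensor shapes are assumptions made for illustration rather than the authors' training code.

```python
# Sketch of the weighted BCE objective: beta_i weights the positive term of error type i
# and is set to the ratio of positive to negative training samples for that type.
import torch


def type_weights(train_labels: torch.Tensor) -> torch.Tensor:
    # train_labels: (num_samples, K) binary matrix of gold error-type labels.
    pos = train_labels.sum(dim=0)
    neg = train_labels.shape[0] - pos
    return pos / neg.clamp(min=1)  # beta_i = #positive / #negative, per the text


def weighted_bce(logits: torch.Tensor, targets: torch.Tensor, beta: torch.Tensor) -> torch.Tensor:
    # logits, targets: (batch, K); beta: (K,). Implements Eq. (1)-(2) summed over types.
    p = torch.sigmoid(logits).clamp(1e-7, 1 - 1e-7)
    per_type = beta * targets * torch.log(p) + (1 - targets) * torch.log(1 - p)
    return -per_type.sum(dim=1).mean()  # negate and average over the batch


# Smoke test with K = 4 error types.
labels = torch.randint(0, 2, (1000, 4)).float()
beta = type_weights(labels)
loss = weighted_bce(torch.randn(12, 4), torch.randint(0, 2, (12, 4)).float(), beta)
```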
" }, { "figure_ref": [], "heading": "B.2 Extraction of Document Fact Highlights for Baseline Models", "publication_ref": [], "table_ref": [], "text": "Given a baseline model and a sample output from the baseline model, we first extract all the facts from the input document by SRL. Then for each extracted document fact, we compute the average attention score injected from the CLS token to the tokens in the semantic frame in the last layer of the baseline model. This average attention score is treated as the importance score of the document fact. Concretely, we use $\alpha'_{\mathrm{CLS} \to i}$ to denote the total attention score injected from the CLS token to the $i$-th token of the semantic frame in the last layer of the baseline model over all attention heads. Then we compute the importance score as $\sum_{i=1}^{m} \alpha'_{\mathrm{CLS} \to i}$, where $m$ is the number of words in the fact. Finally, we return the document facts with the highest importance scores as the document fact highlights."
}, { "figure_ref": [], "heading": "B.3 Hyper-parameter Settings", "publication_ref": [], "table_ref": [], "text": "To compute F1 and BACC scores, we set the classification threshold to be 0.5. The dimension of the adapter in the Adapter-BERT model is set to 32. The number of attention heads in our document fact attention module is set to 16. We search for the optimal number of attention heads from {1, 4, 8, 16} that obtains the highest BACC score in the validation set. We train our models for 40 epochs and select the checkpoint that obtains the highest BACC score in the validation set. We set the learning rate to 1e-5. The training batch size is 12 with gradient accumulation steps of 2. The AdapterBERT, BERT, and FineGrainFact models receive the same amount of hyperparameter tuning."
}, { "figure_ref": [], "heading": "B.4 Hardware and Software Configurations", "publication_ref": [ "b27", "b24", "b6", "b20" ], "table_ref": [], "text": "We run all the experiments using a single NVIDIA V100 GPU. It takes around 1 hour and 50 minutes to train our model for 40 epochs. Our model contains 113.1M parameters in total. We only need to train 3.6M of the model parameters since most of the parameters are frozen by the Adapter-BERT model. We obtain the BERT-base-uncased checkpoint from Huggingface (Wolf et al., 2019). We adopt the implementation of the BERT-based SRL model (Shi and Lin, 2019) provided by AllenNLP (Gardner et al., 2018) to conduct semantic role labeling (Palmer et al., 2005)."
}, { "figure_ref": [], "heading": "C Results on Different Summarization Datasets and Error Types", "publication_ref": [ "b3" ], "table_ref": [ "tab_10" ], "text": "In Table 8, we separate the F1 scores obtained by our FINEGRAINFACT model according to the summarization dataset and the type of factual errors.\nIt is observed that our model has relatively low performance (< 50%) on detecting intrinsic errors (intrinsic noun phrase and intrinsic predicate errors) in the XSum dataset. We analyze the reason as follows. According to previous studies (Durmus et al., 2020), system summaries generated in the XSum dataset tend to have high abstractiveness (low textual overlap with the source document). We suspect that our FINEGRAINFACT model learns a spurious correlation that suggests an inconsistent summary with high abstractiveness contains extrinsic errors rather than intrinsic errors. A critical future direction is to address this spurious correlation of our model."
}, { "figure_ref": [], "heading": "D Generalization Ability Analysis", "publication_ref": [ "b14", "b21", "b29" ], "table_ref": [], "text": "To more robustly evaluate the generalization ability of inconsistency detection models, we further construct a challenging data split in which there are no overlapped systems and documents between the test set and the training set. We first gather all the samples that contain a summary generated by the BART model (Lewis et al., 2020) to construct the test set. We choose BART since it is a common baseline in recent summarization literature (Reddy et al., 2022;Zhong et al., 2022). After that, we randomly split the remaining data samples into training and validation sets. 
Finally, we remove the duplicated documents between the training set and the test set. This data split contains 3,839/550/100 samples for train/validation/test sets. The results of different inconsistency detection models are shown in Table 9. We observe that our FINEGRAIN-FACT model outperforms all the baselines, which demonstrates the strong generalization ability of our model. Table 9: Performance of fine-grained inconsistency detection models in the challenging data split (%)." }, { "figure_ref": [], "heading": "E Scientific Artifacts", "publication_ref": [], "table_ref": [], "text": "We list the licenses of the scientific artifacts used in this paper: AllenNLP (Apache License 2.0), Huggingface Transformers (Apache License 2.0), and FACTCC (BSD-3-Clause License). We apply the above artifacts according to their official documentation. We will release an API of our model for research purposes. Our API can be applied to detect the fine-grained factual error types in summaries written in the English language." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "We thank the anonymous reviewers for their insightful comments on our work. This research is based upon work supported by U.S. DARPA AIDA Program No. FA8750-18-2-0014, DARPA INCAS Program No. HR001121C0165, NSF under award No. 2034562, the Molecule Maker Lab Institute: an AI research institute program supported by NSF under award No. 2019897 and No. 2034562, and the AI Research Institutes program by National Science Foundation and the Institute of Education Sciences, U.S. Department of Education through Award # 2229873 -AI Institute for Transforming Education for Children with Speech and Language Processing Challenges. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Government, the National Science Foundation, the Institute of Education Sciences, or the U.S. Department of Education. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. Hou Pong Chan was supported in part by the Science and Technology Development Fund, Macau SAR (Grant Nos. FDCT/060/2022/AFJ, FDCT/0070/2022/AMJ) and the Multi-year Research Grant from the University of Macau (Grant No. MYRG2020-00054-FST)." } ]
Existing factual consistency evaluation approaches for text summarization provide binary predictions and limited insights into the weakness of summarization systems. Therefore, we propose the task of fine-grained inconsistency detection, the goal of which is to predict the fine-grained types of factual errors in a summary. Motivated by how humans inspect factual inconsistency in summaries, we propose an interpretable fine-grained inconsistency detection model, FINEGRAINFACT, which explicitly represents the facts in the documents and summaries with semantic frames extracted by semantic role labeling, and highlights the related semantic frames to predict inconsistency. The highlighted semantic frames help verify predicted error types and correct inconsistent summaries. Experiment results demonstrate that our model outperforms strong baselines and provides evidence to support or refute the summary. 1
Interpretable Automatic Fine-grained Inconsistency Detection in Text Summarization
[ { "figure_caption": "iand f sum i to denote the i-th fact in the document and summary, respectively.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Performance of fine-grained consistency detection models in summaries generated by different systems (%). \"-Doc. Fact Attention\" indicates that we remove the document fact attention module and use mean pooling to fuse all document semantic representation vectors.", "figure_data": "SOTAXFORMEROLDREFAllModelF1BACCF1BACCF1BACCF1BACCF1BACCBERT32.1562.4545.7959.7947.4865.1341.7057.0845.1463.59ADAPTERBERT33.8762.9546.0159.2146.8763.7242.4257.5745.0663.05FACTCC-MULTI34.3564.0445.2060.2847.4364.4736.5248.9044.5963.05FACTGRAPH-MULTI34.2463.6237.0356.8938.1259.7635.6652.6337.4759.61FINEGRAINFACT35.1064.0846.0259.4248.6365.4846.4461.8146.4364.31-Doc. Fact Attention 34.7763.1245.6159.3647.4364.6346.3560.6745.9663.99ModelR@3 R@4 [email protected] 46.18 53.34ADAPTERBERT36.34 46.14 53.80FACTCCMULTI41.11 50.95 58.41FACTGRAPHMULTI 42.25 52.10 60.24FINEGRAINFACT49.99 59.91 67.92", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The recall@3,4,5 scores of document fact highlights (%).", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Table 3 presents the recall scores of document Source text: Children in P6 and P7 will learn how to cope with change under the Healthy Me programme developed by Northern Ireland charity , Action Mental Health ... The charity is now hoping the programme will be rolled out in schools across Northern Ireland ... ...", "figure_data": "Summary generated by an OLD model:a school in northern ireland has launched a programmeto help children with mental health problems in northernireland .Ground-truth factual error type:Intrinsic Noun Phrase ErrorFactual error type predicted by FINEGRAINFACT:Intrinsic Noun Phrase ErrorDocument fact highlight predicted by FINEGRAIN-", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Sample outputs of our FINEGRAINFACT model in the Aggrefact-Unified dataset. The error in the sample summary is in red color and italicized.", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Source text:The move is part of national fire service reforms unveiled by Home Secretary Theresa May last week . Sussex PCC Katy Bourne said emergency services would have an increased duty to collaborate under the new bill . But West Sussex County Council ( WSCC ) said it already had an excellent model . East Sussex ' s fire authority said it would co -operate with the PCC but it believed collaboration could be achieved without elaborate structural change . Ms Bourne said she had written to WSCC leader Louise Goldsmith and Phil Howson , East Sussex Fire Authority chairman , to request they begin to look at the feasibility of bringing both fire services under her authority . ... [ARG0 they] [V begin] [ARG1 to look at the feasibility of bringing both fire services under her authority] 4. [ARG0 they] [V look] [ARG1 at the feasibility of bringing both fire services under her authority] 5. 
[ARG0 she] [V request] [ARG1 they begin to look at the feasibility of bringing both fire services under her authority]", "figure_data": "Summary generated by a SOTA model:West Sussex 's police and crime commissioner ( PCC ) hassaid she wants to look at the feasibility of bringing EastSussex 's fire service under her authority .Ground-truth factual error type:Intrinsic Noun Phrase ErrorFactual error type predicted by FINEGRAINFACT:No ErrorDocument fact highlights predicted by FINEGRAIN-FACT (k = 5):1. [ARG1 collaboration] [ARGM-MOD could] [V achieved][ARGM-MNR without elaborate structural change]2. [V bringing] [ARG1 both fire services] [ARG3 under herauthority]3.", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Incorrect output sample of our FINEGRAIN-FACT model in the Aggrefact-Unified dataset", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Table 7 presents the statistics of summaries generated by different systems. Statistics of fine-grained error types in the AggreFact-Unified dataset.", "figure_data": "B.2 Extraction of Document Fact Highlightsfor Baseline ModelsGiven a baseline model and a sample output fromthe baseline model, we first extract all the factsfrom the input document by SRL. Then for eachextracted document fact, we compute the averageattention score injected from the CLS token to thetokens in the semantic frame in the last layer ofthe baseline model. This average attention scoreis treated as the importance score of the documentfact. Concretely, we use α ′ CLS→i to denote thetotal attention score injected from the CLS token", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Statistics of summaries generated by different systems in the AggreFact-Unified dataset.", "figure_data": "", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "The F1 score results of the FINEGRAINFACT model in each summarization dataset and factual error type (%).", "figure_data": "", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" } ]
Hou Pong Chan; Qi Zeng; Heng Ji
[ { "authors": "Shuyang Cao; Lu Wang", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "CLIFF: contrastive learning for improving faithfulness and factuality in abstractive summarization", "year": "2021-07-11" }, { "authors": "Daniel Deutsch; Rotem Dror; Dan Roth", "journal": "Trans. Assoc. Comput. Linguistics", "ref_id": "b1", "title": "A statistical analysis of summarization evaluation metrics using resampling methods", "year": "2021" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019-06-02" }, { "authors": "Esin Durmus; He He; Mona T Diab", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization", "year": "2020-07-05" }, { "authors": "Alexander R Fabbri; Chien-Sheng Wu; Wenhao Liu; Caiming Xiong", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Qafacteval: Improved qa-based factual consistency evaluation for summarization", "year": "2022-07-10" }, { "authors": "Tim Fischer; Steffen Remus; Chris Biemann", "journal": "", "ref_id": "b5", "title": "Measuring faithfulness of abstractive summaries", "year": "2022" }, { "authors": "Matt Gardner; Joel Grus; Mark Neumann; Oyvind Tafjord; Pradeep Dasigi; Nelson F Liu; Matthew E Peters; Michael Schmitz; Luke Zettlemoyer", "journal": "", "ref_id": "b6", "title": "Allennlp: A deep semantic natural language processing platform", "year": "2018" }, { "authors": "Tanya Goyal; Greg Durrett", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Evaluating factuality in generation with dependency-level entailment", "year": "2020-11" }, { "authors": "Tanya Goyal; Greg Durrett", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Annotating and modeling fine-grained factuality in summarization", "year": "2021-06-06" }, { "authors": "Tanya Goyal; Greg Durrett", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Annotating and modeling fine-grained factuality in summarization", "year": "2021-06-06" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "", "ref_id": "b10", "title": "Parameter-efficient transfer learning for NLP", "year": "2019-06" }, { "authors": " Pmlr", "journal": "", "ref_id": "b11", "title": "", "year": "" }, { "authors": "Wojciech Kryscinski; Bryan Mccann; Caiming Xiong; Richard Socher", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Evaluating the factual consistency of abstractive text summarization", "year": "2020-11-16" }, { "authors": "Philippe Laban; Tobias Schnabel; Paul N Bennett; Marti A Hearst", "journal": "Trans. Assoc. Comput. 
Linguistics", "ref_id": "b13", "title": "Summac: Re-visiting nlibased models for inconsistency detection in summarization", "year": "2022" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b14", "title": "BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020-07-05" }, { "authors": "Joshua Maynez; Shashi Narayan; Bernd Bohnet; Ryan T Mcdonald", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "On faithfulness and factuality in abstractive summarization", "year": "2020-07-05" }, { "authors": "Ramesh Nallapati; Bowen Zhou; Cícero Nogueira Dos Santos; Çaglar Gülçehre; Bing Xiang", "journal": "", "ref_id": "b16", "title": "Abstractive text summarization using sequence-tosequence rnns and beyond", "year": "2016-08-11" }, { "authors": "Shashi Narayan; Shay B Cohen; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization", "year": "2018-10-31" }, { "authors": "Ani Nenkova; Rebecca J Passonneau", "journal": "The Association for Computational Linguistics", "ref_id": "b18", "title": "Evaluating content selection in summarization: The pyramid method", "year": "2004-05-02" }, { "authors": "Artidoro Pagnoni; Vidhisha Balachandran; Yulia Tsvetkov", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics", "year": "2021-06-06" }, { "authors": "Martha Palmer; Paul R Kingsbury; Daniel Gildea", "journal": "Comput. 
Linguistics", "ref_id": "b20", "title": "The proposition bank: An annotated corpus of semantic roles", "year": "2005" }, { "authors": "Revanth Gangi Reddy; Heba Elfardy; Pong Hou; Kevin Chan; Heng Small; Ji", "journal": "", "ref_id": "b21", "title": "Sumren: Summarizing reported speech about events in news", "year": "2022" }, { "authors": "Leonardo Ribeiro; Mengwen Liu; Iryna Gurevych; Markus Dreyer; Mohit Bansal", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Factgraph: Evaluating factuality in summarization with semantic graph representations", "year": "2022-07-10" }, { "authors": "Thomas Scialom; Paul-Alexis Dray; Sylvain Lamprier; Benjamin Piwowarski; Jacopo Staiano; Alex Wang; Patrick Gallinari", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Questeval: Summarization asks for fact-based evaluation", "year": "2021-07-11" }, { "authors": "Peng Shi; Jimmy Lin", "journal": "", "ref_id": "b24", "title": "Simple BERT models for relation extraction and semantic role labeling", "year": "2019" }, { "authors": "Liyan Tang; Tanya Goyal; Alexander R Fabbri; Philippe Laban; Jiacheng Xu; Semih Yahvuz; Wojciech Kryscinski; Justin F Rousseau; Greg Durrett", "journal": "", "ref_id": "b25", "title": "Understanding factual errors in summarization: Errors, summarizers, datasets, error detectors", "year": "2022" }, { "authors": "James Thorne; Andreas Vlachos; Oana Cocarascu; Christos Christodoulopoulos; Arpit Mittal", "journal": "", "ref_id": "b26", "title": "The FEVER2.0 shared task", "year": "2018" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz; Jamie Brew", "journal": "", "ref_id": "b27", "title": "Huggingface's transformers: State-of-the-art natural language processing", "year": "2019" }, { "authors": "Xinnuo Xu; Ondrej Dusek; Jingyi Li; Verena Rieser; Ioannis Konstas", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Fact-based content weighting for evaluating abstractive summarisation", "year": "2020-07-05" }, { "authors": "Ming Zhong; Yang Liu; Yichong Xu; Chenguang Zhu; Michael Zeng", "journal": "AAAI Press", "ref_id": "b29", "title": "Dialoglm: Pre-trained model for long dialogue understanding and summarization", "year": "2022-02-22" } ]
[ { "formula_coordinates": [ 2, 70.87, 632.55, 217.77, 12.77 ], "formula_id": "formula_0", "formula_text": "[ ARG0 David][ V saw][ ARG1 the flame]. We use f doc" }, { "formula_coordinates": [ 2, 306.14, 733.45, 92.09, 15.24 ], "formula_id": "formula_1", "formula_text": "f i = m j=1 α j (ϕ(t j ))" }, { "formula_coordinates": [ 3, 70.87, 167.27, 184.11, 14 ], "formula_id": "formula_2", "formula_text": "c i = MULTIHEADATT(f sum i , F doc , F doc )" }, { "formula_coordinates": [ 3, 70.65, 485.93, 169.52, 15.78 ], "formula_id": "formula_3", "formula_text": "f sum = 1 n n i=1 f sum i , c = 1 n n i=1 c i ." }, { "formula_coordinates": [ 3, 70.47, 567.36, 220.02, 39.78 ], "formula_id": "formula_4", "formula_text": "p(y) = σ(W[ f sum ; c] + b), where W ∈ R d×l , b ∈ R, d is the hidden size of Adapter-BERT, σ denotes the sigmoid function." }, { "formula_coordinates": [ 9, 77.25, 223.28, 212.61, 36.96 ], "formula_id": "formula_5", "formula_text": "L i = β i y * i log p(y i ) + (1 -y * i ) log(1 -p(y i )),(1)" }, { "formula_coordinates": [ 9, 80.63, 267.03, 209.23, 33.71 ], "formula_id": "formula_6", "formula_text": "L = K i=1 L i ,(2)" }, { "formula_coordinates": [ 9, 317.66, 293.79, 52.16, 15.38 ], "formula_id": "formula_7", "formula_text": "m i=1 α ′ CLS→i ," } ]
10.1109/TKDE.2006.152
2023-05-23
[ { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b26", "b21" ], "table_ref": [], "text": "Customers of e-commerce websites fall in various stages of the purchase funnel 1 in their journey to purchase specific products. While lower-funnel customers target specific products or product categories, a customer in the middle to upper funnel only has vague shopping interests (SIs) and requires additional guidance to determine the right products to purchase. Existing e-commerce websites are limited today in their ability to assist them in this kind of interest-oriented shopping. For example, a customer searching for COVID-19 crisis gets top results showing product types (PTs) such as books and test kits, while missing other essential categories such as the face mask, thermometer, or medicine. Moreover, the search result is a random assortment of products, without a clear organization that helps upper-funnel customers discover products within relevant categories.\nThe main problem is the concept of \"shopping interest\" is generally absent in e-commerce catalogs, which makes it difficult to directly establish the SI-PT connections and give corresponding recommendations. To circumvent such system limitations, customers today are accustomed to researching their products on hand-curated \"hub Web pages\" 2 , each related to an SI and presenting PT suggestions as organized lists, before returning to e-commerce websites. This stretches the total time spent on a purchase. We aim to find SI-related PTs directly on the e-commerce website, reducing customer effort for all their interest-oriented needs. Figure 1 shows the desired search experience.\nThe first step to this end is collecting hub pages, which is realized by querying Google Search with automatically selected prompts (appendix A). The rest of the paper focuses on PT extraction from the HTML pages, which presents several challenges. First, hub websites are heterogeneous in their format and terminology, with PTs often interspersed among long descriptive paragraphs, making it challenging for any solution designed for one or a few websites to work well for others. Second, our page collection approach assumes that all PTs presented on a page are related to the same SI, which may not hold true in practice, requiring us to filter out irrelevant PTs. Finally, our goal to find PTs for a wide range of SIs motivates us to consider a zeroshot learning setup (Xian et al., 2019) w.r.t. SIs, to generalize to interests not seen during training.\nRepresenting an HTML document by a Document Object Model (DOM) tree whose nodes are HTML tags with text sequences, we formulate PT extraction as a node classification task that entails checking whether its text sequence represents a PT phrase. It is based on the empirical discovery that in our collected hub pages, a PT phrase generally occupies a single DOM node within a coherent group of enumerated HTML elements such as section titles or bullet points, where knowing one PT phrase suggests the potential presence of other PT phrases in the neighboring elements (Figure 3a). Node classification emphasizes learning inter-node structural dependencies rather than intra-node token interactions, which results in better generalization to a wide variety of HTML structures.\nDue to the absence of a dedicated DOM tree encoding method, we propose TRENC (Tree-Transformer Encoders for Node Classification) to fill in the blanks. 
Adapted from the Transformer (Vaswani et al., 2017), TRENC incorporates ancestor-descendant and sibling node relations using modified self-attention mechanisms and positional embeddings that are suited to the unique DOM node arrangement of the rendered hub pages. The ancestor-descendant relation provides relative structural information between nodes in the DOM node hierarchy, whereas the sibling relation tracks the semantical connection among sibling nodes. The modified attention mechanisms reconstruct the tree architecture from the linearized input nodes and facilitate long-term dependency modeling. To capture the relevance between an SI and a node, we leverage a gating network to dynamically integrate SI semantics with a node's textual semantics, which generalizes TRENC to unseen SIs.\nEvaluated on our dataset WEBPT with 453 Web pages covering 95 interests, TRENC achieves 2.37 absolute F 1 performance gain over the strongest baseline method. " }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b2", "b0", "b20", "b30", "b27", "b14", "b31", "b23", "b4", "b30", "b11", "b27", "b23", "b22", "b10" ], "table_ref": [], "text": "Web Information Extraction Information extraction from the semi-structured Web data is a long-studied topic (Chang et al., 2006;Banko et al., 2007;Sleiman and Corchuelo, 2013). The works most relevant to ours are those on product attribute extraction (Zheng et al., 2018;Xu et al., 2019;Lockard et al., 2020;Zhou et al., 2021;Wang et al., 2022;Deng et al., 2022). For example, Zheng et al. (2018) train a BiLSTM-CRF network (Huang et al., 2015) for each attribute to locate its corresponding values on text sequences. Xu et al. (2019) scale it up by injecting the attribute name into the network as an attention objective. Wang et al. (2022) encode the DOM tree with graph attention network (Veličković et al., 2018) to incorporate the dependencies between nodes. However, attribute extraction is different from our PT extraction task at two major points. First, attributes are typically extracted from product detail pages, each of which mentions multiple attributes; and the attribute name-value pairs cluster around titles, bullet points and descriptions. In contrast, a hub page generally focuses on a single SI, with PTs scattered throughout the page. Unlike attribute extraction approaches that limit the searching scope to certain regions, the characteristics of hub pages require us to consider a page holistically instead of a small part. Second, attribute extraction is performed as token-level entity recognition in previous works, while PT extraction requires a node-level classification, which prevents approaches for the former from being directly applied to the latter. To our best knowledge, no applicable DOM node classification or similar dataset exists in openly available benchmarks such as OGB (Hu et al., 2020)." }, { "figure_ref": [], "heading": "Graph Transformers", "publication_ref": [ "b12", "b22", "b1", "b16", "b28", "b17", "b25", "b16", "b28", "b17", "b25" ], "table_ref": [], "text": "Recently, graph neural networks (GNNs) such as the graph convolutional network (GCN, Kipf and Welling, 2017) and graph attention network (GAT, Veličković et al., 2018;Brody et al., 2022) have dominated the graph encoding research. But some works try to model graphs using Transformers (Dwivedi and Bresson, 2020;Maziarka et al., 2020;Ying et al., 2021;Park et al., 2022;Wu et al., 2022), to which our work is more related. For example, Maziarka et al. 
(2020) add inter-atomic distances into the self-attention heads to parameterize the molecular graph structure. Also targeting molecules, Graphormer (Ying et al., 2021) takes a step further and introduces centrality encoding, edge encoding and spacial encoding to evaluate the atom importance and capture the edge and graph structure. Park et al. (2022) and Wu et al. (2022) extend Transformers to knowledge graphs with partial message-passing tricks. Although applicable, the hierarchical and acyclic nature of DOM trees is different from the graphs for which the approaches were designed. Directly applying them to DOM trees leads to sub-optimal performance, as shown in § 5." }, { "figure_ref": [], "heading": "Problem Setup", "publication_ref": [], "table_ref": [], "text": "We possess the DOM tree of a Web page associated with a given shopping interest C. The DOM tree can be represented by a set of nodes V = {V 1 , V 2 , . . . , V |V| } as well as a set of edges E = {E 1 , E 2 , . . . , E |E| } that connect the parent and children nodes. |V| and |E| are the sizes of node and edge sets respectively. We aim to design a binary node classifier f : V∪E∪{C} → {0, 1} |V| to judge whether the text sequence in each node is a phrase representing a product type. The nodes with positive labels are referred to as \"PT nodes\" and the labels are denoted by y m = 1, m ∈ 1 : |V|. We focus our discussion on one DOM tree and use m ∈ 1 : |V| as its node index. " }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "We propose TRENC to model the DOM tree of hub Web pages for PT extraction. Figure 2 shows the model architecture. We treat the problem as a DOM node classification task that entails detecting whether its textual sequence defines a PT phrase. We first create a node representation that integrates three basic signals of a node that may be indicative of a PT ( § 4.1). We then adapt the Transformer architecture by adding two attention mechanisms, namely path attention and sibling attention, that allow capturing of inter-node dependencies presented by their HTML structure ( § 4.2.1). We also include three kinds of positional encodings that assist the attention layers with the node's unique positional information within the DOM tree ( § 4.2.2). Finally, we integrate the outputs from the path and sibling attention layers, which are used in a classification layer to predict node labels ( § 4.2.3). The implementation details are in appendix B.1." }, { "figure_ref": [], "heading": "Node Features", "publication_ref": [ "b5" ], "table_ref": [], "text": "Besides the SI C associated with the tree, we consider two features for each node V m : 1) its HTML tag t m ∈ T where T is a finite tag set; and 2) the text sequence S m = {w m,1 , w m,2 , . . . , w m,|Sm| }, where |S m | is the length and w is the token. HTML Tag HTML tags are a finite vocabulary of keywords that define how browsers display their content. Specifically, they convey the semantic nature of their enclosed content. For example, <p> denotes a paragraph, while <ul> represents a list. Based on the observation that some tags tend to contain PT phrases more than others, we capture the tag information as a distinct structural feature and encode t m with a vector t m ∈ R d model using an embedding layer. Here, d model is the model dimensionality as in Transformers.\nText Sequence Text sequences convey the semantic character of an HTML document. 
In addition to directly indicating a PT phrase, they can also serve as useful contextual information about the neighboring nodes' propensity to contain a PT phrase. For example, a node stating \"Essentials for camping\" is a clear indicator that what follows is likely a set of camping-related PT nodes.\nWe leverage the power of pre-trained language models (PLMs) such as BERT (Devlin et al., 2019) to properly encode their semantics. For a given sequence, BERT generates an embedding w m,i ∈ R d BERT for each token w m,i , i ∈ 1 : |S m |, besides two special tokens w m,0 and w m,|Sm|+1 representing the start and end of the sequence. We derive the sequence embedding s m ∈ R d model by taking an average of all the token embeddings and passing it through a feed-forward network (FFN) layer:\ns m = W seq (GELU( 1 |S m | + 2 |Sm|+1 i=0 w m,i )),\n(1) where W seq ∈ R d model ×d BERT are parameters.\nShopping Interest Although we assume that a DOM tree is associated with only one SI C, in rare cases this assumption does not hold. We are thereby motivated to capture the relevance between a node and the interest. Accordingly, we incorporate C with an embedding vector c ∈ R d model in a similar manner as that for node text sequence (1), and let the model learn the relevance between C and related PTs to rule out any false positive cases." }, { "figure_ref": [], "heading": "Feature Integration", "publication_ref": [], "table_ref": [], "text": "We integrate node features into the node embedding e m ∈ R d model in two steps to honor the distinctiveness between the structural feature t m and semantic features s m and c.\nFirst, we merge the semantic features. Since different nodes have differing levels of correlations with the interest, we use gating vectors (Hochreiter and Schmidhuber, 1997) to automatically control how much interest embeddings c should be integrated into the sequence embedding s m . We calculate the weights g as:\ng(x 1 , x 2 ) = σ(W 1 x 1 + W 2 x 2 + b), (2)\nwhere x 1 and x 2 are feature vectors; W 1 and W 2 are trainable square matrices; b is the bias, and σ is the sigmoid function. With (2), the updated sequence embedding vector becomes\ns ′ m = g(c, s m ) ⊙ c + s m ,\nwhere ⊙ is the element-wise product.\nThen, we integrate the semantic and structural embeddings using concatenation followed by an FFN layer to maintain the embedding dimensionality. The integrated node embedding e m is\ne m = W emb [s ′ m T ; t T m ] T ,\nwhere [•; •] represents vector concatenation and\nW emb ∈ R d model ×2d model is an FFN layer." }, { "figure_ref": [], "heading": "TRENC Architecture", "publication_ref": [], "table_ref": [], "text": "Compared with conventional GNNs that generally aggregate only 1-hop neighboring messages in each layer, Transformers are better at tracking long-term dependencies.However, applying the Transformer encoder to DOM trees as is can lead us astray because it is not designed to naturally accommodate the hierarchical structure of a tree. To address this limitation, we adapt the Transformer architecture by adding structural attention mechanisms with node positional encodings to better encode unique information within the DOM trees with the existing abilities of the Transformer architecture." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Structural Attentions", "publication_ref": [ "b21" ], "table_ref": [], "text": "The DOM tree structure presents two kinds of relations that convey how nodes are related. 
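Before turning to those two relations, the feature-integration step above can be made concrete with a short sketch. The module below assembles the node embedding e_m from the tag, text-sequence and shopping-interest embeddings via the gate of Eq. (2); the class name and the folding of W_1, W_2 and b into a single linear layer are our own implementation assumptions rather than the paper's released code.

```python
import torch
import torch.nn as nn

class NodeFeatureIntegration(nn.Module):
    """Sketch: gate the SI embedding into the text embedding (Eq. 2), then
    concatenate with the tag embedding and project back to d_model."""

    def __init__(self, d_model: int, n_tags: int):
        super().__init__()
        self.tag_emb = nn.Embedding(n_tags, d_model)               # HTML tag feature t_m
        self.gate = nn.Linear(2 * d_model, d_model)                # W_1, W_2 and b folded together
        self.w_emb = nn.Linear(2 * d_model, d_model, bias=False)   # W_emb

    def forward(self, s: torch.Tensor, c: torch.Tensor, tag_ids: torch.Tensor) -> torch.Tensor:
        # s: (|V|, d_model) sequence embeddings; c: (d_model,) interest embedding
        c = c.unsqueeze(0).expand_as(s)                            # broadcast the SI to every node
        g = torch.sigmoid(self.gate(torch.cat([c, s], dim=-1)))   # g(c, s_m)
        s_prime = g * c + s                                        # s'_m = g ⊙ c + s_m
        t = self.tag_emb(tag_ids)                                  # (|V|, d_model)
        return self.w_emb(torch.cat([s_prime, t], dim=-1))         # e_m
```

The per-node gate lets the model decide how much interest semantics to inject into each node, so nodes weakly related to the SI remain largely unaffected.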
The ancestor-descendant relation, represented by the edges E, conveys the granular nature of a node (high or low) within the DOM hierarchy. The sibling relation between nodes conveys how they semantically represent a coherent group, as shown in Figure 3a. We incorporate these relationships via structural attention mechanisms, namely path attention and sibling attention. Correspondingly, we represent these two views of the DOM tree by two types of node sets: path node sets and sibling node sets. A path set N P ⊂ V is the ordered collection of all nodes in an HTML path, from the root node to an arbitrary node, as illustrated in Figure 3b.\nA sibling set N S ⊂ V consists of the immediate children of a non-leaf node. Thereupon, we develop path and sibling attention mechanisms, as described below, to explore the potential of modeling tree structures with Transformers.\nPath Attention The path attention mechanism captures the granularity of a node V m within the DOM tree, which carries useful information about the node's tendency to present a PT phrase. It limits the attention target of a DOM node to its ancestors or descendants only, echoing the edges E that define the DOM tree structure. Path node sets help define an attention mask toward this purpose by leaving out all \"off-path\" elements during the self-attention message passing operation. Suppose the input is H P ∈ R |V|×d model , in each attention head, the path attention scores a P m ∈ (0, 1) 1×|V| of V m attending to all DOM nodes are a P m = SoftMax(\nH P m W Q (H P W K ) T √ d k + M P m ).\n(3) Here W ∈ R d model ×d k are the FFN layers that map the latent features to the reduced d k -dimensional single-head attention space, as in (Vaswani et al., 2017). M P ∈ {0, -∞} |V|×|V| is the path attention mask as shown in Figure 3b. ∀u, v ∈ 1 : |V|,\nM P u,v = 0, ∃N P s.t. V u ∈ N P , V v ∈ N P ; -∞, otherwise.\n(4) a P m has non-zero values at positions corresponding to V m 's ancestors or descendants. The single-head attention output of V m becomes\nAttn P m = a P m H P W V . (5\n)\nThe rest of the architecture such as the layer norm and the residual connection is the same as in the Transformer and thus is omitted.\nSibling Attention Although sibling relations are not described by the edges E, encoding them can provide a useful contextual signal based on the observation that sibling PT phrases often form a group. Accordingly, analogous to path attention, we develop sibling attention by imposing an attention mask M S , which forces a node to focus only on its siblings via self-attention. The sibling node set N S helps define the mask. Its calculation is identical to (3)-( 5), except that the variables are superscripted by sibling \"• S \" instead of path \"• P \"." }, { "figure_ref": [ "fig_1" ], "heading": "Node Positional Encodings", "publication_ref": [ "b29", "b21" ], "table_ref": [], "text": "Different from graphs, a DOM tree is acyclic and heterogeneous; the order of nodes influences their relations and how the elements are rendered. As Transformers do not encode such node order, positional embeddings are critical to capture such positioning. (Yun et al., 2020). We consider three types of absolute indices: global, level and sibling positional indices, as shown in Figure 3a. The global positional index i G m represents the position of each node in the tree in the depth-first order. It helps TRENC understand how the nodes are organized in the rendered HTML pages. 
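As a concrete illustration of the two attention masks and the positional indices introduced so far, the sketch below derives i^G, i^L and i^S together with M^P and M^S from a parent-to-children map; the function name and the dictionary-based tree representation are assumptions made for illustration only.

```python
import torch

def build_indices_and_masks(children: dict[int, list[int]], root: int = 0):
    """children maps a node id to the ordered list of its child ids.
    Returns the global/level/sibling indices and the path/sibling masks."""
    order, level, sib = {}, {root: 0}, {root: 0}
    ancestors = {root: [root]}
    stack = [root]
    while stack:                                    # depth-first traversal -> global index i^G
        node = stack.pop()
        order[node] = len(order)
        kids = children.get(node, [])
        for rank, ch in enumerate(reversed(kids)):  # reversed push keeps pre-order on pop()
            sib[ch] = len(kids) - 1 - rank          # sibling index i^S
            level[ch] = level[node] + 1             # level (depth) index i^L
            ancestors[ch] = ancestors[node] + [ch]  # root-to-node path
            stack.append(ch)

    n, neg_inf = len(order), float("-inf")
    path_mask = torch.full((n, n), neg_inf)         # M^P: 0 for ancestor/descendant pairs
    sib_mask = torch.full((n, n), neg_inf)          # M^S: 0 for nodes sharing a parent
    for node, anc in ancestors.items():
        for a in anc:
            path_mask[order[node], order[a]] = 0.0
            path_mask[order[a], order[node]] = 0.0
    for kids in children.values():
        for u in kids:
            for v in kids:
                sib_mask[order[u], order[v]] = 0.0
    sib_mask[order[root], order[root]] = 0.0        # safeguard: avoid an all -inf row for the root
    return order, level, sib, path_mask, sib_mask
```

The two masks are added inside the softmax of the corresponding attention heads, while the three index arrays feed the sinusoidal positional encodings.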
The level index i L m and sibling index i S m on the other hand are developed to assist the path and sibling attentions. i L m describes the level or depth of a node, to help distinguish a parent from its children during the path attention, while the i S m captures the relative order among siblings within the sibling attention.\nWe encode positional indices by first applying sinusoid functions (Vaswani et al., 2017) \nto convert them to vectors i G m , i L m , i S m ∈ [0, 1] d model\n, followed by applying an affine transformation that maps each of them into distinct latent spaces:\nîG m = W G i G m ; îL m = W L i L m ; îS m = W S i S m ,\nwhere W ∈ R d model ×d model are FFN parameters." }, { "figure_ref": [], "heading": "TRENC Layers", "publication_ref": [], "table_ref": [], "text": "In each layer, the path and sibling signals are modeled by two parallel branches, which are identical except for the positional embeddings and attention mechanisms (Figure 2). Denoting the input feature of layer l by H (l) ∈ R |V|×d model , we have3 \nH P m = H (l) m + îL m ; H S m = H (l) m + îS m ,(6)\nwhich are passed into the attention sublayers (3)-( 5) for message passing. 4 The branch outputs ĤP and ĤS are aggregated by a gating layer that generates the layer output Ĥ(l) :\nĤ(l) m =g( ĤP m , ĤS m ) ⊙ ĤP m + (1 -g( ĤP m , ĤS m )) ⊙ ĤS m .(7)\nThe input of the first layer is the summation of the node embedding and global positional embedding H\n(1) m = e m + îG m , while the last output Ĥ(N) is fed into a classification layer to predict node labels, assuming the model has N layers in total." }, { "figure_ref": [], "heading": "Training and Inference", "publication_ref": [], "table_ref": [], "text": "We use binary cross-entropy as our training objective. Suppose the predicted logit is ŷ, then the loss at the level of a DOM tree is calculated as\nℓ = - |V| m=1 y m log σ(ŷ m )+(1-y m ) log σ(1-ŷ m ).\nDuring inference, we use 0.5 as a hard classification threshold for the predicted probability σ(ŷ)." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [], "table_ref": [], "text": "In this section, we first describe a new dataset of interests and their associated webpages, specifically created to benchmark methods for the PT extraction problem. We then evaluate TRENC on the same, pitting it against a range of applicable baselines. Finally, we look at the effectiveness of various model components via ablation studies." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b6" ], "table_ref": [], "text": "Dataset We constructed a dataset containing 95 shopping interests and queried Google for hub pages using automatically selected prompts such as \"[hiking] equipment list\", where \"hiking\" is the SI. For each SI, we downloaded the top 100 returned pages and labeled them with PT nodes using a semi-automatic process. First, we applied simple heuristic rules to create noisy PT labels, based on structure and tag matching. Thereafter, for each SI, we presented roughly 5 webpages having a noisy label to a human annotator to further refine the labels. Even so, the dataset is not entirely noise-free given the subjective nature of the labeling process, with many ambiguous cases, such as deciding whether a software such as \"VSCode\" makes a valid product type. The pages without any positive human label were discarded. This process ultimately resulted in a collection of 453 HTML webpages having 94,167 nodes, among which 12,548 nodes are positive. 
Further details are described in appendix A.

Setup We focus on a zero-shot setup w.r.t. SIs since our goal is to evaluate various methods on SIs not seen during training. Therefore, we split the collection of webpages, stratified by their associated SIs (recall that a webpage is assumed to be associated with only one SI), into training (75%), validation (10%) and test (15%) partitions, ensuring that no SI is shared across partitions. As our dataset is small, we randomly split the collection 5 times and generated 5 distinct datasets, each with the three partitions. This approach aims to mitigate the impact of random factors while measuring real model performance. We identify the datasets as WEBPT-n, where n ∈ 1 : 5 is the split index.

Baselines We consider the following simple to complex methods. 1) Heuristic rules are heuristic functions we manually designed to locate PT nodes from the DOM trees, which were also used to generate the initial, noisy node labels. 2) Text similarity decides whether a node is positive based on the cosine similarity between text and SI embeddings. 3) Fine-tuned BERT (BERT-FT) fine-tunes a BERT-base model to independently classify each tree node based on its text. 4) Multilayer perceptron (MLP) also classifies each node independently, but with the integrated node embeddings of § 4.1 as input. 5) Graph encoders, including GCN, GAT and Graphormer, substitute the TRENC layers with the corresponding graph networks while keeping the rest of the architecture (appendix B.2).

Metrics We evaluate each model with the F 1 scores corresponding to each split WEBPT-n and the macro-averaged F 1 score $\bar{F}_1 = \frac{1}{5}\sum_{n=1}^{5} F_{1n}$, with the corresponding macro precision and recall.

All trainable methods are equipped with early stopping based on validation F 1 scores. To further reduce the influence of random factors without increasing the training cost, we store 5 snapshots of the models that perform the best on the validation set during training. At test time, we predict 5 sets of labels from the model snapshots and use the majority-voted labels as the final model predictions. This can be regarded as a simplified model ensemble method often used to improve model robustness (Dong et al., 2020)." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Table 1 shows the results of our comparative evaluation. As seen, TRENC outperforms all methods, exceeding the strongest baseline, GCN, by a margin of 2.37 absolute F 1 on average. Considering the small size of our datasets, it is not surprising that the test F 1 scores vary considerably across data splits, as the correlation between the training and test data distributions is susceptible to random factors. Nonetheless, TRENC achieves the best performance on 4 out of 5 splits and exceeds the baselines by a clear margin on average, which strengthens our confidence in the evaluation. Surprisingly, Graphormer underperforms the GNN models and barely outperforms BERT-FT, a model that treats nodes independently without considering the tree structure. This indicates that models designed for other graphs, such as molecular graphs, are not directly applicable to our case. Instead of helping, the features Graphormer emphasizes prevent the model from learning a reasonable representation of the DOM tree. Table 1 also shows that the cosine similarity between SI and PT embeddings does not perform well. This is not unexpected, as SIs and PTs are not usually semantically similar, making direct comparison of their embeddings a sub-optimal strategy.
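The reported results for all trainable methods are produced with the snapshot-voting scheme described under Metrics; a minimal sketch of that step is shown below, where the use of numpy and the toy arrays are illustrative assumptions rather than the released evaluation code.

```python
import numpy as np

def majority_vote(snapshot_preds: list[np.ndarray]) -> np.ndarray:
    """snapshot_preds holds one {0,1} label array per stored model snapshot
    (5 in our setup); a node is predicted positive when most snapshots agree."""
    votes = np.stack(snapshot_preds, axis=0)           # (n_snapshots, |V|)
    return (votes.mean(axis=0) > 0.5).astype(int)      # majority label per node

# Hypothetical usage with 5 snapshots over 4 nodes:
preds = [np.array([1, 0, 1, 0]), np.array([1, 1, 1, 0]), np.array([0, 0, 1, 0]),
         np.array([1, 0, 0, 0]), np.array([1, 0, 1, 1])]
print(majority_vote(preds))                            # -> [1 0 1 0]
```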
We also compare TRENC with GCN at varying levels of DOM tree complexity. Figure 4a shows tree-level F 1 scores of each DOM tree against its depth, which is the average depth of its nodes, $\frac{1}{|\mathcal{V}|}\sum_{m=1}^{|\mathcal{V}|} i^L_m$, and roughly echoes the tree complexity. Figure 4b divides the depth equally into 5 levels and presents the average F 1 for each level. As seen, TRENC has better overall performance than GCN at all depths. In addition, the gap between TRENC and GCN increases when the tree is deeper, which indicates that TRENC can better encode complex trees due to the global message-passing ability of the self-attention mechanism." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We ablate input features and model components from TRENC to understand their effectiveness. Table 2 shows the ablation results.

Input Features As seen, although removing any input feature ( § 4.1) impairs the model performance, the text sequence is the most critical feature for TRENC. We further notice that without text sequences, TRENC performs quite close to the heuristic rules that utilize very limited lexical features (Table 1). This may indicate that TRENC exhausts the structural information available in a DOM tree.

Although not as significant as text sequences, incorporating SIs and tags does enhance the model performance. Injecting SIs turns the model's attention to their correlation with PTs. But such improvement is limited as the correlation is not strong, as discussed in § 5.2." }, { "figure_ref": [], "heading": "Model Components", "publication_ref": [ "b13", "b19", "b5" ], "table_ref": [ "tab_2" ], "text": "We investigate the functionalities of model components by removing them separately. The Transformer model discards the edges E and treats the tree as a linearized sequence of nodes arranged by their global positional indices i G . Although it learns certain structural dependencies, as indicated by its advantage over MLP (Table 1), missing explicit edge knowledge still affects the model's judgment.

The results also show that path attention, sibling attention and positional encodings all contribute to better tree encoding. The row \"w/o pos enc\" removes the level and sibling encodings i L , i S but keeps the global encoding i G . Without i L and i S , the model cannot properly identify the hierarchy and sibling order between nodes and therefore performs worse. Compared to path attention, sibling attention demonstrates a higher importance in context understanding, even though removing path attention means a node no longer has access to any other nodes from the tree.

Sequence Encoding In our implementation, we use the uncased BERT-base model with d BERT = 768 as our encoder for the sequence embeddings e and concept embeddings c; the embeddings are fixed during training. We also test other pre-trained language models, including BERT-large, RoBERTa (Liu et al., 2019) and Sentence-BERT (Reimers and Gurevych, 2019), which is designed for comparing sequence similarities and claims better sentence embedding performance than BERT. However, Table 2 shows that none outperforms BERT-base (Devlin et al., 2019). The reason might be that their training corpora and objectives are not well matched to our task. The results indicate that the choice of encoding model is vital for good performance." }, { "figure_ref": [], "heading": "Case Studies on Classification Mistakes", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "Table 3 shows a few false positive (FP) and false negative (FN) examples to illustrate certain text sequence patterns where TRENC fails.
As seen from the FP cases, TRENC either struggles to determine whether a phrase is a broad PT category (1st row), has difficulty discerning a PT from a specific product (2nd row), or makes mistakes when unavoidable non-purchasable items are mentioned on the page along with other valid PTs (3rd row). From the FN cases, we conjecture that long descriptions may overwhelm the textual semantics and skew the node embedding, thereby preventing TRENC from predicting correctly (4th & 5th rows). The reason might be that TRENC has a stronger dependency on node semantics than on structure, which is also indicated by the ablation results; properly balancing the two signals may mitigate this issue." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we consider a new problem of extracting product types from Web pages that are relevant to broad shopping interests such as camping. We model the problem as a node classification task and propose TRENC, a Transformer encoder-based model that leverages unique characteristics of DOM trees to perform product type extraction. In addition to the node-level signals including HTML tags, text sequences and shopping interest semantics, TRENC introduces path and sibling attention mechanisms based on the DOM tree's ancestor-descendant and sibling relations. Together with the tree-based positional embeddings, the structural attention mechanisms promote understanding of the tree architecture and make the classification more effective. Zero-shot experiments on a new dataset, WEBPT, containing 95 shopping interests and 453 pages show that TRENC outperforms the baseline graph encoding models. This work pushes the frontier of research toward more organized and intuitive result recommendation for middle-funnel customers." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b23", "b4", "b5" ], "table_ref": [], "text": "Apart from the issues mentioned in § 5.4, another limitation of TRENC is that it does not integrate any pre-training process such as BERT's, which is effective in improving language understanding and has been adopted by previous works focusing on token-level classification tasks (Wang et al., 2022;Deng et al., 2022). Two factors lead to this decision. First, we use DOM nodes instead of tokens as the classification object and focus on relations between nodes rather than tokens. As a node's text sequence is a composition of an arbitrary number of tokens, adopting the conventional masked language modeling (MLM) training objective (Devlin et al., 2019) seems impractical, since there is no direct mapping from an embedding vector, one-hot encoded or not, back to a sentence. The second reason is simply that we do not possess the corpus or computation resources for model pre-training. In fact, we expect a properly designed pre-training scheme to bring better node semantics representation and SI-PT relation modeling.
It is an interesting topic and deserves further study." }, { "figure_ref": [], "heading": "A Dataset Details", "publication_ref": [], "table_ref": [], "text": "(Figure 5 shows an \"Original DOM Tree\" and the corresponding \"Processed DOM Tree\" for the hiking example.)" }, { "figure_ref": [ "fig_2" ], "heading": "A.1 Dataset Construction", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "We build WEBPT to enable a quantitative analysis of different PT extraction methods. WEBPT is a collection of hub pages relevant to a set of pre-defined SIs. Its construction process mainly consists of 5 steps: 1) defining SIs; 2) crawling hub pages; 3) processing HTML documents; 4) labeling documents; and 5) splitting data points.

Defining Shopping Interests As the first step, we establish a set of SIs through brainstorming. Particularly, we focus on popular activities, sports, hobbies and special events. Please check Tables 6 and 7 for a complete list of SIs.

Crawling Hub Pages The hub pages are webpages, each providing PTs related to a specific SI. Due to the variety of SIs, it is infeasible to focus on one or a few websites for hub page collection. For example, a website specializing in sports is unlikely to provide information on \"sewing\", and vice versa. In addition, gathering information from different websites may eliminate the bias that likely exists in any single website, according to the law of large numbers.

Considering this situation, we take advantage of Google Search with a simple query selector to locate the hub pages. Each SI C is combined with the suffixes \"equipment list\", \"supply list\", \"tool list\" and \"checklist\" before being fed into the search engine for querying. The system selects the combination with the largest number of results, whose top-100 query results are saved for later usage. We keep only the HTML pages and discard other documents such as PDFs or CSVs, so the actual number of saved documents may vary.

Processing HTML Documents This step aims to simplify the DOM tree structure to facilitate PT extraction. The raw DOM tree is complicated, with decorative and supporting scripts irrelevant to the content, which easily submerges the useful information we want to extract and decreases the false positive rate. We prune the trees by removing all headers, footers, and leaf nodes with empty text sequences. Then, to reduce the tree depth, we delete the nodes with only one child and connect their children, together with the subsequent subtrees, directly to their parents. The process is illustrated in Figure 5. Experiments show that this HTML processing strategy successfully simplifies the DOM structure without sacrificing any targeted content."
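A minimal sketch of this pruning procedure is given below, assuming lxml as the HTML parser and an illustrative boilerplate tag list; it is a simplified reading of the steps above rather than the exact preprocessing code.

```python
from lxml import html

REMOVE_TAGS = {"header", "footer", "nav", "script", "style"}   # assumed boilerplate tags

def prune_dom(page_source: str) -> html.HtmlElement:
    """Drop boilerplate regions and empty leaves, then collapse single-child
    nodes so the tree becomes shallower, roughly as described above."""
    root = html.fromstring(page_source)

    for el in list(root.iter()):
        if el.tag in REMOVE_TAGS and el.getparent() is not None:
            el.drop_tree()                             # remove headers, footers, scripts, ...

    changed = True
    while changed:                                     # repeat until the tree stops shrinking
        changed = False
        for el in list(root.iter()):
            parent = el.getparent()
            if parent is None:
                continue
            if len(el) == 0 and not el.text_content().strip():
                parent.remove(el)                      # empty leaf node
                changed = True
            elif len(el) == 1 and not (el.text or "").strip():
                parent.replace(el, el[0])              # lift the only child one level up
                changed = True
    return root
```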
}, { "figure_ref": [], "heading": "Separated DOM Trees", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3" ], "heading": "Labeling Documents and Splitting Data Points", "publication_ref": [], "table_ref": [ "tab_2", "tab_11", "tab_12" ], "text": "These two steps are sufficiently discussed in § 5.1 as will not be repeated. The only supplement is that the heuristic method used for initializing the noisy labels and compared in Table 1 is empirically developed. We omit its discussion since it is complex and not the focus of this paper. The detailed dataset splits are presented in Table 6 and7. Data Processing for Transformers One limitation of the Transformer models such as BERT and TRENC is that they need to set a constraint to the length of the input sequence |V| since the complexity of the self-attention mechanism is O(|V| 2 ) and easily explodes when |V| is too large. Considering this drawback, for the node Transformers including Graphormer and TRENC, we set 512 as the maximum size of a DOM tree and split those that exceed this size. In addition, we guarantee that each split tree has 64 nodes at minimum. Figure 6 shows an example of the separation process." }, { "figure_ref": [], "heading": "A.2 Dataset Statistics", "publication_ref": [], "table_ref": [], "text": "We present the dataset statistics in " }, { "figure_ref": [], "heading": "A.3 Labeling Quality", "publication_ref": [], "table_ref": [], "text": "The dataset is labeled by one individual as the task is straightforward. To investigate the labeling quality, we randomly select 25 DOM trees, removing their original labels and presenting them to 2 individuals for re-labeling. In case 1, annotators may regard \"men's shirt\" and \"woman's shirt\" as negative as they are subcategories of the PT \"shirt\"; in case 2, the latter \"shirt\" may be regarded as negative as it is a repetition surrounded by long descriptive sentences." }, { "figure_ref": [], "heading": "A.4 Data Usage", "publication_ref": [ "b21" ], "table_ref": [], "text": "All Web pages used by WEBPT are included in the Common Crawl repository. 6 They are intended to provide information on a topic or interest, so consistent with that idea, we labeled the product types on each page. The labels do not contain any personally identifiable information. We are making the annotated dataset available to encourage further research on the product type extraction problem. The WEBPT dataset is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. The classification layer consists of 2 FFN sublayers that first downscale the TRENC layer output to 16-dimensional and then to the 1-dimensional output logits ŷ. We use the same activation functions and dropout strategy as described in (Vaswani et al., 2017). Our experiments show that the performance remains similar when we use 6 or 8 as the number of heads or use model dimensionality d model = 512." }, { "figure_ref": [], "heading": "B Implementation Details", "publication_ref": [ "b15", "b24", "b18" ], "table_ref": [], "text": "We train the model using 10 -4 as the peak learning rate of the AdamW optimizer (Loshchilov and Hutter, 2019) with linear scheduler with 0.1 warmup ratio. The batch size is 8 and the random seed is 42. We do not take multiple runs for each model on each dataset as our dataset and evaluation strategies ( § 5.1) can minimize the impact of random factors. 
Using another random seed (0) only changes the F1 scores of TRENC and GCN by 0.03 and 0.05, respectively. The model is implemented with the \"Transformers\" library (Wolf et al., 2020) in Py-Torch (Paszke et al., 2019). The hyper-parameters not mentioned above keep their default values." }, { "figure_ref": [], "heading": "B.2 Baseline Methods", "publication_ref": [ "b8", "b28" ], "table_ref": [], "text": "Text Similarity We adopt the same approach as described in § 4.1 with the uncased BERT-base model to generate the text sequence embedding e m of each node V m and the concept embeddings c. Then, we compute their cosine similarity through sim m ∈ (0, 1) = e T m c ∥e m ∥∥c∥ .\nWe decide the classification threshold by exhausting possible values with 0.01 interval within (0, 1) and select the one that gives the largest F 1 score. Notice that this threshold searching method is only applied to the text similarity baseline. Others take a constant threshold 0.5, as described in § 4. MLP MLP can be considered as a TRENC model without TRENC layers. In other words, it directly feeds the node embeddings e ( § 4.1) into the classification layer (Figure 2) without considering any inter-dependencies between nodes. We increase its classification layer depth until the validation F 1 stops improving for a fair comparison.\nGNNs Similar to MLP, GNN models substitute the TRENC layers in the TRENC model with the GCN and GAT layers, respectively. The GNN layers are implemented with the \"PyTorch Geometric\" library (Fey and Lenssen, 2019). The number of GNN layers is fine-tuned according to the validation performance.\nGraphormer We take the original implementation of Ying et al. (2021) and keep all model components. 7 The differences are that we initialize the node features with node embeddings e instead of atom categories, and we train the model with node classification instead of graph classification. We keep its scheme for encoding edges but introduce only one edge category representing the ancestordescendant relationship. " }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by Amazon.com Services LLC, NSF IIS-2008334, IIS-2106961, and CAREER IIS-2144338.\nWe would like to thank Xian Li, Binxuan Huang, Chenwei Zhang, Yan Liang, and Jingbo Shang for their insightful advice on this work." } ]
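For completeness, the text-similarity baseline of appendix B.2 amounts to a cosine comparison plus a threshold sweep; a compact sketch is shown below, where the node and interest vectors are assumed to come from the frozen BERT mean-pooling step of § 4.1 and the helper names are our own.

```python
import numpy as np

def cosine_similarity(e_node: np.ndarray, c_interest: np.ndarray) -> float:
    """sim_m = e_m · c / (||e_m|| ||c||), as in the baseline description."""
    return float(e_node @ c_interest / (np.linalg.norm(e_node) * np.linalg.norm(c_interest)))

def best_threshold(sims: np.ndarray, labels: np.ndarray) -> float:
    """Sweep thresholds in (0, 1) with a 0.01 step and keep the one with the
    best F1, mirroring the search applied only to this baseline."""
    best_t, best_f1 = 0.5, -1.0
    for t in np.arange(0.01, 1.0, 0.01):
        pred = (sims >= t).astype(int)
        tp = int(((pred == 1) & (labels == 1)).sum())
        fp = int(((pred == 1) & (labels == 0)).sum())
        fn = int(((pred == 0) & (labels == 1)).sum())
        f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t
```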
Recommending a diversity of product types (PTs) is important for a good shopping experience when customers are looking for products around their high-level shopping interests (SIs) such as hiking. However, the SI-PT connection is typically absent in e-commerce product catalogs and expensive to construct manually due to the volume of potential SIs, which prevents us from establishing a recommender with easily accessible knowledge systems. To establish such connections, we propose to extract PTs from the Web pages containing hand-crafted PT recommendations for SIs. The extraction task is formulated as binary HTML node classification given the general observation that an HTML node in our target Web pages can present one and only one PT phrase. Accordingly, we introduce TRENC, which stands for Tree-Transformer Encoders for Node Classification. It improves the inter-node dependency modeling with modified attention mechanisms that preserve the long-term sibling and ancestor-descendant relations. TRENC also injects SI into node features for better semantic representation. Trained on pages regarding limited SIs, TRENC is ready to be applied to other unobserved interests. Experiments on our manually constructed dataset, WEBPT, show that TRENC outperforms the best baseline model by 2.37 F 1 points in the zero-shot setup. The performance indicates the feasibility of constructing SI-PT relations and using them to power downstream applications such as search and recommendation.
Extracting Shopping Interest-Related Product Types from the Web
[ { "figure_caption": "Figure 1 :1Figure 1: Example of the system and results to deliver as a response to searching for the SI \"hiking\".", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: An example of the model inputs, including the DOM tree components, positional indices and attention masks. The path sets and sibling sets in (b) are defined by the global positional indices in (a). In the attention masks, white elements have values 0 and the black ones are -∞.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: An example of HTML processing.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: An example of separating a DOM tree.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Examples of typical ambiguous annotation cases.In case 1, annotators may regard \"men's shirt\" and \"woman's shirt\" as negative as they are subcategories of the PT \"shirt\"; in case 2, the latter \"shirt\" may be regarded as negative as it is a repetition surrounded by long descriptive sentences.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "B.1 TRENC Hyper-Parameters We set the model dimensionality d model = 128 and the number of TRENC layers N = 12. Each attention branch has 4 attention heads, and the single-head attention dimensionality d k = 32. The feed-forward layer above the attention layer (Figure 2) first maps the features from d model to a 512dimensional latent space and then maps it back.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Test F 1 scores on each dataset WEBPT-n and the macro-averaged results (in %).", "figure_data": "ModelsWEBPT-1 WEBPT-2 WEBPT-3 WEBPT-4 WEBPT-5 F1 ( precision / recall )HeuristicSimilarity40.1239.1435.8436.5533.8037.09 ( 28.52 / 52.44 )MethodsRules56.5362.4456.9059.6858.2858.77 ( 44.20 / 88.02 )MLP66.6566.2866.3174.7161.9067.17 ( 72.11 / 63.38 )BERT-FT72.5071.6373.0377.8765.6972.14 ( 68.32 / 76.65 )SupervisedGraphormer71.0981.7675.7366.8169.6773.01 ( 76.61 / 70.89 )MethodsGAT71.3185.4574.8378.4067.8475.57 ( 77.07 / 74.28 )GCN76.1384.0779.1681.5071.9278.56 ( 84.44 / 73.57 )TRENC79.6588.2678.9982.4075.3580.93 ( 84.06 / 77.81 )TrENCGCNTrENCGCN1.0F 1 scores0.5F 1 scores0.6 0.80.02.55.0 Average depth 7.5 10.012 Average depth level 3 45(a) F1 vs. DOM tree depth(b) Average F1 vs. depth levelFigure 4: Test F 1 scores against the DOM tree depths.", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Examples of common mistakes made by TRENC. FP/FN indicates false positives/negatives. attention means a node no longer has access to any other nodes from the tree. Sequence Encoding In our implementation, we use the uncased BERT-base model with d BERT = 768 as our encoders for sequence embeddings e and concept embeddings c. 
5 The embeddings are fixed during the training process.", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "DOM trees are larger than molecular graphs but significantly smaller than knowledge graphs.", "figure_data": "AttributeValue# Shopping Interests95# DOM Trees453# Total Nodes94,167# Leaf Nodes70,161# Positive PT Nodes12,548Average # Nodes per Tree207.87Maximum # Nodes in a Tree2,748Minimum # Nodes in a Tree19Median # Nodes in a Tree156Average Tree Depth7.06Maximum Tree Depth18Minimum Tree Depth3Median Tree Depth7Average # Trees per SI4.77Average # Nodes per SI991.23Maximum # Nodes for an SI3,050Minimum # Nodes for an SI363Median # Nodes for an SI935", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Dataset statistics.", "figure_data": "", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Table5presents the statistics and results. It shows that our labeling quality is decent despite some inevitable disagreements on ambiguous cases, as exampled in Figure7.", "figure_data": "AttributeValue# DOM Trees25# Total Nodes5,938# Positive PT Nodes683# Disagreement87# Fleiss' κ98.53", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Annotation quality investigation.", "figure_data": "Ambiguous Case 1Ambiguous Case 2<root> \"\"<root> \"\"<p2> \"Shirt\"<p2> \"Shirt\"<div> \"\"<div> \"\"<strong> \"Men's shirt\"<p> \"...\"<span> \"...\"<p> \"...\"<strong> \"Women's shirt\"<strong> \"shirt\"<span> \"...\"<p> \"...\"", "figure_id": "tab_9", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "3.BERT-FT BERT-FT classifies each node V m independently by fine-tuning the uncased BERT-base model with the sequence classification task. The model input is the combination of the sequence S m and the concept C, i.e., \"[CLS] S m [SEP] C [SEP]\". It does not consider the tag t m . We append a one-layer FFN to the embedding corresponding to the [CLS] token to map it to a 1dimensional logit. The training objective is minimizing the binary cross-entropy.", "figure_data": "", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Shopping interests and their splits in each dataset. 
\"Tr\", \"vl\" and \"tt\" represent \"training\", \"validation\" and \"test\" respectively.", "figure_data": "SI1WEBPT-n 2 3 453d-printingtrtrtrtrtrairsoft-paintballtrtttrtttrarcherytrtrtt vl trastronomytrtrtrtrtrat-home-fitnesstr vl trttttat-home-spatttrtr vl trbadmintontr vl trtr vlbakingtrtr vl trtrbartendingvl trtrtrtrbaseballtrtrtrtrtrbasketballtrtr vl trttbilliards-pooltt vl vl tttrbird-watchingtrtttrtttrboatingtrtrtttrtrbowlingtrtttttr vlboxingtrtrtrtrtrcalligraphytrtr vl tr vlcampingtrtrtrtrtrcandle-makingtrtrtrtt vlcanoeingtrtttrtrtrcheerleadingtrtrtrtrtrcleaningtrtrtrtrtrclimbingtttrtrtr vlcoffeetrtrtrtrtrcomics-mangatrtrtrtrtrcontent-creationtt vl tttr vlcrickettrtr vl tttrcrossfitvl trtrtrtrcyclingtrtrtttrtrdigital-arttrtrtr vl trdiy-home-improvement trtrtrtrtrdjtrtrtrtrtrdrag-queentrtrtrtrttdrawing-and-sketchingtrtrtttrtrfencingtrtrtttrtrfield-hockeytr vl vl trtrfishingtt vl trtrtrfloral-arrangingtrtrtrtrtrfootballvl trtrtrtrgamingtttrtrtrtrgardeningtrtt vl vl vlgolfingtrtrtrtrtrgymnasticsvl trtrtrtrhair-caretrtr vl vl tthikingtttrtrtrtrhockeytr vl trtrtrhome-entertainmenttt vl trtrtthome-schoolingtrtrtrtrtrhorse-ridingtrtrtrtrtt", "figure_id": "tab_11", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "SIs and splits (cont.).", "figure_data": "", "figure_id": "tab_12", "figure_label": "7", "figure_type": "table" } ]
Yinghao Li; Colin Lockard; Prashant Shiralkar; Chao Zhang
[ { "authors": "Michele Banko; Michael J Cafarella; Stephen Soderland; Matt Broadhead; Oren Etzioni", "journal": "", "ref_id": "b0", "title": "Open information extraction from the web", "year": "2007" }, { "authors": "Shaked Brody; Uri Alon; Eran Yahav", "journal": "", "ref_id": "b1", "title": "How attentive are graph attention networks?", "year": "2022-04-25" }, { "authors": "Chia-Hui Chang; M Kayed; M R Girgis; K F Shaalan", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b2", "title": "A survey of web information extraction systems", "year": "2006" }, { "authors": "Pu-Chin Chen; Henry Tsai; Srinadh Bhojanapalli; Hyung Won Chung; Yin-Wen Chang; Chun-Sung Ferng", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "A simple and effective positional encoding for transformers", "year": "2021" }, { "authors": "Xiang Deng; Prashant Shiralkar; Colin Lockard; Binxuan Huang; Huan Sun", "journal": "", "ref_id": "b4", "title": "DOM-LM: learning generalizable representations for HTML documents", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019-06-02" }, { "authors": "Xibin Dong; Zhiwen Yu; Wenming Cao; Yifan Shi; Qianli Ma", "journal": "Frontiers Comput. Sci", "ref_id": "b6", "title": "A survey on ensemble learning", "year": "2020" }, { "authors": "Vijay Prakash; Dwivedi ; Xavier Bresson", "journal": "", "ref_id": "b7", "title": "A generalization of transformer networks to graphs", "year": "2020" }, { "authors": "Matthias Fey; Jan E Lenssen", "journal": "", "ref_id": "b8", "title": "Fast graph representation learning with PyTorch Geometric", "year": "2019" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural Computation", "ref_id": "b9", "title": "Long Short-Term Memory", "year": "1997" }, { "authors": "Weihua Hu; Matthias Fey; Marinka Zitnik; Yuxiao Dong; Hongyu Ren; Bowen Liu; Michele Catasta; Jure Leskovec", "journal": "", "ref_id": "b10", "title": "Open graph benchmark: Datasets for machine learning on graphs", "year": "2020-12-06" }, { "authors": "Zhiheng Huang; Wei Xu; Kai Yu", "journal": "", "ref_id": "b11", "title": "Bidirectional LSTM-CRF models for sequence tagging", "year": "2015" }, { "authors": "Thomas N Kipf; Max Welling", "journal": "", "ref_id": "b12", "title": "Semisupervised classification with graph convolutional networks", "year": "2017-04-24" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b13", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Colin Lockard; Prashant Shiralkar; Xin ; Luna Dong; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "ZeroShotCeres: Zeroshot relation extraction from semi-structured webpages", "year": "2020" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b15", "title": "Decoupled weight decay regularization", "year": "2019-05-06" }, { "authors": "Lukasz Maziarka; Tomasz Danel; Slawomir Mucha; Krzysztof Rataj; Jacek Tabor; Stanislaw Jastrzebski", "journal": "", "ref_id": "b16", "title": "Molecule attention transformer", "year": "2020" }, { "authors": "Jinyoung Park; Seongjun Yun; Hyeon-Jin Park; 
Jaewoo Kang; Jisu Jeong; Kyung-Min Kim; Jung-Woo Ha; Hyunwoo J Kim", "journal": "", "ref_id": "b17", "title": "Deformable graph transformer", "year": "2022" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Köpf; Edward Z Yang; Zachary Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "", "ref_id": "b18", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019-12-08" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019-11-03" }, { "authors": "Hassan A Sleiman; Rafael Corchuelo", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b20", "title": "A survey on region extractors from web documents", "year": "2013" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b21", "title": "Attention is all you need", "year": "2017-09" }, { "authors": "Petar Veličković; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Liò; Yoshua Bengio", "journal": "", "ref_id": "b22", "title": "Graph attention networks", "year": "2018" }, { "authors": "Qifan Wang; Yi Fang; Anirudh Ravula; Fuli Feng; Xiaojun Quan; Dongfang Liu", "journal": "Association for Computing Machinery", "ref_id": "b23", "title": "Webformer: The web-page transformer for structure information extraction", "year": "2022" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander M Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Qitian Wu; Wentao Zhao; Zenan Li; David Wipf; Junchi Yan", "journal": "", "ref_id": "b25", "title": "Nodeformer: A scalable graph structure learning transformer for node classification", "year": "2022" }, { "authors": "Yongqin Xian; Christoph H Lampert; Bernt Schiele; Zeynep Akata", "journal": "IEEE Trans. Pattern Anal. Mach. 
Intell", "ref_id": "b26", "title": "Zero-shot learning -A comprehensive evaluation of the good, the bad and the ugly", "year": "2019" }, { "authors": "Huimin Xu; Wenting Wang; Xin Mao; Xinyu Jiang; Man Lan", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Scaling up open tagging from tens to thousands: Comprehension empowered attribute value extraction from product title", "year": "2019" }, { "authors": "Chengxuan Ying; Tianle Cai; Shengjie Luo; Shuxin Zheng; Guolin Ke; Di He; Yanming Shen; Tie-Yan Liu", "journal": "", "ref_id": "b28", "title": "Do transformers really perform badly for graph representation?", "year": "2021-12-06" }, { "authors": "Chulhee Yun; Srinadh Bhojanapalli; Ankit Singh Rawat; Sashank J Reddi; Sanjiv Kumar", "journal": "", "ref_id": "b29", "title": "Are transformers universal approximators of sequenceto-sequence functions?", "year": "2020-04-26" }, { "authors": "Guineng Zheng; Subhabrata Mukherjee; Xin ; Luna Dong; Feifei Li", "journal": "Association for Computing Machinery", "ref_id": "b30", "title": "Opentag: Open attribute value extraction from product profiles", "year": "2018" }, { "authors": "Yichao Zhou; Ying Sheng; Nguyen Vo; Nick Edmonds; Sandeep Tata", "journal": "", "ref_id": "b31", "title": "Simplified dom trees for transferable attribute extraction from the web", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 79.75, 714.8, 200.51, 34.74 ], "formula_id": "formula_0", "formula_text": "s m = W seq (GELU( 1 |S m | + 2 |Sm|+1 i=0 w m,i ))," }, { "formula_coordinates": [ 4, 332.42, 585.47, 192.72, 10.67 ], "formula_id": "formula_1", "formula_text": "g(x 1 , x 2 ) = σ(W 1 x 1 + W 2 x 2 + b), (2)" }, { "formula_coordinates": [ 4, 357.04, 664.71, 116.48, 14.19 ], "formula_id": "formula_2", "formula_text": "s ′ m = g(c, s m ) ⊙ c + s m ," }, { "formula_coordinates": [ 4, 359.86, 759.72, 110.84, 15.55 ], "formula_id": "formula_3", "formula_text": "e m = W emb [s ′ m T ; t T m ] T ," }, { "formula_coordinates": [ 5, 70.87, 85.97, 182.59, 12.13 ], "formula_id": "formula_4", "formula_text": "W emb ∈ R d model ×2d model is an FFN layer." }, { "formula_coordinates": [ 5, 384.26, 96.83, 135.08, 28.27 ], "formula_id": "formula_5", "formula_text": "H P m W Q (H P W K ) T √ d k + M P m )." }, { "formula_coordinates": [ 5, 308.84, 217.07, 211.69, 28.02 ], "formula_id": "formula_6", "formula_text": "M P u,v = 0, ∃N P s.t. V u ∈ N P , V v ∈ N P ; -∞, otherwise." }, { "formula_coordinates": [ 5, 364.59, 311.64, 156.31, 14.19 ], "formula_id": "formula_7", "formula_text": "Attn P m = a P m H P W V . (5" }, { "formula_coordinates": [ 5, 520.9, 314.48, 4.24, 9.46 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 6, 70.87, 115.37, 218.27, 24.89 ], "formula_id": "formula_9", "formula_text": "to convert them to vectors i G m , i L m , i S m ∈ [0, 1] d model" }, { "formula_coordinates": [ 6, 73.61, 177.1, 212.26, 14.56 ], "formula_id": "formula_10", "formula_text": "îG m = W G i G m ; îL m = W L i L m ; îS m = W S i S m ," }, { "formula_coordinates": [ 6, 84.48, 316.77, 205.38, 14.56 ], "formula_id": "formula_11", "formula_text": "H P m = H (l) m + îL m ; H S m = H (l) m + îS m ,(6)" }, { "formula_coordinates": [ 6, 101.78, 406.38, 188.09, 32.89 ], "formula_id": "formula_12", "formula_text": "Ĥ(l) m =g( ĤP m , ĤS m ) ⊙ ĤP m + (1 -g( ĤP m , ĤS m )) ⊙ ĤS m .(7)" }, { "formula_coordinates": [ 6, 70.87, 594.37, 218.27, 34.6 ], "formula_id": "formula_13", "formula_text": "ℓ = - |V| m=1 y m log σ(ŷ m )+(1-y m ) log σ(1-ŷ m )." }, { "formula_coordinates": [ 7, 307.34, 594.12, 54.8, 16.64 ], "formula_id": "formula_14", "formula_text": "1 |V| |V| m=1 i L" } ]
10.18653/v1/2022.naacl-main.207
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b8", "b18" ], "table_ref": [], "text": "Performing complex reasoning over long input documents often requires forming high-level abstractions of the text (e.g., plots and themes in a narrative) and then conducting a variety of inferences on top of those abstractions (Graesser et al., 1994). Consider the following question about the story \"Breakaway\" from the QuaLITY dataset (Pang et al., 2022):\nMine helpful actions from training set questions DEFINE(X), COMPARE(X,Y), FIND_EMOTION(X),..." }, { "figure_ref": [], "heading": "Action Mining", "publication_ref": [], "table_ref": [], "text": "Execute the plan step-by-step Plan Execution open_conv = \"In the initial conversation, Phil Conover is excited about his upcoming mission to be the first man to see the other side of the moon ....\"" }, { "figure_ref": [], "heading": "Given a question, generate plan of mined actions Plan Generation", "publication_ref": [], "table_ref": [], "text": "Question: What part of the final scene best connects to the story's opening conversation?\n1.open_conv = FIND_ELEMENT(CTX,\"opening conver..\") 2.final_scene = SUMMARIZE_X(CTX, \"final_scene\")" }, { "figure_ref": [], "heading": "3.reflection = FIND_RELATION(init_conv, final_scene)", "publication_ref": [ "b20" ], "table_ref": [], "text": "Figure 1: High-level overview of our framework PEARL. Each stage in PEARL is achieved via zero-shot or fewshot prompting of an LLM (in our work, GPT-4). We also provide example outputs from each stage.\nWhat part of the final scene best connects to the story's opening conversation?\nTo answer this question, we need to gather, evaluate, and synthesize information from across the story, which motivates decomposing the question into a plan of actions, as in:\n1. Identify all participants in initial conversation. 2. Summarize the initial conversation. 3. Summarize events and themes of final scene. 4. Summarize roles of conversation participants in final scene. 5. Identify and rank connections between conversation and final scene.\nEach action in the above plan varies in complexity, from simple lookup-style actions (Step 1) to more challenging query-focused summarization (Steps 2-4) and conceptual linking (Step 5) actions that require deep narrative understanding.\nGiven the rapidly advancing capabilities of large language models (LLMs), how can we use them to answer questions like these? While we could directly prompt LLMs to generate the answer, prior work on simpler reasoning-based tasks shows that this method is inferior to Chain-of-Thought prompting (Wei et al., 2022, CoT), which encourages the LLM to provide step-by-step explanations and intermediate outputs before producing the answer. Unfortunately, CoT is not well-suited for tasks involving complex reasoning over long input documents, as both the decomposition of the original question and the intermediate outputs of each step are non-trivial to obtain, as in the above example.\nGiven the difficulty of obtaining plans and intermediate explanations for long documents, one potential solution is to delegate this task to smaller executable modules instead of forcing the LLM to come up with all of them at once. In this work, we introduce PEARL, a framework that combines Planning and Executable Actions for Reasoning over Long documents. Each stage of PEARLaction mining, plan decomposition, and plan execution -is implemented by applying zero-shot or few-shot prompting to an LLM. 
The stages (Figure 1) can concisely be described as follows:\n1. Action mining: An LLM is prompted to come up with simple actions that can help solve questions from an input training dataset. Unlike the predefined \"toolboxes\" in methods such as Toolformer (Schick et al., 2023) or ReAct (Yao et al., 2023b), the action set in PEARL is also generated by an LLM.\n2. Plan generation: Given an input test question, an LLM generates an executable plan consisting of a series of actions selected from the action set produced in the previous stage. The plan is formatted as a simple program in which the execution result of one action can serve as an argument to future actions, which enables complex composition." }, { "figure_ref": [], "heading": "Plan execution:", "publication_ref": [ "b18", "b30" ], "table_ref": [], "text": "The LLM executes the plan action-by-action via a prompt template that includes an action and the long-form input document. Note that this is the only stage that includes the document, as the other stages operate over just questions.\nWe demonstrate PEARL's effectiveness on a challenging subset of QuALITY (Pang et al., 2022), a reading comprehension dataset that contains questions about long-form articles. While QuALITY is originally a multiple-choice dataset, we reformulate it into a generation task: given a question and an article, an LLM is asked to generate a free-form answer. As a proxy for measuring answer correctness, we adopt a similar approach to Wang et al. (2020) by asking the LLM to map its generated answer to one of the multiple-choice options, which allows us to compute its accuracy.\nPrompting LLMs with PEARL yields more accurate and comprehensive answers than those generated by directly prompting the LLM to answer the question, particularly for questions that require reasoning over the full long document. This result is particularly impressive given the potential for error propagation in the PEARL framework: as each stage is implemented via an LLM, errors in plan formulation or execution can significantly affect the output answer. To further verify the integrity of the plans, we perform a human evaluation by asking annotators to provide feedback and ratings; annotators generally find the plans to be reasonable, although a small percentage contain unnecessary actions or omit critical actions. Overall, we hope PEARL further opens the door towards using LLMs for complex reasoning over long documents." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b0", "b37", "b29", "b24", "b17", "b4", "b33", "b3", "b20", "b14", "b19", "b7", "b11", "b22", "b15", "b13", "b36", "b38" ], "table_ref": [ "tab_0" ], "text": "Our work builds on recent LLM prompting research and also connects to work on reasoning over long documents. Before describing PEARL, we first survey related papers to contextualize our work within this fast-moving field.\nPrompting methods: Recently, the capabilities of large language models (Brown et al., 2020;Zhang et al., 2022;Touvron et al., 2023) have significantly increased as a result of learning from instructions or feedback (Stiennon et al., 2022;Ouyang et al., 2022;Chung et al., 2022) to better align their outputs to human preferences. When provided with well-crafted prompts, such as chain-of-thought (Wei et al., 2022) explanations, these state-of-the-art models exhibit impressive reasoning abilities. 
A plethora of new prompting techniques (Table 1) has been introduced lately to unlock more capabilities of LLMs via leveraging external tools (Chen et al., 2022;Schick et al., 2023;Lu et al., 2023), problem decomposition (Press et al., 2022;Dua et al., 2022;Khot et al., 2023;Yao et al., 2023b), self-reflection and self-refinement (Huang et al., 2022;Shinn et al., 2023;Madaan et al., 2023), planning (Yao et al., 2023a;Wang et al., 2023a;Long, 2023), and other techniques (Yoran et al., 2023;Wang et al., 2023b;Zhou et al., 2023)." }, { "figure_ref": [], "heading": "Prompting Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Explicit plan", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Iterative prompting", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Does not rely on external tools", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Long documents", "publication_ref": [ "b33", "b3", "b19", "b20", "b10", "b5", "b28", "b16", "b6", "b21", "b26", "b12", "b23", "b2", "b27", "b25" ], "table_ref": [], "text": "Chain-of-Thought (Wei et al., 2022) ✗ ✗ ✓ ✗\nProgram-of-Thought (Chen et al., 2022) ✗ ✗ ✗ ✗\nSelf-Ask (Press et al., 2022) ✗ ✓ ✗ ✗\nToolformer (Schick et al., 2023) ✗ ✗ ✗ ✗\nReAct (Yao et al., 2023b) ✗ ✓ ✗ ✗\nPlan-and-Solve (Wang et al., 2023a) ✓ ✗ ✓ ✗\nPEARL (this work) ✓ ✓ ✓ ✓\nReasoning over long documents: Large language models have showcased remarkable reasoning capabilities (Huang and Chang, 2022), including mathematical reasoning (Cobbe et al., 2021), commonsense reasoning (Talmor et al., 2019), and symbolic reasoning (Nye et al., 2021). Most of these tasks do not involve long context inputs, and thus they are able to benefit from few-shot in-context CoT prompting. In this paper, we primarily focus on tasks that contain long input contexts (Kočiský et al., 2018;Dasigi et al., 2021;Shaham et al., 2022;Sun et al., 2022), specifically generative question answering based on long input articles. To address the absence of reliable evaluation for long-form QA (Krishna et al., 2021), Stelmakh et al. (2022) propose automatic metrics for evaluating the correctness of the answer, whereas in this work, we use LLM-based evaluation by taking advantage of the multiple-choice setup of the existing QA dataset. Prior to the shift to prompting-based methods, approaches including contrastive learning-based sequence-level objectives (Caciularu et al., 2022), iterative hierarchical attention (Sun et al., 2021), and joint modeling of machine reading and answer generation (Su et al., 2022) have been employed to enhance long-context question answering." }, { "figure_ref": [], "heading": "PEARL: Planning and Executing Actions for Reasoning over Long Documents", "publication_ref": [], "table_ref": [], "text": "We are interested in using LLMs to solve tasks that require complex reasoning over long documents.2 In this paper, we focus on the task of answering questions about long-form narratives." }, { "figure_ref": [], "heading": "Instructions and demonstrations:", "publication_ref": [], "table_ref": [], "text": "{Natural language instructions} {Human-written few-shot demonstrations}\nGiven a question about a long document and the seed action set, come up with new actions that could help to answer the question..." }, { "figure_ref": [], "heading": "Output:", "publication_ref": [], "table_ref": [], "text": "FIND_MISSION(CTX, X) : Find the mission of character X from the input context CTX..." 
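To make the action-mining loop concrete, the following is a minimal Python sketch of the procedure described in this stage: for each training question, the model is shown the current action set and asked to propose new actions, which are parsed and added to the pool. The `call_llm` helper, the prompt wording, the seed-action strings, and the parsing regex are all illustrative assumptions rather than the paper's actual implementation.

```python
import re
from typing import Callable, Dict, List

# Hypothetical seed actions (the paper hand-writes seven of them).
SEED_ACTIONS = {
    "FIND_X(CTX, X)": "Find and summarize all relevant information about X in the input CTX.",
    "SUMMARIZE_X(CTX, X)": "Provide a summary about X given the provided input CTX.",
}

# Matches lines of the form "- ACTION(ARGS) : definition".
ACTION_PATTERN = re.compile(r"-\s*([A-Z_]+\([^)]*\))\s*:\s*(.+)")

def mine_actions(questions: List[str],
                 call_llm: Callable[[str], str],
                 actions: Dict[str, str]) -> Dict[str, str]:
    """One pass over training questions, collecting newly proposed actions."""
    for question in questions:
        action_list = "\n".join(f"- {sig} : {doc}" for sig, doc in actions.items())
        prompt = (
            "Given a question about a long document and the seed action set,\n"
            "come up with new actions that could help to answer the question.\n\n"
            f"[Actions]\n{action_list}\n\n[Question]\n{question}\n\nMy new actions:"
        )
        response = call_llm(prompt)
        for signature, definition in ACTION_PATTERN.findall(response):
            # Keep the first definition seen for each action signature.
            actions.setdefault(signature, definition.strip())
    return actions

if __name__ == "__main__":
    # Stubbed LLM call for a quick usage example.
    fake_llm = lambda p: "- FIND_MISSION(CTX, X) : Find the mission of character X from the input context CTX."
    mined = mine_actions(["What is the alien's mission?"], fake_llm, dict(SEED_ACTIONS))
    print(len(mined), "actions collected")
```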
}, { "figure_ref": [], "heading": "Input question: {Question from training set}", "publication_ref": [], "table_ref": [], "text": "What is the alien's mission?\nFigure 2: Prompt sketch for action mining. It comprises human-written seed actions set and instructions, as well as question for which LLM will extract action(s) from. Finally, we also present an example mined action. More details can be found in the Appendix D.\ning strategies that aim to improve the reasoning abilities of LLMs (e.g., CoT) are not applicable to this task due to the length and complexity of the input document. In this section, we specify our PEARL framework, which consists of three LLM-implemented stages that mine actions from a training corpus, formulate plans to answer held-out questions, and then execute the resulting plans to obtain answers." }, { "figure_ref": [], "heading": "Action mining", "publication_ref": [], "table_ref": [], "text": "In many prior prompting techniques such as Re-ACT and Toolformer, the LLM is able to query external APIs (e.g., Wikipedia search or a calculator) to solve a given task. Unlike these works, which assume a predefined action space, PEARL mines actions directly from data of similar distribu- After a full pass over example questions in the training data, we obtain a final set of actions and their corresponding definitions which are then incorporated into the prompt of the next stage. " }, { "figure_ref": [], "heading": "Plan generation", "publication_ref": [], "table_ref": [], "text": "A plan serves as the guiding framework or outline for answering complex questions that may involve multi-step reasoning and/or global understanding of long documents. Given a question, as shown in Figure 3, we prompt an LLM to generate a plan based on the previously-mined action set. Each step of the plan is formatted as\noutput = ACTION(arg1, arg2, . . . ),\nwhere the output variable stores the result of the current ACTION , and the arguments can be (1) the input document, (2) a string, or (3) an output variable from previous steps of the plan. When generating the plan, we do not show the LLM the entire document as input, which provides ample space for incorporating few-shot in-context examples. Similar to the seed actions in the previous stage, we provide a small seed set of plans and allow the model to generate more demonstrations automatically. We provide more details in Section 4 about controlling the quality of model-generated in-context demonstrations." }, { "figure_ref": [ "fig_1" ], "heading": "Plan execution", "publication_ref": [], "table_ref": [], "text": "In the previous stage, the LLM generates a plan that serves as a blueprint for producing a response. To execute each step in the plan, we prompt the LLM with a template filled with output from previous stages. Concretely, as shown in Figure 4, to execute the action FIND_BEHAVIOR_REASON, the model fills in the prompt template with (1) the planned action and definition, (2) current action with specific input argument (e.g., aspirin_event) , (3) assignment of argument name with output from previous stage (e.g., aspirin_event = \"in the beginning of the story, ...\"), and (4) a one-sentence instruction for the current step, all of which are generated by LLM. As the long input article is involved during this stage, the prompt is executed in a zero-shot manner." 
}, { "figure_ref": [], "heading": "Self-correction and self-refinement", "publication_ref": [], "table_ref": [], "text": "Since the plans are generated by an LLM, they may be incorrectly formatted or of otherwise low quality. To address this issue, similar to Shinn et al.\n(2023), we include a self-correction step prior to plan execution and a self-refinement step before incorporating model-generated plans as in-context few-shot examples. We implement a plan parser that returns relevant error messages when the plan does not conform to the defined format. The invalid plan as well as the error message are then passed into the LLM for correcting the plan's grammar. To ensure the quality of model-generated in-context examples, we validate them by executing the plan and evaluating the generated answer with a task-specific scoring function (more details in Section 4.1). If the answer is rejected by the evaluation in the end, we pass the plan to LLM for further self-refinement before being included in the context as few-shot examples." }, { "figure_ref": [ "fig_2" ], "heading": "Experiments", "publication_ref": [ "b18", "b21" ], "table_ref": [], "text": "We compare PEARL to baseline methods (zero-shot answering and zero-shot CoT) on a challenging subset of the QuALITY Question-Answering dataset that requires reasoning over long articles of several thousands tokens. In this section, we describe our dataset selection, experimental setup, and model configurations.\nDataset selection: We focus on the QuALITY QA dataset (Pang et al., 2022), which is a multiplechoice QA task in the SCROLLS benchmark (Shaham et al., 2022). However, to better simulate LLMs usage in real-world scenarios, we turn this dataset into a generative task4 in which an LLM does not have access to the choices and must instead generate a long-form answer. Then, we automatically map the generated answer back to one of the choices with an LLM to evaluate the accuracy as shown in Figure 5. 5 The accuracy of mapped answers serves as a proxy for assessing the correctness of the provided answer. QuALITY contains a diverse variety of questions, each of which is annotated with the amount of context from the document needed to answer the question. In contrast to questions that can be correctly answered with local context once a piece of information is located, as in Who found Retief and Magnan in the trees?\nwe are more interested in questions that require reasoning over long context, as in:\nHow would you describe the changes in tone throughout the passage?\nThese questions constitute an interesting and difficult subset that, unlike more straightforward information seeking questions, require global understanding and reasoning over the document to provide accurate answers. Therefore, we select a subset of questions rated as requiring long contexts to answer. In total, we create a dataset of 1K examples divided into two splits:6 (1) Long: 330 examples from the dev set, 368 examples from training set, and (2) Short: 302 examples from dev set that do not require long contexts to answer; the latter forms a control dataset to make sure our methods do not overly worsen performance on simpler questions." }, { "figure_ref": [], "heading": "Experimental setup", "publication_ref": [], "table_ref": [], "text": "As each of the stages in PEARL has critical hyperparameters and implementation details, we describe our specific configurations here." 
}, { "figure_ref": [], "heading": "Action mining:", "publication_ref": [], "table_ref": [], "text": "We provide an LLM with seven seed actions and two in-context examples to demonstrate the required format for generating new actions. 7 We collect new actions by passing all training set questions into the model, excluding those questions in our evaluation set. Ultimately, we obtain 407 actions and corresponding definitions, of which several are duplicates or overly specific, and Long denotes the split where the questions require reasoning over long contexts to answer accurately. As we only evaluate on a subset, we also provide p-values to verify statistical significance against the zero-shot GPT-4 baseline. Given the article and question, we prompt an LLM with PEARL to generate a long-form answer, which is later mapped to one of QuALITY's multiple-choice options by the LLM itself.\nin total exceeds GPT-4's maximum context window of 8K tokens. As such, we instruct the LLM to simplify and abstract over existing actions in order to reduce the total number of actions. After repeating this process twice,8 we reduce the number of actions to 81, which forms the final action set for PEARL.\nSelf-correction retry limit: Despite utilizing self-correction to validate the generated plan's syntax, it is still possible that the model fails to generate a plan in the correct format. In such cases, we force the model to revert to the zero-shot baseline approach. Out of 1K examples across various PEARL variants, only 4 examples failed to parse within the retry count limit, which is within an acceptable range of failed examples." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b17" ], "table_ref": [], "text": "As existing sophisticated prompting methods require few-shot examples in-context, which is not feasible when long document is involved, we compare PEARL with simple zero-shot baselines (GPT-4 (OpenAI, 2023) and GPT-3.5 (Ouyang et al., 2022)), where we directly prompt the model to provide a detailed free-form answer. Additionally, we also evaluate zero-shot chain-of-thought prompting for GPT-4 by adding \"Let's think step-by-step,\" to the prompt." }, { "figure_ref": [ "fig_3" ], "heading": "Main results", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "We discover that PEARL significantly outperforms competing prompting methods on questions that require reasoning over long contexts, which demonstrates the utility of the planning module. We also observe a small drop in accuracy on questions that require only short contexts, possibly because the plans end up over-complicating what is a simple reasoning process. In this section, we dig deeper into the main results of our experiments, which are presented in Table 2.\nPEARL improves accuracy on long-document QA: Overall, PEARL's accuracy is higher than that of all competing methods, particularly for the QuALITY split annotated by humans as requiring long contexts to answer (Long). Furthermore, we observe in Figure 6 that for questions marked by QuALITY workers as requiring the longest possible context, PEARL improves substantially compared to the zero-shot GPT-4 baseline (72.4% vs 61.9%). Our method's slightly diminished performance on the short split is likely due to both \"overthinking\" these simpler questions, as well as error propagation from plan execution steps as highlighted in Section 6. 
Finally, we point out that all methods achieve higher accuracies on the Short split compared to the Long split, indicating the challenging nature of this set of questions." }, { "figure_ref": [ "fig_4" ], "heading": "Number of actions impacts performance:", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "In Figure 7, we show that the size of the action set is an important factor in PEARL's performance. With just a single action (i.e., EXECUTE a free-form natural language instruction),10 PEARL's accuracy on the Long subset drops to 64%. With too many actions (140 in the plot), its accuracy also degrades, likely because the action space is too fine-grained for the model to properly execute all actions. We note that the optimal number of actions likely differs from task to task, so it is an important hyperparameter to consider when tuning PEARL.\nAction execution is necessary: Do we actually need to execute the generated plans to answer these questions? Feeding just the generated plan to the model along with the question (minus any execution results) may still encourage the LLM to follow the plan's reasoning steps and generate a better answer. However, we observe that removing the execution results from the model's input reduces absolute accuracy by around 3 points, which suggests that it is important to perform multiple passes over the document to execute each action before answering the original question. With that said, we do observe a modest improvement over the GPT-4 zero-shot and CoT baselines (∼ 2 absolute points), which suggests that the plan itself is also valuable.\nSelf-refinement improves performance: To reduce human input, the majority of the plan generation demonstrations for PEARL are generated by the LLM with self-refinement. We observe that self-refinement is critical to performance: without it, the overall accuracy drops nearly 3 absolute points (see ablations in Table 2), which highlights the importance of high-quality few-shot examples for plan generation." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [ "tab_6", "tab_12" ], "text": "In this section, we analyze the behavior of PEARL by diving into the composition of its generated plans, its most preferred actions, and what types of questions it improves most on. We also offer a qualitative error analysis as well as a human evaluation on the correctness of the generated plans. We categorize questions by the type of reasoning required to answer them. 11 Table 4 shows that PEARL significantly improves three reasoning types: why questions (reasoning about a cause), person questions (reasoning about the person(s) involved in an event), and not/except questions (e.g., \"which of the following is not a reason for...\"). The actions that appear most frequently in PEARL's plans include CONCAT, FIND_CHARACTER, FIND_ELEMENT, IDENTIFY_ELEMENT, FIND_EVENT, FIND_BEHAVIOR_REASON, FIND_RELATION, and FIND_EMOTION (Figure 8). These gains come at a cost: PEARL is significantly slower than zero-shot prompting, since each action in a plan requires an additional pass over the long document.\nA qualitative analysis of the generated answers reveals two key advantages of PEARL over zero-shot prompting. First, while zero-shot prompting is reasonably good at finding salient information from the input document, its generative answers tend to be based only on local context around this information. For instance, when asked about the number of wives the character \"Dan Merrol\" has, the baseline successfully identifies six names that appear to be Dan's wives. 
However, PEARL takes into account the revelation that these names \"were actually memories from the brain donors whose parts were used to reconstruct his brain\" and thus correctly reasons that Dan only has one wife. In this case, PEARL provides an answer that demonstrates a more comprehensive understanding of the entire article. Second, PEARL generates more detailed and thorough answers. For instance, given the question \"Why is Kumaon a good region for potential forest preservation?\", the zero-shot answer considers only one aspect of the reason, whereas PEARL elaborates on multiple aspects. This allows PEARL's answer to be mapped to the correct option (\"All other choices\"), while the zero-shot answer maps to the option corresponding to the single aspect.\nWhere does PEARL fall short? • Other: Some QuALITY questions are heavily dependent on the options; that is, the correct answer can only be determined after examining all the options. For instance, Table 11 presents a question asking who, of the given options, would enjoy the story the most. Although PEARL offers an answer based on the story's genre (which is not incorrect), it is not as accurate as the gold label. Furthermore, there are instances where the model's free-form answers lack sufficient details and can thus be mapped to more than one option or no options at all. We classify these responses as a separate category.\nHuman evaluation of model-generated plans: The quality of plans generated by PEARL is critical, as they serve as the basis for the plan execution stage. To gain further insight into the quality of these plans, we perform a human evaluation by hiring annotators on Upwork12 to provide feedback on the generated plans. 13 Concretely, we ask annotators to assess (1) the correctness of the plans (binary choice), assuming error-free execution at each step, and (2) provide free-form feedback on any flaws or potential improvements. On average, annotators regard over 97% of all plans as correct, with over 94% confidence, although these numbers are inflated because the annotators do not have access to the long story when making these judgments. More interestingly, Table 5 displays their feedback aggregated over common themes, which shows that the primary issue with existing plans is the presence of unnecessary steps (10% of the total annotated plans). Annotators also notice that GPT-4 can be inattentive to subtle details while generating plans. For example, given the question \"Do you think it would be fun to live in the universe in which this story takes place?\", the model decides to \"evaluate the pros and cons of living in the universe based on the features found in the input article\". However, a human annotator argues that \"just because something is positive doesn't necessarily mean it is \"fun\". Any pros on the list might outweigh the dangers noted, resulting in an incorrect answer of 'yes'...\"." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we introduce PEARL, a framework for tackling complex reasoning over long documents. To answer a question, PEARL first proposes a plan based on a set of actions mined from a training set, and then it executes the plan step by step via prompting itself with a template filled with output from previous stages. We demonstrate the effectiveness of PEARL on a challenging subset of QuALITY. 
Experiments and analysis show that prompting GPT-4 with PEARL yields more accurate and comprehensive answers than zero-shot and chain-of-thought prompting, and human annotators judge the generated plans to be reasonable." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "While PEARL shows promising results for long-document reasoning, there are several limitations to our approach. Like other prompting methods, PEARL is susceptible to generating misinformation or hallucinations. It is also more time-consuming and computationally costly than the baseline approach of directly prompting an LLM to answer the question. Moreover, PEARL may over-complicate simple questions that only need superficial reasoning over long-form narratives. Finally, PEARL is still bounded by the maximum context window size of the LLMs. Overall, our work leaves many interesting directions in this space (e.g., new datasets, modules, stage refinements) open for exploration." }, { "figure_ref": [], "heading": "A GPT-4 Multiple-choice setup performance", "publication_ref": [], "table_ref": [], "text": "While our primary focus is on the generative QA setup in the main text, we provide GPT-4's performance under the standard multiple-choice setup here in the Appendix. On the entire QuALITY dev set, GPT-4 achieves an accuracy of 84.4%. For the 1000 challenging question set, GPT-4 reaches an accuracy of 78.7%, nearly 10 points higher than the GPT-4 zero-shot generative baseline. This result suggests that there is still room for improvement in GPT-4's generative answers. We also observe that GPT-4 is sensitive to the ordering of the provided options. We further evaluate GPT-4 with three shuffled versions of the options (swap A and D, B and C; swap A and C, B and D; swap A and B, C and D). While the overall accuracy of these versions remains similar, the questions that are consistently answered correctly across all four option orderings drop to 68.7%. This result raises the question of whether GPT-4 truly \"understands\" the question and further motivates the generative QA setup." }, { "figure_ref": [], "heading": "B Verify Accuracy of Answer Mapping", "publication_ref": [], "table_ref": [], "text": "As demonstrated in Section 6, the mapping stage is not always reliable. To understand the frequency of mapping errors, we conduct a small-scale human answer mapping study. We recruit three professionals on Upwork. We randomly select 50 questions and ask annotators to read PEARL output and then map it to one of the provided options. On average, annotators agree with ∼83% of GPT-4 mappings, with inter-annotator agreement on the four-class setting of κ = 0.677. For questions where annotators disagree with each other or do not concur with GPT-4, they tend to be those that can be mapped to more than one option or none of the options. We believe this level of accuracy is decent enough to let GPT-4 perform the mapping step for evaluation.\nC Can PEARL benefit from more human-written examples?\nEXCEPT(CTX, LIST) # Find the item that is not mentioned in the input CTX but is present in the given.. EXPLAIN_PROCESS(CTX, X) # Provide a detailed explanation of the process X given the input CTX. FIND_BARRIERS_CAUSES(CTX, X) # Find and summarize the remaining barriers or causes related to X given the input CTX. FIND_BEHAVIOR_REASON(CTX, X) # Find the reason behind the behavior X given the input CTX. FIND_BENEFIT(CTX, X) # Find the direct benefit of X given the input CTX. 
FIND_BEST(CTX, X, Y) # Find the best X in the context of Y given the input CTX. FIND_CHARACTER(CTX, X) # Find and summarize the character traits, transformation, and changes of X given the input CTX. FIND_COMMON(CTX, X, Y, Z) # Find the common ground, characteristics, or commonalities between X, Y, and Z given the input CTX. FIND_CONDITION(CTX, X, Y) # Find the condition, outcome, or consequences related to X and Y given the input CTX. FIND_CONFLICT_CONCERN(CTX, X, Y) # Find the conflict, concern, or disagreement between X and Y given the input CTX. FIND_CONSISTENCY(CTX, X) # Determine if X is consistent throughout the input CTX. FIND_DECISION(CTX, X) # Find the decision, factor, or event that influenced X's decision in the input CTX. FIND_DESCRIPTION(CTX, X) # Find all descriptions, characteristics, or words that describe X given the input CTX. FIND_DETAILS(CTX) # Find all the details about a topic (e.g., contract, city-state) discussed in the input CTX. FIND_DIALOGUE(CTX, X, Y) # Find the dialogue between X and Y in the input CTX. FIND_DIFFICULTY_DANGER(CTX, X) # Find the most difficult aspect, challenge, or danger faced by X in the given input CTX.\nFIND_ELEMENT(CTX, X, Y) # Find the element X related to Y given the input CTX. This function can cover message, method, metrics, mismatch, mission, mistake, most likely, motif, motivation, nationalities, negative critique, negative effect, next event, normal, objective, obstacles, ... FIND_EMOTION(CTX, X, Y) # Find the emotion or feeling X feels towards Y given the input CTX. FIND_ENDING(CTX, X) # Find the ending or conclusion of X's story or the input CTX. FIND_EVENT(CTX, X) # Find the event involving X in the input CTX (e.g., betrayal, change, climax). FIND_EVIDENCE_EXAMPLE(CTX, X) # Find evidence or an example supporting X given the input CTX. FIND_EXCEPTION(CTX, X, Y, Z) # Find the exception or characteristic that is not common among X, Y, and Z given the input CTX. FIND_EXPECTATION(CTX, X) # Find the expectation, assumption, or impact about X given the input CTX. FIND_EXPLANATION(CTX, X) # Find the most likely explanation, critique, or doubt for X given the input CTX. FIND_FACT_FALSE(CTX, X) # Find a definite fact or false statement about X given the input CTX. FIND_FEARS_DISTRACTIONS(CTX, X) # Find the fears, concerns, or distractions of X given the input CTX. FIND_FEATURES(CTX, X) # Find all the features that X cares about given the input CTX. FIND_FIRST_INSTANCE(CTX, X) # Find the first instance of X happening in the input CTX. FIND_FLAW(CTX, X) # Find the greatest flaw of X given the input CTX. FIND_FOCUS(CTX, X) # Find the person or object that is focused on the most in the input CTX, given a list of X. FIND_FORESHADOW(CTX, X, Y) # Find the instance where X foreshadows Y in the input CTX. FIND_FUTURE(CTX, X) # Find the future, predicted outcome, or action of X given the input CTX. FIND_GRIEVANCE(CTX, X) # Find and summarize the grievance X has against something or someone in the input CTX. FIND_HALO_EFFECT(CTX, X) # Find and summarize one halo effect of X given the input CTX. FIND_HUMBLENESS(CTX, X) # Find the instances of humbleness presented by X in the input CTX. FIND_HYPOTHETICAL(CTX, X) # Find the hypothetical outcome or consequence of X given input CTX. FIND_IMAGINATION(CTX, X) # Find and summarize how X imagines something in the input CTX. FIND_IMPACT(CTX, X, Y) # Find the event or experience that had the strongest impact on X's Y given the input CTX. ... 
[Question] Now you are given a question about an article: {question} Please provide a plan (sequence of actions) that can arrive to the answer after reading the article. As the corresponding options are not provided for the question, when the question is not answerable without the options, simply collect as much information as possible from the input such that it will be answerable with the options. Make sure the plan you generate is valid and faithful to the question.\n[Answer] ans = CONCAT(terrans_battle, terrans_close_win) : Combine the Terrans' involvement in the battle and the events where they come close to winning to form the final answer\nStep 2 and 3 can be combined: Find and summarize the Terrans' battle event within the story in the input article What level of depth does the author provide on the subjects they use to make their case? A: Language is really the only thing covered in any depth B: A broad, but not very deep assessment C: They provide the reader with deeper arguments about the monetary system and striking tendencies than anything else D: They provide deep, explanatory statistics to most arguments 1. author = IDENTIFY_ELEMENT(CTX, \"author\") : Identify the author of the article 2. subjects = FIND_ELEMENT(CTX, \"subjects\", author) : Find and list all the subjects the author uses to make their case in the input article 3. depth_analysis = ANALYZE(CTX, subjects, author) : Analyze the level of depth the author provides on the subjects they use to make their case in the input article 4. ans = CONCAT(subjects, depth_analysis) : Combine the subjects and the depth analysis to form the final answer for comparing with the options Very good plan. 
The present sequence should be minimal, i.e., no unnecessary actions. 2. The sequence of actions should be specific and cover every detail about the question. 3. The sequence of actions should use as many as existing actions as possible. 4. It is fine to create new actions, however, the created new actions should be maximally reusable and generalizable to other reading comprehension questions. 5. The arguments should cover all the details of the given question.\n[Question] {Question} {Value assignment of input argument(s)} X = \"In the story, Kolin is a steward from the Planetary State of Haurtoz who is part of a scouting party sent to explore a planet after their ship, the Peace State, is damaged. Kolin is unhappy with the oppressive regime on Haurtoz and dreams of escaping it. While exploring the planet, he encounters a tree named Ashlew, which is actually a man who has transformed into a tree. Ashlew tells Kolin about the Life, a powerful entity on the planet that can help individuals change their form...{Output from previous step.}\" Y = \"becoming a tree\"\n[Answer] {A brief description of current step.} (Find the emotion or feeling Kolin has towards becoming a tree himself in the input article) Table 9: Prompt for executing a step in a plan. Prompt of this step is a template with placeholders which will be filled with the output from previous step(s). " }, { "figure_ref": [], "heading": "Prompt for Answer Mapping", "publication_ref": [], "table_ref": [], "text": "" } ]
Strategies such as chain-of-thought prompting improve the performance of large language models (LLMs) on complex reasoning tasks by decomposing input examples into intermediate steps. However, it remains unclear how to apply such methods to reason over long input documents, in which both the decomposition and the output of each intermediate step are non-trivial to obtain. In this work, we propose PEARL, a prompting framework to improve reasoning over long documents, which consists of three stages: action mining, plan formulation, and plan execution. More specifically, given a question about a long document, PEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE, FIND_EVENT, FIND_RELATION) and then executes them over the document to obtain the answer. Each stage of PEARL is implemented via zero-shot or few-shot prompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate PEARL on a challenging subset of the QuALITY dataset, which contains questions that require complex reasoning over long narrative texts. PEARL outperforms zero-shot and chain-of-thought prompting on this dataset, and ablation experiments show that each stage of PEARL is critical to its performance. Overall, PEARL is a first step towards leveraging LLMs to reason over long documents.
PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents
[ { "figure_caption": "3See prompt for QuALITY action mining in Appendix D", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Prompt sketch for plan execution. This prompt contains multiple {placeholders} that will be filled with output from previous stages.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Generic illustration of our evaluation setup.Given the article and question, we prompt an LLM with PEARL to generate a long-form answer, which is later mapped to one of QuALITY's multiple-choice options by the LLM itself.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Accuracy by the amount of required context to answer, 9 as annotated by humans in QuALITY.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: PEARL accuracy given in-context action sets of various sizes. Having too few or too many actions impairs the performance.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Top-10 most frequently used actions by PEARL.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "1. output_1 = action_1(here goes arguments) : [one-sentence explanation] 2. output_2 = action_2(here goes arguments) : [one-sentence explanation] do Ross and Mehta view Brown's acquisition of the magazine?\" Answer: New actions: -FIND_OPINION(CTX, X, Y) : Find the opinion of X about Y given the input CTX 1. ross = FIND_CHARACTER(CTX, \"Ross\") : Identify who Ross is in the input article 2. mehta = FIND_CHARACTER(CTX, \"Mehta\") : Identify who Mehta is in the input article 3. brown = FIND_CHARACTER(CTX, \"Brown\") : Identify who Brown is in the input article 4. magazine_acquisition = FIND_EVENT(CTX, \"Brown's acquisition of the magazine\") : Find the event of Brown's acquisition of the magazine in the input article 5. ross_opinion = FIND_OPINION(CTX, ross, magazine_acquisition) : Find the opinion of Ross about Brown's acquisition of the magazine 6. mehta_opinion = FIND_OPINION(CTX, mehta, magazine_acquisition) : Find the opinion of Mehta about Brown's acquisition of the magazine 7. ans = CONCAT(ross_opinion, mehta_opinion) : Combine the opinions of Ross and Mehta on Brown's acquisition of the magazine to form the final answer ... {more few-shot examples} ...", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "= FIND_DETAILS(CTX) : Find all the details about the classes and their intensity levels mentioned in the input article 2. least_intense_class = FIND_LEAST_DESCRIBING_WORD (classes, \"intense\") : Find the class that is least intense in the given input CTX 3. ans = CONCAT (least_intense_class, classes) : Combine the least intense class and the intensity levels of all classes to form the final answer Adding the details for the other classes is unnecessary since the question is looking for a single answer, the least intensive class. Do the Terrans ever come close to winning the battle within the story? A: No, they continually lose B: They win the whole battle with less casualties C: Yes, by the surprise squadron Evelyn leads D: Yes, by Evelyn cloning soldiers into battle 1. 
terrans = IDENTIFY_ELEMENT(CTX, \"Terrans\") : Identify who the Terrans are in the input article 2. battle = FIND_EVENT(CTX, \"battle\") : Find and summarize the battle event within the story in the input article 3. terrans_battle = FIND_RELATION(CTX, terrans, battle) : Find and summarize the Terrans' involvement in the battle from the input article 4. terrans_close_win = FIND_CONDITION (CTX, \"Terrans\", \"close to winning\") : Find the condition or events where the Terrans come close to winning the battle in the input article 5.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Comparison of PEARL to other recently-proposed prompting techniques. PEARL is the only one designed for and evaluated on tasks that require complex reasoning over long documents.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "{Natural language instructions}{Human-written few-shot demonstrations}Figure 3: Prompt sketch for plan generation. In theprompt, we include the list of actions mined from previ-ous stage in-context, natural language detailing the task,and few-shot examples guiding the plan generation.tion (in our case, training set questions of QuAL-ITY). As shown by prior research (Graesser et al.,1994), answering complex queries over long doc-uments requires specific reasoning techniques; asfurther evidence, Xu et al. (2022) demonstrates thepresence of various discourse structures in good an-swers to long-form questions on Reddit. Learningdataset-specific actions enables PEARL to scale todifferent domains and tasks, as user queries maydiffer considerably in terms of complexity. More-over, mining actions from training set can reducehuman efforts in designing new actions. In thiswork, we define an \"action\" as a basic unit for longdocument reasoning. To obtain these actions, wefirst manually create a small set of seed actions touse as demonstrations. 3 Then, as shown in Figure 2,given an example question, we feed it along withthe seed actions and instructions to the LLM togenerate more task-specific actions. Each ACTIONis formatted as a programmatic function with inputarguments and is followed by a model-generatedfunction definition in natural language. Below isan example action generated by the LLM:ANALYZE(CTX, X, Y) # Analyze the rela-tionship, attitude, or feelings between X and Ygiven the input context CTX", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "We present baseline and PEARL as well as ablation results on our generative subset of QuALITY questions.", "figure_data": "QUALITY LONGQUALITY SHORTALL p-val", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Examples of errors exhibited by PEARL answers.", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Accuracy by reasoning types.", "figure_data": "The improved performance ofPEARL comes at the cost of longer running timeand cost. 
With an average of 30 examples, PEARL needs to handle 4.4 times more tokens in the prompt and generate 1.3 times more tokens owing to the intermediate steps. 11 We prompt GPT-4 with the definition of each reasoning type presented in QuALITY's Appendix (Pang et al., 2022) and ask it to label each question with up to two reasoning types.", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Out of 40 examples, 6 fall into this Other category.", "figure_data": "Human annot. category | # of plans: Unnecessary steps 15 | Steps can be merged 2 | Plan misses information 3 | Plan may lead to incorrect answer 4 | Plan needs slight edit 7 | Table 5: human freeform feedback aggregation", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "A subset of mined actions from training set questions. Analyze the relationship, attitude, or feelings between X and Y, or the character, language, tone, or symbolism of X given the input CTX. COMPARE(CTX, X, Y, Z) # Compare X and Y in the context of Z, considering aspects such as abilities, assets, attractiveness, behavior, concerns, contributions, cultures, events, experiences, feelings, focus, intelligence, irony, nationalities, performance, praise, reactions, reviews, secretiveness, time periods, treatment, truth, or worlds given the input CTX. Suppose you are given a question about an article, as well as a list of potential actions (shown above) that you can execute to solve the question. You can imagine the actions as functions in a program, where you have input arguments and output. The output of an action can be fed as input to another action. Please present a sequence of actions that you would use to answer the question after you read the article. The sequence of actions should be specific and cover all the details about the question. Please prioritize using the actions presented in the list above. If you need to add new actions, please follow the format below. Please assign the output of each action with a distinct name, which can be passed into other actions as argument. Think twice before you provide your answer. Make sure your answer is valid, clear, and easy to understand. Keep the answer simple and remove any unnecessary steps. Do not use list comprehension or dictionary comprehension. Keep each action minimally simple. If a question is unanswerable (e.g., requires options), collect as much information as possible from the input such that it will be answerable when provided with options.
Your answer should follow the format:", "figure_data": "Prompt for Generating Plan [Actions] ANALYZE(CTX, X, Y) # COMPREHEND(CTX, X) # Provide a detailed comprehension of X given the input CTX. CONCAT(S1, S2, ...) DEFINE(CTX, X) # Provide the definition of X given the input CTX. DESCRIBE(CTX, X, Y) # Provide a description of X in terms of Y, such as character, genre, or introduction given the input CTX. EVALUATE(CTX, X, Y) # Evaluate aspects such as feeling, outcome, performance, personalities, risk, or truth of X in relation to Y given the input CTX. ... {List of Actions as shown in Table 7} [Instructions]", "figure_id": "tab_10", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Prompt for generating plan given a question, which is filled in the placeholder {question}.", "figure_data": "D Prompts and templates used in PEARL; E Human feedbacks on model-generated plan", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Example human feedback from annotators on PEARL-generated plans.", "figure_data": "", "figure_id": "tab_12", "figure_label": "11", "figure_type": "table" } ]
Simeng Sun; Yang Liu; Shuohang Wang; Chenguang Zhu; Mohit Iyyer
[ { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b1", "title": "", "year": "" }, { "authors": "Avi Caciularu; Ido Dagan; Jacob Goldberger; Arman Cohan", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Long context question answering via supervised contrastive learning", "year": "2022" }, { "authors": "Wenhu Chen; Xueguang Ma; Xinyi Wang; William W Cohen", "journal": "", "ref_id": "b3", "title": "Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks", "year": "2022" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Yunxuan Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Alex Chowdhery; Marie Castro-Ros; Kevin Pellat; Dasha Robinson; Sharan Valter; Gaurav Narang; Adams Mishra; Vincent Yu; Yanping Zhao; Andrew Huang; Hongkun Dai; Slav Yu; Ed H Petrov; Jeff Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b4", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Karl Cobbe; Vineet Kosaraju; Mohammad Bavarian; Mark Chen; Heewoo Jun; Lukasz Kaiser; Matthias Plappert; Jerry Tworek; Jacob Hilton; Reiichiro Nakano; Christopher Hesse; John Schulman", "journal": "", "ref_id": "b5", "title": "Training verifiers to solve math word problems", "year": "2021" }, { "authors": "Pradeep Dasigi; Kyle Lo; Iz Beltagy; Arman Cohan; Noah A Smith; Matt Gardner", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "A dataset of information-seeking questions and answers anchored in research papers", "year": "2021" }, { "authors": "Dheeru Dua; Shivanshu Gupta; Sameer Singh; Matt Gardner", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Successive prompting for decomposing complex questions", "year": "2022" }, { "authors": "Murray Arthur C Graesser; Tom Singer; Trabasso", "journal": "Psychological review", "ref_id": "b8", "title": "Constructing inferences during narrative text comprehension", "year": "1994" }, { "authors": "Jiaxin Huang; Shixiang Shane Gu; Le Hou; Yuexin Wu; Xuezhi Wang; Hongkun Yu; Jiawei Han", "journal": "", "ref_id": "b9", "title": "Large language models can self-improve", "year": "2022" }, { "authors": "Jie Huang; Kevin Chen; -Chuan Chang", "journal": "", "ref_id": "b10", "title": "Towards reasoning in large language models: A survey", "year": "2022" }, { "authors": "Tushar Khot; Harsh Trivedi; Matthew Finlayson; Yao Fu; Kyle Richardson; Peter Clark; Ashish Sabharwal; ; ; Karl Moritz Hermann; Gábor Melis; Edward Grefenstette", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b11", "title": "Decomposed prompting: A modular approach for solving complex tasks", "year": "2018" }, { "authors": "Kalpesh Krishna; 
Aurko Roy; Mohit Iyyer", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Hurdles to progress in long-form question answering", "year": "2021" }, { "authors": "Jieyi Long", "journal": "", "ref_id": "b13", "title": "Large language model guided tree-ofthought", "year": "2023" }, { "authors": "Pan Lu; Baolin Peng; Hao Cheng; Michel Galley; Kai-Wei Chang; Ying Nian Wu; Song-Chun Zhu; Jianfeng Gao", "journal": "", "ref_id": "b14", "title": "Chameleon: Plug-and-play compositional reasoning with large language models", "year": "2023" }, { "authors": "Aman Madaan; Niket Tandon; Prakhar Gupta; Skyler Hallinan; Luyu Gao; Sarah Wiegreffe; Uri Alon; Nouha Dziri; Shrimai Prabhumoye; Yiming Yang; Sean Welleck; Prasad Bodhisattwa; Shashank Majumder; Amir Gupta; Peter Yazdanbakhsh; Clark", "journal": "", "ref_id": "b15", "title": "Self-refine: Iterative refinement with self-feedback", "year": "2023" }, { "authors": "Maxwell Nye; Michael Henry Tessler; Joshua B Tenenbaum; Brenden M Lake", "journal": "", "ref_id": "b16", "title": "Improving coherence and consistency in neural sequence models with dual-system, neuro-symbolic reasoning", "year": "2021" }, { "authors": "Long Ouyang; Jeff Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray; John Schulman; Jacob Hilton; Fraser Kelton; Luke Miller; Maddie Simens; Amanda Askell; Peter Welinder; Paul Christiano; Jan Leike; Ryan Lowe", "journal": "", "ref_id": "b17", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Richard Yuanzhe; Pang ; Alicia Parrish; Nitish Joshi; Nikita Nangia; Jason Phang; Angelica Chen; Vishakh Padmakumar; Johnny Ma; Jana Thompson; He He; Samuel Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "QuALITY: Question answering with long input texts, yes!", "year": "2022" }, { "authors": "Ofir Press; Muru Zhang; Sewon Min; Ludwig Schmidt; Noah A Smith; Mike Lewis", "journal": "", "ref_id": "b19", "title": "Measuring and narrowing the compositionality gap in language models", "year": "2022" }, { "authors": "Timo Schick; Jane Dwivedi-Yu; Roberto Dessì; Roberta Raileanu; Maria Lomeli; Luke Zettlemoyer; Nicola Cancedda; Thomas Scialom", "journal": "", "ref_id": "b20", "title": "Toolformer: Language models can teach themselves to use tools", "year": "2023" }, { "authors": "Uri Shaham; Elad Segal; Maor Ivgi; Avia Efrat; Ori Yoran; Adi Haviv; Ankit Gupta; Wenhan Xiong; Mor Geva; Jonathan Berant; Omer Levy", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "SCROLLS: Standardized CompaRison over long language sequences", "year": "2022" }, { "authors": "Noah Shinn; Beck Labash; Ashwin Gopinath", "journal": "", "ref_id": "b22", "title": "Reflexion: an autonomous agent with dynamic memory and self-reflection", "year": "2023" }, { "authors": "Ivan Stelmakh; Yi Luan; Bhuwan Dhingra; Ming-Wei Chang", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "ASQA: Factoid questions meet long-form answers", "year": "2022" }, { "authors": "Nisan Stiennon; Long Ouyang; Jeff Wu; Daniel M Ziegler; Ryan Lowe; Chelsea Voss; Alec Radford; Dario Amodei; Paul Christiano", "journal": "", "ref_id": "b24", "title": "Learning to summarize from human feedback", "year": "2022" }, { "authors": "Dan Su; Xiaoguang Li; Jindi Zhang; Lifeng Shang; Xin Jiang; Qun Liu; Pascale Fung", "journal": "Association for Computational 
Linguistics", "ref_id": "b25", "title": "Read before generate! faithful long form question answering with machine reading", "year": "2022" }, { "authors": "Haitian Sun; William Cohen; Ruslan Salakhutdinov", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "ConditionalQA: A complex reading comprehension dataset with conditional answers", "year": "2022" }, { "authors": "Haitian Sun; William W Cohen; Ruslan Salakhutdinov", "journal": "", "ref_id": "b27", "title": "Iterative hierarchical attention for answering complex questions over long documents", "year": "2021" }, { "authors": "Alon Talmor; Jonathan Herzig; Nicholas Lourie; Jonathan Berant", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "CommonsenseQA: A question answering challenge targeting commonsense knowledge", "year": "2019" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b29", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Alex Wang; Kyunghyun Cho; Mike Lewis", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Asking and answering questions to evaluate the factual consistency of summaries", "year": "2020" }, { "authors": "Lei Wang; Wanyu Xu; Yihuai Lan; Zhiqiang Hu; Yunshi Lan; Roy ; Ka-Wei Lee; Ee-Peng Lim", "journal": "", "ref_id": "b31", "title": "Plan-and-solve prompting: Improving zeroshot chain-of-thought reasoning by large language models", "year": "2023" }, { "authors": "Xuezhi Wang; Jason Wei; Dale Schuurmans; V Quoc; Ed H Le; Sharan Chi; Aakanksha Narang; Denny Chowdhery; Zhou", "journal": "", "ref_id": "b32", "title": "Self-consistency improves chain of thought reasoning in language models", "year": "2023" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Fei Xia; Ed H Chi; Denny Quoc V Le; Zhou", "journal": "", "ref_id": "b33", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Fangyuan Xu; Junyi ; Jessy Li; Eunsol Choi", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "How do we answer complex questions: Discourse structure of long-form answers", "year": "2022" }, { "authors": "Shunyu Yao; Dian Yu; Jeffrey Zhao; Izhak Shafran; Thomas L Griffiths; Yuan Cao; Karthik Narasimhan; ; Shunyu Yao; Jeffrey Zhao; Dian Yu; Nan Du; Izhak Shafran; Yuan Karthik R Narasimhan; Cao", "journal": "", "ref_id": "b35", "title": "Tree of thoughts: Deliberate problem solving with large language models", "year": "2023" }, { "authors": "Tomer Ori Yoran; Ben Wolfson; Uri Bogin; Daniel Katz; Jonathan Deutch; Berant", "journal": "", "ref_id": "b36", "title": "Answering questions by meta-reasoning over multiple chains of thought", "year": "2023" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin; Todor Mihaylov; Myle Ott; Sam Shleifer; Kurt Shuster; Daniel Simig; Punit Singh Koura; Anjali Sridhar; Tianlu Wang; Luke Zettlemoyer", "journal": "", "ref_id": "b37", "title": "Opt: Open pretrained transformer language models", "year": "2022" }, { "authors": "Denny Zhou; Nathanael Schärli; Le Hou; Jason Wei; Nathan Scales; Xuezhi Wang; Dale Schuurmans; Claire Cui; 
Olivier Bousquet; Ed H Quoc V Le; Chi", "journal": "", "ref_id": "b38", "title": "Least-to-most prompting enables complex reasoning in large language models", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 87.96, 177.24, 402.85, 24 ], "formula_id": "formula_0", "formula_text": "✓ ✗ ✓ ✗ PEARL (this work) ✓ ✓ ✓ ✓" }, { "formula_coordinates": [ 4, 327.96, 455.3, 168.09, 7.86 ], "formula_id": "formula_1", "formula_text": "output = ACTION(arg1, arg2, . . . )," }, { "formula_coordinates": [ 8, 80.77, 423.62, 143.25, 48.44 ], "formula_id": "formula_2", "formula_text": "C O N C A T F IN D _ C H A R A C T E R F IN D _ E L E M E N T ID E N T IF Y _ E L E M E N T F IN D _ E V E N T F IN D _ B E H A V IO R _ R E A S O N F IN D _ R E L A T IO N F IN D _ E M O T" } ]
10.18653/v1/2021.emnlp-main.468
2023-05-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b50", "b2", "b24", "b31", "b59", "b47" ], "table_ref": [], "text": "Question answering (QA) is a pivotal area of research in NLP that evaluates the language understanding and reasoning capabilities of language models. To this end, the NLP community has developed numerous QA datasets that span various domains, question-answer formats, and reasoning skills (Rogers et al., 2022). Consequently, there is an increasing demand for a Unified QA system that can manage mixed batches of instances from different datasets and tasks during training and inference (Liu et al., 2022). Such a system would eliminate the need for manual tuning or per-task adjustments, * work done during internship at Salesforce Research (Asai et al., 2022) (a complex prompt transfer learning approach) for Unified QA on 16 QA datasets in several few-shot scenarios using T5-Base as the backbone model. Init refers to prompt initialization while MT stands for multitasking. The results show that prompt-tuning with prior is a promising alternative to multi-task full-model finetuning, especially in limited data scenarios, and that ATTEMPT does not provide any additional advantage.\nenabling seamless integration of new datasets. This would contribute to the development of efficient QA models with minimal computational and storage costs, enhanced generalization capabilities, and greater practicality for real-world use cases.\nThe success of transformer-based models in textto-text generation has led to a growing interest in Unified QA systems. Khashabi et al. (2020) proposed Unified-QA, a single QA model pretrained on diverse datasets that outperforms formatspecialized models. While prompt-tuning methods (Lester et al., 2021;Vu et al., 2022) have emerged as a promising alternative to fine-tuning, (Zhong et al., 2022a) proposed to model the commonalities and distinguish task differences through a structurally designed prompt-based input schema. However, these approaches have limitations related to scalability, expensive pre-training requirements, and the need for tens of thousands of training ex-amples for each task. Moreover, the performance of pre-trained QA models significantly degrades when only a few question-answering examples are available (Ram et al., 2021). While Unified QA approaches have shown success in high-data scenarios, their efficacy in more practical scenarios with limited training examples remains unexplored.\nThis paper aims to explore the potential of two different paradigms of tuning, model, and prompts, for unified question answering under a low resource setting. Despite the importance of this problem, there have been no previous studies investigating the effectiveness of these paradigms for this task. In response, we conduct an exhaustive analysis of the applicability of these two paradigms to a unified question-answering system. 
To do so, we evaluate their promise, effectiveness, and trade-offs using a set of 16 QA datasets, covering diverse domains and a wide range of skills and formats.\nOur empirical study reveals several key findings, including (i) prompt tuning can perform just as well as model tuning under a low resource regime, given a good initialization, (ii) parameter-sharing results in superior few-shot performance, but the trends are reversed in the full-shot setting, (iii) simple knowledge transfer techniques for prompt initialization can be as effective as more complex methods in the few-shot setting, without introducing additional parameters, and (iv) prompt tuning achieves a significant performance boost from pretraining in a low resource regime while increasing model size does not significantly affect prompt tuning with initialization. In addition, we perform a systematic quantitative and qualitative study to provide insights into the advantages and limitations of prompt tuning for unified QA with an emphasis on the behaviors in the few-shot setting. Overall, our research aims to contribute to the development of effective and efficient unified question-answering systems in low-resource scenarios." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b11", "b40", "b45", "b5", "b29", "b64", "b19", "b31", "b19", "b41", "b51", "b3", "b31", "b62", "b61", "b60", "b45", "b24", "b43", "b61", "b44", "b25", "b0", "b69", "b59", "b2", "b2", "b59", "b47", "b7", "b2", "b24", "b45", "b1", "b31", "b38", "b59", "b59", "b2" ], "table_ref": [], "text": "Parameter-efficient tuning. Large-scale pretrained language models fine-tuned on specific target datasets have shown remarkable performance for several downstream tasks in NLP (Devlin et al., 2019;Liu et al., 2019;Raffel et al., 2022;Brown et al., 2020;He et al., 2021b;Lan et al., 2019;Yang et al., 2019). However, standard fine-tuning approaches update all the model parameters, which can often lead to deployment challenges. Recent research (Houlsby et al., 2019;He et al., 2021c;Lester et al., 2021;Li and Liang, 2021a) has shown that similar performance can be obtained by updating or adding a few trainable parameters while keeping pre-trained language model parameters frozen. Several approaches have been proposed in this direction: Adapter-based methods (Houlsby et al., 2019;mahabadi et al., 2021;Rücklé et al., 2021) insert small trainable feed-forward networks (modules) between layers of pre-trained language models while BitFit (Ben Zaken et al., 2022) updates only the language model biases. Another computationally efficient approach is prompttuning (Lester et al., 2021) and prefix-tuning (Li and Liang, 2021a), which concatenate trainable continuous embeddings to the input. These trainable parameters, called soft prompts, can be used as plug-ins with a frozen LM to capture task-specific, domain-specific, or language-specific knowledge. He et al. (2021a) presents a unified view of different parameter-efficient training (PET) approaches.\nMulti-task transfer learning. Efficient task transferability in NLP has been extensively studied (Wang et al., 2019;Liu et al., 2021a;Vu et al., 2020Vu et al., , 2021)). With T5 (Raffel et al., 2022) demonstrating the capabilities of using existing downstream task datasets to learn a new task, proposing efficient methodologies for unifying NLP models has become a promising research paradigm in the community. 
Following this, (Khashabi et al., 2020) proposed UnifiedQA, a single QA model pre-trained on datasets involving diverse formats and reasoning skills. Transfer learning has been demonstrated to be effective from rich data sources (Phang et al., 2018), between similar target tasks (Vu et al., 2020), and for tasks that require similar reasoning skills (Pruksachatkun et al., 2020). However, this approach would require updating/retraining the model on a new task or a different domain, which could lead to catastrophic forgetting (Kirkpatrick et al., 2017). Moreover, Aghajanyan et al. (2021) showed approaches towards unifying NLP models suffer from negative interference to less represented tasks and between dissimilar tasks.\nMost recently, Liu et al. (2022) validates that parameter-efficient tuning methods can perform well with mixed task batches. Zhong et al. (2022b) takes the first step towards building unified QA models utilizing structural prompt tuning. Along these lines, Vu et al. (2022); Asai et al. (2022) integrates both the paradigms of parameter-efficient tuning and unifying NLP models to propose a single pre-trained model for different downstream tasks by learning target task-specific prompts from the source task prompts. Asai et al. (2022) demonstrates transfer using the attention module, while Vu et al. (2022) facilitates prompt transfer by learning the target prompt initialized from similar source prompts. These approaches require fewer than 0.1% of trainable LM parameters with little tradeoff in performance.\nFew-shot question answering. Ram et al. (2021) has identified a discrepancy between current pretraining objectives and QA, as standard models perform poorly when fine-tuned with few examples. They propose recurring span selection as a pretraining scheme tailored for question answering. Chada and Natarajan (2021), on the other hand, proposes a fine-tuning framework aligned with the pretraining framework.\nHowever, there have been no studies focusing on the viability of prompt tuning for unified QA under low-resource settings. To address this gap, we follow prior works, (Liu et al., 2022;Asai et al., 2022;Khashabi et al., 2020), and extensively study the viability and trade-offs of prompt tuning and prompt-based transfer learning in comparison to approaches that involve full-model fine-tuning for few-shot unified QA. As a result of our comprehensive experiments, we offer essential guidelines in the form of valuable insights into the advantages and limitations of prompt tuning with respect to model tuning for unified QA in both full and fewshot scenarios.\n3 Candidates for universal QA approach Finetuning pre-trained language models (FT) on specific datasets yields specialized models that cater to individual tasks. However, a more efficient approach is to build a unified QA model that can perform multiple tasks without manual tuning or per-task adjustments. One of the significant advantages of such approaches is that they seamlessly support mixed-task batch inference (Liu et al., 2022), where a single model can handle diverse tasks, reducing computation, storage, and maintenance costs.\nThis study seeks to assess the suitability of two prevalent training paradigms for NLP, namely model-tuning and prompt-tuning, as potential approaches for developing a unified questionanswering (QA) model. 
Our investigation centers around four essential criteria we look for in an effective unified QA model: (1) the ability to utilize a single model to address a range of different QA tasks, (2) effective knowledge transfer from multiple relevant tasks, (3) while minimizing the risk of negative interference, and (4) extensibility to new tasks without requiring expensive retraining. In this study, our goal is to investigate the potential of soft prompt-tuning extensively and to better understand its benefits and drawbacks in comparison with model-tuning-based approaches for building a unified QA system grounded on the aforementioned four principles. In particular, we further center the study around understanding these tradeoffs in the few-shot learning scenarios, which is a realistic and more practical challenge.\nModel-tuning This paradigm involves the finetuning of all the parameters of a language model to cater to a specific task or a set of tasks. Although fine-tuning (FT) on a particular dataset is an effective strategy, it is not suitable for unified QA because it requires specialized models for each dataset during inference, which is counter-intuitive to the concept of a unified QA model. In contrast, multi-task learning via fine-tuning (FT-MT) (Raffel et al., 2022;Aribandi et al., 2021) involves the joint learning of a single model on multiple datasets by sharing all the trainable model parameters across different tasks. By training on multiple datasets, FT-MT allows for knowledge transfer from relevant tasks during inference. However, sharing all the parameters often leads to negative transfer from unrelated tasks. Incorporating additional tasks into existing models requires retraining the model with all previous tasks and the new ones, making them computationally expensive to scale and more prone to negative interference.\nPrompt-tuning This paradigm involves learning soft-prompt tokens added to the input while the backbone language model remains frozen. We follow the approach proposed by Lester et al. (2021) to train soft prompts for each task, where prompts are initialized from random words in the vocabulary (PT-R). This vanilla prompt-tuning approach is parameter-efficient and easy to scale. Since taskspecific knowledge is captured in a different set of parameters (i.e., the prompts), this approach avoids negative interference to a great extent. With a single backbone model, we can use these prompts for different tasks. However, this approach does not leverage knowledge from other tasks not already captured in the backbone model.\nPrompt initialization is a technique that addresses the issue of knowledge transfer from source tasks in vanilla prompt-tuning while retaining the benefits of a single model, minimal negative transfer, and extensibility. Previous studies (Li and Liang, 2021b;Liu et al., 2023;Vu et al., 2022) have shown that prompt-tuning methods are often sensitive to initialization, particularly in low data settings. However, the impact of different initialization methods on QA datasets has not been well studied. Inspired by (Vu et al., 2022), we initialize the target prompt by taking the average of the top-3 source task prompts most similar to the prompt trained on the target dataset. 
We employ two distinct approaches to this initialization process: (i) selecting source task prompts with the same answer format as that of the target dataset (PT-F), and (ii) selecting source task prompts from the complete set of source prompts (PT-C). Apart from prompt initialization, another way to transfer knowledge from multiple tasks is through the composition of their corresponding prompts. To this end, Asai et al. (2022) proposes ATTEMPT, a transfer learning method that learns new task-specific target prompts by computing weighted combinations of source prompts using a sub-network-based attention module trained on a single task or a set of tasks. We distinguish between two settings: ATT-MT, where attention modules are shared across tasks and trained in a multi-task manner, and ATT-ST, where attention module parameters are not shared. While ATT-MT provides a single model for transferring knowledge from source prompts and is easily scalable to new target tasks, sharing attention modules across tasks may result in some negative transfer, compared to more straightforward prompt-tuning methods." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b50" ], "table_ref": [ "tab_1" ], "text": "In their recent study, Rogers et al. (2022) highlight a significant increase in the number of question-answering and reading comprehension datasets, spanning various domains, formats, and reasoning abilities. This study aims to evaluate and finetune a range of models, leveraging a collection of datasets referred to as \"source datasets\" for pretraining, and a distinct set of datasets known as \"target datasets\" for evaluation. This paper includes datasets that cover a wide range of reasoning skills and complex linguistic phenomena, including conversational, temporal, causal, and coreference reasoning, among others, enabling a more comprehensive evaluation of training paradigms on question-answering datasets and facilitating analysis of cross-skill transfer. Table 2 presents an overview of the datasets employed in our study, detailing their size, domain, and associated primary reasoning skill. Source Datasets. This study leverages source datasets for two primary purposes: pre-training models through model tuning and training source prompts via prompt-tuning approaches. The source datasets employed in our research comprise over 30,000 training instances. They aim to encompass essential reasoning skills such as reading comprehension, conversational and commonsense reasoning, as well as discrete and numerical reasoning necessary for question answering. Source datasets cover a wide range of domains, including knowledge bases, news, web documents, and Wikipedia. Target Datasets. We employ target datasets to fine-tune models using the model-tuning paradigm, or to train target prompts for prompt-tuning approaches. Target datasets are typically small in size, containing fewer than 30,000 training instances, and are designed to cover complex and specialized reasoning skills like temporal commonsense, causal reasoning, and logical and inferential reasoning, among others that are crucial for question answering. This split includes various specific domains like Twitter, TOEFL, law books, and personal narratives, which can leverage broader domains covered in the source split.
In some contexts, certain tasks require multiple types of reasoning. For instance, the ShARC dataset necessitates a combination of conversational and causal reasoning, while the COPA dataset entails the application of commonsense causal reasoning. Therefore, natural language processing models may face additional challenges in performing these tasks due to the integration of multiple reasoning skills. To assess the effectiveness of a unified QA system, we perform experiments on the test set of the target datasets." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Experiments", "publication_ref": [ "b24", "b59" ], "table_ref": [ "tab_5", "tab_6", "tab_6" ], "text": "We employ the T5-base model for all our experiments, unless stated otherwise. Source prompts are trained independently for each task, while the pre-trained language model (PrLM) and attention modules for ATTEMPT are trained jointly on all the source tasks. For target datasets, we randomly select a small number of instances for few-shot training and evaluation. The hyperparameters for training are presented in section A.2. Table 6 details the initialization used for different target tasks in both PT-F and PT-C. We select the best checkpoint based on the validation set performance, with FT-MT and ATT-MT using a single validation set comprising of all the target tasks, and PT-R, PT-F, and PT-C using a validation set for each target task individually. We evaluate the best checkpoint on the test set of each target dataset using F1 as the metric for extractive and abstractive QA datasets, and accuracy for MCQ and Yes/No QA datasets.\nIn cases where a test set is unavailable, we use the development set to report our model's performance and create a small subset from the training set for hyperparameter tuning and checkpoint selection. We report the aggregate results of three seeds. Table 3 summarizes the experimental results comparing the model-tuning and prompt-tuning paradigms for a unified QA system. In the rest of this section, we share our key findings and insights that can hopefully help guide which paradigm to prefer under which scenarios.\nPre-training improves performance in few-shot scenarios, particularly in the lower range, with significant benefits observed in prompt-tuning. Following Unified-QA (Khashabi et al., 2020), we observe that pre-training the T5-base model on diverse source datasets with varying formats and skill requirements (as shown in ference, resulting in a reversal of the trend where FT outperforms FT-MT. Similarly, sharing attention modules across all target tasks in multi-task prompt transfer learning approaches (ATT-MT) leads to a comparable trend.\nFormat-based prompt initialization achieves comparable performance to more complex prompt-transfer approaches. The Prompttuning paradigm has emerged as a highly effective approach for fine-tuning pre-trained language models for specific tasks. However, it has been shown that the success of this paradigm can be highly sensitive to initialization. To address this issue, we drew inspiration from the work of (Vu et al., 2022) and explored the use of two different initialization techniques for the target prompt (PT-F and PT-C). Our results demonstrated that both initialization techniques outperformed random initialization by 6% with 32 examples, and this gap increased to approximately 20% with 1024 examples. 
Notably, we found that the simpler format-based heuristic initialization was just as effective as the more complex cosine-based search over the entire prompt pool. Furthermore, our results revealed that both prompt initialization approaches were competitive with the sophisticated attention-module-based prompt-transfer learning approach ATT-MT.\nOur analysis further revealed that the performance of PT-F and PT-C varied based on the skill or domain of the dataset (see Table 7). Evaluation on datasets from specific domains (Figure 10) reveals that in low-regime scenarios, PT-F outperformed PT-C in the Web+Social and domainspecific book domains, while PT-C was more effective for Knowledge graphs and Wikipedia domains. However, in high-range scenarios, all models performed similarly. Furthermore, our analysis from a skill perspective, as depicted in Figure 11, indicated that PT-F performed better in Dialog reasoning in the low range and in commonsense reasoning in the high range. On the other hand, PT-C was better suited for causal reasoning in the low range. More detailed information on our findings can be found in the appendix in Table 7.\nBest Candidate for Unified QA FT-MT, PT-R, PT-F, PT-C, ATT-MT are potential candidates for the unified question answering task. In low-resource scenarios, all candidates perform similarly, but PT-F and PT-C stands out due to its low number of trainable parameters and ease of scaling to new datasets. As the number of training instances increases, FT-MT outperforms other approaches, while prompt-tuning approaches remain competitive. Our findings suggest that a simple approach like PT-F is on par with more sophisticated prompt-transfer learning approaches like ATT-MT. Our experiments showed that a pre-trained language model (PrLM) trained on various source tasks can consistently improve PT-R performance by 25%. Initializing a pre-trained backbone model with a soft prompt did not lead to any improvement. Additionally, our findings indicate that FT-MT performs well in the lower range, with an improvement of 25%, but experiences a sharp decrease in performance to only 6% in the higher range. These results suggest that using a PrLM can be an effective approach to improving PT-R performance. " }, { "figure_ref": [], "heading": "Variation with model size", "publication_ref": [ "b39" ], "table_ref": [], "text": "Recent studies have shown that the performance gap between prompt-tuning and fine-tuning reduces as the model size increases (Liu et al., 2021b). In this work, we conduct experiments comparing the performance of base vs large variants of T5 for a range of different fine-tuning methods as shown in Table 4. Unless otherwise specified, we use the T5-base model for our experimentation. We observe a consistent improvement in performance with large language models. Specifically, modeltuning approaches achieve a consistent improvement of approximately 10 points across 32 to 1024 training instances, while prompt-tuning without initialization achieves an improvement of roughly 6 points. However, prompt-tuning with initialization and ATTEMPT do not show significant improvement in performance with large models, and this improvement diminishes as the number of training instances increases. The limited performance gain from large models leads us to conclude that multitask model-tuning outperforms prompt-tuning and ATTEMPT-MT in a few-shot setting. 
Nevertheless, prompt-tuning with initialization remains comparable to ATTEMPT-MT, and both methods significantly outperform prompt-tuning without initialization. Overall, these findings suggest that the effectiveness of different candidates for unified QA may depend on the size of the model and the number of training instances available." }, { "figure_ref": [ "fig_1" ], "heading": "Qualitative Analysis", "publication_ref": [], "table_ref": [], "text": "Do different models agree on their answers? Fig 3 shows the average agreement of different models on all the tasks across different few-shot scenarios. We find that PT-C and PT-F have the highest agree- ment scores. We partly attribute this to the high overlap of initialization prompts of format and logically similar tasks (PT-C, PT-C). However, as the number of shots increases the overall agreement decreases across different modes. Furthermore, we investigate if different modes can be complementary to each other by evaluating the union of their predictions across different shots. We find that finetuning (FT) and model tuning models (FT-MT) are complementary to each other at low resource settings whereas the gains from PT-R to other modes are minimum. For the complete results, refer to Appendix (Figure 6). This might indicate that prompt tuning may not be practical without good initialization for extremely low-resource QA scenarios. For further discussions around few shot analysis, refer to Appendix A.1.\nA closer look at the task-level performance across different few-shot settings reveals counter-intuitive behaviors. We find that under low resource settings (< 256 shot) good initialization helps significantly for target tasks that are similar to source tasks (e.g: OBQA, BoolQ, IIRC), and the performance gain decreases as we increase the number of shots. As seen from Figure 4, for similar tasks PT-C and model tuning FT-MT performed significantly better than PT-R. However, in cases where there is little domain overlap (ShaRC), initializations do not contribute substantially to the overall performance of the model. Interestingly, in some cases, we find counter-intuitive results where performance remains flat (ShaRC) from Figure 4) or zig-zag (Ropes) pattern is observed across different shots. We point the reader to Appendix (Figures 7,8,4) for performance across different modes against different shots." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we explore the viability of prompttuning as a solution to unified QA and conduct a thorough analysis of its promise, effectiveness, and trade-offs compared with the model-tuning paradigm on a set of 16 QA datasets, focusing particularly on several few-shot scenarios. As a result, we obtain several key findings and insights that hopefully will inform which paradigm to prefer under which scenarios. Prompt tuning is quite competitive with model-tuning in the lower extreme of the few-shot scenarios, given a good initialization.\nWhile parameter-sharing leads to superior performance in the few-shot setting, the trends flip in the full-shot setting, A simple knowledge transfer approach (i.e., an average of relevant prompts) is as effective as complex methods without introducing additional parameters. Pre-training the backbone model on the source tasks significantly benefits prompt tuning. 
While initializing from a strong prior is very helpful for prompt tuning, its benefit is not as substantial when using a larger backbone model, especially when the number of training examples exceeds a certain threshold." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b31", "b59", "b2" ], "table_ref": [], "text": "Our work has several limitations: (1) since fewshot experiments are prone to have considerable variance due to the randomly sampled few training examples, we repeat all the experiments using three randomness seeds for the T5-base backbone. However, since the number of experiments per seed is more than 1500, we were able to run the same experiments with a T5-large backbone using only one seed and excluding specific few-shot settings due to computational limitations, especially given the latter model has 3.5 times more parameters. Although our comparisons of the two models are still presented in an entirely fair fashion using the same single seed, it would have been more strongly conclusive to test our findings with a T5base backbone on the larger model to the same extent. That is also the reason why the current version of our study does not include comparisons with even larger models such as T5-3b or T5-11b.\n(2) We explore a limited number of prompt-tuning methods both in terms of how the soft prompts are injected in the model architecture following (Lester et al., 2021) and how the knowledge from source tasks are used to inform target tasks following (Vu et al., 2022;Asai et al., 2022). For example, Liu (2022) proposes a parameter-efficient fine-tuning alternative to soft prompt-tuning in recent work, while (Zhong et al., 2022a) shows the benefits of prompt-based pretraining. Although the key takeaways in the current version of our study are supported by sufficient empirical evidence, incorporating the aforementioned recent developments may prove even further promise and evidence for the prompt-based approaches towards few-shot unified QA.\n(3) Our study is currently limited to English-QA datasets, hindering our findings to be generally valid for cross-lingual and/or cross-model questionanswering systems. Therefore, we need to consider how our findings would generalize to other languages and modalities." }, { "figure_ref": [], "heading": "Ethical Statement", "publication_ref": [], "table_ref": [], "text": "We observe a preference for multiple-choice question (MCQ) answer formats across various question-answering (QA) datasets with varying levels of reasoning ability. Additionally, the majority of the source datasets were sourced from Wikipedia, which may contain gender or political bias that could be further perpetuated by models. The T5 model, which was used for pre-training, may also have biases due to its pre-training data.\nHowever, the study did not conduct stress tests to identify potential biases, and users should be cautious when implementing the provided models.\nThe current models' results may not align with the facts in input documents, potentially leading to the spread of false information online. This is a common issue in all current QA models, and further research is needed in this area. The study's experiments were primarily conducted using A100 GPUs and consumed a significant amount of GPU time when repeated across random seeds. Nevertheless, our findings can benefit subsequent studies and applications by providing valuable insights, thus avoiding the need for extensive repetition of these comparisons." 
}, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "A.1 Qualitative Study Do the same model across different few-shot settings agree on its answers? Figure 5 presents the overall agreement of different models for a single task under different shot settings. We observe patterns of high level agreement between adjacent shots that gradually decrease with an increase in the number of shots in fine-tuning and prompt tuning with initialization mode. However, prompt tuning with random initialization has an agreement percentage of 50% across different shots and has no clear distinction of high agreement between the adjacent shots as found in other settings.\nTable 5 presents a few qualitative examples across different shots and modes. We find prompt tuning with good initialization to leverage world knowledge better (e.g: Arctic Circle with cold weather) even in low resource settings while prompt tuning struggles in predicting local contextbased reasoning tasks (e.g: taking photos of home does not associate with new home)." }, { "figure_ref": [], "heading": "A.2 Hyper-parameters", "publication_ref": [], "table_ref": [], "text": "After extensive tuning, we selected a learning rate of 1e-5 for the backbone model, along with a maximum source length of 512, a gradient accumulation step of 2, and a batch size of 16. During training, we saved and evaluated checkpoints every 500 steps, and trained the model for 100K steps with patience. For all experiments, the prompts consisted of k = 100 tokens with a hidden dimension of d = 768. 256 No No Yes Yes My parents wanted me to go to college, but I never applied. General Instructions: ... who has been accepted or enrolled in an accredited degree program, in the field of health care, and you or your family member. Question: Do I qualify for this benefit program? Context: cold temperatures cause animals to shiver Question: Where would animals shiver the most? Options: (A) Arctic Circle (B) Sumatra (C) Java (D) tropical rainforest 512 Arctic Circle Sumatra Arctic Circle Java Context: The house painters finished...While I would not say they were not the greatest guys... they did do a nice job and the house looks so much better. Here are some photos ... " } ]
Question-answering (QA) tasks often investigate specific question types, knowledge domains, or reasoning skills, leading to specialized models catering to specific categories of QA tasks. While recent research has explored the idea of unified QA models, such models are usually explored for high-resource scenarios and require re-training to extend their capabilities. To overcome these drawbacks, the paper explores the potential of two paradigms of tuning, model, and prompts, for unified QA under a low-resource setting. The paper provides an exhaustive analysis of their applicability using 16 QA datasets, revealing that prompt tuning can perform as well as model tuning in a few-shot setting with a good initialization. The study also shows that parameter-sharing results in superior few-shot performance, simple knowledge transfer techniques for prompt initialization can be effective, and prompt tuning achieves a significant performance boost from pre-training in a low-resource regime. The research offers insights into the advantages and limitations of prompt tuning for unified QA in a few-shot setting, contributing to the development of effective and efficient systems in low-resource scenarios.
Few-shot Unified Question Answering: Tuning Models or Prompts?
[ { "figure_caption": "Figure 1 :1Figure 1: Comparison of Multi-Task Model-Tuning, Prompt-Tuning, and ATTEMPT(Asai et al., 2022) (a complex prompt transfer learning approach) for Unified QA on 16 QA datasets in several few-shot scenarios using T5-Base as the backbone model. Init refers to prompt initialization while MT stands for multitasking. The results show that prompt-tuning with prior is a promising alternative to multi-task full-model finetuning, especially in limited data scenarios, and that ATTEMPT does not provide any additional advantage.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Table 3 :3Comparison of Model-Tuning and Prompt-Tuning Paradigms with Different Backbone Models in Few-Shot and Full-Shot Settings: Model-tuning approaches include FT and FT-MT, while PT-R represents vanilla prompt tuning and PT-F and PT-C correspond to prompt tuning with initialization. ATT-ST and ATT-MT are single-task and multi-task variants of ATTEMPT, a prompt transfer learning approach. Bold values indicate the best model with a T5-base backbone for the k-shot scenario, while underline represents the second-best. PrLM represents a backbone model pre-trained on all source tasks.creases (potentially due to overfitting). Specifically, PT-R yields a change in improvement from 36% to 24% as the number of training instances increases from 16 to 1024, while improvement in FT-MT drastically reduces from 27% to 7%. We note that ATT-MT follows a similar pattern to that of Model Tuning (MT). Moreover, our findings indicate that datasets such as COSMOSQA, OBQA, DREAM, MCTest, IIRC, and BoolQ exhibit substantial performance gains through pre-training, likely due to their similarity to some of the source datasets. On the other hand, datasets such as McTACO, QuaRel, ShARC, and PIQA, which are less closely related to the source datasets, do not exhibit significant improvements with pre-training. Parameter-sharing results in superior few-shot performance; however, the trends are reversed in the full-shot setting. Multi-task fine-tuning (FT-MT) that employs the parameter-sharing technique yields superior few-shot learning performance than traditional finetuning (FT). The extent of improvement increases with the number of training examples and starts decreasing at a threshold of approximately 512 examples on an aggregate level. However, this can vary across different target datasets. For instance, datasets such as TweetQA, PIQA, and ReClor exhibit this behavior beyond 512 examples, while OBQA, MCTest, and Com-monsenseQA realize this at around 128. Increasing training examples from 16 to 512 leads to a boost from ∆ = 0.50 to ∆ = 2.94 due to transfer learning from other tasks. However, raising the number of examples to 1024 results in a drop in improvement to ∆ = 1.77. 
In the full-shot setting with unbalanced training samples, employing parameter sharing among all tasks can lead to negative inter-", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Improvement (%) of pre-trained backbone model (PrLM) over T5-base observed across different training approaches in few-shot setting.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Heatmaps showing agreement matrix of different modes under different few shot settings", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5: Heatmaps showing agreement of different shots for each mode", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Question: What may have caused you to take photos of your house? Options: (A) It got a new coat of color. (B) It was my new house. (C) I wanted to show off 128 of color. (D) None of the above choices.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Question Answering (QA) datasets used as source and target datasets in this study. For each dataset, the table provides details on associated reasoning skills, domain, and the number of training examples available.", "figure_data": "Source SplitTarget SplitReasoning Skill DatasetDomain# train Reasoning Skill DatasetDomain# trainReading Com-SQuAD (Rajpurkar et al., 2016) Wikipedia87KReading Com-TweetQA (Xiong et al., 2019)Twitter10.6Kprehension (RC)prehension (RC)SearchQA (Dunn et al., 2017)J! Archive140KIIRC (Ferguson et al., 2020)Wikipedia13KNewsQA (Trischler et al., 2017) news articles76KMCTest (Richardson et al., 2013) stories1.4KTriviaQA (Joshi et al., 2017)web documents,96KRC (Inferential) BoolQ (Clark et al., 2019)Wikipedia9KWikipediaNatural Questions (KwiatkowskiWikipedia104K RC (logical)ReClor (Yu et al., 2020)Web ,book5Ket al., 2019)NQOpen (Lee et al., 2019)Wikipedia79KConversationalDREAM (Sun et al., 2019)TOEFL6KRACE (Lai et al., 2017)Exams87KShARC (Saeidi et al., 2018)law rule books4KDuoRC (Saha et al., 2018)movie plots130K Co-referenceQuoref (Dasigi et al., 2019)Wikipedia22KRC (Scientific)PubMedQA (Jin et al., 2019)Pubmed211K CommonsenseCOSMOSQA (Huang et al.,Spinn3r/ personal25KReasoning2019)narrativesRC (Long com-NarrativeQA (Kočiský et al.,books, movies65KPIQA (Bisk et al., 2020)News, Encyclope-16.1Kprehension)2018)diaRC (Multihop)HotpotQA (Yang et al., 2018)Wikipedia90KCommonsenseQA (Talmor et al.,Concept Net9.7K2019)ConversationalCoQA (Reddy et al., 2019)News, Wikipedia,120K Temporal Com-McTACO (Zhou et al., 2019)multiple13KBooks, etc.monsenseQuAC (Choi et al., 2018)Wikipedia83KCausal Reason-ROPES (Lin et al., 2019)sciencetext/10KingWikipediaCommonsenseReCORD (Zhang et al., 2018)news101KOBQA (Mihaylov et al., 2018)science books6KSIQA (Sap et al., 2019)Commonsense33.4KQuaRel (Tafjord et al., 2018)science,eco-2.2Kknowledge basenomics, etc.DiscreteDROP (Dua et al., 2019)Wikipedia77KCOPA (Gordon et al., 2012)personal stories400", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": ") can boost", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Table presenting qualitative examples showing model predictions across different shots for different tasks. 
Few shot column shows the shot until which the predictions in the table hold.", "figure_data": "TargetFormat-based (Pt-F)Complete Set (PT-C)Datasetropessearchqa, newsqa, quacdrop, siqa, quacdreamrecord, race, siqasearchqa, quac, siqasharcduorc, coqa, nar_qaquac, coqa, nar_qaboolqpubmed_qarace, newsqa, pubmed_qapiqarecord, siqa, racequac, siqa, nar_qaquorefquac, newsqa, nqduorc, nar_qa, dropcosmos_qarecord, siqa, racenar_qa, siqa, racetweet_qanq_open, duorc, nar_qa nq_open, duorc, nar_qaCQArecord, race, siqanewsqa, nar_qa, siqaobqarecord, race, siqanewsqa, race, siqareclorrecord, race, siqaduorc, siqa, nq_openquarelrecord, race, siqaduorc, quac, nar_qamctestrecord, race, siqanq, race, siqamc_tacopubmed_qapubmed_qa, siqa, nar_qacopapubmed_qasearchqa, race, siqaiirchotpotqa, newsqa, nqnewsqa, drop, nq", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Source Prompts most similar to target prompts for format-based and complete-set initialization corresponding to PT-F and PT-C respectively. Bold indicates source tasks common in both partitions. Although some source prompts are shared across target tasks, Quoref and COPA have none in common. The SIQA and RACE source prompts are typically used for initialization, but we found that lifting the constraint of choosing prompts from the same format allowed for successful cross-format initialization at the reasoning skill or domain level. For example, the DREAM dataset (MCQ) was initialized with QuAC (ExtQA), which is reasonable since both involve conversational data. The IIRC dataset was also initialized with the most relevant source task, HotpotQA. Yes/No questions strongly prefer PubmedQA as a format.", "figure_data": "Skill-basedDomain-basedMachine Reading ComprehensionWeb & Social Mediaiirc, tweet_qa, mctest, boolq, reclortweet_qa, piqa, reclorCommonsense ReasoningWikipediacosmos_qa, piqa, commonsense_qa, mc_taco iirc, ropes, quoref, boolqDialog ReasoningKnowledge Graphsharc, dream, quorefcommonsense_qa, cosmos_qaCausal Reasoningdomain specific bookropes, obqa, quarel, copasharc, obqa, quarel", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Categorization of Target Datasets Based on Domain and Reasoning Skill", "figure_data": "Modeltweet_qa ropes cosmos_qa piqaCQA dream obqa reclor sharc quarel mctest mc_taco boolq copa quoref iircagg16 ExamplesFT65.2641.87 35.0055.51 41.63 37.52 30.20 25.27 38.13 48.08 52.33 66.0350.70 58.00 43.64 32.80 45.12FT-MT60.7748.76 35.9054.68 39.78 40.33 31.60 24.27 37.62 48.32 50.33 65.4854.93 59.33 42.17 35.72 45.62PT-R39.9740.22 25.9352.39 35.68 35.92 25.80 25.80 23.10 47.48 53.50 66.1337.90 51.33 38.26 23.22 38.92PT-F68.1537.04 35.1056.09 43.35 40.67 34.53 26.13 39.68 47.24 58.17 65.3637.83 55.00 42.34 27.00 44.61PT-C68.1557.67 36.0754.35 41.71 36.31 31.47 26.27 38.94 49.88 52.67 65.3142.75 57.00 43.03 27.18 45.55att-st63.3831.94 34.8254.91 40.57 36.90 32.93 27.20 39.17 50.12 54.00 66.2051.12 61.67 45.80 40.49 45.70ATT-MT60.5232.82 37.1055.08 40.90 39.54 30.80 26.20 38.65 48.20 45.83 64.4259.51 58.00 45.05 41.06 45.2332 ExamplesFT65.5549.74 36.0855.42 41.85 37.43 30.60 24.33 41.84 48.08 52.17 66.1460.52 60.67 44.77 26.56 46.36FT-MT61.0153.55 36.0555.68 41.61 42.53 32.40 25.33 39.42 46.76 50.83 64.9558.20 53.33 44.28 46.00 47.00PT-R35.7239.50 32.2353.01 35.76 36.00 27.33 25.27 39.34 46.52 51.33 66.0661.80 53.67 41.25 39.35 42.76PT-F68.6146.41 34.8056.22 42.97 41.44 40.20 27.07 40.00 46.64 59.17 66.0937.83 55.00 42.51 26.97 
45.75PT-C68.6162.64 37.7255.33 43.19 36.24 31.53 26.47 39.63 48.56 54.83 65.0843.28 61.67 42.83 32.37 46.88att-st63.8050.97 34.5255.11 41.88 36.88 33.20 27.20 44.18 49.64 53.50 66.3153.96 64.67 46.46 40.69 47.68ATT-MT64.1155.53 36.9455.89 42.56 41.55 30.87 27.13 44.33 46.76 45.33 63.7259.82 53.00 43.72 47.20 47.4064 ExamplesFT66.1748.65 36.7355.53 43.65 38.42 29.73 24.93 42.97 48.08 52.67 66.1365.31 60.67 46.85 42.83 48.08FT-MT62.9451.62 37.9155.91 44.44 44.56 36.33 28.33 40.90 48.20 53.33 64.4262.61 59.33 44.90 50.13 49.12PT-R57.1540.45 31.6553.12 37.51 36.57 27.13 25.93 40.12 47.48 53.00 66.1262.39 53.67 39.11 38.03 44.34PT-F68.2947.50 37.8456.96 43.60 42.09 41.80 26.27 39.69 45.80 62.67 65.7863.24 56.67 43.10 48.04 49.33PT-C68.2958.97 38.4355.01 43.73 37.21 31.53 26.33 37.56 49.88 57.83 65.3561.22 62.33 43.02 44.82 48.85att-st63.2653.20 37.4555.89 42.56 37.14 32.80 26.27 43.11 49.04 53.50 65.6158.27 63.67 46.96 48.25 48.56ATT-MT63.8655.02 41.6855.46 39.97 40.51 34.20 27.67 48.91 48.08 52.83 64.6063.47 57.33 41.01 50.73 49.08128 ExamplesFT66.4248.92 40.7655.22 46.44 37.71 33.87 25.33 44.62 47.96 53.00 67.6472.10 57.33 50.14 54.79 50.14FT-MT64.9152.98 41.0357.00 46.00 47.89 43.33 30.13 43.62 52.52 59.17 66.2167.75 61.00 48.43 54.38 52.27PT-R58.2635.09 32.7353.23 37.10 36.27 26.53 26.07 38.65 49.04 52.83 64.9462.15 54.33 41.53 41.87 44.41PT-F68.7947.89 39.0356.58 44.25 43.99 42.33 25.13 41.26 46.28 66.00 65.4964.90 57.67 42.74 52.66 50.31PT-C68.7957.61 40.8656.53 43.13 40.02 40.27 25.80 39.89 50.00 63.83 65.8868.01 59.33 43.10 50.30 50.83att-st63.4852.39 38.3254.99 42.70 36.67 32.93 26.60 44.76 50.24 53.50 66.2161.45 57.67 47.33 50.85 48.76ATT-MT65.3446.84 42.1454.28 42.81 42.27 41.33 29.27 47.99 51.32 57.83 67.1169.23 53.33 42.01 51.91 50.31256 ExamplesFT67.6448.42 42.0055.82 49.39 40.51 42.07 25.07 48.99 49.28 61.83 71.3374.78 58.67 52.92 58.62 52.96FT-MT66.1353.98 41.8658.00 48.27 50.96 50.13 30.20 46.21 54.20 66.50 70.6173.29 64.00 51.27 60.55 55.39PT-R63.2644.25 36.3653.39 38.08 36.24 26.47 27.00 39.65 49.16 55.50 66.1462.31 46.67 43.08 44.80 45.77PT-F69.0351.80 42.2757.40 46.98 50.96 57.93 26.47 45.46 46.04 67.00 66.5167.34 55.33 45.05 54.77 53.15PT-C69.0350.37 42.2156.89 44.55 41.36 45.40 28.73 43.58 48.80 68.67 66.3671.21 60.33 45.54 53.96 52.31att-st62.7954.77 39.7955.91 46.25 37.34 36.67 26.40 46.70 49.40 51.67 66.0965.73 63.00 47.57 53.34 50.21ATT-MT66.6853.33 41.4354.39 45.70 42.86 43.40 29.20 49.36 53.12 61.17 67.5572.75 61.00 45.21 55.80 52.68512 ExamplesFT68.1154.84 45.4556.91 51.76 45.08 53.40 26.73 53.11 48.44 69.17 74.1176.47 61.67 57.89 62.39 56.60FT-MT67.9858.25 42.7558.74 51.35 56.23 60.00 34.33 52.02 58.99 72.50 74.9975.46 65.67 57.36 66.02 59.54PT-R65.4445.15 36.4953.66 38.74 34.20 33.73 27.47 40.16 48.44 54.33 66.0762.51 51.33 45.06 49.77 47.03PT-F67.2854.33 43.9657.13 48.98 52.92 61.33 25.73 40.65 48.32 71.00 67.0775.64 54.33 51.72 58.23 54.91PT-C67.2856.10 43.9156.75 48.40 49.35 58.60 28.60 50.03 48.80 71.83 66.2172.74 56.33 53.80 56.63 55.34att-st61.5655.95 39.3555.51 46.74 40.26 43.47 27.73 47.60 50.00 56.17 66.0968.73 57.67 48.32 54.04 51.20ATT-MT68.2358.65 44.3256.29 47.67 46.72 53.80 31.33 51.87 53.60 68.33 72.7150.52 58.33 50.24 62.52 54.701024 ExamplesFT68.9052.90 46.2156.46 54.57 52.17 60.60 27.93 56.12 52.52 75.50 78.7676.99 65.00 63.15 67.55 59.71FT-MT70.5958.57 42.9158.74 54.38 59.22 64.80 35.80 55.87 61.15 76.33 76.8476.27 63.67 61.28 67.26 61.48PT-R64.5839.94 35.8852.97 40.87 36.03 33.60 26.53 42.41 46.64 53.67 66.5462.15 46.67 46.98 52.16 
46.73PT-F67.3955.21 42.6656.40 52.33 52.19 62.80 30.00 53.53 49.52 77.33 66.8676.45 59.33 58.50 64.57 57.82PT-C67.3951.05 44.4656.49 51.30 51.86 63.93 26.40 54.28 51.44 75.17 68.0274.51 56.00 58.53 66.67 57.34att-st69.6957.27 40.2255.77 50.01 47.60 55.27 26.87 53.74 49.64 62.50 66.5975.02 60.67 50.83 59.28 55.06ATT-MT69.4659.06 44.8856.13 51.35 50.15 59.00 31.13 54.03 54.32 71.00 75.4175.52 61.33 56.74 66.69 58.51FullFT77.4059.16 69.8267.63 62.74 66.32 74.20 47.20 67.36 67.99 78.00 99.4182.97 67.00 71.18 70.57 70.56FT-MT76.3458.53 67.3067.03 61.18 67.40 74.20 36.20 63.35 59.35 73.50 92.2180.58 66.00 70.19 69.88 67.70PT-R75.6355.32 55.5860.94 59.38 60.49 68.40 40.60 58.72 62.23 74.00 95.9580.61 55.00 68.48 69.20 65.03PT-F73.3151.31 49.7858.11 56.59 62.65 71.20 35.00 57.24 56.12 78.50 78.8779.54 59.00 64.10 68.64 62.50PT-C75.2352.13 58.6960.93 58.07 64.12 71.40 41.60 58.53 62.95 76.50 96.4379.76 62.00 67.58 69.55 65.97att-st55.3160.29 59.4579.33 62.62 68.68 61.94 75.71 58.56 70.22 69.00 97.9066.91 77.50 64.00 42.60 66.88ATT-MT74.3757.02 58.6361.53 58.89 61.67 67.80 39.60 60.11 57.19 77.00 94.1980.86 65.00 67.71 68.89 65.65", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Complete set of results for comparison between model-tuning and prompt-tuning approaches on 16 target QA datasets with T5-base as pre-trained language model.", "figure_data": "Backbone : T5-BaseBackbone : PrLMk-shot FTMTPT-RPT-FPT-BATT-ST ATT-MT FTMTPT-RATT-MT164.072.746.051.452.183.493.291.842.114.182.24321.282.716.112.903.113.502.851.681.602.771.85644.602.532.023.293.032.423.211.131.872.352.421282.252.222.124.134.011.732.761.341.392.802.812563.362.401.973.254.233.123.631.411.902.842.935122.621.354.221.352.043.772.450.831.222.233.1710241.731.714.312.622.213.952.931.352.092.782.01", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Table displays the aggregate standard deviation of target tasks with different seeds. Increasing training instances reduces standard deviation, improving model robustness and reducing sensitivity to minor variations. PrLM reduces standard deviation across all approaches, leading to stable performance and better generalization while addressing overfitting. Prompt tuning has a higher deviation due to initialization sensitivity. Parameter-sharing and prompt initialization techniques reduce deviation, leveraging knowledge from other tasks for stable performance, especially in low-resource scenarios, and mitigating overfitting. 
.33 68.02 67.47 35.80 40.14 47.12 86.50 67.03 73.59 68.00 49.88 45.75 58.06 FT-MT 71.44 52.13 47.18 57.44 51.02 68.61 67.40 37.27 40.68 48.68 84.83 67.40 76.15 65.33 48.61 43.90 58.00 PT-R 61.75 52.32 42.64 55.97 40.92 59.00 59.07 36.27 40.36 48.20 78.17 66.13 68.71 56.33 48.20 35.83 53.12 ATT-MT 69.37 49.36 47.07 56.09 52.47 65.78 66.60 36.40 41.61 50.12 85.83 67.14 71.67 67.67 47.09 45.18 57.47 .26 70.20 68.80 45.40 60.82 69.42 84.50 89.65 82.05 65.00 72.13 70.67 69.45", "figure_data": "Modeltweet_qa ropes cosmos_qa piqaCQA dream obqa reclor sharc quarel mctest mc_taco boolq copa quoref iircagg16 ExamplesFT 55.79 5232 Examples 73.17 50.92 47.52FT73.3251.69 48.6155.77 52.69 68.02 67.87 36.27 42.45 48.92 86.67 66.2374.22 70.33 51.08 50.59 59.05FT-MT71.4652.67 46.5857.02 51.00 68.64 68.47 36.80 42.94 50.00 84.67 67.4179.18 63.67 50.53 48.21 58.70PT-R61.6451.26 43.2756.13 41.96 63.68 63.07 35.60 39.79 47.96 80.83 66.1465.48 52.67 49.42 35.02 53.37ATT-MT70.0950.39 48.9455.79 51.73 66.23 67.67 35.80 38.81 48.68 86.17 66.2972.30 65.00 51.26 46.89 57.6364 ExamplesFT74.1751.56 49.3955.59 53.26 68.69 68.80 35.73 43.21 47.48 86.50 66.5180.88 66.33 53.92 58.00 60.00FT-MT72.0652.55 48.1556.78 52.17 68.82 67.73 37.87 40.10 49.16 84.83 66.7080.10 64.00 54.09 52.26 59.21PT-R66.8353.82 46.9155.53 42.37 63.15 64.20 36.13 40.76 48.20 80.50 66.2274.07 58.33 49.93 41.75 55.54ATT-MT71.6151.48 48.1455.98 52.33 66.06 68.33 36.07 40.15 50.36 85.67 63.5777.32 66.33 52.39 46.09 58.24128 ExamplesFT74.0051.63 50.7356.51 54.79 68.30 69.00 35.80 43.89 48.44 86.33 67.7181.70 66.67 56.16 64.60 61.02FT-MT72.6951.04 49.9657.18 54.00 69.12 68.13 37.60 44.25 52.52 85.33 68.3881.53 63.67 56.61 59.04 60.69PT-R71.7054.12 45.6256.35 42.34 66.01 63.87 36.47 39.19 48.08 83.83 65.8172.94 58.33 50.79 44.39 56.24ATT-MT71.7652.32 48.3956.18 51.05 66.86 68.80 34.73 40.33 50.36 85.33 62.2078.50 59.00 54.49 52.64 58.31256 ExamplesFT74.0052.58 52.2356.20 55.88 69.26 69.67 37.53 46.86 49.76 86.83 69.7582.02 65.00 60.49 67.60 62.23FT-MT72.2853.47 50.9758.14 54.05 69.87 67.53 40.20 48.83 54.44 82.83 70.8581.73 65.33 59.20 64.95 62.17PT-R71.7152.73 48.1555.75 47.94 67.19 65.40 34.27 39.75 48.32 84.17 65.8175.92 55.00 50.96 50.33 57.09ATT-MT71.4152.20 49.6556.33 52.06 66.21 69.73 37.73 44.07 52.52 86.33 67.9678.50 62.67 55.05 56.64 59.94512 ExamplesFT74.3455.50 53.4558.54 57.41 68.84 70.67 36.40 55.78 51.08 86.67 75.7982.14 65.33 63.96 69.18 64.07FT-MT72.7955.85 52.7458.89 55.26 70.18 69.40 39.13 53.94 54.92 85.50 71.9381.61 66.67 62.65 67.49 63.68PT-R71.9854.72 48.0855.84 48.38 66.67 69.93 34.60 37.24 50.48 84.33 66.1574.74 60.33 51.66 47.99 57.70ATT-MT73.2753.55 51.6655.91 51.13 62.53 67.93 37.00 49.49 55.04 85.17 70.2381.36 60.33 60.66 65.41 61.291024 ExamplesFT73.8357.80 57.2058.29 58.75 70.85 70.67 42.27 59.91 53.48 87.33 80.2482.20 66.00 66.28 69.26 65.90FT-MT73.8759.29 54.8459.65 56.21 70.39 71.13 43.33 58.16 62.59 83.33 79.1681.27 64.00 65.04 68.25 65.66PT-R71.7952.84 48.5956.13 51.11 66.63 68.53 34.27 38.71 49.40 85.33 65.7077.19 61.33 55.15 49.38 58.26ATT-MT74.7859.91 54.4255.93 51.82 67.21 70.67 39.87 56.24 56.95 86.50 75.0981.00 65.67 62.51 68.78 64.21FullFT79.0565.52 71.0667.85 62.16 73.24 72.20 54.40 66.99 73.74 85.00 99.4082.75 69.00 72.78 72.19 72.96FT-MT77.1661.79 69.8267.25 61.75 73.33 72.20 42.60 65.94 69.42 83.00 93.3782.91 69.00 71.61 70.62 70.74PT-R77.0462.60 66.5761.70 60.85 70.69 70.60 42.80 59.38 70.86 85.50 95.5982.35 61.00 71.28 70.84 69.35ATT-MT77.3463.56 67.1463.22 61", "figure_id": "tab_8", "figure_label": "9", 
"figure_type": "table" }, { "figure_caption": "Complete set of results for comparison between model-tuning and prompt-tuning approaches on 16 target QA datasets with PrLM as pre-trained language model trained on source tasks.", "figure_data": "", "figure_id": "tab_9", "figure_label": "10", "figure_type": "table" } ]
Srijan Bansal; Semih Yavuz; Bo Pang; Meghana Bhat; Yingbo Zhou
[ { "authors": "Armen Aghajanyan; Anchit Gupta; Akshat Shrivastava; Xilun Chen; Luke Zettlemoyer; Sonal Gupta", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Muppet: Massive multi-task representations with pre-finetuning", "year": "2021" }, { "authors": "Vamsi Aribandi; Yi Tay; Tal Schuster; Jinfeng Rao; Huaixiu Steven Zheng; Sanket Vaibhav Mehta; Honglei Zhuang; Q Vinh; Dara Tran; Jianmo Bahri; Jai Ni; Kai Gupta; Sebastian Hui; Donald Ruder; Metzler", "journal": "", "ref_id": "b1", "title": "Ext5: Towards extreme multi-task scaling for transfer learning", "year": "2021" }, { "authors": "Akari Asai; Mohammadreza Salehi; Matthew E Peters; Hannaneh Hajishirzi", "journal": "", "ref_id": "b2", "title": "Attentional mixtures of soft prompt tuning for parameter-efficient multi-task knowledge sharing", "year": "2022" }, { "authors": "Elad Ben Zaken; Yoav Goldberg; Shauli Ravfogel", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models", "year": "2022" }, { "authors": "Yonatan Bisk; Rowan Zellers; Le Ronan; Jianfeng Bras; Yejin Gao; Choi", "journal": "AAAI Press", "ref_id": "b4", "title": "PIQA: reasoning about physical commonsense in natural language", "year": "2020-02-07" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b5", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b6", "title": "", "year": "" }, { "authors": "Rakesh Chada; Pradeep Natarajan", "journal": "", "ref_id": "b7", "title": "Fewshotqa: A simple framework for few-shot learning of question answering tasks using pre-trained text-to-text models", "year": "2021" }, { "authors": "Eunsol Choi; He He; Mohit Iyyer; Mark Yatskar; Wentau Yih; Yejin Choi; Percy Liang; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "QuAC: Question answering in context", "year": "2018" }, { "authors": "Christopher Clark; Kenton Lee; Ming-Wei Chang; Tom Kwiatkowski; Michael Collins; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "BoolQ: Exploring the surprising difficulty of natural yes/no questions", "year": "2019" }, { "authors": "Pradeep Dasigi; Nelson F Liu; Ana Marasović; Noah A Smith; Matt Gardner", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Quoref: A reading comprehension dataset with questions requiring coreferential reasoning", "year": "2019" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Dheeru Dua; Yizhong Wang; Pradeep Dasigi; Gabriel Stanovsky; Sameer Singh; Matt Gardner", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "DROP: A reading comprehension 
benchmark requiring discrete reasoning over paragraphs", "year": "2019" }, { "authors": "Matthew Dunn; Levent Sagun; Mike Higgins; V Ugur Güney; Volkan Cirik; Kyunghyun Cho", "journal": "", "ref_id": "b13", "title": "Searchqa: A new q&a dataset augmented with context from a search engine", "year": "2017" }, { "authors": "James Ferguson; Matt Gardner; Hannaneh Hajishirzi; Tushar Khot; Pradeep Dasigi", "journal": "", "ref_id": "b14", "title": "IIRC: A dataset of incomplete information reading comprehension questions", "year": "2020" }, { "authors": "Andrew Gordon; Zornitsa Kozareva; Melissa Roemmele", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "SemEval-2012 task 7: Choice of plausible alternatives: An evaluation of commonsense causal reasoning", "year": "2012" }, { "authors": "Junxian He; Chunting Zhou; Xuezhe Ma; Taylor Berg-Kirkpatrick; Graham Neubig", "journal": "", "ref_id": "b16", "title": "Towards a unified view of parameter-efficient transfer learning", "year": "2021" }, { "authors": "Pengcheng He; Xiaodong Liu; Jianfeng Gao; Weizhu Chen", "journal": "", "ref_id": "b17", "title": "{DEBERTA}: {DECODING}-{enhanced} {bert} {with} {disentangled} {attention}", "year": "2021" }, { "authors": "Ruidan He; Linlin Liu; Hai Ye; Qingyu Tan; Bosheng Ding; Liying Cheng; Jiawei Low; Lidong Bing; Luo Si", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "On the effectiveness of adapter-based tuning for pretrained language model adaptation", "year": "2021" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "", "ref_id": "b19", "title": "Parameter-efficient transfer learning for NLP", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b20", "title": "", "year": "" }, { "authors": "Lifu Huang; Le Ronan; Chandra Bras; Yejin Bhagavatula; Choi", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Cosmos QA: Machine reading comprehension with contextual commonsense reasoning", "year": "2019" }, { "authors": "Qiao Jin; Bhuwan Dhingra; Zhengping Liu; William Cohen; Xinghua Lu", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "PubMedQA: A dataset for biomedical research question answering", "year": "2019" }, { "authors": "Mandar Joshi; Eunsol Choi; Daniel Weld; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension", "year": "2017" }, { "authors": "Daniel Khashabi; Sewon Min; Tushar Khot; Ashish Sabharwal; Oyvind Tafjord; Peter Clark; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "UNIFIEDQA: Crossing format boundaries with a single QA system", "year": "2020" }, { "authors": "James Kirkpatrick; Razvan Pascanu; Neil Rabinowitz; Joel Veness; Guillaume Desjardins; Andrei A Rusu; Kieran Milan; John Quan; Tiago Ramalho; Agnieszka Grabska-Barwinska; Demis Hassabis; Claudia Clopath; Dharshan Kumaran; Raia Hadsell", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b25", "title": "Overcoming catastrophic forgetting in neural networks", "year": "2017" }, { "authors": "Tomáš Kočiský; Jonathan Schwarz; Phil Blunsom; Chris Dyer; Karl Moritz Hermann; Gábor Melis; Edward Grefenstette", "journal": "Transactions of the Association for Computational 
Linguistics", "ref_id": "b26", "title": "The NarrativeQA reading comprehension challenge", "year": "2018" }, { "authors": "Tom Kwiatkowski; Jennimaria Palomaki; Olivia Redfield; Michael Collins; Ankur Parikh; Chris Alberti; Danielle Epstein; Illia Polosukhin; Jacob Devlin; Kenton Lee; Kristina Toutanova; Llion Jones; Matthew Kelcey; Ming-Wei Chang; Andrew M Dai; Jakob Uszkoreit; Quoc Le; Slav Petrov", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b27", "title": "Natural questions: A benchmark for question answering research", "year": "2019" }, { "authors": "Guokun Lai; Qizhe Xie; Hanxiao Liu; Yiming Yang; Eduard Hovy", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "RACE: Large-scale ReAding comprehension dataset from examinations", "year": "2017" }, { "authors": "Zhenzhong Lan; Mingda Chen; Sebastian Goodman; Kevin Gimpel; Piyush Sharma; Radu Soricut", "journal": "", "ref_id": "b29", "title": "Albert: A lite bert for self-supervised learning of language representations", "year": "2019" }, { "authors": "Kenton Lee; Ming-Wei Chang; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Latent retrieval for weakly supervised open domain question answering", "year": "2019-07-28" }, { "authors": "Brian Lester; Rami Al-Rfou; Noah Constant", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "The power of scale for parameter-efficient prompt tuning", "year": "2021" }, { "authors": "Lisa Xiang; Percy Li; ; Liang", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Prefix-tuning: Optimizing continuous prompts for generation", "year": "2021" }, { "authors": "Lisa Xiang; Percy Li; Liang", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Prefix-tuning: Optimizing continuous prompts for generation", "year": "2021-08-01" }, { "authors": "Kevin Lin; Oyvind Tafjord; Peter Clark; Matt Gardner", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Reasoning over paragraph effects in situations", "year": "2019" }, { "authors": "Evelyn ; Kai-Yan Liu", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Low-resource neural machine translation: A case study of Cantonese", "year": "2022" }, { "authors": "Haokun Liu; Derek Tam; Mohammed Muqeeth; Jay Mohta; Tenghao Huang; Mohit Bansal; Colin Raffel", "journal": "", "ref_id": "b36", "title": "Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning", "year": "2022" }, { "authors": "Pengfei Liu; Weizhe Yuan; Jinlan Fu; Zhengbao Jiang; Hiroaki Hayashi; Graham Neubig", "journal": "", "ref_id": "b37", "title": "Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing", "year": "2021" }, { "authors": "Pengfei Liu; Weizhe Yuan; Jinlan Fu; Zhengbao Jiang; Hiroaki Hayashi; Graham Neubig", "journal": "ACM Comput. 
Surv", "ref_id": "b38", "title": "Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing", "year": "2023" }, { "authors": "Xiao Liu; Kaixuan Ji; Yicheng Fu; Zhengxiao Du; Zhilin Yang; Jie Tang", "journal": "", "ref_id": "b39", "title": "P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks", "year": "2021" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b40", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Rabeeh Karimi Mahabadi; James Henderson; Sebastian Ruder", "journal": "", "ref_id": "b41", "title": "Compacter: Efficient low-rank hypercomplex adapter layers", "year": "2021" }, { "authors": "Todor Mihaylov; Peter Clark; Tushar Khot; Ashish Sabharwal", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Can a suit of armor conduct electricity? a new dataset for open book question answering", "year": "2018" }, { "authors": "Jason Phang; Thibault Févry; Samuel R Bowman", "journal": "", "ref_id": "b43", "title": "Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks", "year": "2018" }, { "authors": "Yada Pruksachatkun; Jason Phang; Haokun Liu; Mon Phu; Xiaoyi Htut; Richard Yuanzhe Zhang; Clara Pang; Katharina Vania; Samuel R Kann; Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Intermediate-task transfer learning with pretrained language models: When and why does it work", "year": "2020" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. 
Res", "ref_id": "b45", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2022" }, { "authors": "Pranav Rajpurkar; Jian Zhang; Konstantin Lopyrev; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "year": "2016" }, { "authors": "Ori Ram; Yuval Kirstain; Jonathan Berant; Amir Globerson; Omer Levy", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Few-shot question answering by pretraining span selection", "year": "2021" }, { "authors": "Siva Reddy; Danqi Chen; Christopher D Manning", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b48", "title": "CoQA: A conversational question answering challenge", "year": "2019" }, { "authors": "Matthew Richardson; J C Christopher; Erin Burges; Renshaw", "journal": "Association for Computational Linguistics", "ref_id": "b49", "title": "MCTest: A challenge dataset for the open-domain machine comprehension of text", "year": "2013" }, { "authors": "Anna Rogers; Matt Gardner; Isabelle Augenstein", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b50", "title": "Qa dataset explosion: A taxonomy of nlp resources for question answering and reading comprehension", "year": "2022" }, { "authors": "Andreas Rücklé; Gregor Geigle; Max Glockner; Tilman Beck; Jonas Pfeiffer; Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b51", "title": "AdapterDrop: On the efficiency of adapters in transformers", "year": "2021" }, { "authors": "Marzieh Saeidi; Max Bartolo; Patrick Lewis; Sameer Singh; Tim Rocktäschel; Mike Sheldon; Guillaume Bouchard; Sebastian Riedel", "journal": "", "ref_id": "b52", "title": "Interpretation of natural language rules in conversational machine reading", "year": "2018" }, { "authors": "Amrita Saha; Rahul Aralikatte; M Mitesh; Karthik Khapra; Sankaranarayanan", "journal": "Association for Computational Linguistics", "ref_id": "b53", "title": "DuoRC: Towards complex language understanding with paraphrased reading comprehension", "year": "2018" }, { "authors": "Maarten Sap; Hannah Rashkin; Derek Chen; Ronan Le Bras; Yejin Choi", "journal": "Association for Computational Linguistics", "ref_id": "b54", "title": "Social IQa: Commonsense reasoning about social interactions", "year": "2019" }, { "authors": "Kai Sun; Dian Yu; Jianshu Chen; Dong Yu; Yejin Choi; Claire Cardie", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b55", "title": "DREAM: A challenge data set and models for dialogue-based reading comprehension", "year": "2019" }, { "authors": "Oyvind Tafjord; Peter Clark; Matt Gardner; Wen Tau Yih; Ashish Sabharwal", "journal": "", "ref_id": "b56", "title": "Quarel: A dataset and models for answering questions about qualitative relationships", "year": "2018" }, { "authors": "Alon Talmor; Jonathan Herzig; Nicholas Lourie; Jonathan Berant", "journal": "Association for Computational Linguistics", "ref_id": "b57", "title": "CommonsenseQA: A question answering challenge targeting commonsense knowledge", "year": "2019" }, { "authors": "Adam Trischler; Tong Wang; Xingdi Yuan; Justin Harris; Alessandro Sordoni; Philip Bachman; Kaheer Suleman", "journal": "Association for Computational Linguistics", "ref_id": "b58", "title": "NewsQA: A machine comprehension dataset", "year": "2017" }, { "authors": "Tu Vu; Brian Lester; Noah Constant; Rami 
Al-Rfou; ' ; Daniel Cer", "journal": "Association for Computational Linguistics", "ref_id": "b59", "title": "SPoT: Better frozen model adaptation through soft prompt transfer", "year": "2022" }, { "authors": "Tu Vu; Minh-Thang Luong; Quoc Le; Grady Simon; Mohit Iyyer", "journal": "Association for Computational Linguistics", "ref_id": "b60", "title": "STraTA: Self-training with task augmentation for better few-shot learning", "year": "2021" }, { "authors": "Tu Vu; Tong Wang; Tsendsuren Munkhdalai; Alessandro Sordoni; Adam Trischler; Andrew Mattarella-Micke; Subhransu Maji; Mohit Iyyer", "journal": "Association for Computational Linguistics", "ref_id": "b61", "title": "Exploring and predicting transferability across NLP tasks", "year": "2020" }, { "authors": "Alex Wang; Jan Hula; Patrick Xia; Raghavendra Pappagari; R Thomas Mccoy; Roma Patel; Najoung Kim; Ian Tenney; Yinghui Huang; Katherin Yu; Shuning Jin; Berlin Chen; Benjamin Van Durme; Edouard Grave; Ellie Pavlick; Samuel R Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b62", "title": "Can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling", "year": "2019" }, { "authors": "Wenhan Xiong; Jiawei Wu; Hong Wang; Vivek Kulkarni; Mo Yu; Shiyu Chang; Xiaoxiao Guo; William Yang; Wang ", "journal": "Association for Computational Linguistics", "ref_id": "b63", "title": "TWEETQA: A social media focused question answering dataset", "year": "2019" }, { "authors": "Zhilin Yang; Zihang Dai; Yiming Yang; Jaime Carbonell; Russ R Salakhutdinov; Quoc V Le", "journal": "", "ref_id": "b64", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "year": "2019" }, { "authors": "Zhilin Yang; Peng Qi; Saizheng Zhang; Yoshua Bengio; William Cohen; Ruslan Salakhutdinov; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b65", "title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering", "year": "2018" }, { "authors": "Weihao Yu; Zihang Jiang; Yanfei Dong; Jiashi Feng", "journal": "", "ref_id": "b66", "title": "Reclor: A reading comprehension dataset requiring logical reasoning", "year": "2020" }, { "authors": "Sheng Zhang; Xiaodong Liu; Jingjing Liu; Jianfeng Gao; Kevin Duh; Benjamin Van Durme", "journal": "", "ref_id": "b67", "title": "Record: Bridging the gap between human and machine commonsense reading comprehension", "year": "2018" }, { "authors": "Wanjun Zhong; Yifan Gao; Ning Ding; Yujia Qin; Zhiyuan Liu; Ming Zhou; Jiahai Wang; Jian Yin; Nan Duan; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b68", "title": "ProQA: Structural promptbased pre-training for unified question answering", "year": "2022" }, { "authors": "Wanjun Zhong; Yifan Gao; Ning Ding; Yujia Qin; Zhiyuan Liu; Ming Zhou; Jiahai Wang; Jian Yin; Nan Duan", "journal": "", "ref_id": "b69", "title": "Proqa: Structural prompt-based pre-training for unified question answering", "year": "2022" }, { "authors": "Ben Zhou; Daniel Khashabi; Qiang Ning; Dan Roth", "journal": "Association for Computational Linguistics", "ref_id": "b70", "title": "going on a vacation\" takes longer than \"going for a walk\": A study of temporal commonsense understanding", "year": "2019" } ]
[]
10.18653/v1/D16-1047
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b13", "b29", "b44", "b25", "b28", "b42", "b27", "b48", "b41", "b12", "b49", "b25", "b32", "b13", "b3", "b31" ], "table_ref": [], "text": "Pre-trained language models with Transformers have achieved breakthroughs in many natural language processing (NLP) tasks (Devlin et al., 2019;Liu et al., 2019). One of the key advantages of Transformers over traditional feature-engineered NLP pipelines is that Transformers enable endto-end training from vast amount of data to automatically learn the optimal language representation (Mikolov et al., 2013b). However, most recent language models still require a separate preprocessing stage known as tokenization. Tokenization is a process that splits raw text parts into a list of discrete tokens from a fixed vocabulary. This † Work done during internship at Microsoft.\nPyTorch-like pseudocode is available in Appendix A.3. Table 1: Examples of sub-word tokenization of misspelled words (with their correct spellings in parentheses) and words from domain-specific corpus (e.g. business documents). Pre-trained sub-word tokenizer tends to over-segment words from these two categories, which produces less meaningful tokens.\npre-defined vocabulary remains as an important bottleneck preventing truly end-to-end training of language models (Tay et al., 2021;Islam et al., 2022).\nBased on the granularity of the basic token units, tokenization methods can be divided into three categories: character-based, subword-based and wordbased. A word-based tokenizer segments sentence into smaller chunks of words. Due to language complexity and memory limit, a word-based vocabulary can not represent all possible words. Wordlevel tokenization thus frequently runs into the issue of out-of-vocabulary words. A character-based tokenizer simply splits the text into a sequence of its characters. It is flexible to encode arbitrary words, but character-level tokenization produces long sequences, which is undesirable as the computational cost of Transformers grows quadratically with the sequence length. To strike a good balance between time and space complexity, most stateof-the-art pre-trained language models thus adopt sub-word tokenization. Data-driven sub-word tokenizers (Kudo and Richardson, 2018;Schuster and Nakajima, 2012;Kudo, 2018) are typically pre-trained on a general text corpus to learn a subword vocabulary based on the frequency of word fragments.\nDespite their popularity, sub-word tokenizers limit the robustness and generalizability of the language models built upon them. First, sub-word tokenizers are sensitive to small textual perturbations (Xue et al., 2022). While humans can still comprehend text with subtle misspellings and capitalization variants (Rawlinson, 2007;Davis, 2003), these perturbations can drastically change the tokenization results, potentially leading to a suboptimal text representation. Second, the sub-word vocabulary is pre-built and remains frozen during the language model pre-training and task-specific fine-tuning. Therefore, when adapting a pre-trained language model into a new language context (e.g. biomedical texts and business documents), the tokenizer is prone to excessive fragmentation of subword pieces (Yasunaga et al., 2022;Islam et al., 2022), as illustrated in Table 1. 
While this issue could be partially remedied by further task-specific pre-training or by collecting more fine-tuning data, such mitigation would be costly and not always possible.\nWe aim to bring the best of both character-based and word-based models to address the challenges discussed above. To this end, we propose a novel pre-trained language model with a hierarchical twolevel architecture. At the word level, we split the text sequence by characters, and introduce an intraword module that uses Transformers to learn a representation for each word in the sequence from the embeddings of their respective characters. At the sequence level, we introduce an inter-word module that contextualizes the embedding for every words in the text sequence. Our method does not require explicit sub-word or word-level vocabulary, and can thus be considered as an open-vocabulary approach (Mielke et al., 2021). By limiting the attention range to characters within the same word rather than the full sequence in the intra-word module, our model remains computationally efficient.\nIn order to validate our model, we comprehensively compare our method with various baseline methods, including the most popular sub-word based model BERT (Devlin et al., 2019), some state-of-the-art character-based models (Clark et al., 2022a;Boukkouri et al., 2020), and an hybrid character/sub-word model (Ma et al., 2020). Besides standard benchmarking, we also test the robustness of the various models in two ways: by introducing spelling noise into the validation set and by testing on cross-domain tasks.\nOur contributions can be summarized as follows:\n• We introduce a novel open-vocabulary pretrained language model with a hierarchical two-level architecture. Our method does not rely on pre-defined word or sub-word vocabulary.\n• We propose a novel adaptive and learnable aggregation method to summarize characterlevel features into word-level representations.\nAn ablation study highlights its effectiveness.\n• We show that our method outperforms strong baselines on multiple benchmarking datasets, while being computationally efficient.\n• We perform quantitative experiments and a case study to show that our model is robust to textual corruption and domain shift.\n2 Related Work" }, { "figure_ref": [], "heading": "Word-level Models", "publication_ref": [ "b36", "b19", "b2", "b37", "b10", "b15" ], "table_ref": [], "text": "Word embedding methods including Word2vec (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014) led to many early NLP breakthroughs. These methods learn vector space representations of words from large-scale unlabeled corpora, and encode semantic relationships and meanings (Goldberg and Levy, 2014). In order to generalize to rare words, Bhatia et al. (2016) proposed to use LSTM to learn word embedding from both morphological structure and word distribution. While early methods only learned a context-independent word representation, ELMo (Peters et al., 2018) proposed to use a deep bidirectional language model to learn contextualized word representations. In more recent studies, Transformer-XL (Dai et al., 2019) enhanced the Transformer architecture with a recurrence mechanism to learn contextualized word embedding through language modeling. Despite the recent progress, word-level models still face the out-of-vocabulary challenge for noisy text and non-canonical word forms (Eisenstein, 2013)." 
}, { "figure_ref": [], "heading": "Character-level Models", "publication_ref": [ "b16", "b20", "b26", "b4", "b48", "b44", "b0", "b25", "b44" ], "table_ref": [], "text": "Character-level language models emerged in the early years thanks to their simplicity and ability to better address out-of-vocabulary words compared to word-level models (Elman, 1990;Graves, 2013;Kalchbrenner et al., 2016). While sub-word based approaches gained popularity in language modeling due to their superior performance, recent studies (Choe et al., 2019;Xue et al., 2022) show that character/byte-level models can match the performance of their sub-word counterpart when provided with sufficient parameter capacity. In addition, character-level models have been shown to be more robust to text corruptions (Tay et al., 2021), adversarial attacks, and domain shifts (Aguilar et al., 2020).\nCharacter-level models also show promising results in multilingual settings. While sub-word or word tokenizers require a huge vocabulary to adequately cover various languages, multilingual character-based vocabulary can remain comprehensive and small. The text embedding layer does not eat up most of the model's parameter budget as in the multilingual BERT Base model for instance (up to 52%). More parameters can then be dedicated to the Transformer layers in characterbased approaches. Character-level models have also been shown to perform better on low-resource languages (Islam et al., 2022).\nAn important drawback of character-level models is that they typically require more computations than sub-word and word-level models. This is because character-level tokenization produces longer token sequences compared to sub-word or word based approaches, and the computational and memory demands of the self-attention mechanism grow quadratically with the sequence length. In order to address this challenge, CANINE (Clark et al., 2022b) leverages strided convolution to downsample the character sequence, while Charformer (Tay et al., 2021) uses average pooling. Although these methods improve the computational efficiency of character-level models, they require a predefined static downsampling rate. Such downsampling operation often breaks the boundary of basic linguistic units, including morphemes and words." }, { "figure_ref": [], "heading": "Hybrid Models", "publication_ref": [ "b1", "b31", "b3", "b0" ], "table_ref": [], "text": "Vanilla character-level models do not explicitly extract word or sub-word morpheme representations, which might negatively impact their performance on word-level downstream tasks, including namedentity recognition and extractive question answering. In order to address this issue, there have been efforts to combine character-level and word/subword level approaches to build hybrid models. These works propose to use information from char-acter spelling to inform word representation. For example, Flair (Akbik et al., 2018) proposed to use the internal states of a pre-trained character language model to produce word-level embeddings. CharBERT (Ma et al., 2020) combined sub-word tokens and character tokens and fused their heterogeneous representations. CharacterBERT (Boukkouri et al., 2020) used a CNN to learn word-level representations from the embeddings of their characters, but still requires a word-level vocabulary for pretraining. Char2Subword (Aguilar et al., 2020) proposed a similar approach, where character embeddings are used to mimic pre-trained representation of sub-word tokens with Transformer encoder." 
}, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b35" ], "table_ref": [], "text": "Most character-level Transformer encoder models are sub-optimal for two reasons: (1) Dense self-attention on long character sequence is computationally expensive; (2) They do not leverage word boundary, which is an important inductive bias in linguistics. To overcome these challenges, we propose to decompose dense character-level Transformer encoder into two parts: intra-word Transformer encoder and inter-word Transformer encoder. Our hierarchical language model (HLM) adopts an hourglass structure (Nawrot et al., 2022) and contains three main components: (1) an intraword module that learns word embeddings from their characters; (2) an inter-word module which contextualizes the word representations by attending to all words in the input sequence; (3) an intra-word prediction head for character-level pretraining. The overall architecture of our model is shown in Fig. 1. In the following sections, we discuss each component separately." }, { "figure_ref": [], "heading": "Intra-word Module", "publication_ref": [ "b3", "b37", "b3", "b17" ], "table_ref": [], "text": "We aim to learn word-level representations from the embeddings of their characters. An ideal approach should be able to handle words of arbitrary lengths, attend to every character rather than a local window, and remain computationally efficient. Therefore, we choose a shallow (4 layers in our experiments) Transformer encoder to learn contextualized character embeddings, rather than a CNN or a LSTM used by previous methods (Boukkouri et al., 2020;Peters et al., 2018). Either average or max pooling (Boukkouri et al., 2020;Clark et al., 2022b) is commonly used to aggregate contextualized character embeddings and thus reduce the Figure 1: Overview of the proposed method. The intra-word module learns contextualized character embeddings by referring to characters from the same word. A [WORD_CLS] token is inserted at the beginning of each word to learn word-level representations. The inter-word module then learns contextualized word-level features by attending to all words in the sequence. Finally, the word-level and character-level embeddings are concatenated and fed to the intra-word prediction head for the pre-training task of masked character modeling. The prediction head is not used in downstream tasks. sequence length. However, such simple pooling tends to wash out strong signals from particular morphemes (Fathi and Maleki Shoja, 2018). To address this challenge, we propose a novel adaptive and learnable aggregation method. Inspired by the approach of using the hidden state of the [CLS] token as the aggregate sequence-level representation, we insert a special [WORD_CLS] token at the beginning of every word. The embeddings of the [WORD_CLS] tokens are then used as wordlevel representations. Formally, for the i-th word of C i characters in the sequence, we extract its word-level representation r i as:\nh i = f θ (e i 0 ⊕ e i 1 ⊕ . . . ⊕ e i C i ) r i = h i 0 ,\nwhere f θ is the intra-word Transformers that produces a contextualized representation h i for each character of the i-th word, e i 0 is the embedding of the special [WORD_CLS] token, e i c is the c-th character embedding of the i-th word, and ⊕ denotes concatenation along the sequence dimension.\nIn Sec. 4.4, we conduct an ablation study to show that the proposed aggregation method outperforms the standard average or max pooling. 
By aggregating character-level tokens into word-level tokens, the token sequence length is greatly reduced for the subsequent inter-word module." }, { "figure_ref": [], "heading": "Inter-word Module", "publication_ref": [ "b13" ], "table_ref": [], "text": "After obtaining word-level features, we apply an inter-word module consisting of deep transformer encoder layers to extract contextualized word-level representation by attending to all words in the sequences. Formally, the contextualized representation w i of the i-th word of the sequence of N words is given as:\nw i = f ϕ (r 0 ⊕ . . . ⊕ r N -1 ),\nwhere f ϕ denotes the inter-word Transformers.\nWe set the depth of the inter-word Transformer encoder stack to 12 in order to match the settings of BERT Base (Devlin et al., 2019) and CA-NINE (Clark et al., 2022b). The inter-word module contributes the most to the total model parameters." }, { "figure_ref": [], "heading": "Intra-word Prediction Head", "publication_ref": [], "table_ref": [], "text": "Since we adopt an open-vocabulary approach, we propose to use character-level masked language modeling as pre-training task. To restore the character-level token sequence, we concatenate the contextualized character representations from the intra-word module (the initial [WORD_CLS] token is omitted) with the word-level features from the inter-word module along the sequence dimension. Finally, we apply a lightweight intra-word prediction head to get the posterior token probabilities.\nFormally, the prediction of the C i characters from the i-th word are given by:\nc i = f σ (w i ⊕ h i 1 ⊕ . . . ⊕ h i C i ),\nwhere f σ is the intra-word prediction head, consisting of a single Transformer layer, a fully-connected layer and a Softmax layer. Note that the intra-word prediction head is only used during pre-training for the masked character modeling task. During downstream fine-tuning, similar to CANINE, we concatenate initial word embedding r i and contextualized word representation w i along the feature dimension, and subsequently employ a small feedforward network to integrate both low-level and high-level information for prediction." }, { "figure_ref": [], "heading": "Pre-training Task", "publication_ref": [ "b52" ], "table_ref": [], "text": "Following the practice of BERT, we pre-train our model on English Wikipedia and BookCorpus dataset (19G) (Zhu et al., 2015). We pre-train the model for 3 epochs (3.9M steps with batch size set as 16) on a server with 8 NVIDIA Tesla V100 GPUs, and each epoch takes 137 hours. We adopt whole-word masked language modeling as pre-training task. In detail, we randomly select 15% of words from the input sequence, and mask every characters in the selected word. We replace the character tokens in 80% of the selected masked word with the [MASK] token. For 10% of the selected masked words, we replace their characters with randomly selected characters drawn from our character vocabulary. The remaining 10% words are unchanged. The three main components of our model are jointly trained in end-to-end fashion." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b24", "b22", "b23", "b30" ], "table_ref": [], "text": "We use spaCy (Honnibal et al., 2020) to split sentences into words, which is rule-based using space, punctuation and special rules (e.g. splitting don't into do and n't). We use a case-sensitive character vocabulary of size 1024, which consists of letters, digits and symbols. 
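As an illustration of how the three components described above fit together, here is a minimal sketch that stacks the intra-word encoder from the earlier snippet with an inter-word encoder and the character prediction head. The class name HLMForMaskedCharModeling and the padding handling are our assumptions rather than the released implementation, and the softmax is folded into a downstream cross-entropy loss.

import torch
import torch.nn as nn

class HLMForMaskedCharModeling(nn.Module):
    # Hierarchical sketch: intra-word encoder -> inter-word encoder -> intra-word
    # prediction head, mirroring h^i, w^i and c^i from the equations above.
    def __init__(self, intra_word_encoder, vocab_size=1024, hidden=768,
                 inter_layers=12, num_heads=12, ffn=3072):
        super().__init__()
        self.intra = intra_word_encoder                     # e.g. the IntraWordEncoder sketched earlier
        inter_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=num_heads, dim_feedforward=ffn,
            activation="gelu", batch_first=True)
        self.inter = nn.TransformerEncoder(inter_layer, num_layers=inter_layers)
        head_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=num_heads, dim_feedforward=ffn,
            activation="gelu", batch_first=True)
        self.head = nn.TransformerEncoder(head_layer, num_layers=1)  # lightweight prediction head
        self.char_classifier = nn.Linear(hidden, vocab_size)

    def forward(self, char_ids, char_pad_mask, word_pad_mask):
        # char_ids / char_pad_mask: [B, W, C]; word_pad_mask: [B, W], True for padding words
        h, r = self.intra(char_ids, char_pad_mask)             # h: [B, W, C, D], r: [B, W, D]
        w = self.inter(r, src_key_padding_mask=word_pad_mask)  # contextualized word features
        B, W, C, D = h.shape
        # Prepend each word's contextualized feature w^i to its character states
        # h^i_1 ... h^i_{C_i} (the [WORD_CLS] slot is dropped), as in the c^i equation.
        per_word = torch.cat([w.unsqueeze(2), h[:, :, 1:, :]], dim=2).reshape(B * W, C, D)
        out = self.head(per_word)                               # padding masks omitted for brevity
        logits = self.char_classifier(out[:, 1:, :])            # one prediction per character slot
        return logits.reshape(B, W, C - 1, -1)

During fine-tuning the prediction head would be dropped and r^i and w^i concatenated along the feature dimension instead, as described above.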
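The whole-word masking used for pre-training can likewise be sketched in a few lines. The special-token ids and the treatment of the [WORD_CLS] position are assumptions made for illustration; the corruption mode (mask / random / keep) is drawn once per selected word, following the 80/10/10 split described above.

import random

MASK_ID = 3                 # assumed id of [MASK] in the character vocabulary
NUM_SPECIAL = 4             # assumed number of reserved special ids at the front of the vocabulary
VOCAB_SIZE = 1024

def whole_word_mask(word_char_ids, mask_prob=0.15):
    # word_char_ids: one sequence as a list of words, each a list of character ids,
    # with position 0 of every word holding the [WORD_CLS] token (never masked).
    # Returns corrupted ids and labels; label -100 marks positions excluded from the loss.
    corrupted, labels = [], []
    for chars in word_char_ids:
        if random.random() >= mask_prob:                  # word not selected: copy through
            corrupted.append(list(chars))
            labels.append([-100] * len(chars))
            continue
        mode = random.random()                            # one corruption mode per selected word
        new_chars, new_labels = [chars[0]], [-100]
        for c in chars[1:]:
            new_labels.append(c)                          # every character of the word is predicted
            if mode < 0.8:
                new_chars.append(MASK_ID)                 # 80%: replace all characters with [MASK]
            elif mode < 0.9:
                new_chars.append(random.randrange(NUM_SPECIAL, VOCAB_SIZE))  # 10%: random characters
            else:
                new_chars.append(c)                       # 10%: leave the word unchanged
        corrupted.append(new_chars)
        labels.append(new_labels)
    return corrupted, labels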
The maximum sequence length is set to 20 characters for the intra-word module and 512 words for the inter-word module. A [CLS] and a [SEP] token are inserted at the beginning and end of each sequence respectively. The hidden size is set to 768, the number of attention heads is set to 12, the feed-forward dimension in the Transformer encoder is set as 1536 and 3072 for intra-word and inter-word modules respectively. We leverage relative position (He et al., 2021) in our model, and we do not use token type embedding. GELU (Hendrycks and Gimpel, 2016) is used as activation function. Our model contains 125M parameters. We use the AdamW optimizer (Loshchilov and Hutter, 2018) for model pre-training and fine-tuning. For the pre-training, the weight decay is set to 0.01 and the number of warmup steps is set to 10,000. A linear learning rate decay schedule is used, starting at 5e-5. The dropout rate is set to 0.1. More algorithm details can be found in Appendix A.3." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We evaluate the performance of our pre-trained model on a wide range of downstream tasks. We compare the performance of our pre-trained hierarchical language model (HLM) with various baseline methods, including the popular sub-word based BERT model, three recent byte/characterlevel models, as well as a hybrid model referred to as CharacterBERT. For BERT, we use the cased BERT Base model (108M parameters) to match our inter-word Transformers module setup. For CA-NINE, we adopt CANINE-C (132M) which also uses a character-level pre-training task. For Charac-terBERT, we use the general version (105M) which is pre-trained on English Wikipedia and OpenWeb-Text. For those baseline models, we use the pretrained weights hosted on Huggingface † or released by the authors. For Charformer (203M) and Bytelevel T5 (200M), we use results of the base version from the original paper as pre-trained weight is not available." }, { "figure_ref": [], "heading": "Evaluation on Standard Benchmarks", "publication_ref": [ "b40", "b39", "b46", "b3", "b31", "b46", "b47", "b14", "b40" ], "table_ref": [ "tab_0", "tab_0" ], "text": "In order to assess our model's performance on general domain, we evaluate our methods on standard English NLP benchmarks, including Stanford Question Answering Dataset (SQuAD) task (Rajpurkar et al., 2016(Rajpurkar et al., , 2018) ) and GLUE tasks (Wang et al., 2018). For the SQuAD task, we benchmark on both SQuAD 1.1 and 2.0 versions. SQuAD 1.1 dataset contains 100,000+ questions with associated context documents, and every question is answerable given the context. SQuAD 2.0 dataset contains an additional 50,000 unanswerable questions. We fine-tune the models for 2 epochs with a batch size of 16, and a learning rate of 3e-5. The evaluation on the validation set is shown in (Clark et al., 2022b) 72.9 82.1 66.6 70.3 84.8 84.6 76.9/78.2 CharacterBERT (Boukkouri et al., 2020) 79.9 87.5 71.5 74.6\n84.1 89.9 81.9/82.6 CharBERT (Ma et al., 2020) 82.9 89.9 75. Table 2: Experimental results on the validation set of question answering and text classification tasks. We report exact match (EM) and F1 scores for SQuAD, and accuracy for text classification tasks.\nthe two evaluation metrics. 
Our method outperforms all the baseline methods on both SQuAD versions.\nWe also benchmark our model on three text classification tasks from the widely adopted GLUE tasks (Wang et al., 2018), including MNLI (Williams et al., 2018), MRPC (Dolan and Brockett, 2005) and QNLI (Rajpurkar et al., 2016). The MNLI dataset contains 393k training samples with textual entailment annotations. Given a sentence pair containing a premise and an hypothesis, the task is to predict whether the premise entails the hypothesis, contradicts the hypothesis, or neither. We conduct evaluation in both matched and mismatched settings. The MRPC dataset contains 3.7k of training sentence pairs, and the task is to predict whether the two sentences are semantically equivalent. The QNLI dataset contains 108k training samples of question-paragraph pairs, and the task is to predict whether the context sentence contains the answer to the question. We fine-tune the models on the datasets described above for 5 epochs, with a batch size of 16, and a learning rate of 2e-5. We use the accuracy as the evaluation metric. As shown in Table 2, our proposed method outperforms the baseline methods on all tasks.\nIn order to investigate the model's performance when the size is scaled up, we increase the size of our HLM to match BERT Large and benchmark the performance. The preliminary results can be found in Appendix A.2." }, { "figure_ref": [], "heading": "Robustness to Textual Corruptions", "publication_ref": [ "b11", "b9" ], "table_ref": [], "text": "Humans are prone to making spelling mistakes. For example, 10-15% of web search queries contain misspellings (Dalianis, 2002;Cucerzan and Brill, 2004). In order to test our model's robustness to misspellings, we add synthetic noise to the fine- † Use results from tuning and evaluation set of downstream tasks and re-evaluate all the models. Following the practice of Xue et al. ( 2022), we experiment with three types of noises: (1) Random drop: We randomly delete 10% of characters (spaces and punctuation are included) from the input sequence; (2) Random repeat: We randomly select 20% of characters, then append 1-3 repetitions (with equal probability) after the the selected original characters; (3) Random case: We randomly set the case for each character (upper or lower) in the input sequence.\nWe perform the perturbation experiments on two representative downstream tasks: text classification on MNLI dataset and question answering on SQuAD 2.0. For the MNLI dataset, we add noise to both premise and hypothesis sentences. For the SQuAD 2.0 dataset, we only apply the perturbations to the question sentence, but not to the context paragraph, in order to avoid copying corrupted answer from the context for extractive QA models. The evaluation results are shown in Table 3. We found that BERT's performance significantly 83.5(-0.9) 83.5(-0.7) 76.3(-0.4) 79.3(-0.5)\nTable 3: Evaluation of models under various types of learnable noise. We apply the perturbations to both fine-tuning data and evaluation set. We report the performance value and degradation compared to the standard evaluation in parentheses. Bold face indicates the best absolute performance. We do not report results for randomly switching case for CharacterBERT as it is an uncased model.\ndrops under perturbation, one explanation being that even subtle misspellings would greatly change the sub-word tokenization results. In comparison, character-level models including CANINE degrade less in the presence of noise. 
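For reference, the three perturbations are simple to reproduce. The following helper functions are our own sketch of the corruption process; the probabilities follow the description above and are applied independently per character.

import random

def random_drop(text, p=0.10):
    # randomly delete 10% of characters (spaces and punctuation included)
    return "".join(c for c in text if random.random() >= p)

def random_repeat(text, p=0.20):
    # after 20% of characters, append 1-3 repetitions of that character
    out = []
    for c in text:
        out.append(c)
        if random.random() < p:
            out.append(c * random.randint(1, 3))
    return "".join(out)

def random_case(text):
    # set the case of every character to upper or lower with equal probability
    return "".join(c.upper() if random.random() < 0.5 else c.lower() for c in text)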
We also present the results for unseen perturbation setting in Appendix A.4. Overall, our proposed HLM is robust to different kinds of perturbation and achieves the best performance.\nIn order to access the model's robustness to various magnitude of perturbations, we add different amounts of noise to the QNLI dataset and perform the evaluation. In practice, we randomly sample 5%, 10%, 15%, 20% of characters for each example in the finetuning data and validation set. For each selected character, we either drop the character or repeat the character as mentioned above (equal probability). The accuracy on the validation set is shown in Fig. 2." }, { "figure_ref": [ "fig_3" ], "heading": "Robustness to Domain Shift", "publication_ref": [ "b8", "b21", "b43", "b45" ], "table_ref": [], "text": "Most generic language models are pre-trained on web-crawled text corpora including Wikipedia and Common Crawl. But in real world deployments, models are often used in a different domain, an issue referred to as domain shift. In order to evaluate the robustness to domain shift, we finetune and evaluate the pre-trained models on downstream tasks from specialized domains including biomedicine and social media. For the biomedical field, we perform the evaluation on the NCBIdisease dataset (Crichton et al., 2017;Gu et al., 2021) Table 4: Evaluation results on cross-domain tasks. We report F1 score on the test set as the evaluation metric.\ntask is framed as a named entity recognition (NER) problem where the entities are the disease mentions. We fine-tune the models for 20 epochs, with a batch size of 16, and a learning rate of 2e-5. For the social media experiment, we leverage the W-NUT16 NER shared task (Strauss et al., 2016). This dataset contains 7,244 tweets annotated with 10 NER categories, including person, location, company and others. We fine-tune the models for 5 epochs. The evaluation results on the test sets are shown in Table 4. We use the F1 score as the evaluation metric.\nAs observed, the proposed HLM outperforms the baseline methods, highlighting its higher robustness to domain shift.\nCase study In order to understand the performance gain of our model over sub-word based BERT on cross-domain tasks, we look into the cases where BERT makes incorrect predictions. We found that many of these cases contain excessively fragmented words. Table 5 shows two examples from the NCBI-disease NER task. The word fragility in case 1 is segmented into f, ##rag, ##ility, and the word rupture in case 2 is segmented into r, ##up, ##ture. We think these tokenization results are sub-optimal as they break word morphemes, which possibly explains BERT's mispredictions. In comparison, we use BertViz (Vig, 2019) to visualize the behavior of our HLM model. Specifically, we visualize the attention patterns of the [WORD_CLS] token of the last Transformer layer of our intra-word module. As shown in Fig. 3, the [WORD_CLS] token for the word fragility and rupture are primarily attended by the character string fragil and rupt respectively, which are the stems of the words." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b44" ], "table_ref": [ "tab_6" ], "text": "In this section, we perform an ablation study to compare the effect of different word-level aggrega- tion methods. Specifically, we replace the proposed special token learning-based aggregation with standard aggregation methods such as average pooling and max pooling. 
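The compared aggregation variants can be expressed as a single function over the intra-word encoder outputs. This sketch, with our own argument names and simplified masking, is meant only to clarify what each ablation setting computes.

import torch

def aggregate_word_repr(h, char_pad_mask, mode="word_cls"):
    # h:             [B, W, C, D] contextualized character states (position 0 = [WORD_CLS])
    # char_pad_mask: [B, W, C]    True where a character position is padding
    if mode == "word_cls":                                    # proposed learned aggregation
        return h[:, :, 0, :]
    valid = (~char_pad_mask[:, :, 1:]).unsqueeze(-1).float()  # drop [WORD_CLS], ignore padding
    chars = h[:, :, 1:, :]
    if mode == "mean":                                        # ablation: word-level average pooling
        return (chars * valid).sum(dim=2) / valid.sum(dim=2).clamp(min=1.0)
    if mode == "max":                                         # ablation: word-level max pooling
        # fully padded word slots yield -inf here and are assumed to be excluded downstream
        return chars.masked_fill(valid == 0, float("-inf")).amax(dim=2)
    raise ValueError(f"unknown aggregation mode: {mode}")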
We did not implement the strided convolution proposed in CANINE as it can not handle the variable word lengths. We report the validation accuracy on MRPC and the test F1 score on NCBI-disease in Table 6. Our learned aggregation outperforms the standard pooling strategies. Note that average and max pooling are usually performed on a fixed-length window of characters in previous studies (Tay et al., 2021), not adaptively at the word-level as in our ablation study." }, { "figure_ref": [], "heading": "Computational Efficiency", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "In this section, we benchmark the computational efficiency of the proposed model. Specifically, we measure the inference throughput (number of processed samples per second) on the test set of the MRPC dataset, a sub-task of the GLUE benchmark. We evaluate different models on the same server with one NVIDIA Tesla V100 GPU. The batch size is set to 32 and we use single precision. The evalu-ation results are shown in Table 7. While BERT is the most computationally efficient, our HLM also performs competitively, the performance gap being smaller compared to other character-level baseline models. We speculate that this performance gain comes from our hierarchical architecture. By aggregating character tokens into word-level tokens, the sequence length is drastically reduced for the inter-word module which has the deepest Transformer stack. We provide more analysis on the computational complexity in Appendix A.1." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a novel hierarchical language model for open-vocabulary language understanding. Our method does not rely on explicit sub-word or word vocabulary. We demonstrate that our HLM model outperforms baseline methods on standard benchmarks, and highlight its robustness to spelling errors and domain shifts. In future work, we will expand our language support and explore incorporating a decoder for generative tasks." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b38", "b5" ], "table_ref": [], "text": "This work has two main limitations. First, we only consider baseline models with similar amount of parameters, and pre-trained on similar scale of text corpus for comparison. While we are aware of recent models including T5 (Raffel et al., 2020) and PaLM (Chowdhery et al., 2022), they either use huge corpus like C4 (745GB text) for pre-training or contain significantly more parameters than ours.\nIn the future, we will try to find additional computational resources to scale up our model and pre-train on larger text corpus. Second, we leverage spaCy to segment sentences into words, which is rule-based using spaces, punctuations and other rules. This approach works well on English and many other common languages such as French, German and Spanish. But for a few languages that do not use spaces to split words (e.g. Chinese and Japanese), it will be challenging to retrieve word boundaries. To address this issue, we consider either falling back to character splitting for these languages (similar to multilingual BERT) or employing a more sophisticated word boundary detector in future work." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [ "b50" ], "table_ref": [ "tab_7" ], "text": "A.1 Analysis on Computational Complexity\nLet N denotes the character length of input sequence. 
Without loss of generality, we assume the words in the sequence are of the same length M. The multi-head self-attention module is the major component of the Transformer. While it provides a global receptive field, the computational cost and memory footprint grow quadratically with the input sequence length (Zeng et al., 2021). Therefore, for a vanilla character-based Transformer with dense self-attention, the computational and space complexity is $O(N^2)$.\nFor our proposed HLM, the input sequence is still at the character level, but we sparsify the dense self-attention by introducing a hierarchical architecture. For the intra-word module, each character token only attends to characters from the same word. Since there are $N/M$ words in the sequence, the computational and space complexity of the intra-word module is\n$O\left(\frac{N}{M} \cdot M^2\right) = O(NM)$ (1)\nFor the inter-word module, since it only operates on word-level tokens, the computational and space complexity is\n$O\left(\frac{N^2}{M^2}\right)$ (2)\nSince typically N ≫ M, and we have a shallow intra-word module and a deeper inter-word module, Eq. 2 dominates the computational and space complexity of the full model, which is significantly lower than that of the vanilla character-level model.\nIn comparison to sub-word based models like BERT, our inter-word module operates on the word-level token sequence, which is always equal to or shorter than the sub-word-level token sequence. Therefore, although our model has an extra intra-word module, we empirically observe in Table 7 that our HLM is competitive in terms of computational efficiency compared to sub-word based models." }, { "figure_ref": [], "heading": "A.2 Preliminary Evaluation of Scaled Model", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "In this section, we scale up our model size and benchmark the performance. In order to match BERT Large, we set the number of layers in the inter-word Transformer encoder to 24, and the feed-forward dimension of the Transformer encoder is set to 2048 and 4096 for the intra-word and inter-word modules, respectively. We set the number of attention heads to 16 and the hidden size to 1024. The batch size is set to 128. Other hyperparameters are the same as for HLM Base, described in Section 3. Due to limited access to computational resources, we could only pre-train the model for 370k steps by the camera-ready deadline. In comparison, BERT Large was pretrained for 1M steps with a batch size of 256. Therefore, our computational budget is about 1/6 of BERT's. We benchmark our model's performance on the SQuAD dataset. The evaluation on the validation set is shown in Table 8. We use exact match (EM) and F1 scores as the two evaluation metrics. Our model performs competitively compared with BERT Large, despite the fact that our HLM Large had a significantly smaller computational budget for pretraining." }, { "figure_ref": [], "heading": "A.3 Algorithm Details", "publication_ref": [], "table_ref": [], "text": "In this section, we provide algorithm details for our input pre-processing and model algorithm. Our pre-processing consists of the following steps. First, we split each sentence into a list of words. Next, we map characters to codepoint indexes using a character-level vocabulary, and insert a [WORD_CLS] token at the start of each word. Next, we insert a [CLS] token at the start and a [SEP] token at the end of each sequence. Then we truncate the token sequence based on both character-level (20 characters for each word) and word-level (512 words per sentence) limits.
Next, we compute the maximum number of characters for words in the batch, and pad all words to this length. We also determine the maximum number of words in the sequence batch, and pad all sequences to this length. The pre-processed batch can then be represented as a matrix of shape [batch_size, max_num_word, max_num_char]. Our representation of the text batch enables us to efficiently switch between intra-word self-attention and inter-word self-attention by simply reshaping, as shown in Algorithm 1.\nWe provide pseudocode for pre-training of our HLM in Algorithm 1. For better readability, we omit implementation details, including the attention mask that avoids performing attention on the [PAD] tokens and the handling of padded words. We recommend padding the input matrix to multiples of 8 for better acceleration on GPU. We also found in a subsequent study that a residual connection between the initial word embedding $r_i$ and the contextualized word embedding $w_i$ improves the performance." }, { "figure_ref": [], "heading": "A.4 Robustness to Unseen Perturbations", "publication_ref": [], "table_ref": [], "text": "In this section, we benchmark the model's robustness to unseen noise. Specifically, we only add noise to the evaluation set, while using the original fine-tuning data. We experiment with three types of perturbation as introduced in Section 4.2. The results are shown in Table 9. In all three scenarios, our proposed HLM outperforms baseline methods, showing better robustness.\nTable 9: Evaluation of the models under various types of unseen noise. The perturbations are only applied to the evaluation sets, while the fine-tuning data is left untouched. We report the performance value and degradation compared to the standard evaluation (no perturbation) in parentheses. Bold face indicates the best absolute performance. We do not report results for the random case-switching perturbation for CharacterBERT as it is an uncased model." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was partially supported by NIH Award Number 1R01HL141813-01 and the Pennsylvania Department of Health. We are grateful for the computational resources provided by Pittsburgh Super Computing grant number TGASC170024." } ]
Current state-of-the-art models for natural language understanding require a preprocessing step to convert raw text into discrete tokens. This process, known as tokenization, relies on a pre-built vocabulary of words or sub-word morphemes. This fixed vocabulary limits the model's robustness to spelling errors and its capacity to adapt to new domains. In this work, we introduce a novel open-vocabulary language model that adopts a hierarchical two-level approach: one at the word level and another at the sequence level. Concretely, we design an intra-word module that uses a shallow Transformer architecture to learn word representations from their characters, and a deep inter-word Transformer module that contextualizes each word representation by attending to the entire word sequence. Our model thus directly operates on character sequences with explicit awareness of word boundaries, but without a biased sub-word or word-level vocabulary. Experiments on various downstream tasks show that our method outperforms strong baselines. We also demonstrate that our hierarchical model is robust to textual corruption and domain shift.
From Characters to Words: Hierarchical Pre-trained Language Model for Open-vocabulary Language Understanding
[ { "figure_caption": ") ch, ##anga, ##ble (change, ##able) outragous (outrageous) out, ##rag, ##ous (outrage, ##ous) Domain Shift reimbursement re, ##im, ##bur, ##se, ##ment invoice in, ##vo, ##ice", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "to a, ##ort, ##ic r, ##up, ##ture in early adult life Case study of two examples from NCBI-disease NER task. The tagging schema for disease entities are beginning (B), inside (I), and outside (O). Pink/green colors indicate incorrect/correct predictions respectively.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Visualization of the attention patterns at the last layer of our intra-word module. Colored rectangles indicate the 12 attention heads. Color brightness and line weight reflect the attention scores. In the two examples, the [WORD_CLS] token is mainly attended by the stems of the words, fragil and rupt, respectively.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "SQuAD 1.1 SQuAD 2.0 MRPC QNLI MNLI (m/mm)ModelsEMF1EMF1AccAccAccBERT (Devlin et al., 2019)81.3 88.7 72.9 76.186.790.083.3/84.2Byte-level T5 † (Xue et al., 2022)----87.388.782.5/82.7Charformer (Tay et al., 2021)----87.389.082.6/82.7CANINE", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Tay et al. (2021) ", "figure_data": "90.0%87.5%Accuracy80.0% 82.5% 85.0%75.0% 77.5%HLM (Ours) BERT CharacterBERT CANINE0%5% Percentage of Perturbed Characters 10% 15%20%", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": ", which contains 7,287 sentences annotated with disease mentions from PubMed abstracts. The", "figure_data": "ModelNCBI-disease (F1) W-NUT16 (F1)BERT83.845.7CANINE75.232.0CharacterBERT84.734.0HLM (Ours)86.447.9", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Skinfragility in most cases is due to mutations in the gene encoding ... BERT tokens Skin f, ##rag, ##ility in most cases is due to mutations in the gene encoding", "figure_data": "BERTOOOOOO O OOO OOOHLM (Ours)BIOOOO O OOO OOOLabelBIOOOO O OOO OOOText... a disease leading toaorticrupturein early adult life", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Word-level aggregation comparisons. All models are pre-trained for 1.5 epochs.", "figure_data": "DatasetAverage pooling Max pooling OursMRPC (Acc)82.183.686.0NCBI-disease (F1)85.385.986.6ModelThroughput (sample/sec)BERT93.8CANINE44.3CharacterBERT78.4HLM (Ours)90.3", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Evaluation results on computational efficiency.", "figure_data": "", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Experimental results on the validation set of question answering tasks. 
We report exact match (EM) and F1 scores for SQuAD.", "figure_data": "SQuAD 1.1 SQuAD 2.0ModelsEMF1EMF1BERTLarge84.1 90.9 78.7 81.9HLMLarge (370k steps) 83.4 90.2 78.2 81.3", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Algorithm 1 Pseudocode for HLM, PyTorch-like # embeddings: character-level embedding lookup table # intra_word_encoder: Intra-word Transformer encoder # inter_word_encoder: Inter-word Transformer encoder # intra_word_head: Intra-word prediction head for input_ids, labels in loader: # load a minibatch with n samples input_embeds = embeddings(input_ids) batch_size, num_word, num_char, hidden_size = input_embeds.shape # reshape to let Transformers attend to intra-word tokens rather than full sequence input_embeds = input_embeds.reshape((batch_size*num_word, num_char, hidden_size)) initial_embeds = intra_word_encoder(input_embeds) # extract embedding for [WORD_CLS] token, which is always at the beginning of each word word_embeds = initial_embeds[:,0,:] # reshape and extract contextualized inter-word representation word_embeds = word_embeds.reshape((batch_size, num_word, hidden_size)) word_embeds = inter_word_encoder(word_embeds) word_embeds = word_embeds.reshape((batch_size*num_word, 1, hidden_size)) # concatenate to restore the character-level token sequence char_embeds = concatenate([word_embeds, initial_embeds[:,1:,:]], axis=1) char_logits = intra_word_head(char_embeds) char_logits = char_logits.reshape((batch_size, num_word, num_char, -1)) loss = CrossEntropyLoss(char_logits, labels) # masked character modeling loss loss.backward() # back-propagate # AdamW update update(embeddings, intra_word_encoder, inter_word_encoder, intra_word_head)", "figure_data": "MNLI(m/mm)SQuAD 2.0ModelMatched Acc Mismatched AccEMF1", "figure_id": "tab_9", "figure_label": "", "figure_type": "table" } ]
Li Sun; Florian Luisier; Kayhan Batmanghelich; Dinei Florencio; Cha Zhang
[ { "authors": "Gustavo Aguilar; Bryan Mccann; Tong Niu; Nazneen Rajani; Nitish Keskar; Thamar Solorio", "journal": "", "ref_id": "b0", "title": "Char2subword: Extending the subword embedding space using robust character compositionality", "year": "2020" }, { "authors": "Alan Akbik; Duncan Blythe; Roland Vollgraf", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Contextual string embeddings for sequence labeling", "year": "2018" }, { "authors": "Parminder Bhatia; Robert Guthrie; Jacob Eisenstein", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Morphological priors for probabilistic neural word embeddings", "year": "2016" }, { "authors": "Hicham El Boukkouri; Olivier Ferret; Thomas Lavergne; Hiroshi Noji; Pierre Zweigenbaum; Junichi Tsujii", "journal": "", "ref_id": "b3", "title": "Characterbert: Reconciling elmo and bert for word-level open-vocabulary representations from characters", "year": "2020" }, { "authors": "Dokook Choe; Rami Al-Rfou; Mandy Guo; Heeyoung Lee; Noah Constant", "journal": "", "ref_id": "b4", "title": "Bridging the gap for tokenizer-free language models", "year": "2019" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b5", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Jonathan H Clark; Dan Garrette; Iulia Turc; John Wieting", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b6", "title": "Canine: Pre-training an efficient tokenization-free encoder for language representation", "year": "2022" }, { "authors": "Jonathan H Clark; Dan Garrette; Iulia Turc; John Wieting", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b7", "title": "Canine: Pre-training an efficient tokenization-free encoder for language representation", "year": "2022" }, { "authors": "Sampo Gamal Crichton; Billy Pyysalo; Anna Chiu; Korhonen", "journal": "BMC bioinformatics", "ref_id": "b8", "title": "A neural network multi-task learning approach to biomedical named entity recognition", "year": "2017" }, { "authors": "Silviu Cucerzan; Eric Brill", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Spelling correction as an iterative process that exploits the collective knowledge of web users", "year": "2004" }, { "authors": "Zihang Dai; Zhilin Yang; Yiming Yang; Jaime Carbonell; Quoc Le; Ruslan Salakhutdinov", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Transformer-XL: Attentive language models beyond a fixed-length context", "year": "2019" }, { "authors": "Hercules Dalianis", "journal": "Springer", "ref_id": "b11", "title": "Evaluating a spelling support in a search engine", "year": "2002" }, { "authors": "Matt Davis", "journal": "", "ref_id": "b12", "title": "Psycholinguistic evidence on scrambled letters in reading", "year": "2003" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Bill Dolan; Chris Brockett", "journal": "IWP", "ref_id": "b14", "title": "Automatically constructing a corpus of sentential paraphrases", "year": "2005" }, { "authors": "Jacob Eisenstein", 
"journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "What to do about bad language on the internet", "year": "2013" }, { "authors": " Jeffrey L Elman", "journal": "Cognitive science", "ref_id": "b16", "title": "Finding structure in time", "year": "1990" }, { "authors": "Ehsan Fathi; Babak Maleki; Shoja ", "journal": "", "ref_id": "b17", "title": "Chapter 9 -deep neural networks for natural language processing", "year": "2018" }, { "authors": "Venkat N Gudivada; C R Rao", "journal": "Handbook of Statistics", "ref_id": "b18", "title": "Computational Analysis and Understanding of Natural Languages: Principles, Methods and Applications", "year": "" }, { "authors": "Yoav Goldberg; Omer Levy", "journal": "", "ref_id": "b19", "title": "word2vec explained: deriving mikolov et al.'s negativesampling word-embedding method", "year": "2014" }, { "authors": "Alex Graves", "journal": "", "ref_id": "b20", "title": "Generating sequences with recurrent neural networks", "year": "2013" }, { "authors": "Yu Gu; Robert Tinn; Hao Cheng; Michael Lucas; Naoto Usuyama; Xiaodong Liu; Tristan Naumann; Jianfeng Gao; Hoifung Poon", "journal": "ACM Transactions on Computing for Healthcare (HEALTH)", "ref_id": "b21", "title": "Domain-specific language model pretraining for biomedical natural language processing", "year": "2021" }, { "authors": "Pengcheng He; Xiaodong Liu; Jianfeng Gao; Weizhu Chen", "journal": "", "ref_id": "b22", "title": "Deberta: Decoding-enhanced bert with disentangled attention", "year": "2021" }, { "authors": "Dan Hendrycks; Kevin Gimpel", "journal": "", "ref_id": "b23", "title": "Gaussian error linear units (gelus)", "year": "2016" }, { "authors": "Matthew Honnibal; Ines Montani; Sofie Van Landeghem; Adriane Boyd", "journal": "", "ref_id": "b24", "title": "spacy: Industrialstrength natural language processing in python", "year": "2020" }, { "authors": "Md Mofijul Islam; Gustavo Aguilar; Pragaash Ponnusamy; Clint Solomon Mathialagan; Chengyuan Ma; Chenlei Guo", "journal": "", "ref_id": "b25", "title": "A vocabulary-free multilingual neural tokenizer for end-to-end task learning", "year": "2022" }, { "authors": "Nal Kalchbrenner; Lasse Espeholt; Karen Simonyan; Aaron Van Den Oord; Alex Graves; Koray Kavukcuoglu", "journal": "", "ref_id": "b26", "title": "Neural machine translation in linear time", "year": "2016" }, { "authors": "Taku Kudo", "journal": "", "ref_id": "b27", "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "year": "2018" }, { "authors": "Taku Kudo; John Richardson", "journal": "", "ref_id": "b28", "title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b29", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b30", "title": "Decoupled weight decay regularization", "year": "2018" }, { "authors": "Wentao Ma; Yiming Cui; Chenglei Si; Ting Liu; Shijin Wang; Guoping Hu", "journal": "", "ref_id": "b31", "title": "Charbert: characteraware pre-trained language model", "year": "2020" }, { "authors": "Sabrina J Mielke; Zaid Alyafeai; Elizabeth Salesky; Colin Raffel; Manan Dey; Matthias Gallé; Arun Raja; Chenglei Si; Wilson Y Lee; Benoît Sagot", 
"journal": "", "ref_id": "b32", "title": "Between words and characters: A brief history of open-vocabulary modeling and tokenization in nlp", "year": "2021" }, { "authors": "Tomas Mikolov; Kai Chen; Greg Corrado; Jeffrey Dean", "journal": "", "ref_id": "b33", "title": "Efficient estimation of word representations in vector space", "year": "2013" }, { "authors": "Tomas Mikolov; Ilya Sutskever; Kai Chen; Greg S Corrado; Jeff Dean", "journal": "Advances in neural information processing systems", "ref_id": "b34", "title": "Distributed representations of words and phrases and their compositionality", "year": "2013" }, { "authors": "Piotr Nawrot; Szymon Tworkowski; Michał Tyrolski; Lukasz Kaiser; Yuhuai Wu; Christian Szegedy; Henryk Michalewski", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Hierarchical transformers are more efficient language models", "year": "2022" }, { "authors": "Jeffrey Pennington; Richard Socher; Christopher Manning", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "GloVe: Global vectors for word representation", "year": "2014" }, { "authors": "Matthew E Peters; Mark Neumann; Mohit Iyyer; Matt Gardner; Christopher Clark; Kenton Lee; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Deep contextualized word representations", "year": "2018" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. Res", "ref_id": "b38", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Pranav Rajpurkar; Robin Jia; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Know what you don't know: Unanswerable questions for SQuAD", "year": "2018" }, { "authors": "Pranav Rajpurkar; Jian Zhang; Konstantin Lopyrev; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "year": "2016" }, { "authors": "Graham Rawlinson", "journal": "IEEE Aerospace and Electronic Systems Magazine", "ref_id": "b41", "title": "The significance of letter position in word recognition", "year": "2007" }, { "authors": "Mike Schuster; Kaisuke Nakajima", "journal": "IEEE", "ref_id": "b42", "title": "Japanese and korean voice search", "year": "2012" }, { "authors": "Benjamin Strauss; Bethany Toma; Alan Ritter; Marie-Catherine De Marneffe; Wei Xu", "journal": "", "ref_id": "b43", "title": "Results of the WNUT16 named entity recognition shared task", "year": "2016" }, { "authors": "Yi Tay; Sebastian Vinh Q Tran; Jai Ruder; Hyung Won Gupta; Dara Chung; Zhen Bahri; Simon Qin; Cong Baumgartner; Donald Yu; Metzler", "journal": "", "ref_id": "b44", "title": "Charformer: Fast character transformers via gradient-based subword tokenization", "year": "2021" }, { "authors": "Jesse Vig", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "A multiscale visualization of attention in the transformer model", "year": "2019" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel R Bowman", "journal": "", "ref_id": "b46", "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "year": "2018" }, { "authors": "Adina Williams; Nikita Nangia; Samuel Bowman", "journal": "Association for 
Computational Linguistics", "ref_id": "b47", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "year": "2018" }, { "authors": "Linting Xue; Aditya Barua; Noah Constant; Rami Al-Rfou; Sharan Narang; Mihir Kale; Adam Roberts; Colin Raffel", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b48", "title": "ByT5: Towards a token-free future with pre-trained byte-to-byte models", "year": "2022" }, { "authors": "Michihiro Yasunaga; Jure Leskovec; Percy Liang", "journal": "Association for Computational Linguistics", "ref_id": "b49", "title": "LinkBERT: Pretraining language models with document links", "year": "2022" }, { "authors": "Zhanpeng Zeng; Yunyang Xiong; Sathya Ravi; Shailesh Acharya; Glenn M Fung; Vikas Singh", "journal": "", "ref_id": "b50", "title": "You only sample (almost) once: Linear cost self-attention via bernoulli sampling", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b51", "title": "", "year": "" }, { "authors": "Yukun Zhu; Ryan Kiros; Rich Zemel; Ruslan Salakhutdinov; Raquel Urtasun; Antonio Torralba; Sanja Fidler", "journal": "", "ref_id": "b52", "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "year": "2015" } ]
[ { "formula_coordinates": [ 4, 113.73, 566.54, 132.54, 32.01 ], "formula_id": "formula_0", "formula_text": "h i = f θ (e i 0 ⊕ e i 1 ⊕ . . . ⊕ e i C i ) r i = h i 0 ," }, { "formula_coordinates": [ 4, 353.83, 502.65, 122.9, 13.27 ], "formula_id": "formula_1", "formula_text": "w i = f ϕ (r 0 ⊕ . . . ⊕ r N -1 )," }, { "formula_coordinates": [ 5, 110.15, 110.81, 139.69, 15.02 ], "formula_id": "formula_2", "formula_text": "c i = f σ (w i ⊕ h i 1 ⊕ . . . ⊕ h i C i )," }, { "formula_coordinates": [ 12, 120.78, 374.42, 169.08, 24.43 ], "formula_id": "formula_3", "formula_text": "O N M • M 2 = O(N M )(1)" }, { "formula_coordinates": [ 12, 157.3, 446.76, 132.57, 26.38 ], "formula_id": "formula_4", "formula_text": "O N 2 M 2(2)" } ]
10.59275/j.melba.2023-3d9d
2023-11-11
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Motivation", "publication_ref": [ "b20", "b45", "b47", "b60", "b47", "b46", "b3", "b59", "b60", "b60" ], "table_ref": [], "text": "The goal of this work is to apply machine learning to automate the identification of human iPSCs that show promise for clinical cell therapies in regenerative medicine. IPSCs are generated by reprogramming a patient's own cells back in time to make more malleable cells with differentiation potential for generating any cells or tissues of interest. This technology has shown great potential for transforming regenerative cell therapies, drug and disease modelling, tissue repair and regeneration, and personalized gene-corrected products. However, the pipeline for iPSC generation, characterization and cell banking is a highly labor-intensive, time-consuming and costly one. The monetary cost of research-grade iPSC line generation is estimated at USD 10,000-25,000 while that of clinical-grade iPSC line is approximately USD 800,000 based on published reports (Huang et al., 2019). The entire process of optimal iPSC line generation and selection can take up to 35 days and requires a further 3 months to produce large scale iPSCs for therapeutic application in patients.\nAdditionally, quality control techniques for growing iPSCs to limit inter-or intra-patient iPSC line variability, which is currently assessed manually, remain imperfect in large-scale biomanufacturing. The current solution relies on the judgement of an expert cell biologist, who determines precise iPSC induction, confirms pluripotency based on morphological changes and assesses molecular characterization for multiple clones -all tasks that remain highly effort-intensive and subjectively biased. Manual cell quality control therefore cannot be used to scale up the production of iPSCs and derived products for therapeutic applications. An automated method enabling high-throughput surveillance and validation of cell identity, growth kinetics, and morphological features is desirable throughout the entire manufacturing process. The screening is multifold and needed to not only select optimal cells which have been fully converted to iPSCs during reprogramming stage but also to exclude unstable and pseudo iPSC contaminants during the expansion stage. Automating this process using machine learning would therefore be ground-breaking in improving iPSC bioprocess efficiency and yield, thereby drastically reducing the time and cost involved in the generation of iPSC-based products for therapeutic applications. This paper presents some early but promising steps in this direction.\n1.2 Background 1.2.1 iPSC Reprogramming Takahashi and Yamanaka (2006) demonstrated that mouse embryonic or adult fibroblasts can be reprogrammed into pluripotent stem cells by introducing four genes encoding transcription factors, namely Oct3/4, Sox2, Klf4, and c-MYC (Takahashi et al., 2007;Ye et al., 2013). Generated stem cells showed similar morphological or functional behavior as embryonic pluripotent stem cells and were thus termed iPSCs. Soon thereafter, Takahashi et al. (2007) reported directed conversion of human fibroblasts into pluripotent stem cells, termed as human iPSCs. With the discovery of Yamanaka's human iPSC technology, patientderived stem cells have huge potential in regenerative medicine (Takahashi and Yamanaka, 2013). 
Human iPSCs show merit not only in delivering any desired cell types for treating degenerative diseases, tissue repair, disease modeling, and drug screening (Amabile and Meissner, 2009;Yamanaka, 2009), but they also solve two major problems associated with other pluripotent stem cells such as embryonic stem cells (Ye et al., 2013), namely immune tolerance after transplantation and ethical concerns. However, there still exist technical and biomedical challenges, including the risk of teratoma formation and the uncertainty about the completeness of nuclear reprogramming due to variability and inconsistencies in the selection of optimal cells (Ye et al., 2013). There are two major problems to be solved before human iPSCs can be applied as a standardized technique. Firstly, the manual monitoring of the quality of growing iPSC colonies that is currently practiced does not scale. Secondly, only colonies that satisfy clinical good manufacturing practice (GMP) standards need to be identified for use in downstream applications. Hence, there is an urgent need for automated quality control, which would also lend the process an element of objectivity and standardization." }, { "figure_ref": [], "heading": "Machine learning in iPSC Recognition", "publication_ref": [ "b24", "b50", "b61", "b19", "b14", "b25", "b26", "b61" ], "table_ref": [], "text": "Though many applications of machine learning for iPSC recognition in images have been presented in the literature (Kusumoto et al., 2018;Waisman et al., 2019;Zhang et al., 2019;Hirose et al., 2021;Coronnello and Francipane, 2021;Kusumoto et al., 2022;Lan et al., 2022), there are none that include both detection and classification or use time-lapse imaging, which is the object of this study. To the best of our knowledge, Zhang et al. (2019) presented the method that comes closest to this work, though that too differs in several key respects. It utilizes fluorescence imaging and the commercial closed-source IMARIS software to segment cells, and it captures 3D shape information that is the basis for extracting morphological features to train its classifier. Our aim is to make open-source cell segmentation possible without fluorescence and with only the 2D pixel data in standard phase-contrast microscopy images." }, { "figure_ref": [], "heading": "Deep Learning in Visual Recognition", "publication_ref": [ "b2", "b12", "b9", "b28", "b16", "b36", "b34", "b22", "b38", "b44", "b10", "b55", "b6", "b13", "b51", "b33", "b48", "b49", "b15", "b32", "b58", "b48", "b5" ], "table_ref": [], "text": "In the past decade, deep learning (Alzubaidi et al., 2021) has been applied extensively in computer vision (Chai et al., 2021), especially for recognition tasks like image classification (Byerly et al., 2022), object detection (Liu et al., 2020), instance segmentation (Gu et al., 2022), semantic segmentation (Mo et al., 2022), and object tracking (Marvasti-Zadeh et al., 2021;Jiao et al., 2021;Pal et al., 2021). It has likewise seen broad application in medical image analysis (Suganyadevi et al., 2021;Cai et al., 2020) including segmentation in general (Liu et al., 2021a) and cell segmentation in particular (Wen et al., 2022). The latter is the task most relevant to this work, though cell tracking (Ben-Haim and Riklin-Raviv, 2022;Chen et al., 2021;Wang et al., 2020;Lugagne et al., 2020;Ulman et al., 2017) is also important here.
More recently, the advent of transformers (Vaswani et al., 2017) has led to significant performance improvements (Liu et al., 2021b) over the convolutional neural network (CNN) based architectures that had been prominent earlier. Liu et al. (2021c) proposed the Swin transformer to improve the original vision transformer (Dosovitskiy et al., 2021) further using shifted windows. This currently appears to be the backbone of choice in most state-of-the-art models, though that might change with the recent introduction of ConvNext (Liu et al., 2022) as a competitive CNN-based alternative. For this project, we needed instance segmentation models, preferably ones that could benefit from the temporal information in time-lapse images. Therefore, we selected top-performing static and video segmentation models (sec. 2.2) with publicly available code from a popular leaderboard (Xiang, 2023). We also searched through the leaderboards of a couple of cell tracking and segmentation benchmark challenges (Ulman et al., 2017;Anjum and Gurari, 2020) but failed to find any models with publicly available code." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Data Collection", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Cell culturing", "publication_ref": [ "b39" ], "table_ref": [], "text": "Cells were cultured in a Class-II biocontainment-compliant lab, with manipulation of cells in a sterile environment using a laminar flow hood with high-efficiency particulate air filtration. Cells were maintained at 37 °C with 5% CO₂ within humidified incubators. Human iPSCs reprogrammed from patient-derived peripheral blood mononuclear cells (PBMCs) were previously established and characterized for pluripotency in the laboratory and cryostored in ultra-low temperature freezers for long-term storage. Before starting the time-lapse imaging, cells from a frozen vial were thawed in a 37 °C waterbath and cultured on a matrix-coated 6-well plate in iPSC growth media at a seeding density of 100,000 cells per well according to a previously published protocol (Park et al., 2008). Once the cells had attached, fresh media was replenished in each well every day prior to the time-lapse imaging, which was initiated on day 6 right after cell-seeding." }, { "figure_ref": [], "heading": "Time-lapse imaging", "publication_ref": [], "table_ref": [], "text": "Time-lapse images were captured at 15-minute intervals on a Nikon BioStudio-T microscope using the NIS-Elements cell observation and image analysis software. The images were captured with a 4x lens using a full plate scanning module. A total of 275 images were captured spanning 68.75 hours. The raw images were 10992×10733 pixels or 714 megapixels in size, though the usable circular cell culture region in the center comprised only about 85 megapixels with a diameter of 5200 pixels. A sample image is shown in the supplementary. Manual examination showed that images before frame 146 (or 36.5 hours) were unsuitable for our experiments since they contained too many clones, with most being too small and indistinct for reliable labeling. Three of the remaining frames (155, 186 and 189) were blurry due to camera shake and had to be discarded too. As a result, a total of 127 frames were used for all experiments."
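As a quick sanity check of the frame accounting quoted above (all numbers taken from the text), the following sketch reproduces the count of usable frames:

# frames before 146 were discarded, and frames 155, 186 and 189 were blurry
usable_frames = [f for f in range(146, 276) if f not in {155, 186, 189}]
assert len(usable_frames) == 127  # 130 candidate frames minus the 3 blurry ones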
}, { "figure_ref": [], "heading": "Annotation", "publication_ref": [ "b52", "b7", "b43" ], "table_ref": [], "text": "The original 714 megapixels images are too large to be processed directly so they were first divided into several regions of interest (ROIs) (shown in the supplementary) varying in size from 1700 × 900 to 4333 × 3833. These were then annotated in 3 stages.\nSelective Uncategorized Semi-Automated Labeling The 127 frames were first divided into three sets representing different levels of cell development -146-200, 201-250 and 251-275. Set-specific ROI sequences were then created, that is, each ROI spanned only one of the three sets instead of all 127 frames. This strategy was chosen to include a good representation of cellular appearance from all stages of development in the labeled data with minimum amount of overall labeling. There were 3, 8 and 6 ROIs from the three sets respectively and these 17 ROIs had a total of 656 frames and 4768 cells.\nThese were then labeled to mark the locations and pixel-wise masks of cells through a custom-designed graphical labeling tool. This tool integrates the SiamMask tracker (Wang et al., 2018) to semi-automate the labeling process by propagating manually-created masks into future frames by joint unsupervised segmentation and tracking. Tracking was stopped manually when it started to fail and then restarted from the last frame where it worked, after making any required fixes in the intermediate frames. The labeling tool also supports using a previously trained segmentation model, if available, to automatically generate initial cell candidates that can then be manually modified instead of having to be drawn from scratch, although this capability was not used at this stage. Note that this relatively labor-intensive stage did not require involvement from iPSC detection experts since the labeled cells were not categorized into good and bad.\nExhaustive Automated Labeling A Swin transformer instance segmentation model (Liu et al., 2021c) was first trained on the annotations from the previous stage. Next, ROI sequences spanning all 127 frames were created. These were designed to cover as much of the circular well area containing cells as possible while minimizing overlap between different ROIs. A total of 31 sequences were created by extending 11 of the 17 ROIs from the previous stage to the remaining frames and creating 20 new ones. These sequences had 3937 frames with 22,598 cells in all. More details about this dataset as well as the one from the previous stage are available in the supplementary. Finally, the trained Swin transformer model was used to automatically detect and segment cells in each of these frames.\nCategorized Retrospective Labeling An iPSC detection expert first manually categorized the cells in frame 275 from each of the 31 ROIs into good and bad. The two categories were respectively named iPSC and differentiating cell (DfC) to reflect their likely future growth outcomes. A semi-automated interactive multi-object tracking tool was then used to propagate these labels backwards in time by tracking each cell line from frame 275 to 146. This process accounted for cell division and fusion events1 by giving each child cell the same label as the parent in case of division and requiring that all merging cells have the same label in case of fusion. A violation of the latter requirement would have meant that the human expert's labels were incorrect but this never happened which is a sign of their reliability. 
Note that cell lines that disappear before reaching frame 275 cannot be categorized in this way, so these have been excluded from all experiments.\nThe tracking algorithm was kept simple due to time and computational constraints. Cells were associated between neighbouring frames on the basis of location and shape, similar to the IOU tracker (Bochinski et al., 2017), while likely fusion and division events were detected using heuristics based on the extent of change in the size, shape and location of associated cells, in addition to association failures. The detailed algorithm is included in the supplementary. Due to this simplicity, the retrospective labelling process is currently more time- and labor-intensive than ideal, and it took between 10 and 30 minutes to label each ROI sequence depending on the density of cells and the frequency of division and fusion events.\nWe have made the code for all three stages publicly available (Singh, 2023) along with the annotated data and trained models to facilitate reproducibility of our results and further work in this domain." }, { "figure_ref": [], "heading": "Models", "publication_ref": [ "b61", "b32", "b61", "b11", "b32", "b57", "b56", "b18", "b17" ], "table_ref": [], "text": "We selected two state-of-the-art image classification models to allow head-to-head comparison with the XGBoost classifier-based approach by Zhang et al. (2019) (XGB), which is the only existing method in the literature that is relevant to this work. The models we chose are based on Swin transformer (Liu et al., 2021c) (SWC) and ConvNext (Liu et al., 2022) (CNC) architectures. We also selected five instance segmentation models since both cell detection and classification are needed in the absence of the fluorescence that was used by Zhang et al. (2019) to identify cells. Two of these are static detectors that process each frame independently and discard any video information. They are both variants of Cascade Mask RCNN (Cai and Vasconcelos, 2021) that differ in their backbone architectures, these being the same as the classifiers chosen above: Swin transformer (Liu et al., 2021c) (SWD) and ConvNext (Liu et al., 2022) (CND). The remaining three are video detectors that combine information from multiple video frames to make their decisions. One of these, IDOL (Wu et al., 2022), is an online model that only uses information from past frames, while the other two, SeqFormer (Wu et al., 2021) (SEQ) and VITA (Heo et al., 2022), are batch models that use information from the entire video sequence including both past and future frames. All three video detectors use versions of the Swin transformer backbone. We also experimented with two ResNet (He et al., 2016) variants of VITA, but they did not perform well and so have been excluded here. In addition, we trained a Swin transformer semantic segmentation model and tried several ensemble techniques to combine semantic and instance segmentation results, but these did not yield significant performance improvements and are therefore likewise excluded." }, { "figure_ref": [], "heading": "Training", "publication_ref": [ "b61" ], "table_ref": [], "text": "Since retrospective labeling is the most time-consuming part of our pipeline, it is desirable to minimize the number of frames that need to be labeled backwards. Therefore, in addition to comparing modern deep learning-based methods with XGB, we also wanted to evaluate the extent to which model performance on early-stage images depends on the lateness of the frames on which it is trained.
We constructed two different training datasets to achieve this: an early-stage set with 38 frames from 163 to 202 and a late-stage set with 73 frames from 203 to 275. Models trained on both datasets were evaluated on the 16 frames from 146 to 162. We also had to adapt the method proposed by Zhang et al. (2019) to work with our images since that work computes several of the features using 3D image information, which is not available in our case. It turned out that only 7 of the 11 features used there could be suitably approximated with 2D data, so we trained XGB using only these 7 features, whose details are in the supplementary. All models were trained to convergence using mostly default settings recommended for each model by the respective authors, except for minor tinkering with batch sizes, data augmentation strategies and scaling factors to make the models fit in the limited GPU memory available. The classification datasets were constructed from image patches corresponding to the bounding box around each labeled cell with an additional 5-pixel border for context. Models were trained on several different GPU configurations: video detectors on 2 × Tesla A100 40 GB, static detectors on 2 × RTX 3090 24 GB, and classifiers on 3 × RTX 3060 12 GB and 3 × GTX 1080 Ti 11 GB." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Classification Metrics", "publication_ref": [ "b35" ], "table_ref": [], "text": "We used standard receiver-operating-characteristic (ROC) curves and the corresponding area-under-curve (AUC) metric to compare the models. Note that head-to-head comparison between detectors and classifiers is difficult since the former both detect and classify cells, while the latter only perform classification on all the cells in the ground truth (GT). Two types of detector failures need to be accounted for in order to render such a comparison meaningful:\n• False Positives (FP): detections without matching GT cells\n- Misclassification (FP-CLS): a GT DfC is detected but misclassified as iPSC\n- Duplicates (FP-DUP): the same GT iPSC is detected multiple times and classified each time as iPSC\n- Non-Existent (FP-NEX): an iPSC is detected where no GT cell exists (neither iPSC nor DfC)\n• False Negatives (FN): GT cells without matching detections\n- Misclassification (FN-CLS): a GT iPSC is detected but misclassified as DfC\n- Missing detection (FN-DET): a GT iPSC is not detected at all (neither as iPSC nor as DfC)\nFP-NEXs were ignored when computing the classification metrics since manual examination showed that virtually all of these corresponded to one of two cases, neither of which is important in our application:\n• FP-NEX-WHOLE: unlabeled cells whose labels could not be inferred by retrospective labeling (sec. 2.1.3)\n• FP-NEX-PART: parts of labeled cells, mostly involving ambiguously-shaped cells that could plausibly be interpreted as undergoing division or fusion but were labeled as whole cells.\nFP-DUPs were also ignored since multiple detections of an iPSC have little impact in our application. Finally, FN-DETs had to be discarded since ROC curves can only be generated by varying the cut-off used in filtering detections based on their confidence values, which are unavailable for undetected cells.\nIn addition to the AUC of the complete ROC curve, we also used partial AUCs (McClish, 1989) with FP thresholds of 0.1%, 1% and 10%.
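For concreteness, partial AUCs of this kind can be computed with scikit-learn's max_fpr argument, which returns the McClish-standardized partial AUC over [0, max_fpr]; the snippet below is only a sketch with toy scores, and whether the reported numbers use this exact standardization is not specified here:

import numpy as np
from sklearn.metrics import roc_auc_score

# toy placeholders; in practice these come from the matched detections / classifications
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # 1 = iPSC, 0 = DfC
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.8, 0.6, 0.1])  # confidence of iPSC

full_auc = roc_auc_score(y_true, y_score)
pauc_10 = roc_auc_score(y_true, y_score, max_fpr=0.10)   # 10% FP threshold
pauc_1 = roc_auc_score(y_true, y_score, max_fpr=0.01)    # 1% FP threshold
pauc_01 = roc_auc_score(y_true, y_score, max_fpr=0.001)  # 0.1% FP threshold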
Even a small number of FPs can be extremely detrimental in our application due to the high cost of culturing non-viable cells, so a model that performs better at these FP rates is preferable to another that is better overall but underperforms here." }, { "figure_ref": [], "heading": "Detection Metrics", "publication_ref": [ "b21" ], "table_ref": [], "text": "The exclusion of FN-DETs while computing the classification metrics can make these biased in favour of detectors with high rates of missing cells but high accuracy for the few cells that they do detect. We accounted for this by incorporating the following detection metrics to evaluate and compare only the detectors:\n• Frequency of FN-DETs, FP-DUPs and FP-NEXs\n• Standard detection metrics (Huang et al., 2017) of average precision (AP) and AUC of the recall-precision curve (RP-AUC)" }, { "figure_ref": [], "heading": "Temporal Metrics", "publication_ref": [], "table_ref": [], "text": "The ability to detect iPSCs as early as possible is crucial to our application. Therefore, we also evaluated the models in each of the 16 test frames one-by-one to judge how their performance varied over time. A model that performs better in earlier frames would be preferable to one that performs better overall but underperforms in earlier frames." }, { "figure_ref": [], "heading": "Subsequential Inference", "publication_ref": [], "table_ref": [], "text": "Video detectors detect cells spanning all the frames in their input video instead of frame-by-frame, which is incompatible with temporal evaluation since even detections in early frames are done using information from all 16 frames in the test set. To resolve this, we used incremental inference, where the detections for each frame are generated by running the detector on a subsequence comprising only that frame and all of the preceding ones so that information from future frames is not used. For example, detections for frames 1, 2 and 3 are respectively generated by running the detector only on subsequences comprising frames (1), (1, 2) and (1, 2, 3). We used another variant of subsequential inference to evaluate the impact of the number of frames on the performance of video detectors, so that we could judge whether patterns in the way that cell boundaries change over time provide useful information about their eventual outcome to these detectors. Here, we divided the 16-frame sequence into a set of non-overlapping subsequences, each with a fixed size, and ran inference on each subsequence independently. For example, with a subsequence size of 2, our 16-frame sequence is divided into 8 subsequences: (1, 2), (3, 4), ..., (15, 16). Detections for all frames in each subsequence are then generated by running on only the frames in that subsequence. We experimented with subsequence sizes of 1, 2, 4 and 8." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Classification", "publication_ref": [ "b23" ], "table_ref": [], "text": "ROC curves for both early and late-stage models are shown in the top row of Fig. 1. It can be seen that the early models perform much better than the late ones, which is expected since their training images are much more similar to the test images.
However, the extent of this difference does indicate that cell-appearance changes so much over the course of just 9.5 hours (the temporal gap between early and late-stage training images) that long-term retrospective labelling right to the very early-stage images is likely to be essential to generate training data for models that can do early-stage iPSC detection reliably. The significant performance advantage of deep learning models over XGB is also apparent here. The main shortcoming of XGB seemed to be its inability to handle class imbalance in the training set. Since about 75% of all cells were DfCs, XGB apparently learnt to classify nearly all cells as DfCs. We tried all standard techniques to handle class imbalance (Krawczyk, 2016), but this is the best performance we could get from it. Further, all the video detectors are consistently better than the static ones, which seems to confirm the human experts' supposition that temporal information is crucial for making good predictions. Also, the static detectors do outperform the classifiers, but only significantly so in the early-stage case. Since the precise shape of cell boundaries is not available to the classifiers, this lends some weight to the additional supposition that cell-shape is important for recognizing iPSCs. However, as already noted above, the cell-shapes change too rapidly for this information to be generalizable from the later stages to the earlier ones. Finally, IDOL turns out to be the best model overall even though it is the smallest and fastest of the three video detectors, while the much larger VITA shows a susceptibility to overfitting in its sharp decline between the two cases. This kind of overfitting is exhibited by both the static detectors too, though it is more strongly marked in the case of the Swin transformer.\nPartial AUCs are shown in the bottom row of Fig. 1. Relative performance between the models is broadly similar to that for overall AUC, though IDOL shows a greater performance advantage over the other models in the early-stage case, especially for 0.1% FP. The detectors also show greater improvement over the classifiers in these high-precision scenarios. We also analysed the temporal evolution of partial AUCs by evaluating these frame-by-frame to generate 3D plots with the time dimension on the Z-axis (included in the supplementary material) but could not find any useful patterns beyond those apparent in these 2D plots." }, { "figure_ref": [ "fig_2" ], "heading": "Detection", "publication_ref": [], "table_ref": [], "text": "As shown in Fig. 2, relative detection performance is mostly consistent with classification accuracy, with IDOL still being the best model overall. IDOL also shows the smallest drop in performance between ROC-AUC and AP, while the two static detectors show the largest drops, amounting to nearly 10-fold and 7-fold respectively for SWD and CND in the late-stage case. Among the video detectors, VITA has the largest drop, especially in the late-stage case, which underlines its overfitting. Further, both SEQ and VITA have very high FN-DET rates that are also at odds with their relatively good classification performance.\nConversely, SWD has much lower rates of FN-DETs than its ROC-AUCs would suggest, especially in the late-stage case where it outperforms all other models by a significant margin, though this does come at the cost of a corresponding rise in FP rates.
We can also note that this increase in FPs is dominated by only one subtype, namely FP-NEX-WHOLE, while FP-DUP and FP-NEX-PART show little change, not only for SWD but also for IDOL and VITA. It appears that, either due to the greater range of cell-appearances in the late-stage dataset (owing to its greater size) or the larger disparity in cell-appearance with respect to the test set, these models learnt to detect a lot more of the unlabeled cells than their early-stage counterparts, which were therefore better at discriminating between the unlabeled and labeled cells." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Temporal", "publication_ref": [], "table_ref": [], "text": "Frame-wise AUCs are shown in the top row of Fig. 3. Somewhat contrary to expectation, AUC does not show a consistent increase with time even though the test images are becoming more similar to the training images. In fact, many of the early-stage models do show a weak upward trend, while the late-stage ones do not show any. This is particularly unexpected since the latter perform significantly worse overall and show signs of overfitting, which should lead to a more strongly marked increase in accuracy as the resemblance between the test and training images increases. A possible explanation might be that the late-stage training images are just too far away from the test set for any intra-set variations in test images to make enough of a difference in their resemblance to training images to benefit the models.\nThe bottom row of Fig. 3 shows the impact of subsequence length on the video detectors. These plots include only ROC-AUC, but the detection metrics showed similar patterns and have thus been relegated to the supplementary. In order to incorporate the longest-term temporal information possible with our dataset, we also tested the late-stage model on an extended test set comprising frames 146 to 201. Unfortunately, there is a visual discontinuity between frames 184 and 185, due to which this 54-frame sequence had to be divided into two subsequences, one with 38 frames from 146 to 184 and another with 16 frames from 185 to 201, so that the longest subsequence length was 38 instead of 54. There is indeed a general upward trend with subsequence length but it is much weaker than might be expected. Early-stage models showed negligible impact of subsequence length, while the greatest overall gains were 5.2% in the second case and 3.6% in the third case, achieved respectively by IDOL and VITA. Neither of these seems sufficient considering the 4 to 9 hours' worth of extra temporal information available to the models. This might indicate that changes in cell-appearance over time are not as useful for recognizing iPSCs as supposed by experts. More likely, it might mean that existing video detectors are simply not good enough to exploit this information sufficiently well and better long-term models are needed." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "This paper presented a labeling, training and evaluation pipeline along with baseline performance results for early-stage prediction of iPSC reprogramming outcome to help select the best quality clones. These are still early days of research in this domain and it is difficult to say how practical such automation can be. While it is clear that deep learning models hold a significant advantage over previous methods, they also show signs of overfitting and it is unclear how much training data would be needed to overcome such issues."
}, { "figure_ref": [], "heading": "Future Work", "publication_ref": [ "b13", "b51", "b33", "b52", "b0", "b53", "b62", "b6", "b27", "b42", "b37", "b41", "b40", "b1", "b8" ], "table_ref": [], "text": "We are currently working to improve the tracking algorithm in the retrospective labeling system to make the process faster and less tedious since that is the current bottleneck in the labelling pipeline. We are trying out several state-of-the-art real-time multi object trackers, especially those specialized for tracking cells (Chen et al., 2021;Wang et al., 2020;Lugagne et al., 2020), to replace the simple IOU based algorithm used in this work. We are also looking into ways to improve this algorithm by exploiting recursive parent-child relationships in our labels that can be used to construct hierarchical tree-like structures. Transformer architectures that support such structures (Wang et al., 2019;Ahmed and Mercer, 2021;Wang et al., 2022;Zhong et al., 2022) might help to incorporate cell division and fusion events directly into the algorithm to not only improve the cell association reliability but also reduce the incidence of false positives in the detection of these events, which is the most timeconsuming and tedious aspect of this process. A recent method uses graph neural networks to exploit these structures (Ben-Haim and Riklin-Raviv, 2022) for cell tracking. We are trying to incorporate it into our pipeline and also improve it further using transformers. If these efforts prove successful, we hope to make long term retrospective labelling spanning multiple weeks feasible so any such demands for data can be met.\nWe are also exploring ways to better exploit long-term patterns in the evolution of cell-appearance over time to improve prediction quality since the current ability of video detectors in this regard does not appear to commensurate with the importance that human experts attach to this information. This discrepancy might be due to both the limited temporal span of our data or the models being unable to benefit from this information because they do not learn from sufficiently wide temporal windows. We hope to resolve the former by extending our data from its current span of only 3 days to as much as 5 weeks once the retrospective labeling system have been improved enough to make such long-term labeling feasible. The latter can be addressed by using more memory-efficient models since increasing the size of temporal windows for training is an extremely memory expensive process and our computational resources are insufficient to train existing models with wider windows.\nAnother related issue we are trying to address is that of the black-box nature of deep learning which makes it very difficult for the medical experts in our team to figure out the reasons behind particular failures of the models. It makes it equally difficult for us to incorporate their suggestions to fix such failures. We are hoping that recent advances in the field of interpretable deep learning (Li et al., 2021;Samek et al., 2021;Molnar, 2020;Samek et al., 2020Samek et al., , 2019)), especially with regard to transformers (Ali et al., 2022), video processing (Anders et al., 2019) and medical imaging (Brima and Atemkeng, 2022), might help with these limitations." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "Fig. 1 in the graphical abstract is provided by Zofia Czarnecka. 
Figure created using Biorender.com" }, { "figure_ref": [], "heading": "Ethical Standards", "publication_ref": [], "table_ref": [], "text": "This work follows appropriate ethical standards in conducting research and writing the manuscript, in compliance with all applicable laws and regulations regarding the treatment of animals and human subjects." }, { "figure_ref": [], "heading": "Conflicts of Interest", "publication_ref": [], "table_ref": [], "text": "We declare that we have no conflicts of interest." } ]
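The Future Work section above refers to the simple IOU-based association step used in the retrospective labeling pipeline. The sketch below is a minimal, hypothetical illustration of such a greedy IOU matcher, not the implementation used in this work; the box format, threshold value, and function names are assumptions made for the example.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def associate(prev_boxes, curr_boxes, iou_threshold=0.3):
    """Greedily match current-frame boxes to previous-frame boxes by descending IOU.
    Returns a list of (prev_index, curr_index) pairs."""
    pairs = sorted(
        ((iou(p, c), i, j)
         for i, p in enumerate(prev_boxes)
         for j, c in enumerate(curr_boxes)),
        reverse=True)
    matches, used_prev, used_curr = [], set(), set()
    for overlap, i, j in pairs:
        if overlap < iou_threshold:
            break  # remaining pairs have even lower overlap
        if i in used_prev or j in used_curr:
            continue
        matches.append((i, j))
        used_prev.add(i)
        used_curr.add(j)
    return matches
```

Unmatched current-frame cells would start new tracks and unmatched previous-frame cells would end existing ones; handling cell division and fusion requires the parent-child relationships discussed above, which such a greedy matcher cannot represent on its own.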
This paper presents advancements in automated early-stage prediction of the success of reprogramming human induced pluripotent stem cells (iPSCs) as a potential source for regenerative cell therapies. The minuscule success rate of iPSC reprogramming, around 0.01% to 0.1%, makes generating a stable iPSC line labor-intensive, time-consuming, and exorbitantly expensive, since it requires culturing millions of cells and intense biological scrutiny of multiple clones to identify a single optimal clone. The ability to reliably predict, at an early stage of pluripotency, which cells are likely to establish an optimal iPSC line would therefore be ground-breaking in rendering this a practical and cost-effective approach to personalized medicine. Temporal information about changes in cellular appearance over time is crucial for predicting future growth outcomes. In order to generate this data, we first performed continuous time-lapse imaging of iPSCs in culture using an ultra-high resolution microscope. We then annotated the locations and identities of cells in late-stage images where reliable manual identification is possible. Next, we propagated these labels backwards in time using a semi-automated tracking system to obtain labels for early stages of growth. Finally, we used this data to train deep neural networks to perform automatic cell segmentation and classification. Our code and data are available at https://github.com/abhineet123/ipsc_prediction.
Towards Early Prediction of Human iPSC Reprogramming Success
[ { "figure_caption": "Figure 1 :1Figure 1: Classification metrics for (left) early and (right) late-stage models. Top: ROC curves (respective AUC values are in the legend); bottom: partial AUCs. Please refer sec. 2.2 for model acronyms.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Detection metrics for (left) early and (right) late-stage models. ROC-AUC is included for comparison.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Temporal metrics -top: frame-wise ROC-AUCs for (left) early and (right) latestage models; bottom: subsequential ROC-AUCs for video detectors -left and center plots show early and late-stage models tested on the standard test set of frames 146 -162 while the right one shows the latter tested on an extended test set with frames 146 -201. Note the curtailed and variable Y-axis ranges in the subsequential ROC-AUC plots.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" } ]
Abhineet Singh; Omar Mouhammed; Nilanjan Ray; James Shapiro
[ { "authors": "Mahtab Ahmed; Robert E Mercer", "journal": "", "ref_id": "b0", "title": "Encoding dependency information inside tree transformer", "year": "2021" }, { "authors": "Ameen Ali; Thomas Schnake; Oliver Eberle; Grégoire Montavon; Klaus-Robert Müller; Lior Wolf", "journal": "", "ref_id": "b1", "title": "XAI for transformers: Better explanations through conservative propagation", "year": "2022-07" }, { "authors": "Laith Alzubaidi; Jinglan Zhang; Amjad J Humaidi; Q Ayad; Ye Al-Dujaili; Omran Duan; Jesus Al-Shamma; Mohammed Abdulraheem Santamaría; Muthana Fadhel; Laith Al-Amidie; Farhan", "journal": "Journal of Big Data", "ref_id": "b2", "title": "Review of deep learning: concepts, cnn architectures, challenges, applications, future directions", "year": "2021" }, { "authors": "Giovanni Amabile; Alexander Meissner", "journal": "Trends Mol. Med", "ref_id": "b3", "title": "Induced pluripotent stem cells: current progress and potential for regenerative medicine", "year": "2009-02" }, { "authors": "Grégoire Christopher J Anders; Wojciech Montavon; Klaus-Robert Samek; Müller", "journal": "", "ref_id": "b4", "title": "Understanding patch-based learning of video data by explaining predictions", "year": "2019" }, { "authors": "Samreen Anjum; Danna Gurari", "journal": "", "ref_id": "b5", "title": "CTMC: Cell tracking with mitosis detection dataset challenge", "year": "2020" }, { "authors": "Tal Ben; -Haim ; Tammy Riklin-Raviv", "journal": "", "ref_id": "b6", "title": "Graph Neural Network for Cell Tracking in Microscopy Videos", "year": "2022" }, { "authors": "Erik Bochinski; Thomas Volker Eiselein; Sikora", "journal": "AVSS", "ref_id": "b7", "title": "High-speed tracking-by-detection without using image information", "year": "2017" }, { "authors": "Yusuf Brima; Marcellin Atemkeng", "journal": "", "ref_id": "b8", "title": "What do deep neural networks learn in medical images", "year": "2022" }, { "authors": "Adam Byerly; Tatiana Kalganova; Richard Ott", "journal": "Sai", "ref_id": "b9", "title": "The current state of the art in deep learning for image classification: A review", "year": "2022" }, { "authors": "Lei Cai; Jingyang Gao; Di Zhao", "journal": "Annals of Translational Medicine", "ref_id": "b10", "title": "A review of the application of deep learning in medical image classification and segmentation", "year": "2020" }, { "authors": "Zhaowei Cai; Nuno Vasconcelos", "journal": "TPAMI", "ref_id": "b11", "title": "Cascade R-CNN: High Quality Object Detection and Instance Segmentation", "year": "2021" }, { "authors": "Junyi Chai; Hao Zeng; Anming Li; Eric W T Ngai", "journal": "Machine Learning with Applications", "ref_id": "b12", "title": "Deep learning in computer vision: A critical review of emerging techniques and application scenarios", "year": "2021" }, { "authors": "Yuqian Chen; Yang Song; C Zhang; Fan Zhang; Lauren J O'donnell; Wojciech Chrzanowski; Weidong ( Tom; ) Cai", "journal": "ISBI", "ref_id": "b13", "title": "Celltrack R-CNN: A novel end-to-end deep neural network for cell segmentation and tracking in microscopy images", "year": "2021" }, { "authors": "Claudia Coronnello; Maria Giovanna; Francipane ", "journal": "Stem Cell Reviews and Reports", "ref_id": "b14", "title": "Moving towards induced pluripotent stem cell-based therapies with artificial intelligence and machine learning", "year": "2021" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; 
Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "ICLR", "ref_id": "b15", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Wenchao Gu; Shuang Bai; Ling-Yun Kong", "journal": "Image Vis. Comput", "ref_id": "b16", "title": "A review on 2d instance segmentation based on deep neural networks", "year": "2022" }, { "authors": "X Kaiming He; Shaoqing Zhang; Jian Ren; Sun", "journal": "", "ref_id": "b17", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Miran Heo; Sukjun Hwang; Seoung Wug Oh; Joon-Young Lee; Seon Joo Kim", "journal": "", "ref_id": "b18", "title": "Vita: Video instance segmentation via object token association", "year": "2022" }, { "authors": "Takuya Hirose; Jun'ichi Kotoku; Fujio Toki; K Emi; Daisuke Nishimura; Nanba", "journal": "Stem Cells", "ref_id": "b19", "title": "Labelfree quality control and identification of human keratinocyte stem cells by deep learningbased automated cell tracking", "year": "2021" }, { "authors": "Ching-Ying Huang; Chun-Lin Liu; Chien-Yu Ting; Yueh-Ting Chiu; Yu-Che Cheng; Martin W Nicholson; Patrick C H Hsieh", "journal": "J. Biomed. Sci", "ref_id": "b20", "title": "Human iPSC banking: barriers and opportunities", "year": "2019-10" }, { "authors": "Jonathan Huang; Vivek Rathod; Chen Sun; Menglong Zhu; Anoop Korattikara Balan; Alireza Fathi; Ian S Fischer; Zbigniew Wojna; Yang Song; Sergio Guadarrama; Kevin P Murphy", "journal": "CVPR", "ref_id": "b21", "title": "Speed/accuracy trade-offs for modern convolutional object detectors", "year": "2017" }, { "authors": "Licheng Jiao; Dan Wang; Yidong Bai; Puhua Chen; Fang Liu", "journal": "", "ref_id": "b22", "title": "Deep learning in visual tracking: A review", "year": "2021" }, { "authors": "B Krawczyk", "journal": "Progress in Artificial Intelligence", "ref_id": "b23", "title": "Learning from imbalanced data: open challenges and future directions", "year": "2016" }, { "authors": "Dai Kusumoto; Mark Lachmann; Takeshi Kunihiro; Shinsuke Yuasa; Yoshikazu Kishino; Mai Kimura; Toshiomi Katsuki; Shogo Itoh; Tomohisa Seki; Keiichi Fukuda", "journal": "Stem Cell Reports", "ref_id": "b24", "title": "Automated deep learning-based system to identify endothelial cells derived from induced pluripotent stem cells", "year": "2018" }, { "authors": "Shinsuke Dai Kusumoto; Keiichi Yuasa; Fukuda", "journal": "Pharmaceuticals", "ref_id": "b25", "title": "Induced pluripotent stem cell-based drug screening by use of artificial intelligence", "year": "2022" }, { "authors": "Yiqing Lan; Nannan Huang; Yiru Fu; Ke Liu; He Zhang; Yuzhou Li; Sheng Yang", "journal": "Frontiers in Bioengineering and Biotechnology", "ref_id": "b26", "title": "Morphology-based deep learning approach for predicting osteogenic differentiation", "year": "2022" }, { "authors": "Xuhong Li; Haoyi Xiong; Xingjian Li; Xuanyu Wu; Xiao Zhang; Ji Liu; Jiang Bian; Dejing Dou", "journal": "Knowledge and Information Systems", "ref_id": "b27", "title": "Interpretable deep learning: interpretation, interpretability, trustworthiness, and beyond", "year": "2021" }, { "authors": "Li Liu; Wanli Ouyang; Xiaogang Wang; Paul Fieguth; Jie Chen; Xinwang Liu; Matti Pietikäinen", "journal": "IJCV", "ref_id": "b28", "title": "Deep learning for generic object detection: A survey", "year": "2020" }, { "authors": "Xiangbin Liu; Liping Song; Shuai Liu; Yudong Zhang", "journal": "Sustainability", "ref_id": "b29", "title": "A review of deep-learning-based medical image 
segmentation methods", "year": "2021" }, { "authors": "Yang Liu; Yao Zhang; Yixin Wang; Feng Hou; Jin Yuan; Jiang Tian; Yang Zhang; Zhongchao Shi; Jianping Fan; Zhiqiang He", "journal": "PP", "ref_id": "b30", "title": "A survey of visual transformers. transactions on neural networks and learning systems", "year": "2021" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b31", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Zhuang Liu; Hanzi Mao; Chaozheng Wu; Christoph Feichtenhofer; Trevor Darrell; Saining Xie", "journal": "", "ref_id": "b32", "title": "A convnet for the 2020s", "year": "2022" }, { "authors": "Jean-Baptiste Lugagne; Haonan Lin; Mary J Dunlop", "journal": "PLOS Computational Biology", "ref_id": "b33", "title": "DeLTA: Automated cell segmentation, tracking, and lineage reconstruction using deep learning", "year": "2020-04" }, { "authors": "Seyed Mojtaba Marvasti-Zadeh; Li Cheng; Hossein Ghanei-Yakhdan; Shohreh Kasaei", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b34", "title": "Deep learning for visual tracking: A comprehensive survey", "year": "2021" }, { "authors": "Donna K Mcclish", "journal": "Medical Decision Making", "ref_id": "b35", "title": "Analyzing a portion of the roc curve", "year": "1989" }, { "authors": "Yuji Mo; Y Wu; Xinneng Yang; Feilin Liu; Yujun Liao", "journal": "Neurocomputing", "ref_id": "b36", "title": "Review the state-of-the-art technologies of semantic segmentation based on deep learning", "year": "2022" }, { "authors": "Christoph Molnar", "journal": "Lulu. com", "ref_id": "b37", "title": "Interpretable machine learning", "year": "2020" }, { "authors": "K Sankar; Anima Pal; Jhareswar Pramanik; Pabitra Maiti; Mitra", "journal": "Applied Intelligence", "ref_id": "b38", "title": "Deep learning in multi-object detection and tracking: state of the art", "year": "2021" }, { "authors": "Hyun Park; Rui Paul H Lerou; Hongguang Zhao; George Q Huo; Daley", "journal": "Nat. 
Protoc", "ref_id": "b39", "title": "Generation of human-induced pluripotent stem cells", "year": "2008" }, { "authors": "Wojciech Samek; Grégoire Montavon; Andrea Vedaldi; Lars Kai Hansen; Klaus-Robert Müller", "journal": "Springer Nature", "ref_id": "b40", "title": "Explainable AI: interpreting, explaining and visualizing deep learning", "year": "2019" }, { "authors": "Wojciech Samek; Grégoire Montavon; Sebastian Lapuschkin; Christopher J Anders; Klaus-Robert Müller", "journal": "", "ref_id": "b41", "title": "Toward interpretable machine learning: Transparent deep neural networks and beyond", "year": "2020" }, { "authors": "Wojciech Samek; Grégoire Montavon; Sebastian Lapuschkin; Christopher J Anders; Klaus-Robert Müller", "journal": "", "ref_id": "b42", "title": "Explaining deep neural networks and beyond: A review of methods and applications", "year": "2021" }, { "authors": "Abhineet Singh", "journal": "", "ref_id": "b43", "title": "iPSC prediction with deep learning", "year": "2023" }, { "authors": "S Suganyadevi; V Seethalakshmi; K Balasamy", "journal": "International Journal of Multimedia Information Retrieval", "ref_id": "b44", "title": "A review on deep learning in medical image analysis", "year": "2021" }, { "authors": "Kazutoshi Takahashi; Shinya Yamanaka", "journal": "Cell", "ref_id": "b45", "title": "Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors", "year": "2006-08" }, { "authors": "Kazutoshi Takahashi; Shinya Yamanaka", "journal": "Development", "ref_id": "b46", "title": "Induced pluripotent stem cells in medicine and biology", "year": "2013-06" }, { "authors": "Kazutoshi Takahashi; Koji Tanabe; Mari Ohnuki; Megumi Narita; Tomoko Ichisaka; Kiichiro Tomoda; Shinya Yamanaka", "journal": "Cell", "ref_id": "b47", "title": "Induction of pluripotent stem cells from adult human fibroblasts by defined factors", "year": "2007-11" }, { "authors": "Vladimir Ulman; Martin Mavska; E G Klas; Magnusson", "journal": "Nature methods", "ref_id": "b48", "title": "An objective comparison of cell tracking algorithms", "year": "2017" }, { "authors": "Ashish Vaswani; Noam M Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "NIPS", "ref_id": "b49", "title": "Attention is all you need", "year": "2017" }, { "authors": "Ariel Waisman; Alejandro La Greca; Alan M Möbbs; Agustina María; Natalia L Santín Scarafía; Gabriel Velazque; Neiman; N Lucía; Carlos Moro; Gustavo E Luzzani; Alejandra S Sevlever; Santiago G Guberman; Miriuka", "journal": "Stem Cell Reports", "ref_id": "b50", "title": "Deep learning neural networks highly predict very early onset of pluripotent stem cell differentiation", "year": "2019-04" }, { "authors": "Junjie Wang; Xiaohong Su; Lingling Zhao; Jun Zhang", "journal": "Frontiers in Bioengineering and Biotechnology", "ref_id": "b51", "title": "Deep reinforcement learning for data association in cell tracking", "year": "2020" }, { "authors": "Qiang Wang; Li Zhang; Luca Bertinetto; Weiming Hu; Philip H S Torr", "journal": "", "ref_id": "b52", "title": "Fast online object tracking and segmentation: A unifying approach", "year": "2018" }, { "authors": "Wenhan Wang; Kechi Zhang; Ge Li; Shangqing Liu; Zhi Jin; Yang Liu", "journal": "", "ref_id": "b53", "title": "A treestructured transformer for program representation learning", "year": "2022" }, { "authors": " Yau-Shian; Hung Wang; Yun-Nung ( Yi Lee; ) Vivian; Chen", "journal": "", "ref_id": "b54", "title": "Tree 
transformer: Integrating tree structures into self-attention", "year": "2019" }, { "authors": "Tingxi Wen; Binbin Tong; Y Liu; Ting Pan; Yu Du; Yuping Chen; Shanshan Zhang", "journal": "Computer methods and programs in biomedicine", "ref_id": "b55", "title": "Review of research on the instance segmentation of cell images", "year": "2022" }, { "authors": "Junfeng Wu; Yi Jiang; Song Bai; Wenqing Zhang; Xiang Bai", "journal": "", "ref_id": "b56", "title": "Seqformer: Sequential transformer for video instance segmentation", "year": "2021" }, { "authors": "Junfeng Wu; Qihao Liu; Yi Jiang; Song Bai; Alan Loddon Yuille; Xiang Bai", "journal": "", "ref_id": "b57", "title": "In defense of models for video instance segmentation", "year": "2022" }, { "authors": "Yu Xiang", "journal": "", "ref_id": "b58", "title": "PapersWithCode -Instance Segmentation", "year": "2023" }, { "authors": "Shinya Yamanaka", "journal": "Cell", "ref_id": "b59", "title": "A fresh look at iPS cells", "year": "2009-04" }, { "authors": "Lei Ye; Cory Swingen; Jianyi Zhang", "journal": "Curr. Cardiol. Rev", "ref_id": "b60", "title": "Induced pluripotent stem cells and their potential for basic and clinical sciences", "year": "2013-02" }, { "authors": "Haishan Zhang; Ximing Shao; Yin Peng; Yanning Teng; Konda Mani Saravanan; Huiling Zhang; Yanjie Wei; Hongchang Li", "journal": "PLoS Computational Biology", "ref_id": "b61", "title": "A novel machine learning based approach for ips progenitor cell identification", "year": "2019" }, { "authors": "Shuhan Zhong; Sizhe Song; Guanyao Li; Shueng Han; Gary Chan", "journal": "", "ref_id": "b62", "title": "A tree-based structure-aware transformer decoder for image-to-markup generation", "year": "2022" } ]
[]